Running backstage-backend with readOnlyRootFilesystem=true in kube
Expected Behavior
There should be an elegant solution for running the Backstage backend in an environment with readOnlyRootFilesystem enabled.
Current Behavior
When deploying the backend Docker image in Kubernetes, I keep getting the following error when the startup script first tries to inject the config into module-backstage.<hash>.js:
2021-01-18T22:27:51.341Z backstage info Loaded config from app-config.yaml, app-config.production.yaml, k8s-config.yaml
2021-01-18T22:27:51.345Z backstage info Created UrlReader predicateMux{readers=azure{host=dev.azure.com,authed=false},bitbucket{host=bitbucket.org,authed=false},github{host=github.com,authed=true},gitlab{host=gitlab.com,authed=false},fallback=fetch{}}
2021-01-18T22:27:51.468Z catalog info Locations Refresh: Beginning locations refresh
2021-01-18T22:27:51.485Z catalog info Locations Refresh: Visiting 1 locations
2021-01-18T22:27:51.485Z catalog info Locations Refresh: Refreshing location bootstrap:bootstrap
2021-01-18T22:27:51.516Z techdocs info Creating Local publisher for TechDocs
2021-01-18T22:27:51.521Z proxy info [HPM] Proxy created: [Function: filter] -> https://example.com
2021-01-18T22:27:51.521Z proxy info [HPM] Proxy rewrite rule created: "^/api/proxy/test/" ~> "/"
2021-01-18T22:27:51.526Z app info Serving static app content from /usr/src/app/packages/app/dist
2021-01-18T22:27:51.629Z app info Injecting env config into module-backstage.e812caed.js
Backend failed to start up, Error: EACCES: permission denied, open '/usr/src/app/packages/app/dist/static/module-backstage.e812caed.js'
My organization has a global PodSecurityPolicy set with readOnlyRootFilesystem=true, which is what causes this. Running the image locally in Docker or in minikube works with no issue. I have brainstormed many solutions, such as mounting a volume and copying the /usr/src/app files into it, but that is not very simple and I would rather do something more easily maintainable.
Possible Solution
Write whichever files need to be modified at runtime to a TMPDIR of some sort, which can then be mounted as a writable volume from the Kubernetes platform.
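For illustration, a minimal sketch of the volume idea mentioned above: an initContainer seeds a writable emptyDir (a plain mount over dist/static would otherwise mask the baked-in files), and the volume is then mounted over the static assets path. The path comes from the log output above; the image and volume names are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage-backend          # illustrative name
spec:
  selector:
    matchLabels:
      app: backstage-backend
  template:
    metadata:
      labels:
        app: backstage-backend
    spec:
      initContainers:
        # Seed the writable volume with the built assets, since mounting
        # an empty volume over dist/static would hide the existing files.
        - name: seed-static
          image: my-backstage-backend:latest   # illustrative image
          command: ["sh", "-c", "cp -R /usr/src/app/packages/app/dist/static/. /static/"]
          volumeMounts:
            - name: static
              mountPath: /static
      containers:
        - name: backend
          image: my-backstage-backend:latest   # illustrative image
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            # Only this path is writable; the rest of the FS stays read-only.
            - name: static
              mountPath: /usr/src/app/packages/app/dist/static
      volumes:
        - name: static
          emptyDir: {}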
Steps to Reproduce
- Set readOnlyRootFilesystem: true in the PodSecurityPolicy in Kubernetes (a minimal policy sketch follows this list).
- Run a pod with the default Docker image from the included Dockerfile.
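For reference, a minimal PodSecurityPolicy sketch for the first step; only readOnlyRootFilesystem matters for reproducing this, and the policy name and the remaining required fields are illustrative permissive defaults:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-readonly      # illustrative name
spec:
  readOnlyRootFilesystem: true   # the setting that triggers the EACCES error
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"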
Context
I am deploying this project as a replacement for an old internal infrastructure toolset on my team, so I do not have fine-grained control over our Kubernetes permissions. I imagine this will become a more common issue as more companies pick up this project, so a good out-of-the-box solution seems very desirable.
Your Environment
- FROM node:14-buster
- ArgoCD deployment
- Kubernetes Server v1.17.14
Dockerfile (one of many I have tried)
FROM node:14-buster
WORKDIR /usr/src/app
ADD yarn.lock package.json skeleton.tar ./
RUN yarn install --frozen-lockfile --production --network-timeout 300000 && rm -rf "$(yarn cache dir)"
COPY . .
# I have tried this; it does not work with Kubernetes
# VOLUME /usr/src/app
CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
Thanks everyone! I’m new to Backstage and a lot of the stack it runs on, so please forgive me if I messed up something simple.

Comments
Yeah, I am just using a basic nginx container for my frontend. To completely disable the backend trying to host the frontend, you will need to remove this line in your router, index.ts (https://github.com/backstage/backstage/blob/master/packages/backend/src/index.ts#L128), or add a boolean to your config to disable it. Then make sure to remove inject-config.sh from your Dockerfile entrypoint; that is what is trying to write to the FS. For production you can apply the config during the build step of the frontend instead, using the --config flag.

Thank you! That was my hunch after looking through the source code; glad to know an nginx container should fix the issue. Might take you up on that PR 😉
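For anyone else hitting this, a minimal sketch of that nginx setup, assuming the frontend is built with the config applied at build time; the build command and image tag below are assumptions to illustrate the idea, so check them against your Backstage CLI version:

# Build the frontend first with config applied at build time, e.g.:
#   yarn workspace app build --config ../../app-config.yaml --config ../../app-config.production.yaml
FROM nginx:mainline
COPY packages/app/dist /usr/share/nginx/html
# nginx serves only static files here, so nothing writes to the image FS at
# runtime; nginx itself may still need writable /var/cache/nginx and /var/run
# (e.g. emptyDir mounts) under readOnlyRootFilesystem.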