Kubernetes Operator for Backstage
Create a Kubernetes Operator for installing and managing Backstage deployments.
Feature Suggestion
We use the operator pattern heavily in our Kubernetes clusters. It works great for complex deployments like Prometheus, where we can use CRs to manage the Prometheus configuration. Those CRs can be bundled into other applications (for example in a Helm chart), allowing configuration to be added easily without having to manage a massive central configuration file by hand. A good example of how we'd like to use this feature would be in our internal service Helm charts:
- Each internal service we create uses a common Helm chart that we've designed.
- The common Helm chart could be extended with a Backstage CR that populates its `app-config.yaml` using values we already have in the Helm config.
- When we create new services, they'd automatically be added to Backstage as components, with the appropriate metadata from the Helm values.
Backstage could benefit from the operator pattern, using Kubernetes custom resources to define the configuration of component types, app config, and other configurable properties. The operator would be responsible for watching these CRs and translating them into Backstage config, restarting the Backstage containers automatically when the config changes.
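To make that concrete, here is a rough sketch (as Go API types, the form operator-sdk scaffolds) of what such a CR could look like. The `BackstageAppConfig` kind and its fields are assumptions for illustration, not a proposed final schema:

```go
// Hypothetical API types for a Backstage app-config CR; the group/kind and the
// fields shown are illustrative assumptions, not an existing Backstage API.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BackstageAppConfigSpec is a fragment of app-config that the operator would
// merge into the deployment's app-config.yaml.
type BackstageAppConfigSpec struct {
	// Organization name shown in the Backstage UI (organization.name in app-config.yaml).
	OrganizationName string `json:"organizationName,omitempty"`
	// Catalog locations to register, e.g. URLs of catalog-info.yaml files.
	CatalogLocations []string `json:"catalogLocations,omitempty"`
}

// BackstageAppConfig would be created per service, e.g. templated from a
// common Helm chart's values.
type BackstageAppConfig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec BackstageAppConfigSpec `json:"spec,omitempty"`
}
```

Each service's Helm chart could then template one of these resources from its existing values, which is the flow described in the bullet list above.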
Possible Implementation
A decision would need to be made about which CRs to support; initially, it could just support CRs for app config.
A new repository or a subdirectory of this repo would need to be created to contain the operator. The operator would need to be scaffolded with `operator-sdk`. Then the operator could be implemented, probably as a Golang-based operator.
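For a rough sense of what the Go implementation would involve, a controller-runtime reconciler (which is what operator-sdk scaffolds for a Go operator) might look something like the sketch below. The `BackstageAppConfig` type, the module path, the ConfigMap name, and the restart-via-annotation idea are all assumptions:

```go
// Sketch of a controller-runtime reconciler for a hypothetical BackstageAppConfig
// CR; the CR type, module path, ConfigMap name, and restart strategy are all
// assumptions for illustration, not an existing implementation.
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	backstagev1alpha1 "example.com/backstage-operator/api/v1alpha1" // hypothetical API package
)

type BackstageAppConfigReconciler struct {
	client.Client
}

func (r *BackstageAppConfigReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the CR that triggered this reconcile.
	var cfg backstagev1alpha1.BackstageAppConfig
	if err := r.Get(ctx, req.NamespacedName, &cfg); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Translate the CR into an app-config.yaml fragment and store it in a
	// ConfigMap that the Backstage Deployment mounts (name is an assumption).
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "backstage-app-config", Namespace: req.Namespace},
	}
	if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, cm, func() error {
		cm.Data = map[string]string{"app-config.yaml": renderAppConfig(&cfg)}
		return nil
	}); err != nil {
		return ctrl.Result{}, err
	}

	// Restarting Backstage when the config changes could be done by bumping an
	// annotation on the Deployment's pod template (omitted here).
	return ctrl.Result{}, nil
}

func (r *BackstageAppConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&backstagev1alpha1.BackstageAppConfig{}).
		Complete(r)
}

// renderAppConfig stands in for whatever translation logic the operator would use.
func renderAppConfig(cfg *backstagev1alpha1.BackstageAppConfig) string {
	return "organization:\n  name: " + cfg.Spec.OrganizationName + "\n"
}
```

Changes to any `BackstageAppConfig` would then flow through this loop into the mounted config, with restarts handled by the operator rather than by hand.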
Context
This is a good example of what can be accomplished with the operator pattern: https://github.com/prometheus-operator/prometheus-operator
Operators are very common in Kubernetes as a way of deploying apps, managing the upgrade process, and managing their configuration.
Issue Analytics
- State: Closed
- Created: 3 years ago
- Reactions: 2
- Comments: 16 (13 by maintainers)
There might not really be that much benefit, TBH. As long as native Kubernetes objects can cover the needs, I think there is little to gain.
Just as a thought experiment though:
An instance of Backstage could be set up by creating a `Backstage` Custom Resource (CR). Configuration parameters would then be served directly from the Kubernetes API server. This is a statically typed resource, which brings the first limitation: as Backstage has a dynamic configuration schema depending on which plugins are registered, we cannot define the schema up front. Because of this we would have to fall back to a dynamic schema in Kubernetes and thus lose the benefit here. Another approach would be to register plugins in Kubernetes CRs as well and have the operator push this information into Backstage, but in practice this would not work, as the application framework requires pretty custom code to wire things up.
Entities in the catalog could be stored as `Entity` CRs. Backstage could then have a Kubernetes catalog processor to collect them.
To me, at least, it would feel odd to try to model Backstage deployments in this way. A more "close to the metal" approach with Helm charts or native Deployment resources feels more right.
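To illustrate the schema limitation in code: a hypothetical typed `Backstage` CR would likely end up with a free-form field for app-config, which is the point at which the static-typing benefit is lost. All names below are assumed:

```go
// Sketch of the "dynamic schema" fallback described above: because plugins extend
// the app-config schema, a typed spec can't enumerate the fields up front and ends
// up holding an opaque blob instead. All names here are assumptions.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

type BackstageSpec struct {
	// AppConfig is arbitrary app-config YAML/JSON. The API server can only check
	// that it is well-formed, not that the keys mean anything to Backstage,
	// which is exactly the lost benefit of a statically typed resource.
	// +kubebuilder:pruning:PreserveUnknownFields
	AppConfig runtime.RawExtension `json:"appConfig,omitempty"`
}

type Backstage struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec BackstageSpec `json:"spec,omitempty"`
}
```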
I’m closing this for now since we’ve gone a different route and haven’t identified any strong use cases for this.