Named Kafka cluster configurations for topic blueprint
Kafka connection details can currently be provided either during installation (as a simple `bootstrap.servers` setting) or explicitly within a blueprint using advanced topic configuration. In addition, we should let users define reusable named Kafka connection configurations for as many clusters as they like. A blueprint can then reference these configurations by a simple name.
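Referencing a named configuration from a blueprint topic might then look like the following sketch (the `cluster` key, topic, and streamlet names here are illustrative assumptions, not confirmed syntax):

```hocon
blueprint {
  topics {
    sensor-data {
      # hypothetical reference to a named Kafka cluster configuration
      cluster = analytics-cluster
      producers = [ingress.out]
      consumers = [processor.in]
    }
  }
}
```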
A Kafka connection configuration will include the following information:
- producer client config
- consumer client config
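As a rough sketch, such a named configuration could be expressed in HOCON; the `kafka-clusters` path and key names below are assumptions for illustration, not confirmed syntax:

```hocon
kafka-clusters {
  # a user-chosen cluster name, referenced from blueprints
  analytics-cluster {
    bootstrap-servers = "kafka-1:9092,kafka-2:9092"
    producer-config {
      # producer client config, passed through to the Kafka producer
      acks = "all"
    }
    consumer-config {
      # consumer client config, passed through to the Kafka consumer
      auto.offset.reset = "earliest"
    }
  }
}
```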
During an application deployment, the cloudflow operator will first look up any referenced Kafka connection configurations and propagate them to streamlets as they are created.
If a blueprint doesn’t reference any Kafka cluster configurations, the current behaviour will be preserved. The order of precedence for loading a configuration in this case will be:
- use the `default` Kafka cluster configuration, if it exists
- use the advanced connection info provided in the blueprint, if it exists
- use the `bootstrap.servers` configuration passed to the operator during install
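The fallback order above amounts to a first-match lookup. A minimal sketch of that logic (function and parameter names are illustrative, not the operator's actual API):

```python
def resolve_kafka_config(default_cluster, blueprint_config, install_bootstrap_servers):
    """Return the Kafka configuration to use, following the precedence order:
    the `default` cluster config, then advanced blueprint config, then the
    bootstrap.servers value passed to the operator at install time."""
    if default_cluster is not None:
        return default_cluster
    if blueprint_config is not None:
        return blueprint_config
    # final fallback: the install-time bootstrap.servers setting
    return {"bootstrap.servers": install_bootstrap_servers}
```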
Before deployment the CLI will verify that secrets containing any referenced Kafka cluster configurations exist.
- Created 3 years ago
- Comments: 10 (10 by maintainers)
Top GitHub Comments
My mistake. Helm can create secrets, but I was getting tripped up by the fact that it expects the `data` field to be base64-encoded. Using the `stringData` field lets you provide non-encoded data.
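The difference the comment describes is standard Kubernetes Secret behaviour: `data` values must be base64-encoded, while `stringData` accepts plain text and is encoded by the API server. A sketch (the secret name and key are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-kafka-cluster-config   # hypothetical secret name
type: Opaque
# With `data`, every value must already be base64-encoded:
# data:
#   secret.conf: <base64-encoded config>
# With `stringData`, plain text is accepted and encoded for you:
stringData:
  secret.conf: |
    bootstrap.servers = "kafka-1:9092"
```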