[SPIKE] Make Epiphany upgrades selective (Kafka)
Description
In this spike we want to evaluate an option for selective Kafka upgrades by epicli.
It should be relatively simple (likely requiring no changes in the upgrade procedures themselves) to refactor Epiphany upgrades so that the user can control which components will be upgraded. That may bring value not only to users with legacy clusters but also to the testing process.
- It needs to be checked whether Kafka supports upgrades from any version to any other version. If it requires incremental upgrades between versions, this has to be taken into account.
- The output of this task is a PoC of a selective Kafka upgrade by epicli.
- If possible, a design document can be created.
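If Kafka does require incremental upgrades, the upgrade logic could compute the full path up front before touching the cluster. A minimal sketch, assuming a hypothetical `SUPPORTED_STEPS` table (the real supported upgrade paths must come from the official Kafka upgrade documentation, not from this example):

```python
# Sketch: compute an incremental upgrade path for Kafka.
# SUPPORTED_STEPS is a hypothetical placeholder; actual supported
# version-to-version steps must be taken from Kafka's upgrade docs.
SUPPORTED_STEPS = {
    "2.0": "2.2",
    "2.2": "2.4",
    "2.4": "2.6",
    "2.6": "2.8",
}

def upgrade_path(current: str, target: str) -> list[str]:
    """Return the ordered list of versions to pass through,
    ending at `target`. Raises ValueError if no path exists."""
    path = []
    version = current
    while version != target:
        next_version = SUPPORTED_STEPS.get(version)
        if next_version is None:
            raise ValueError(f"no supported upgrade step from {version}")
        path.append(next_version)
        version = next_version
    return path

print(upgrade_path("2.0", "2.6"))  # → ['2.2', '2.4', '2.6']
```

Computing the path first lets epicli fail fast on an unsupported jump instead of discovering it mid-upgrade.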
All the topics related to the Kafka module will be done separately and covered by #1976 task and tasks that will be created as a result of #1976.
Additional information
The original task contained additional options related to the Kafka module, but to avoid dependencies and keep a single topic per task, these options are not covered here:
2. In the near future we will start introducing modules that will replace classic Epiphany components (like Kafka clusters etc.). If we do that soon enough, the question is whether there is a possibility to reuse modules with existing legacy on-prem clusters and what needs to be done to achieve that.
3. Support both 1. and 2.
We'd like to understand how much work these options require to be implemented.
We want a design doc out of this research.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
@ar3ndt We urgently need these fixes:
Instead of the condition

```yaml
when: ('"kafka" in upgrade_components') or upgrade_components|length == 0
```

the condition

```yaml
when: "'kafka' in upgrade_components or upgrade_components|length == 0"
```

seems to work fine. The first condition always returns true (thanks @to-bar for debugging this together). Moving this task back to TODO.
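The first condition is always true because the parenthesized part is a quoted string literal, not a membership test, and any non-empty string is truthy, so it short-circuits the `or`. Since Jinja2 (which evaluates Ansible `when` expressions) follows Python truthiness rules, the bug can be reproduced in plain Python:

```python
# The broken condition effectively puts a non-empty string literal
# on the left of `or`, so the whole expression is always truthy.
upgrade_components = ["rabbitmq"]  # kafka was NOT requested

broken = ('"kafka" in upgrade_components') or len(upgrade_components) == 0
fixed = "kafka" in upgrade_components or len(upgrade_components) == 0

print(bool(broken))  # True — the string literal short-circuits the `or` (the bug)
print(fixed)         # False — kafka was not requested, so it is correctly skipped
```

This is why the Kafka upgrade role ran even when `kafka` was absent from `upgrade_components`.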
1, 2, 3 Fixed. The user can run the upgrade command with the optional `--upgrade-components` argument providing the list of components to be upgraded. Running the command without this argument upgrades all components.
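The described behaviour can be sketched with standard argument parsing. This is illustrative only: `ALL_COMPONENTS` and the parsing details are assumptions for the sketch, not epicli's actual implementation.

```python
import argparse

# Hypothetical component list; epicli's real list differs.
ALL_COMPONENTS = ["kafka", "kubernetes", "rabbitmq", "postgresql"]

parser = argparse.ArgumentParser(prog="epicli upgrade")
parser.add_argument(
    "--upgrade-components",
    nargs="*",
    default=None,
    help="Components to upgrade; omit to upgrade everything.",
)

def components_to_upgrade(argv: list[str]) -> list[str]:
    """Return the components the upgrade should touch."""
    args = parser.parse_args(argv)
    # Omitting the flag means: upgrade all components.
    if not args.upgrade_components:
        return ALL_COMPONENTS
    return [c for c in args.upgrade_components if c in ALL_COMPONENTS]

print(components_to_upgrade([]))                                 # all components
print(components_to_upgrade(["--upgrade-components", "kafka"]))  # ['kafka']
```

With this shape, the Ansible side only needs the (corrected) `when: "'kafka' in upgrade_components or upgrade_components|length == 0"` guard per component role.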