Topic Operator automatically regenerates topic after deletion
Describe the bug
I create a Kafka cluster with the Topic Operator, then create 'my-topic' using a KafkaTopic custom resource. But when I delete the topic with the CLI, it is regenerated.
To Reproduce
Steps to reproduce the behavior:
- Create Custom Resource 'Kafka'
- Create Custom Resource 'KafkaTopic'
- Go to a Zookeeper pod
- Run command '$ bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic'
- Kafka topic 'my-topic' is automatically regenerated (it can be confirmed with the describe command below)
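For reference, the recreated topic can be inspected from the same pod with '$ bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic'; given the behavior reported under "Additional context" below, it comes back with default settings (1 partition, 1 replica).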
Expected behavior
The topic is deleted and is not regenerated.
Environment (please complete the following information):
- Strimzi version: 0.25.0
- Installation method: [e.g. YAML files, Helm chart, OperatorHub.io]
- Kubernetes cluster: OpenShift 4.9
- Infrastructure: Baremetal
YAML files and logs
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    authorization:
      type: simple
    config:
      inter.broker.protocol.version: '2.8'
      log.message.format.version: '2.8'
      transaction.state.log.min.isr: 2
      replica.fetch.max.bytes: 41943040
      max.message.bytes: 10485760
      offsets.topic.replication.factor: 3
    listeners:
      - authentication:
          type: scram-sha-512
        name: plain
        port: 9092
        tls: false
        type: internal
      - authentication:
          type: scram-sha-512
        name: tls
        port: 9093
        tls: true
        type: internal
      - authentication:
          type: scram-sha-512
        name: external
        port: 9094
        tls: true
        type: route
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          key: kafka-metrics-config.yml
          name: kafka-metrics
    replicas: 3
    storage:
      class: nfs
      deleteClaim: false
      size: 5Gi
      type: persistent-claim
    version: 2.8.0
  kafkaExporter:
    groupRegex: .*
    topicRegex: .*
  zookeeper:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          key: zookeeper-metrics-config.yml
          name: kafka-metrics
    replicas: 3
    storage:
      class: nfs
      deleteClaim: false
      size: 5Gi
      type: persistent-claim
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  config: {}
  partitions: 10
  replicas: 3
  topicName: my-topic
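(For completeness: both manifests can be applied in the usual way, e.g. '$ kubectl apply -f kafka.yaml' and '$ kubectl apply -f topic.yaml'; the file names are placeholders, and on OpenShift 'oc apply' works the same.)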
Additional context
After regeneration, both partitions and replicas of the topic are 1.
Top GitHub Comments
Yes, it might be an option.
I think we have to update the TopicOperator code to fix this issue. When I read the code of Kafka's delete-topic function, I found that it is asynchronous and returns a future: the call reports success, but the broker is still deleting the topic in the background. If we call any topic-related function while the deletion is in progress, such as listing topics or fetching a topic's config, the Kafka cluster auto-creates the topic again with the default replication settings. I don't know exactly what the Strimzi code does, but I suspect this is the reason. It would also explain why creating and then deleting a topic with the TopicOperator works fine, but creating a topic, pushing many records to it for a while, and then deleting it through the TopicOperator can hit this issue.
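For illustration, here is a minimal sketch of the race described above. This is not the actual Strimzi/TopicOperator code, just an example against Kafka's public Admin API; the class name and topic name are made up. deleteTopics() completes its futures when the controller accepts the deletion, and a metadata request issued while deletion is still in flight can trigger auto-creation on brokers running with auto.create.topics.enable=true:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DeleteTopicsResult;

public class SafeTopicDelete {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // deleteTopics() only initiates deletion: the returned futures
            // complete when the controller accepts the request, not when the
            // topic data is actually gone from every broker.
            DeleteTopicsResult result = admin.deleteTopics(Collections.singleton("my-topic"));

            // Block on the future before issuing any further metadata calls
            // (listTopics/describeTopics/describeConfigs). A metadata request
            // that races the in-flight deletion can make a broker with
            // auto.create.topics.enable=true recreate the topic with default
            // partitions/replication, which would match the 1-partition,
            // 1-replica topic seen after regeneration.
            result.all().get();
        }
    }
}

Even after all().get() returns, the Kafka javadoc notes that it may take several seconds for all brokers to become aware that the topic is gone, so a grace period before listing or describing topics again is still prudent.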
@chaehni I can confirm that.
I can reproduce the issue you describe consistently when running a load test that creates a bunch of test topics (e.g. 20). After a successful test run, I do a bulk topic deletion to get rid of these test topics and I hit the issue: the TO recreates them almost immediately with the same configuration but empty (the topicId is different).
I guess we are triggering some edge case in the reconciliation logic here, which needs to be investigated further. At least we seem to have a reproducer.
Possible workaround: if you look at the TO logs, you may find InvalidStateStoreException warnings. In my case, I found that simply restarting the TO pod before the bulk topic deletion fixes the issue.
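For reference, one way to restart the TO is to delete the entity operator pod and let its Deployment recreate it. Assuming the default Strimzi pod labels (the selector here is an assumption, so verify it against your cluster first), something like '$ kubectl delete pod -l strimzi.io/name=my-cluster-entity-operator' should do it.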