Cannot set topic retention policy (error 500)
Describe the bug
I have a namespace with a 4-day retention policy:
pulsar-admin namespaces create tenant/ns
pulsar-admin namespaces set-retention --size -1 --time 4d tenant/ns
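The policy can be read back to confirm it took effect; a quick sanity check (the JSON shape below is the standard RetentionPolicies serialization, shown from memory rather than captured output):

# Read back the namespace-level retention policy.
pulsar-admin namespaces get-retention tenant/ns
# Expected output; 4 days = 5760 minutes, and size -1 means unlimited:
# {
#   "retentionTimeInMinutes" : 5760,
#   "retentionSizeInMB" : -1
# }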
I also have topic-level policies enabled in broker.conf:
# Enable or disable system topic
systemTopicEnabled=true
# The schema compatibility strategy to use for system topics
systemTopicSchemaCompatibilityStrategy=ALWAYS_COMPATIBLE
# Enable or disable topic level policies, topic level policies depends on the system topic
# Please enable the system topic first.
topicLevelPoliciesEnabled=true
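Topic-level policies are stored in a per-namespace system topic (__change_events), so one sanity check, sketched below, is to confirm that topic is healthy in the affected namespace; if the broker cannot create or serve it, topic-policy writes will fail:

# Topic-level policies live in a per-namespace system topic; once topic
# policies have been used, it should show up in the namespace's topic list:
pulsar-admin topics list tenant/ns
# expect an entry like: persistent://tenant/ns/__change_events
pulsar-admin topics stats persistent://tenant/ns/__change_events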
When I attempt to turn off the retention policy for a single topic, I get either a 504 (Bad Gateway) error when going through pulsar-proxy, or a 500 error when issuing the command directly on the broker with pulsar-admin:
./pulsar-admin topics set-retention -s 0 -t 0 tenant/ns/mytopic
00:20:36.472 [AsyncHttpClient-7-1] WARN org.apache.pulsar.client.admin.internal.BaseResource - [http://localhost:8080/admin/v2/persistent/tenant/ns/mytopic/retention] Failed to perform http post request: javax.ws.rs.InternalServerErrorException: HTTP 500 Internal Server Error
HTTP 500 Internal Server Error
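One way to separate the proxy's 504 from the broker's 500 is to POST the policy straight to a broker's admin endpoint; a sketch, where the path is taken from the log line above, the JSON body mirrors the RetentionPolicies fields, and <broker-host> is a placeholder:

# POST the retention policy directly to a broker, bypassing pulsar-proxy.
# <broker-host> is a placeholder for a broker's advertised address.
curl -v -X POST "http://<broker-host>:8080/admin/v2/persistent/tenant/ns/mytopic/retention" \
  -H 'Content-Type: application/json' \
  -d '{"retentionTimeInMinutes": 0, "retentionSizeInMB": 0}'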
The result is the same when attempting to change the size or time to any value, not just 0. There is nothing in the broker or pulsar-proxy logs to indicate the nature of the failure. get-retention for the topic appears to work (it returns null), and delete-retention appears to succeed, although since I can never set a policy for the topic in the first place, I cannot be sure it actually does anything.
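For reference, the read path described above looks like this (a transcript sketch matching the observed behavior, not captured output):

./pulsar-admin topics get-retention tenant/ns/mytopic
# prints: null  (no topic-level retention policy is set)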
Expected behavior
The topic-level retention policy is applied.
Desktop
- OS: n/a
Additional context
The build is at the current HEAD of the 2.7 branch, i.e. 2.7.5-SNAPSHOT.
Comments
There are multiple brokers, and the requests go through the proxy and never succeed; does it make sense that all of the brokers would be overloaded all of the time?
This appears to have been resolved by an update.