Issue with v0.1.4
Hello, We’re on v0.1.4 and just spun up a Cruise Control machine, and we see this warning all over the logs:
[2018-09-13 18:31:36,539] WARN Goal violation detector received exception (com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector)
com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: Insufficient healthy cluster capacity for resource:disk existing cluster utilization 2215449.25 allowed capacity 720000.0
	at com.linkedin.kafka.cruisecontrol.analyzer.goals.CapacityGoal.initGoalState(CapacityGoal.java:173)
	at com.linkedin.kafka.cruisecontrol.analyzer.goals.AbstractGoal.optimize(AbstractGoal.java:81)
	at com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector.optimizeForGoal(GoalViolationDetector.java:172)
	at com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector.run(GoalViolationDetector.java:125)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
In my capacity.json, the disk capacity is set as “DISK”: “500000”, so I’m not sure where it is getting the value 720000.0 from.
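For context, the default file-based capacity resolver reads per-broker capacities from capacity.json. A minimal file looks roughly like the sketch below; the numbers are illustrative, and the brokerId of -1 conventionally supplies the default entry applied to brokers without their own entry:

```json
{
  "brokerCapacities": [
    {
      "brokerId": "-1",
      "capacity": {
        "DISK": "500000",
        "CPU": "100",
        "NW_IN": "10000",
        "NW_OUT": "10000"
      },
      "doc": "Default capacity: DISK in MB, CPU in percentage, network in KB/s."
    }
  ]
}
```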
Could you please suggest @becketqin or @efeg ?
Issue Analytics
- Created: 5 years ago
- Comments: 11 (3 by maintainers)
Top GitHub Comments
@qz-fordham Unless users implement their own pluggable capacity resolver, the default capacity resolver retrieves broker capacity information from a file. This file is expected to be populated by users, and it should reflect the real capacity of the brokers. If users forget to populate this file, or use incorrect capacity information while doing so, CC gets unrealistic capacity information. The bytes stored in Kafka logs, on the other hand, come from Kafka metrics; hence, they represent the actual, current data.
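To make the comparison concrete, here is a minimal sketch of the kind of check behind the warning: observed disk utilization (from Kafka metrics) is compared against an allowed capacity derived from the user-supplied capacity.json values, scaled by a utilization threshold. The function names, the 0.8 threshold, and the two-broker figures are illustrative assumptions, not Cruise Control’s actual code:

```python
def allowed_capacity(broker_capacities, threshold=0.8):
    """Total usable capacity: sum of the per-broker capacities from
    capacity.json, scaled by an illustrative utilization threshold."""
    return sum(broker_capacities) * threshold


def violates_capacity_goal(utilization, broker_capacities, threshold=0.8):
    """True when the metrics-reported utilization exceeds what the
    configured capacities allow -- the condition behind the warning."""
    return utilization > allowed_capacity(broker_capacities, threshold)


# Using the utilization figure from the log above with two hypothetical
# brokers configured at 500000 each:
print(violates_capacity_goal(2215449.25, [500000, 500000]))  # True
```

If the configured capacities are far below the brokers’ real disk sizes, this check fires even though the cluster is actually healthy, which matches the symptom reported here.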
In this example, it looks like the user indicated the following:
Hope this clarifies the root cause.
Hi @efeg Thanks a lot for the detailed response. Here are my inputs on the things you asked me to check:
We noticed that the values of the disk capacities in ‘capacity.json’ were incorrect. We’re in the process of correcting those values in order to avoid this warning.
Once we do that, I’ll confirm here if the issue is fixed or not.