decommission broker doesn't work
I installed Cruise Control and was trying to remove a broker, but the partitions were never moved to the other brokers. This is the command I ran:
`curl --globoff -X POST http://localhost:9090/kafkacruisecontrol/remove_broker?brokerid=1004&&dryrun=true&throttle_removed_broker=true&kafka_assigner=true&json=true`
I was checking Kafka Manager to see whether the partitions moved from that broker to the other brokers, but it never happened.
Looking at the code (included below), I wondered whether it is because the goals are empty, but the documentation doesn't mention anything about goals. I even tried passing some goals, and that didn't work either.
Can you please suggest the correct way to decommission a broker?
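One likely reason nothing moved: the URL in the command above is not quoted, so the shell treats `&&` as a command separator and `&` as "run in background". Only `brokerid=1004` actually reaches Cruise Control, and with no `dryrun` parameter the endpoint falls back to its default (normally `dryrun=true`), which only computes a proposal and never executes it. A corrected sketch, quoting the URL and keeping the same host, port, and parameters, with `dryrun=false` so the removal is actually executed:

```sh
# Quote the URL so the whole query string reaches curl instead of being
# split by the shell; note the single '&' between parameters.
curl --globoff -X POST \
  'http://localhost:9090/kafkacruisecontrol/remove_broker?brokerid=1004&dryrun=false&throttle_removed_broker=true&kafka_assigner=true&json=true'
```

With `dryrun=true` the endpoint only reports the proposed reassignment; partitions are moved only when `dryrun=false` is passed explicitly.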
```java
private boolean addOrRemoveBroker(HttpServletRequest request, HttpServletResponse response, EndPoint endPoint)
    throws Exception {
  List<Integer> brokerIds = new ArrayList<>();
  boolean dryrun;
  DataFrom dataFrom;
  boolean throttleAddedOrRemovedBrokers;
  List<String> goals;
  boolean json = wantJSON(request);
  try {
    String[] brokerIdsString = request.getParameter(BROKER_ID_PARAM).split(",");
    for (String brokerIdString : brokerIdsString) {
      brokerIds.add(Integer.parseInt(brokerIdString));
    }
    dryrun = getDryRun(request);
    goals = getGoals(request);
    dataFrom = getDataFrom(request);
    String throttleBrokerString = endPoint == EndPoint.ADD_BROKER
        ? request.getParameter(THROTTLE_ADDED_BROKER_PARAM)
        : request.getParameter(THROTTLE_REMOVED_BROKER_PARAM);
    throttleAddedOrRemovedBrokers = throttleBrokerString == null || Boolean.parseBoolean(throttleBrokerString);
  } catch (Exception e) {
    StringWriter sw = new StringWriter();
    e.printStackTrace(new PrintWriter(sw));
    setErrorResponse(response, sw.toString(), e.getMessage(), SC_BAD_REQUEST, json);
    // Close session
    return true;
  }
  GoalsAndRequirements goalsAndRequirements = getGoalsAndRequirements(request, response, goals, dataFrom, false);
  if (goalsAndRequirements == null) {
    return false;
  }
  // Get proposals asynchronously.
  GoalOptimizer.OptimizerResult optimizerResult;
  if (endPoint == EndPoint.ADD_BROKER) {
    optimizerResult =
        getAndMaybeReturnProgress(request, response,
                                  () -> _asyncKafkaCruiseControl.addBrokers(brokerIds,
                                                                            dryrun,
                                                                            throttleAddedOrRemovedBrokers,
                                                                            goalsAndRequirements.goals(),
                                                                            goalsAndRequirements.requirements()));
  } else {
    optimizerResult =
        getAndMaybeReturnProgress(request, response,
                                  () -> _asyncKafkaCruiseControl.decommissionBrokers(brokerIds,
                                                                                     dryrun,
                                                                                     throttleAddedOrRemovedBrokers,
                                                                                     goalsAndRequirements.goals(),
                                                                                     goalsAndRequirements.requirements()));
  }
  if (optimizerResult == null) {
    return false;
  }
  setResponseCode(response, SC_OK);
  OutputStream out = response.getOutputStream();
  out.write(KafkaCruiseControlServletUtils.getProposalSummary(optimizerResult)
                                          .getBytes(StandardCharsets.UTF_8));
  for (Map.Entry<Goal, ClusterModelStats> entry : optimizerResult.statsByGoalPriority().entrySet()) {
    Goal goal = entry.getKey();
    out.write(String.format("%n%nStats for goal %s%s:%n", goal.name(), goalResultDescription(goal, optimizerResult))
                    .getBytes(StandardCharsets.UTF_8));
    out.write(entry.getValue().toString().getBytes(StandardCharsets.UTF_8));
  }
  out.write(String.format("%nCluster load after %s broker %s:%n",
                          endPoint == EndPoint.ADD_BROKER ? "adding" : "removing", brokerIds)
                  .getBytes(StandardCharsets.UTF_8));
  out.write(optimizerResult.brokerStatsAfterOptimization().toString()
                           .getBytes(StandardCharsets.UTF_8));
  out.flush();
  return true;
}
```
Top GitHub Comments
I was checking Kafka Manager after this. Even though the executor logged
Starting 171 partition movements. (com.linkedin.kafka.cruisecontrol.executor.Executor)
[2018-02-27 20:46:57,088] INFO Executor will execute 20 task(s) (com.linkedin.kafka.cruisecontrol.executor.Executor)
and the movement tasks completed, I still see topics on broker 1004 in Kafka Manager.
I was expecting the number of partitions for broker 1004 to be zero in Kafka Manager. Why is that?
I also checked the data directory where the Kafka data is stored directly, and I don't see anything in it, which is what we want. I think this might be an issue with Kafka Manager not updating itself.
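If Kafka Manager is suspected of showing stale data, the current replica assignment can be read straight from the cluster metadata instead. A minimal sketch, assuming a Kafka release from the same era as this issue (where `kafka-topics.sh` still accepts `--zookeeper`; newer releases use `--bootstrap-server`) and ZooKeeper reachable at `localhost:2181`:

```sh
# Print every partition line that still mentions broker 1004 as leader,
# replica, or ISR member; no output means the broker holds no replicas.
bin/kafka-topics.sh --zookeeper localhost:2181 --describe | grep -w 1004
```

If this prints nothing while Kafka Manager still shows partitions on broker 1004, the stale view is on the Kafka Manager side.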
Thanks for the help. The only question I had was whether decommissioning works together with self-healing or not.
@becketqin I have seen the error only a few times, but the ZooKeeper path /admin/reassign_partitions never gets cleared; it remains there forever.
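For anyone hitting the same symptom, the stuck znode can be inspected with the ZooKeeper shell that ships with Kafka. This only confirms the symptom rather than fixing it, and `localhost:2181` is a placeholder for your own ZooKeeper connect string:

```sh
# Show the reassignment the controller still considers in progress;
# an absent or empty node means no reassignment is pending.
bin/zookeeper-shell.sh localhost:2181 get /admin/reassign_partitions
```

Deleting that node by hand is sometimes used as a last-resort workaround for a permanently stuck reassignment, but that is outside anything Cruise Control itself does and should be treated with care.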