Setting SERVER_SERVLET_CONTEXT_PATH causes container to crash.
Describe the bug
Setting SERVER_SERVLET_CONTEXT_PATH causes the container to crash.
Set up
We are attempting to use an Amazon ALB with ingress fanout in Kubernetes to access Kafka UI at a URL like https://foo.bar/kafka-ui, but setting SERVER_SERVLET_CONTEXT_PATH to "/kafka-ui" causes the container to restart continuously.
Kafka UI version: 0.3.1
Kubernetes version: 1.20
Steps to Reproduce
Steps to reproduce the behavior:
- Deploy a ConfigMap containing the Kafka cluster configuration into Kubernetes, ensuring SERVER_SERVLET_CONTEXT_PATH is set to "/kafka-ui" (see the sketch after these steps).
- Deploy the Kafka UI Helm chart to Kubernetes, using existingConfigMap to point to the ConfigMap deployed in step 1.
- The pod crashes with the exception shown in the log excerpt below.
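For reference, here is a minimal sketch of the kind of ConfigMap described in step 1; the metadata name, cluster name, and bootstrap servers are illustrative placeholders rather than values from the original report:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui-config                             # hypothetical name, referenced via existingConfigMap in the Helm values
data:
  KAFKA_CLUSTERS_0_NAME: "local"                    # illustrative cluster name
  KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka:9092"   # illustrative bootstrap servers
  SERVER_SERVLET_CONTEXT_PATH: "/kafka-ui"          # the setting that triggers the crash loop
```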
Expected behavior
The container should start and serve the UI correctly with SERVER_SERVLET_CONTEXT_PATH set.
Log
Excerpt from the debug log showing that the shutdown hook is being invoked for an unknown reason:
2021-12-29 18:45:37,853 DEBUG [SpringApplicationShutdownHook] o.s.b.a.ApplicationAvailabilityBean: Application availability state ReadinessState changed from ACCEPTING_TRAFFIC to REFUSING_TRAFFIC
2021-12-29 18:45:37,861 DEBUG [SpringApplicationShutdownHook] o.s.b.w.r.c.AnnotationConfigReactiveWebServerApplicationContext: Closing org.springframework.boot.web.reactive.context.AnnotationConfigReactiveWebServerApplicationContext@4466af20, started on Wed Dec 29 18:43:11 GMT 2021
2021-12-29 18:45:37,862 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Stopping beans in phase 2147483647
2021-12-29 18:45:37,863 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Bean 'webServerGracefulShutdown' completed its stop procedure
2021-12-29 18:45:37,863 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Stopping beans in phase 2147483646
2021-12-29 18:45:37,867 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Bean 'webServerStartStop' completed its stop procedure
2021-12-29 18:45:37,867 DEBUG [SpringApplicationShutdownHook] o.s.s.c.ThreadPoolTaskScheduler: Shutting down ExecutorService 'taskScheduler'
2021-12-29 18:45:37,867 DEBUG [SpringApplicationShutdownHook] o.s.j.e.MBeanExporter: Unregistering JMX-exposed beans on shutdown
2021-12-29 18:45:37,871 DEBUG [SpringApplicationShutdownHook] o.a.k.c.a.KafkaAdminClient: [AdminClient clientId=adminclient-1] Initiating close operation.
2021-12-29 18:45:37,871 DEBUG [SpringApplicationShutdownHook] o.a.k.c.a.KafkaAdminClient: [AdminClient clientId=adminclient-1] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms.
2021-12-29 18:45:37,875 ERROR [scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler: Unexpected error occurred in scheduled task
reactor.core.Exceptions$ReactiveException: java.lang.InterruptedException
    at reactor.core.Exceptions.propagate(Exceptions.java:392)
    at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:91)
    at reactor.core.publisher.Mono.block(Mono.java:1706)
    at com.provectus.kafka.ui.service.ClustersMetricsScheduler.updateMetrics(ClustersMetricsScheduler.java:30)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
    at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.lang.InterruptedException: null
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1040)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
    at java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
    at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:87)
    ... 14 common frames omitted
Additional Information
Removing SERVER_SERVLET_CONTEXT_PATH from the configuration lets the pod come up, but since our ingress is defined with a fanout URL, the page returned when accessing the UI is always blank. Inspecting the page shows that all of the assets requested by the UI return 404, because the fanout path is not respected by the underlying pod.
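To make the fanout setup concrete, here is a rough sketch of the kind of ALB ingress rule involved; the host, resource names, and port are placeholders, not values from the report. The ALB forwards requests matching the /kafka-ui prefix to the Kafka UI service without rewriting the path, so unless the application itself serves under that prefix, asset requests such as /kafka-ui/static/... return 404:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kafka-ui                        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - host: foo.bar                     # placeholder host from the description above
      http:
        paths:
          - path: /kafka-ui             # fanout prefix; the ALB does not strip it before forwarding
            pathType: Prefix
            backend:
              service:
                name: kafka-ui          # hypothetical service name
                port:
                  number: 80
```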
Issue Analytics
- State: closed
- Created: 2 years ago
- Comments: 18 (9 by maintainers)
It works on our end. This is the Helm config we used:
configmap:
ingress:
This worked perfectly fine on http://xxx.yyy/kafka-ui/. I'm closing this since it's clearly not a bug. Feel free to join us in discussions or on Discord!
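As a rough sketch, with key names and structure assumed from the kafka-ui Helm chart rather than taken from the comment above, a custom context path can be wired up along these lines:

```yaml
# Hypothetical values.yaml fragment; not the maintainer's original config.
envs:
  config:
    SERVER_SERVLET_CONTEXT_PATH: "/kafka-ui"   # make the app serve under the prefix
ingress:
  enabled: true
  host: "xxx.yyy"                              # placeholder host from the comment above
  path: "/kafka-ui"                            # route the prefix to the Kafka UI service
```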
That’s really weird ☹️ We’ll try to get a working env with a custom context path and I’ll get back to you.