
[Bug] PUT /admin/loggers/root returned 404

Describe the bug

After upgrading to v0.20 we see a lot of errors in the Strimzi cluster operator like:

2020-12-04 11:45:04 DEBUG KafkaConnectApiImpl:347 - Making PUT request to /admin/loggers/root with body {"level":"${connect.root.logger.level}"}
2020-12-04 11:45:04 DEBUG KafkaConnectApiImpl:359 - Logger root did not update to level ${connect.root.logger.level} (http code 404)
2020-12-04 11:45:04 ERROR AbstractOperator:238 - Reconciliation #81(timer) KafkaConnect(cxp/cxp-connect): createOrUpdate failed
io.strimzi.operator.cluster.operator.assembly.ConnectRestException: PUT /admin/loggers/root returned 404 (Not Found): Unexpected status code

It looks like this happens when .spec.logging.type is set to external. If I set .spec.logging.type: inline and specify connect.root.logger.level: INFO, the exception disappears.
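
For reference, a minimal sketch of the inline variant described here, assuming Strimzi's InlineLogging schema where individual levels go under a loggers map; everything else in the resource stays unchanged:

spec:
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"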

To Reproduce

  1. Create a KafkaConnect resource with .spec.logging.type: external.
  2. Observe the exception in the Strimzi cluster operator logs (the failing REST call can also be replayed by hand; see the sketch below).
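
The call the operator makes targets Kafka Connect's dynamic-logging REST API (the /admin/loggers endpoint from KIP-495, available since Kafka 2.4). A sketch of replaying it by hand, assuming it is run from a pod that can reach the service URL reported in the KafkaConnect status further down; the placeholder body is copied verbatim from the operator's debug log:

CONNECT=http://cxp-connect-connect-api.cxp.svc:8083

# List the loggers Connect currently knows about and their levels
curl -s "$CONNECT/admin/loggers"

# Replaying the operator's request body reproduces the 404;
# single quotes keep the ${...} placeholder literal
curl -s -o /dev/null -w '%{http_code}\n' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"level":"${connect.root.logger.level}"}' \
  "$CONNECT/admin/loggers/root"

# A concrete level should be accepted instead
curl -s -o /dev/null -w '%{http_code}\n' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"level":"INFO"}' \
  "$CONNECT/admin/loggers/root"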

Expected behavior

No errors happen when .spec.logging.type is external.

Environment:

  • Strimzi version: 0.20
  • Installation method: YAML files
  • Kubernetes cluster: Kubernetes 1.18

YAML files and logs

KafkaConnect custom resource:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  annotations:
    strimzi.io/use-connector-resources: "true"
  labels:
    app: cxp-connect
  managedFields:
  - apiVersion: kafka.strimzi.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:logging:
          f:name: {}
          f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-12-04T10:51:30Z"
  - apiVersion: kafka.strimzi.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:jvmOptions:
          f:-XX:
            f:ExplicitGCInvokesConcurrent: {}
            f:InitiatingHeapOccupancyPercent: {}
            f:MaxGCPauseMillis: {}
            f:UseContainerSupport: {}
            f:UseG1GC: {}
            f:UseStringDeduplication: {}
      f:status:
        f:conditions: {}
        f:labelSelector: {}
        f:observedGeneration: {}
    manager: okhttp
    operation: Update
    time: "2020-12-04T10:51:46Z"
  name: cxp-connect
  namespace: cxp
  resourceVersion: "223917255"
  selfLink: /apis/kafka.strimzi.io/v1beta1/namespaces/cxp/kafkaconnects/cxp-connect
  uid: 68ae7a55-b94d-4e93-8bd5-a7818bdfcc0d
spec:
  bootstrapServers: REDACTED
  config:
    config.storage.replication.factor: 3
    config.storage.topic: cxp-connect-configs
    connector.client.config.override.policy: All
    group.id: cxp-connect
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable: false
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter.schemas.enable: false
    key.converter: org.apache.kafka.connect.json.JsonConverter
    offset.flush.interval.ms: 20000
    offset.flush.timeout.ms: 50000
    offset.storage.partitions: 3
    offset.storage.replication.factor: 3
    offset.storage.topic: cxp-connect-offsets
    replication.factor: 3
    status.storage.partitions: 3
    status.storage.replication.factor: 3
    status.storage.topic: cxp-connect-status
    task.shutdown.graceful.timeout.ms: 60000
    value.converter: org.apache.kafka.connect.json.JsonConverter
  image: strimzi/kafka:0.20.0-kafka-2.5.0
  jvmOptions:
    -XX:
      ExplicitGCInvokesConcurrent: true
      InitiatingHeapOccupancyPercent: 35
      MaxGCPauseMillis: 20
      MaxRAMPercentage: "75.0"
      UseContainerSupport: true
      UseG1GC: true
      UseStringDeduplication: true
    -server: true
    javaSystemProperties:
    - name: java.awt.headless
      value: "true"
  livenessProbe:
    initialDelaySeconds: 10
    timeoutSeconds: 30
  logging:
    name: cxp-connect-log4j-json
    type: external
  metrics:
    lowercaseOutputLabelNames: true
    lowercaseOutputName: true
    rules:
    - help: Kafka $1 JMX metric start time seconds
      labels:
        clientId: $2
      name: kafka_$1_start_time_seconds
      pattern: kafka.(.+)<type=app-info, client-id=(.+)><>start-time-ms
      type: GAUGE
      valueFactor: 0.001
    - help: Kafka $1 JMX metric info version and commit-id
      labels:
        $3: $4
        clientId: $2
      name: kafka_$1_$3_info
      pattern: 'kafka.(.+)<type=app-info, client-id=(.+)><>(commit-id|version): (.+)'
      type: GAUGE
      value: 1
    - help: Kafka $1 JMX metric type $2
      labels:
        clientId: $3
        partition: $5
        topic: $4
      name: kafka_$2_$6
      pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+), partition=(.+)><>(.+-total|compression-rate|.+-avg|.+-replica|.+-lag|.+-lead)
      type: GAUGE
    - help: Kafka $1 JMX metric type $2
      labels:
        clientId: $3
        topic: $4
      name: kafka_$2_$5
      pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+)><>(.+-total|compression-rate|.+-avg)
      type: GAUGE
    - help: Kafka $1 JMX metric type $2
      labels:
        clientId: $3
        nodeId: $4
      name: kafka_$2_$5
      pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), node-id=(.+)><>(.+-total|.+-avg)
      type: UNTYPED
    - help: Kafka $1 JMX metric type $2
      labels:
        clientId: $3
      name: kafka_$2_$4
      pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.*)><>(.+-total|.+-avg|.+-bytes|.+-count|.+-ratio|.+-age|.+-flight|.+-threads|.+-connectors|.+-tasks|.+-ago)
      type: GAUGE
    - help: Kafka Connect JMX Connector status
      labels:
        connector: $1
        status: $3
        task: $2
      name: kafka_connect_connector_status
      pattern: 'kafka.connect<type=connector-task-metrics, connector=(.+), task=(.+)><>status:
        ([a-z-]+)'
      type: GAUGE
      value: 1
    - help: Kafka Connect JMX metric type $1
      labels:
        connector: $2
        task: $3
      name: kafka_connect_$1_$4
      pattern: kafka.connect<type=(.+)-metrics, connector=(.+), task=(.+)><>(.+-total|.+-count|.+-ms|.+-ratio|.+-avg|.+-failures|.+-requests|.+-timestamp|.+-logged|.+-errors|.+-retries|.+-skipped)
      type: GAUGE
    - help: Kafka Connect JMX metric $1
      labels:
        connector: $1
      name: kafka_connect_worker_$2
      pattern: kafka.connect<type=connect-worker-metrics, connector=(.+)><>([a-z-]+)
      type: GAUGE
    - help: Kafka Connect JMX metric worker
      name: kafka_connect_worker_$1
      pattern: kafka.connect<type=connect-worker-metrics><>([a-z-]+)
      type: GAUGE
    - help: Kafka Connect JMX metric rebalance information
      name: kafka_connect_worker_rebalance_$1
      pattern: kafka.connect<type=connect-worker-rebalance-metrics><>([a-z-]+)
      type: GAUGE
    - labels:
        context: $2
        name: $3
        plugin: $1
      name: debezium_metrics_$4
      pattern: debezium.([^:]+)<type=connector-metrics, context=([^,]+), server=([^>]+)><>((?!RowsScanned)[^:]+)
  readinessProbe:
    initialDelaySeconds: 10
    timeoutSeconds: 30
  replicas: 1
  resources:
    limits:
      cpu: 2000m
      memory: 3Gi
    requests:
      cpu: 1000m
      memory: 3Gi
  template:
    pod:
      metadata:
        labels:
          kafka-cluster: kafka1
      terminationGracePeriodSeconds: 120
  tls:
    trustedCertificates: []
  version: 2.5.0
status:
  conditions:
  - lastTransitionTime: "2020-12-04T10:51:46.897533Z"
    message: 'PUT /admin/loggers/root returned 404 (Not Found): Unexpected status
      code'
    reason: ConnectRestException
    status: "True"
    type: NotReady
  labelSelector: strimzi.io/cluster=cxp-connect,strimzi.io/name=cxp-connect-connect,strimzi.io/kind=KafkaConnect
  observedGeneration: 44
  replicas: 1
  url: http://cxp-connect-connect-api.cxp.svc:8083
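
As a side note, the NotReady condition recorded in this status can be pulled out directly with kubectl; a sketch using the resource name and namespace from this report:

# Print the message of the NotReady condition on the KafkaConnect resource
kubectl get kafkaconnect cxp-connect -n cxp \
  -o jsonpath='{.status.conditions[?(@.type=="NotReady")].message}'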

The logger config map:

apiVersion: v1
data:
  log4j.properties: |
    log4j.rootCategory = WARN, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=net.logstash.log4j.JSONEventLayoutV1
    log4j.appender.stdout.layout.includedFields=location
    log4j.logger.org.reflections=ERROR
    log4j.logger.org.apache.zookeeper=ERROR
    log4j.logger.org.I0Itec.zkclient=ERROR
    log4j.logger.org.reflections=ERROR
    log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR
    log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR
kind: ConfigMap
metadata:
  annotations:
  creationTimestamp: "2020-09-25T09:28:13Z"
  labels:
    app: cxp-connect
    cleanup: "true"
    deployed-with: skaffold
    docker-api-version: "1.40"
    profiles: k8s-usw2-dev-skaffold
    skaffold-builder: local
    skaffold-deployer: kustomize
    skaffold-tag-policy: envTemplateTagger
    tail: "true"
  name: cxp-connect-log4j-json
  namespace: cxp

Strimzi cluster operator relevant logs:

2020-12-04 11:45:04 DEBUG KafkaConnectApiImpl:347 - Making PUT request to /admin/loggers/root with body {"level":"${connect.root.logger.level}"}
2020-12-04 11:45:04 DEBUG KafkaConnectApiImpl:359 - Logger root did not update to level ${connect.root.logger.level} (http code 404)
2020-12-04 11:45:04 ERROR AbstractOperator:238 - Reconciliation #81(timer) KafkaConnect(cxp/cxp-connect): createOrUpdate failed
io.strimzi.operator.cluster.operator.assembly.ConnectRestException: PUT /admin/loggers/root returned 404 (Not Found): Unexpected status code
	at io.strimzi.operator.cluster.operator.assembly.KafkaConnectApiImpl.lambda$updateConnectorLogger$24(KafkaConnectApiImpl.java:360) ~[io.strimzi.cluster-operator-0.20.0.jar:0.20.0]
	at io.vertx.core.http.impl.HttpClientRequestImpl.handleResponse(HttpClientRequestImpl.java:390) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.HttpClientRequestBase.checkHandleResponse(HttpClientRequestBase.java:167) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.HttpClientRequestBase.handleResponse(HttpClientRequestBase.java:148) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleResponseBegin(Http1xClientConnection.java:623) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleHttpMessage(Http1xClientConnection.java:593) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleMessage(Http1xClientConnection.java:575) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366) [io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:43) [io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:229) [io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:173) [io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) [io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:425) [io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) [io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [io.netty.netty-handler-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
2020-12-04 11:45:04 DEBUG StatusDiff:39 - Ignoring Status diff {"op":"replace","path":"/conditions/0/lastTransitionTime","value":"2020-12-04T11:45:04.889083Z"}
2020-12-04 11:45:04 DEBUG AbstractOperator:318 - Reconciliation #81(timer) KafkaConnect(cxp/cxp-connect): Status did not change
2020-12-04 11:45:04 DEBUG AbstractOperator:502 - Reconciliation #81(timer) KafkaConnect(cxp/cxp-connect): Updated metric strimzi.resource.state[tag(kind=KafkaConnect),tag(name=cxp-connect),tag(resource-namespace=cxp)] = 0
2020-12-04 11:45:04 WARN  AbstractOperator:470 - Reconciliation #81(timer) KafkaConnect(cxp/cxp-connect): Failed to reconcile
io.strimzi.operator.cluster.operator.assembly.ConnectRestException: PUT /admin/loggers/root returned 404 (Not Found): Unexpected status code
	at io.strimzi.operator.cluster.operator.assembly.KafkaConnectApiImpl.lambda$updateConnectorLogger$24(KafkaConnectApiImpl.java:360) ~[io.strimzi.cluster-operator-0.20.0.jar:0.20.0]
	at io.vertx.core.http.impl.HttpClientRequestImpl.handleResponse(HttpClientRequestImpl.java:390) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.HttpClientRequestBase.checkHandleResponse(HttpClientRequestBase.java:167) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.HttpClientRequestBase.handleResponse(HttpClientRequestBase.java:148) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleResponseBegin(Http1xClientConnection.java:623) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleHttpMessage(Http1xClientConnection.java:593) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.http.impl.Http1xClientConnection.handleMessage(Http1xClientConnection.java:575) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:43) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:229) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:173) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) ~[io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:425) ~[io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[io.netty.netty-codec-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) ~[io.netty.netty-handler-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [io.netty.netty-transport-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.50.Final.jar:4.1.50.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]

Additional context

Might be caused by https://github.com/strimzi/strimzi-kafka-operator/pull/3501/

Issue Analytics

  • State: closed
  • Created: Dec 4, 2020
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
scholzj commented, Dec 4, 2020

I tried it and I can reproduce it. I think the problem in your case is in this:

log4j.rootCategory = WARN, stdout

When you change it to:

log4j.rootLogger = WARN, stdout

It starts working fine. TBH, I'm not sure why that is the case, so I will let @sknot-rh look at this and say whether this is a bug or not.
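
For reference, a sketch of the reporter's log4j.properties from the ConfigMap above with only that one key renamed as suggested; the duplicated org.reflections line is collapsed and everything else is unchanged:

log4j.rootLogger = WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.stdout.layout.includedFields=location
log4j.logger.org.reflections=ERROR
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR
log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR

Log4j 1.x itself treats rootCategory and rootLogger as synonyms, so the difference appears to lie in how the operator derives the root level from the external config rather than in log4j; as the comment above notes, the thread leaves the "why" open.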

0 reactions
ramanenka commented, Dec 4, 2020

@scholzj added to the description.
