S3 Sink connector, error during offset commit
See original GitHub issue.

What version of the Stream Reactor are you reporting this issue for?
kafka-connect-aws-s3-3.0.1-2.5.0-all.tar.gz
What is the expected behaviour?
Records to be written to AWS S3.
What was observed?
WARN [sink-s3|task-0] WorkerSinkTask{id=sink-s3-0} Offset commit failed during close (org.apache.kafka.connect.runtime.WorkerSinkTask:390)
ERROR [sink-s3|task-0] WorkerSinkTask{id=sink-s3-0} Commit of offsets threw an unexpected exception for sequence number 1: null (org.apache.kafka.connect.runtime.WorkerSinkTask:267)
java.lang.UnsupportedOperationException: empty.maxBy
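The `empty.maxBy` message comes from the Scala standard library: calling `maxBy` on an empty collection throws `UnsupportedOperationException("empty.maxBy")`, which suggests the sink is taking a maximum over an empty set of offsets at commit time. As an illustration of the failure mode (the offset records below are made up, not the connector's actual data structures), Python's `max` fails the same way on an empty sequence:

```python
# Illustrative only: mimics the "max over an empty collection" failure
# behind Scala's `empty.maxBy`. The offset records here are invented.
def latest_offset(offsets):
    # Raises ValueError on an empty list, just as Scala's maxBy
    # raises UnsupportedOperationException("empty.maxBy").
    return max(offsets, key=lambda o: o["offset"])

def latest_offset_safe(offsets):
    # Defensive variant: return None when there is nothing to commit
    # (Scala 2.13 offers maxByOption for exactly this).
    return max(offsets, key=lambda o: o["offset"]) if offsets else None

records = [{"partition": 0, "offset": 41}, {"partition": 0, "offset": 42}]
print(latest_offset(records)["offset"])  # -> 42
print(latest_offset_safe([]))            # -> None, no exception
```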
What is your Connect cluster configuration (connect-avro-distributed.properties)?
- name: CONNECT_REST_PORT
value: "8083"
- name: CONNECT_BOOTSTRAP_SERVERS
value: "blockchain-kafka-kafka-0.default.svc.cluster.local:9092"
- name: CONNECT_GROUP_ID
value: "kafka-connect"
- name: CONNECT_CONFIG_STORAGE_TOPIC
value: "_connect-configs"
- name: CONNECT_OFFSET_STORAGE_TOPIC
value: "_connect-offsets"
- name: CONNECT_STATUS_STORAGE_TOPIC
value: "_connect-status"
- name: CONNECT_KEY_CONVERTER
value: "org.apache.kafka.connect.storage.StringConverter"
- name: CONNECT_VALUE_CONVERTER
value: "org.apache.kafka.connect.json.JsonConverter"
- name: CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE
value: "false"
- name: CONNECT_REST_ADVERTISED_HOST_NAME
value: "kafka-connect"
- name: CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN
value: "[%d] %p %X{connector.context}%m (%c:%L)%n"
- name: CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
value: "1"
- name: CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
value: "1"
- name: CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
value: "1"
- name: AWS_ACCESS_KEY_ID
What is your connector properties configuration (my-connector.properties)?
{
  "connector.class": "io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector",
  "topics": "test_yordan_kafka_connect_single_partition",
  "tasks.max": "1",
  "aws.auth.mode": "Default",
  "connect.s3.kcql": "insert into yordan-flink-savepoints-test-hz-stage:test-bucket select * from test_yordan_kafka_connect_single_partition `json` WITH_FLUSH_COUNT = 5000",
  "connect.s3.aws.region": "eu-central-1",
  "timezone": "UTC",
  "errors.log.enable": true
}
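The submitted config listed `"topics"` twice; `json.loads` silently keeps the last occurrence, so a duplicated key can hide a typo when posting connector configs. A small checker for this (a hypothetical helper, not part of Kafka Connect):

```python
import json

def find_duplicate_keys(raw: str) -> list:
    """Return JSON object keys that appear more than once at any level."""
    dupes = []

    def hook(pairs):
        seen = set()
        for key, _ in pairs:
            if key in seen:
                dupes.append(key)
            seen.add(key)
        return dict(pairs)

    # object_pairs_hook sees every key/value pair before dict() drops
    # duplicates, letting us record repeats instead of losing them.
    json.loads(raw, object_pairs_hook=hook)
    return dupes

config = '{"tasks.max": "1", "topics": "a", "topics": "a"}'
print(find_duplicate_keys(config))  # -> ['topics']
```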
Please provide full log files (redact any sensitive information).
Issue Analytics
- Created a year ago
- Comments: 7 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@YordanPavlov This is a common issue.
The jClouds library gives us problems here because it tries to work out the region but sometimes gets it wrong. Either
OR
Thank you for your support, the connector works for me now.
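The maintainer's comment is truncated after "Either", so the two suggested alternatives are not recoverable from this page. A common workaround for jClouds region-detection problems (an assumption here, not confirmed by the truncated thread) is to pin the region explicitly and point the connector at a regional endpoint so the library never has to guess. A hedged sketch; property names vary between Stream Reactor releases, so verify both keys against the docs for your version:

```json
{
  "connect.s3.aws.region": "eu-central-1",
  "connect.s3.custom.endpoint": "https://s3.eu-central-1.amazonaws.com"
}
```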