
The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

See original GitHub issue

I have a setup described in #888 and sometimes I hit this error:

2020-10-01 09:53:25,627 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2) [Consumer clientId=example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-10-01 09:53:25,627 INFO  [org.apa.kaf.str.pro.int.StreamThread] (example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2) stream-thread [example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2] State transition from PARTITIONS_ASSIGNED to RUNNING
2020-10-01 09:53:25,627 INFO  [org.apa.kaf.str.KafkaStreams] (example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2) stream-client [example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9] State transition from REBALANCING to RUNNING
2020-10-01 09:53:25,633 ERROR [org.apa.kaf.str.pro.int.RecordCollectorImpl] (example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2) stream-thread [example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2] task [0_0] Error sending record to topic example-apicurioregistry-storage-store-changelog due to The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value.: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

2020-10-01 09:53:25,635 ERROR [org.apa.kaf.str.pro.int.AssignedStreamsTasks] (example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2) stream-thread [example-apicurioregistry-74914287-0a80-45d3-9979-c014cd6e8aa9-StreamThread-2] Failed to process stream task 0_0 due to the following error:: org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=storage-topic, partition=0, offset=3212, stacktrace=org.apache.kafka.streams.errors.StreamsException: task [0_0] Abort sending since an error caught with a previous record (timestamp 1601545983613) to topic example-apicurioregistry-storage-store-changelog due to org.apache.kafka.common.errors.RecordTooLargeException: The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:144)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:204)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:960)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:865)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:171)
	at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:69)
	at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:62)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.log(ChangeLoggingKeyValueBytesStore.java:116)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:69)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:31)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:102)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.lambda$initInternal$0(CachingKeyValueStore.java:72)
	at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
	at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:244)
	at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:240)
	at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:150)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putInternal(CachingKeyValueStore.java:131)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:123)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:36)
	at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.put(MeteredKeyValueStore.java:262)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl$KeyValueStoreReadWriteDecorator.put(ProcessorContextImpl.java:487)
	at io.apicurio.registry.streams.StreamsTopologyProvider$StorageTransformer.transform(StreamsTopologyProvider.java:188)
	at io.apicurio.registry.streams.StreamsTopologyProvider$StorageTransformer.transform(StreamsTopologyProvider.java:147)
	at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:47)
	at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:36)
	at org.apache.kafka.streams.kstream.internals.KStreamFlatTransform$KStreamFlatTransformProcessor.process(KStreamFlatTransform.java:56)
	at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:118)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
	at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
	at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:429)
	at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:474)
	at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:536)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:792)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

	at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:446)
	at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:474)
	at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:536)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:792)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671)
Caused by: org.apache.kafka.streams.errors.StreamsException: task [0_0] Abort sending since an error caught with a previous record (timestamp 1601545983613) to topic example-apicurioregistry-storage-store-changelog due to org.apache.kafka.common.errors.RecordTooLargeException: The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:144)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:204)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:960)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:865)
	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:171)
	at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:69)
	at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:62)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.log(ChangeLoggingKeyValueBytesStore.java:116)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:69)
	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:31)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:102)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.lambda$initInternal$0(CachingKeyValueStore.java:72)
	at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
	at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:244)
	at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:240)
	at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:150)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putInternal(CachingKeyValueStore.java:131)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:123)
	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:36)
	at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.put(MeteredKeyValueStore.java:262)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl$KeyValueStoreReadWriteDecorator.put(ProcessorContextImpl.java:487)
	at io.apicurio.registry.streams.StreamsTopologyProvider$StorageTransformer.transform(StreamsTopologyProvider.java:188)
	at io.apicurio.registry.streams.StreamsTopologyProvider$StorageTransformer.transform(StreamsTopologyProvider.java:147)
	at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:47)
	at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:36)
	at org.apache.kafka.streams.kstream.internals.KStreamFlatTransform$KStreamFlatTransformProcessor.process(KStreamFlatTransform.java:56)
	at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:118)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
	at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
	at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:429)
	... 5 more
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1051237 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

This happens when the registry is starting.
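
For context, the limit being exceeded is the producer-side max.request.size (1048576 bytes, i.e. 1 MB, by default), which applies to the internal producer Kafka Streams uses to write the changelog topic. The snippet below is a minimal sketch of how a plain Kafka Streams application would raise that limit; it is generic Kafka Streams code, not the registry's own configuration mechanism (the comments below cover how the registry itself is configured), and the application id and bootstrap server are placeholders.

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProducerSizeConfig {

    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-apicurioregistry"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");        // placeholder broker

        // Raise the request-size limit on the producers Kafka Streams creates internally
        // (the ones writing changelog/repartition topics). producerPrefix() turns
        // "max.request.size" into "producer.max.request.size".
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_REQUEST_SIZE_CONFIG),
                5 * 1024 * 1024); // 5 MB; choose a value larger than your biggest record

        return props;
    }
}

Note that the broker and topic have their own limits (message.max.bytes on the broker, max.message.bytes on the topic), so a record this large may require raising those as well, not just the producer setting.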

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 8 (6 by maintainers)

Top GitHub Comments

1 reaction
smccarthy-ie commented, Jan 20, 2021

@EricWittmann Yes, we need a new section on the different ways in which you can configure the registry and their order of precedence (Java system properties, Quarkus app properties, env variables, config properties, etc.). I’ll create a Doc issue for this.

The current docs also tend to mention specific config settings (or link to them in Quarkus docs), but don’t explain how you can actually set them. When we add new config settings in the docs, I’d like to see examples of how to set them. Thanks.
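
To make the precedence point concrete, here is a minimal sketch of how a Quarkus application (which uses MicroProfile Config) resolves a single key from several sources: by default, Java system properties override environment variables, which override application.properties. The property key below is only inferred from the env var mentioned in the next comment and is an assumption, not a documented registry setting.

import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

public class ConfigPrecedenceSketch {

    public static void main(String[] args) {
        Config config = ConfigProvider.getConfig();

        // Hypothetical key, inferred from REGISTRY_STREAMS_STORAGE-PRODUCER_MAX_REQUEST_SIZE.
        // The same key can be supplied as a -D system property, an environment variable,
        // or an application.properties entry; MicroProfile Config picks the source with
        // the highest priority.
        String key = "registry.streams.storage-producer.max.request.size";

        String value = config.getOptionalValue(key, String.class)
                .orElse("1048576"); // fall back to Kafka's 1 MB default, for the demo only

        System.out.println(key + " = " + value);
    }
}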

1 reaction
EricWittmann commented, Jan 19, 2021

Thanks for following up! In particular it’s useful to know about the ENV var REGISTRY_STREAMS_STORAGE-PRODUCER_MAX_REQUEST_SIZE format (with the dash). We’ll see what we can do about improving the docs.

@smccarthy-ie do you have any thoughts on perhaps a dedicated section on configuration properties (or some updates to such a section if we already have one)?

Read more comments on GitHub

Top Results From Across the Web

Kafka: The message when serialized is larger than the ...
The message is 2097240 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size ...

RecordTooLargeException on large messages in Kafka ...
RecordTooLargeException: The message is 16777327 bytes when serialized which is larger than the maximum request size you have configured with ...

Send Large Messages With Kafka | Baeldung
An optional configuration property, “message.max.bytes“, can be used to allow all topics on a Broker to accept messages of greater than 1MB in ...

How to set max.request.size on KafkaConnect CRDs? #2592
RecordTooLargeException: The message is 1904641 bytes when serialized which is larger than the maximum request size you have configured with ...

The message is xxx bytes when serialized which is larger than ...
The message is xxx bytes when serialized which is larger than the maximum request size you have configured with the max.request.size ...
