Global-id value is 65034875748
I need help solving a problem. I am creating a sink connector from a Kafka topic so that the data is loaded into BigQuery. My producer already writes messages to the topic in Avro format and registers the schema in Apicurio Registry, which is deployed on the same cluster as my Kafka (I am using Strimzi to manage the Kafka cluster). However, when the messages go through the converter, the schema is not found, and the following error message is always raised:
RESTEASY003870: Unable to extract parameter from http request: javax.ws.rs.PathParam("globalId") value is '65034875748'
I would like the schema to be found automatically using the TopicIdStrategy strategy, where the lookup is done by appending the suffix -value or -key to the name of the consumed topic, since that is how the schema is already registered in Apicurio Registry.
My KafkaConnector configuration YAML file:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: schema-registry-sink-connector
  labels:
    strimzi.io/cluster: kafka-connect-schema-registry-component
spec:
  class: 'com.wepay.kafka.connect.bigquery.BigQuerySinkConnector'
  tasksMax: 1
  config:
    tasks.max: 1
    # consumer configs
    topics: 'tb.avro.schema'
    consumer.override.auto.offset.reset: "latest"
    # kafka converter configs
    value.converter.schemas.enable: true
    key.converter.schemas.enable: false
    value.converter: "io.apicurio.registry.utils.converter.AvroConverter"
    value.converter.apicurio.registry.url: "http://my-apicurio-registry-apicurio-registry.kafka.svc.cluster.local:8080"
    key.converter: org.apache.kafka.connect.storage.StringConverter
    # bquery configs
    project: caramel-box
    defaultDataset: 'db_product'
    keyfile: 'svc-account-bquery.json'
    autoCreateTables: false
    deleteEnabled: false
    upsertEnabled: false
    schemaRetriever: "com.wepay.kafka.connect.bigquery.retrieve.IdentitySchemaRetriever"
    autoUpdateSchemas: false
    sanitizeTopics: false
    bigQueryPartitionDecorator: false
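Two converter settings worth checking in a setup like this are the artifact-resolver strategy and the id format the converter expects on the wire. The property names below are assumptions based on Apicurio Registry 2.x serde configuration (they are not in the original config) and should be verified against the deployed registry version:

    # Hypothetical additions under spec.config (Apicurio Registry 2.x serde properties):
    # explicitly select TopicIdStrategy ("<topic>-value" / "<topic>-key" lookup)
    value.converter.apicurio.registry.artifact-resolver-strategy: "io.apicurio.registry.serde.avro.strategy.TopicIdStrategy"
    # if the producer serialized with a Confluent serializer (4-byte schema id),
    # tell the Apicurio converter to read the Confluent wire format
    value.converter.apicurio.registry.as-confluent: true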
The error message I'm getting:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:516)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:493)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:332)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.apicurio.registry.rest.client.exception.RestClientException: RESTEASY003870: Unable to extract parameter from http request: javax.ws.rs.PathParam("globalId") value is '65034875748'
at io.apicurio.registry.rest.client.impl.ErrorHandler.handleErrorResponse(ErrorHandler.java:64)
at io.apicurio.rest.client.handler.BodyHandler.lambda$toSupplierOfType$1(BodyHandler.java:46)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:202)
at io.apicurio.registry.rest.client.impl.RegistryClientImpl.getContentByGlobalId(RegistryClientImpl.java:293)
at io.apicurio.registry.resolver.AbstractSchemaResolver.lambda$resolveSchemaByGlobalId$1(AbstractSchemaResolver.java:183)
at io.apicurio.registry.resolver.ERCache.lambda$getValue$0(ERCache.java:132)
at io.apicurio.registry.resolver.ERCache.retry(ERCache.java:171)
at io.apicurio.registry.resolver.ERCache.getValue(ERCache.java:131)
at io.apicurio.registry.resolver.ERCache.getByGlobalId(ERCache.java:111)
at io.apicurio.registry.resolver.AbstractSchemaResolver.resolveSchemaByGlobalId(AbstractSchemaResolver.java:178)
at io.apicurio.registry.resolver.DefaultSchemaResolver.resolveSchemaByArtifactReference(DefaultSchemaResolver.java:148)
at io.apicurio.registry.serde.AbstractKafkaDeserializer.resolve(AbstractKafkaDeserializer.java:147)
at io.apicurio.registry.serde.AbstractKafkaDeserializer.deserialize(AbstractKafkaDeserializer.java:104)
at io.apicurio.registry.utils.converter.SerdeBasedConverter.toConnectData(SerdeBasedConverter.java:129)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:516)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
Issue Analytics
- State: Closed
- Created: a year ago
- Comments: 8 (5 by maintainers)
Top GitHub Comments
Which version of the Confluent libraries are you using? As you can see here, the `references` field is present in the class and is returned by the server, so I think you might be using an old version of the Confluent library.
As Eric pointed out, this was caused by #2636 (an incompatibility in the ccompat API), which has since been fixed, so I'm closing this one as solved.
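A global id of 65034875748 is far too large to be a real registry id, which is the telltale sign of a wire-format mismatch rather than a missing schema. As a rough illustration (not the project's actual code, and with made-up byte values): the Confluent wire format prefixes each message with a magic byte plus a 4-byte schema id, while an Apicurio 2.x deserializer by default reads an 8-byte global id after the magic byte, so it swallows four payload bytes and reconstructs a garbage id:

```java
import java.nio.ByteBuffer;

public class WireFormatMismatch {
    // Read an id from the bytes after the magic byte, using the given id width.
    static long readId(byte[] record, int idBytes) {
        ByteBuffer buf = ByteBuffer.wrap(record);
        buf.get(); // skip the 0x0 magic byte
        return idBytes == 4 ? buf.getInt() : buf.getLong();
    }

    public static void main(String[] args) {
        // Magic byte, 4-byte schema id (15), then the first bytes of the Avro payload.
        byte[] record = {0x0, 0, 0, 0, 15, 0x22, 0x48, 0x65, 0x6c};

        long confluentId = readId(record, 4); // what a 4-byte reader sees
        long apicurioId  = readId(record, 8); // what an 8-byte reader sees

        System.out.println(confluentId); // 15
        System.out.println(apicurioId);  // 64999679340 -- a garbage "global id"
    }
}
```

The deserializer then asks the registry for that bogus id, and the server rejects it, producing exactly the `PathParam("globalId")` error above.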