Improve connector stack to limit concurrent injection on the same elements
Hi, I'm getting an error. It seems to be related to Grakn but I can't go deeper.
This is the console log:
error: [GRAKN] executeWrite error > A database error has occured! {"name":"DatabaseError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2020-03-02T15:21:38.959Z","data":{"type":"technical","details":"There is more than one thing of type [stix_relation_embedded] that owns the key [a8c2e3b7-a37b-5a52-a4fd-1432763c5033] of type [internal_id_key]. "},"internalData":{},"stack":"DatabaseError: A database error has occured!\n at G (/opencti/build/index.js:1:5033)"}
error: [OPENCTI] Technical error > A database error has occured! {"locations":[{"line":4,"column":29}],"path":["reportEdit","relationAdd"],"extensions":{"code":"INTERNAL_SERVER_ERROR","exception":{"name":"DatabaseError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2020-03-02T15:21:38.959Z","data":{"type":"technical","details":"There is more than one thing of type [stix_relation_embedded] that owns the key [a8c2e3b7-a37b-5a52-a4fd-1432763c5033] of type [internal_id_key]. "},"internalData":{},"_stack":"DatabaseError: A database error has occured!\n at G (/opencti/build/index.js:1:5033)","stacktrace":["DatabaseError: A database error has occured!"," at G (/opencti/build/index.js:1:5033)"]}}}
When I stop the "yarn serv" and the Grakn server and start them again, the error disappears. Could you help me?
Thanks in advance.
Ticket edition by @richard-julien: we need to test another way to inject STIX 2 bundles to limit the concurrency errors. Idea:
- Check the consistency of the STIX 2 bundle.
- Split the bundle into unit elements, ordered by consistency.
- Send everything to RabbitMQ.
- Workers handle the jobs in order.
- Because there are multiple workers, a worker can start working on a relation before all entities have been created, so that injection will fail with an element-not-found error. In this case we need to handle the error by not acknowledging the message and retrying it 10 times (waiting 1 second between attempts). After all retries we can consider that the message will never be injected correctly and discard it (see the sketch below).

Expected benefits: improved speed and fewer concurrency problems.
Expected errors: occasional rejections for elements that are not found before they are eventually found.
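For illustration, here is a minimal sketch of that retry idea in a Python worker, assuming a pika RabbitMQ consumer and a hypothetical import_element() call that raises ElementNotFoundError when a referenced entity does not exist yet (the queue name, function names, and error type are placeholders, not the actual OpenCTI worker code):

```python
import json
import time

import pika

MAX_RETRIES = 10      # retry up to 10 times before discarding
RETRY_DELAY_SEC = 1   # wait 1 second between attempts


class ElementNotFoundError(Exception):
    """Raised when a referenced entity has not been created yet."""


def import_element(element):
    """Hypothetical call that injects one STIX 2 element; replace with a real API call."""
    raise NotImplementedError


def on_message(channel, method, properties, body):
    element = json.loads(body)
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            import_element(element)
            # Only acknowledge the message once the injection succeeded.
            channel.basic_ack(delivery_tag=method.delivery_tag)
            return
        except ElementNotFoundError:
            # A referenced entity is probably still being created by another
            # worker: wait and retry instead of acknowledging the message.
            time.sleep(RETRY_DELAY_SEC)
    # After all retries, consider the message unrecoverable and discard it.
    channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.basic_consume(queue="opencti_import", on_message_callback=on_message)
channel.start_consuming()
```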

For me this error is a concurrency problem when using multiple workers to inject data.
Currently we split the STIX 2 bundle to generate autonomous chunks; as a result, the same entity/relation can appear in multiple chunks. When 2 or more chunks are processed in parallel, you can hit concurrency issues. The good news is that there is no side effect from this: it "just" does more operations than really needed and logs some errors 😃
We have another concurrency strategy we want to test to limit this kind of situation and also improve performance, but it is not planned yet.
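To make the chunking problem concrete, here is a tiny hypothetical example (illustrative objects only, not OpenCTI code) of how a shared marking definition ends up duplicated across autonomous chunks:

```python
# Two objects in the bundle reference the same marking definition.
marking = {"type": "marking-definition", "id": "marking-definition--tlp-amber"}
report = {"type": "report", "id": "report--1", "object_marking_refs": [marking["id"]]}
indicator = {"type": "indicator", "id": "indicator--1", "object_marking_refs": [marking["id"]]}

# Each chunk must be autonomous, so the shared marking definition is copied
# into both of them.
chunk_1 = [marking, report]
chunk_2 = [marking, indicator]

# If worker A injects chunk_1 while worker B injects chunk_2, both try to
# create the same embedded marking relation, and the second insert fails with
# the "more than one thing ... owns the key" error seen above.
```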
And to be precise about exactly how we implement this limitation: the internal_id_key of a stix_relation_embedded is a UUIDv5 (predictable), based on the entity ID and, for instance, the marking definition ID. So trying to add the same marking definition to an entity that is already linked to it will produce the same UUIDv5 and throw an error because the internal_id_key already exists.
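As a rough sketch of that deterministic-key idea (the namespace and name format below are assumptions, not OpenCTI's real inputs), Python's uuid.uuid5 shows the same behaviour:

```python
import uuid

# Assumed namespace for the example; OpenCTI uses its own inputs.
EXAMPLE_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "https://www.opencti.io")


def embedded_relation_key(entity_id: str, marking_definition_id: str) -> str:
    """Derive a predictable key for an embedded marking relation (illustration only)."""
    return str(uuid.uuid5(EXAMPLE_NAMESPACE, f"{entity_id}--{marking_definition_id}"))


# The same (entity, marking definition) pair always yields the same key, so a
# second attempt to create the relation collides on internal_id_key instead of
# silently creating a duplicate.
key_1 = embedded_relation_key("report--1", "marking-definition--tlp-amber")
key_2 = embedded_relation_key("report--1", "marking-definition--tlp-amber")
assert key_1 == key_2
```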