Error: Produced invalid attestation for slot N: Signature is invalid
Seen this on three separate occasions on the Witti network today:
2020-06-21 00:51:28.318+00:00 | ValidatorApiChannel-0 | INFO | teku-validator-log | Validator *** Published attestation Count: 1, Slot: 180257, Root: e029d2..8b75
2020-06-21 00:51:28.342+00:00 | ValidatorApiChannel-0 | ERROR | teku-validator-log | Validator *** Produced invalid attestation for slot 180257: Signature is invalid
2020-06-21 00:51:32.323+00:00 | ValidatorApiChannel-0 | INFO | teku-validator-log | Validator *** Published aggregate Count: 1, Slot: 180257, Root: e029d2..8b75
This looks a bit weird: the attestation is flagged as invalid immediately after being published, yet an aggregate for the same slot and root is still published four seconds later.
It’s possibly related to a large number of warnings in my logs about attestations with invalid signatures:
2020-06-21 01:14:12.409+00:00 | SlotEventsChannel-3 | WARN | StateTransition | State Transition error
tech.pegasys.teku.core.exceptions.BlockProcessingException: tech.pegasys.teku.core.exceptions.BlockProcessingException: Invalid attestation: Signature is invalid
at tech.pegasys.teku.core.StateTransition.initiate(StateTransition.java:87) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.core.ForkChoiceUtil.on_block(ForkChoiceUtil.java:224) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.statetransition.forkchoice.ForkChoice.onBlock(ForkChoice.java:65) ~[teku-ethereum-statetransition-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.statetransition.blockimport.BlockImporter.importBlock(BlockImporter.java:70) ~[teku-ethereum-statetransition-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.sync.BlockManager.importBlock(BlockManager.java:122) ~[teku-sync-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
...
Caused by: tech.pegasys.teku.core.exceptions.BlockProcessingException: Invalid attestation: Signature is invalid
at tech.pegasys.teku.core.BlockProcessorUtil.verify_attestations(BlockProcessorUtil.java:399) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.core.blockvalidator.SimpleBlockValidator.validatePreState(SimpleBlockValidator.java:78) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.core.blockvalidator.BatchBlockValidator.validatePreState(BatchBlockValidator.java:43) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
at tech.pegasys.teku.core.blockvalidator.BlockValidator.validate(BlockValidator.java:88) ~[teku-ethereum-core-0.11.5-SNAPSHOT.jar:0.11.5-dev-d95fc0d1]
...
I have dozens of these messages throughout the day. I don’t know how invalid attestations are getting into blocks. In any case, there seem to be many badly signed attestations floating around, and we need to make sure we don’t try to build them into aggregates.
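(For context on why this matters: BLS aggregate verification is all-or-nothing, so a single badly signed attestation makes the whole aggregate fail to verify. Any filtering therefore has to happen before aggregation, not after. Below is a minimal sketch of that gating; the `Attestation` interface, `hasValidSignature`, and `selectForAggregation` are hypothetical stand-ins for illustration, not Teku's actual classes.)

```java
import java.util.List;
import java.util.stream.Collectors;

class AggregationSketch {
  // Hypothetical stand-in for an attestation whose signature has already
  // been checked (e.g. at the gossip layer); not Teku's actual class.
  interface Attestation {
    boolean hasValidSignature();
  }

  // Keep only attestations whose individual BLS signatures verified.
  // An aggregate over signatures that include even one invalid signature
  // is itself invalid, so bad attestations must be dropped here, before
  // aggregation, rather than detected afterwards.
  static List<Attestation> selectForAggregation(List<Attestation> pool) {
    return pool.stream()
        .filter(Attestation::hasValidSignature)
        .collect(Collectors.toList());
  }
}
```

This kind of check is what per-attestation gossip validation is supposed to provide; the sketch just makes the invariant explicit.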
Top GitHub Comments
@cemozerr Yep, this is exactly what I see in the debugger now. Waiting for your potential fix to confirm 👍
Let’s assume the last slot of the epoch is 40. This is where we’ll calculate the future epoch duties. We don’t receive the latest block on time in this slot, so during `processHead()` our `chainHead` gets updated again with the block and state of slot 39 (i.e. it doesn’t change). After `processHead()` is completed, during the same slot, we do receive the block for slot 40. It just arrived a bit late.

At slot 41, we should normally see that there was indeed a block for slot 40, and thus we should re-calculate the block duties accordingly. For that, we need to be able to detect the “reorg”.

So at slot 41, 4 seconds into the slot, `processHead` runs again. This time we update the `chainHead` to the latest block and state at slot 41. We also save the previous chain head root and slot to check if they are ancestors of the new chain head.

The problem is, the previous chain head slot is obtained from the `chainHead` object, which retrieves slot information from the block slot. So we compare the previous chain head root with the ancestor of the new chain head root at slot 39, instead of comparing with the ancestor at 40. We should have gotten the ancestor of the new chain head at slot 40 and compared that with the previous chain head root instead.

Thus, we don’t detect the re-org. [insert-mic-drop-emoji]