Allow to resume indexing using blk files after interruption
Currently, restarting the indexer before it finishes indexing completely will cause it to continue fetching blocks using the RPC instead of the .blk files, which is significantly slower.
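The requested behaviour is roughly: on restart, look up the highest block height already present in the partially built index and keep scanning Bitcoin Core's blkNNNNN.dat files from that point, instead of dropping back to per-block RPC fetches. Below is a minimal Rust sketch of that idea; IndexStore, BlkReader, and index_block are hypothetical stand-ins for illustration only, not the actual electrs API.

```rust
// Hypothetical sketch (not the real electrs internals): resume indexing from
// local .blk files after an interruption by starting at the last height the
// partially built index already contains.

use std::path::{Path, PathBuf};

/// Stand-in for the on-disk index; assumed to remember the last height it wrote.
struct IndexStore {
    last_indexed_height: Option<u64>,
}

impl IndexStore {
    fn open(_db_path: &Path) -> Self {
        // A real implementation would read the persisted tip height from the DB.
        IndexStore { last_indexed_height: Some(123_456) }
    }
}

/// Stand-in for a reader that scans blocks out of Bitcoin Core's blkNNNNN.dat files.
struct BlkReader {
    blocks_dir: PathBuf,
}

impl BlkReader {
    fn new(blocks_dir: PathBuf) -> Self {
        BlkReader { blocks_dir }
    }

    /// Yield (height, raw block bytes) starting just above `from_height`.
    fn blocks_from(&self, from_height: u64) -> impl Iterator<Item = (u64, Vec<u8>)> {
        // Placeholder: a real reader would walk the .dat files and order blocks by height.
        let _ = &self.blocks_dir;
        (from_height + 1..=from_height + 3).map(|h| (h, Vec::new()))
    }
}

fn index_block(height: u64, _raw: &[u8]) {
    println!("indexed block at height {height}");
}

fn main() {
    let index = IndexStore::open(Path::new("./index-db"));
    let reader = BlkReader::new(PathBuf::from("/bitcoin/blocks"));

    // Resume from the last indexed height instead of switching to RPC:
    // the partial index tells us where to pick up in the .blk files.
    let start = index.last_indexed_height.unwrap_or(0);
    for (height, raw) in reader.blocks_from(start) {
        index_block(height, &raw);
    }
}
```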
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 13
Top GitHub Comments
Success! It caught up and is continuing indexing with the block files. Thanks!
It’s much faster pulling from the BlkFiles – but I was left with the curious issue that electrs continued to crash and restart, which was the root of my issue above… usually with no indication as to why – except one time where I saw a cut-off line in the error log similar to:
…which would seem to indicate that my machine is out of memory… and a quick check of top confirms that I am at 97% of both physical and swap… I have 8GB of physical and 8GB of swap and I have shut down almost every process except for Docker and it’s still crashing… but this issue is beyond the scope of this project, so… THANKS AGAIN!!!
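One way to get a little more visibility into this kind of silent out-of-memory crash is to have the process log its own resident memory while it indexes, so the last reading survives in the logs even when the final error line is cut off. A rough Linux-only sketch follows, reading VmRSS from /proc/self/status; this is a hypothetical diagnostic aid, not something electrs itself provides.

```rust
// Hedged diagnostic sketch (Linux-only): periodically log the process's
// resident set size, roughly what a manual check with `top` shows.

use std::fs;
use std::thread;
use std::time::Duration;

/// Read VmRSS (resident set size) in kilobytes from /proc/self/status.
fn resident_kb() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    for line in status.lines() {
        if let Some(rest) = line.strip_prefix("VmRSS:") {
            // The line looks like "VmRSS:    123456 kB".
            return rest.split_whitespace().next()?.parse().ok();
        }
    }
    None
}

fn main() {
    // Background logger; the real indexing work would run on the main thread.
    thread::spawn(|| loop {
        if let Some(kb) = resident_kb() {
            eprintln!("indexer RSS: {} MiB", kb / 1024);
        }
        thread::sleep(Duration::from_secs(30));
    });

    // ... indexing work would go here ...
    thread::sleep(Duration::from_secs(1)); // keep this example process alive briefly
}
```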