
electrs crashes while indexing mainnet with "Too many open files"

See original GitHub issue

Electrs new-index works fine for testnet, but while syncing mainnet it errors out with:

(Truncated log)

TRACE - skipping block 0000000000000000001a871a0c81fe392e9d90562e702eddd2835e27da815f1d
TRACE - skipping block 0000000000000000001198ed4b9090ef67acebc8ca517bdcd67efc930e554b6c
TRACE - skipping block 0000000000000000001c02b01cb173dc33cd901d0842be6f331037c03b1b1afa
TRACE - skipping block 000000000000000000131227a7c21c0c247b5ee30aeffbd1f9ccba6038d071d5
TRACE - skipping block 0000000000000000000c99cf30cb7609a3d3e1bc6b65c6360b03130e34b2f150
TRACE - fetched 9 blocks
DEBUG - writing 98889 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
DEBUG - starting full compaction on RocksDB { path: "./db/mainnet/newindex/txstore" }
DEBUG - finished full compaction on RocksDB { path: "./db/mainnet/newindex/txstore" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While open a file for random read: ./db/mainnet/newindex/txstore/000938.sst: Too many open files" }', src/libcore/result.rs:997:5
Aborted (core dumped)

Also, the size of ./db is ~325GB. Is this normal?
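
(A general diagnostic, not from the thread: “Too many open files” means the process exhausted its file-descriptor limit, and RocksDB holds many .sst files open at once during indexing. One way to confirm the limit is the culprit is to compare the running process’s limit against its actual usage; the process name and <PID> below are placeholders.)

pgrep -f electrs                          # find the electrs PID (name assumed)
grep 'Max open files' /proc/<PID>/limits  # the limits the process actually runs with
ls /proc/<PID>/fd | wc -l                 # descriptors currently open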

Issue Analytics

  • State: open
  • Created 4 years ago
  • Comments: 9

Top GitHub Comments

1 reaction
clarkmoody commented, Dec 12, 2019

The issue on my system turned out to be caused by systemd overriding the system-wide limits with a “sane” default. It was resolved by setting LimitNOFILE to a higher value in the electrs service file.

@setpill Excellent, thanks! Running via systemd here.

Might be nice to make a note of this in the docs 😉
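
For anyone landing here with the same setup, here is a minimal sketch of the fix clarkmoody describes, assuming the unit is named electrs.service (the unit name and the value 65536 are placeholders to adjust):

# /etc/systemd/system/electrs.service.d/override.conf
# (open for editing with: sudo systemctl edit electrs.service)
[Service]
LimitNOFILE=65536

After saving, run sudo systemctl daemon-reload and sudo systemctl restart electrs.service so the raised limit takes effect.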

0 reactions
dongcarl commented, Dec 12, 2019

Here’s how I got around this on the command line:

sudo prlimit --nofile=65536 sudo -u "$(id -un)" -g "$(id -gn)" cargo blah blah wtv

The first sudo makes us root, which lets prlimit raise the file limit; the second sudo drops back to our original user and group so that cargo runs as us.
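
As a related sketch using only standard shell built-ins (not from the comment): if the hard limit is already high enough, you can skip sudo entirely and raise the soft limit for the current shell before launching:

ulimit -Sn         # current soft limit on open files
ulimit -Hn         # hard limit, the ceiling for an unprivileged raise
ulimit -n 65536    # raise the limit for this shell (must not exceed the hard limit)

The prlimit-via-sudo trick above (or the systemd LimitNOFILE override) is only needed when the hard limit itself is too low for an unprivileged process to raise.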

