
Load average, LevelDB and consultancy

See original GitHub issue

Hello @fergiemcdowall,

I’m Frank Rousseau, co-founder & CTO of Cozy Cloud, a company that builds Cozy, a FOSS platform that makes running a personal server easy. It allows users to easily deploy applications on their own server, and these apps collaborate by sharing the same data store.

We included search-index in Cozy to handle data indexing. It helps us a lot because we provide a search feature in several applications on the platform. It works fine when we index a small number of documents, like notes and file names. But when we run an intensive indexing job (such as indexing a full mailbox), it crashes: the operations going through LevelDB consume more and more CPU and memory until the process dies.
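For illustration, a rough sketch of the kind of bulk-indexing workload that triggers this, assuming the callback-style API search-index exposed around that time; the option names and the `add` signature here are assumptions, not a verified reproduction of Cozy's code:

```js
// Hypothetical sketch of the failing workload: indexing an entire mailbox
// in one batch. Option names and the add() signature are assumptions about
// the 2016-era search-index API, not verified against Cozy's code.
const searchIndex = require('search-index');

const mails = require('./mailbox-dump.json'); // placeholder: a large array of { id, subject, body, ... } documents

searchIndex({ indexPath: 'cozy-search-index' }, (err, index) => {
  if (err) throw err;

  // Pushing the whole mailbox through in a single batch is where CPU and
  // memory usage climb until the Node process dies.
  index.add(mails, {}, (err) => {
    if (err) throw err;
    console.log('mailbox indexed');
  });
});
```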

We would like to know if we could hire you for a few days to help us fix this problem. Do you have any availability?

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 17 (15 by maintainers)

Top GitHub Comments

2 reactions
fergiemcdowall commented, Apr 15, 2016

@blahah Hurrah! 🎈 🍰 🎉

search-index will typically create 10-100 times as many key-value pairs as documents, depending on how long the documents are and how you set your options. Also, the same key-value pair can be overwritten with every batch. If you have inserted 1.3 million documents into search-index, you may well have 20-30 million pairs in levelup, and possibly 100 million inserts in total (since the same keys are used in many batches).
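To make that multiplier concrete, here is a toy sketch of how a single small document fans out into many index keys; the key layout below is invented for illustration and is not search-index's actual scheme:

```js
// Illustrative only: how one short document fans out into many index keys.
// The key format is made up; search-index's real key scheme differs.
function keysForDocument(doc) {
  const keys = [];
  for (const [field, text] of Object.entries(doc.fields)) {
    for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      keys.push(`TF!${field}!${term}!${doc.id}`); // per-document term entry
      keys.push(`DF!${field}!${term}`);           // shared entry, rewritten on every batch
    }
  }
  keys.push(`DOC!${doc.id}`);                     // the stored document itself
  return keys;
}

const doc = {
  id: '42',
  fields: {
    subject: 'load average spike',
    body: 'levelup memory usage keeps growing'
  }
};
console.log(keysForDocument(doc).length); // 17 keys for one tiny document

// At 10-100 keys per document, 1.3 million documents easily becomes
// 20-30 million key-value pairs, and because shared keys (like the DF!
// entries above) are rewritten in every batch, total inserts can
// approach 100 million.
```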

This is a problem that it would be great to solve, and good, large datasets are surprisingly difficult to come by, so if you are OK with sharing your dataset, I would be happy to help you debug this.

1 reaction
blahah commented, Apr 15, 2016

🎉 I’m going to give myself this: 🏆

I’m pretty sure the issue is in search-index, because I’ve put 3.5 million documents into a LevelDB before, using levelup with the leveldown backend, and memory usage stayed within Node's default limit. I’ll double-check by putting the same 1.3 million documents that caused the leak above into vanilla levelup, and will report back on the memory usage.
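A rough sketch of that control experiment, assuming the 2016-era `levelup(path, { db: leveldown })` constructor; the document shape and batch size are arbitrary:

```js
// Rough sketch: write ~1.3 million synthetic documents straight into
// levelup/leveldown and watch process memory, to rule the storage layer
// in or out as the source of the leak.
const levelup = require('levelup');
const leveldown = require('leveldown');

const db = levelup('./plain-leveldb-test', { db: leveldown }); // pre-2.0 constructor form

const TOTAL = 1300000;
const BATCH_SIZE = 1000;

function writeBatch(start) {
  if (start >= TOTAL) {
    console.log('done, heapUsed MB:', (process.memoryUsage().heapUsed / 1048576).toFixed(1));
    return db.close(() => {});
  }
  const ops = [];
  for (let i = start; i < Math.min(start + BATCH_SIZE, TOTAL); i++) {
    ops.push({ type: 'put', key: `doc!${i}`, value: JSON.stringify({ id: i, body: 'synthetic document body' }) });
  }
  db.batch(ops, (err) => {
    if (err) throw err;
    if (start % 100000 === 0) {
      console.log(start, 'docs, heapUsed MB:', (process.memoryUsage().heapUsed / 1048576).toFixed(1));
    }
    writeBatch(start + BATCH_SIZE);
  });
}

writeBatch(0);
```

If memory stays flat here but climbs when the same documents go through search-index, that points at the indexing layer rather than at leveldown itself.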

Read more comments on GitHub >

Top Results From Across the Web

  • What is it, and what's the best load average for your Linux ...
    Load average is considered to be ideal when its value is lower than the number of CPUs in the Linux server. For example,...
  • ST06: Load Average - SAP Community
    According to the SAP opinion - like a rule of thumb - if the average load is around 1 percent it is OK,...
  • Linux Load Averages: Solving the Mystery - Brendan Gregg
    The TENEX load average is a measure of CPU demand. The load average is an average of the number of runnable processes over...
  • UNIX Load Average Part 1: How It Works - Fortra
    The load average tries to measure the number of active processes at any time. As a measure of CPU utilization, the load average...
  • Understanding Linux CPU Load - when should you be worried?
    In terms of load averages the three numbers represent averages over progressively longer periods of time (i.e. 1, 5, and 15-min. averages).
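As the first snippet above notes, the load average is usually judged against the number of CPUs; a minimal Node.js check using only the built-in os module might look like this:

```js
// Compare the 1/5/15-minute load averages against the CPU count.
const os = require('os');

const cpus = os.cpus().length;
const [one, five, fifteen] = os.loadavg();

console.log(`CPUs: ${cpus}, load averages: ${one.toFixed(2)} ${five.toFixed(2)} ${fifteen.toFixed(2)}`);
if (one > cpus) {
  console.log('1-minute load exceeds the CPU count: work is queuing for the processors.');
}
```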
