
90% CPU load with capped collections and tailable cursors


Sorry for opening another issue so soon, but now that the memory leak has been fixed by the mongodb-native team, there is another thing I’d like to mention: I am seeing ~90% CPU load on a node process. As before, I am not sure whether this is a mongoose or a mongodb-native issue.

This seems to be related to using capped collections with streaming tailable cursors. “abcdefg.feed” is a 100 KB capped collection. The strace output looks like this:

...
write(17, ";\0\0\0{\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 59) = 59
write(17, "5\0\0\0|\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 53) = 53
write(17, "1\0\0\0}\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 49) = 49
write(17, "7\0\0\0~\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 55) = 55
write(17, "5\0\0\0\177\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 53) = 53
write(17, "4\0\0\0\200\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 52) = 52
write(17, "2\0\0\0\201\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 50) = 50
write(17, "3\0\0\0\202\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 51) = 51
write(17, "2\0\0\0\203\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 50) = 50
write(17, "5\0\0\0\204\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 53) = 53
write(17, "3\0\0\0\205\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 51) = 51
write(17, "1\0\0\0\206\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 49) = 49
write(17, "5\0\0\0\207\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 53) = 53
write(17, "6\0\0\0\210\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 54) = 54
write(17, ":\0\0\0\211\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 58) = 58
write(17, "1\0\0\0\212\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 49) = 49
write(17, "?\0\0\0\213\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 63) = 63
write(17, "5\0\0\0\214\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 53) = 53
write(17, "8\0\0\0\215\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 56) = 56
write(17, "1\0\0\0\216\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 49) = 49
write(17, "9\0\0\0\217\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 57) = 57
write(17, ";\0\0\0\220\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 59) = 59
write(17, "6\0\0\0\221\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 54) = 54
write(17, "8\0\0\0\222\1J\0\0\0\0\0\325\7\0\0\0\0\0\0abcdefg.feed"..., 56) = 56
...
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.67    0.004649           0     85417           write
  1.72    0.000082           0       387           close
  0.61    0.000029           0      1783           read
  0.00    0.000000           0        47           open
  0.00    0.000000           0        15           stat
  0.00    0.000000           0         4           mmap
  0.00    0.000000           0         4           munmap
  0.00    0.000000           0      1208           futex
  0.00    0.000000           0      1014           epoll_wait
  0.00    0.000000           0      1768           epoll_ctl
  0.00    0.000000           0       618       274 accept4
------ ----------- ----------- --------- --------- ----------------
100.00    0.004760                 92265       274 total

Worth noting: in my scenario there are about 1k streaming tailable cursors open at once. Even so, I don’t understand why this generates such a high “write” CPU load.

mongoose@3.6.16, mongodb@1.3.18, node@0.10.5 on a fresh Linux box.

Any ideas? Do you know how streaming a tailable cursor works internally? Does it actually “poll by writing”, or does it just wait for more data?
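
For reference, the setup looks roughly like the following sketch, written against the mongoose 3.x query API. The schema, model name, and connection string are placeholders rather than the real code; only the capped-collection size mirrors the ~100 KB figure above.

var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/test'); // hypothetical connection string

// Hypothetical schema/model backed by a small capped collection,
// standing in for the ~100 KB "feed" collection from this report.
var feedSchema = new mongoose.Schema(
  { msg: String },
  { capped: { size: 102400 } }
);
var Feed = mongoose.model('Feed', feedSchema);

// Open one streaming tailable cursor; the scenario above has ~1k of these open at once.
var stream = Feed.find().tailable().stream();

stream.on('data', function (doc) {
  // handle each newly inserted document
});

stream.on('error', function (err) {
  console.error('tailable stream error:', err);
});

If the driver re-issues the query on an interval whenever a tailable cursor has no new data to return, each retry is another message written to the socket, which would be consistent with the long run of write() calls against the abcdefg.feed namespace in the strace output; with ~1k cursors retrying frequently, the 85k write calls seem plausible.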

Issue Analytics

  • State: closed
  • Created: 10 years ago
  • Reactions: 1
  • Comments: 7

Top GitHub Comments

2 reactions
aheckmann commented, Aug 12, 2013

Options set using query.setOptions() are passed to the underlying cursor.

model.find().setOptions({ tailableRetryInterval: ms }).stream()

http://mongodb.github.io/node-mongodb-native/api-generated/collection.html?highlight=tailableretryinterval
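
As a rough sketch of how that could be applied here (the tailable and tailableRetryInterval option names are from the driver docs linked above; the 5000 ms value is only an illustration, not a recommendation):

// Pass tailable options straight through to the underlying driver cursor;
// a longer retry interval means fewer "empty" retries per idle cursor.
var stream = model.find()
  .setOptions({ tailable: true, tailableRetryInterval: 5000 })
  .stream();

With roughly 1k open cursors, raising the interval would be expected to cut the write() call rate more or less proportionally.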

0 reactions
aheckmann commented, Aug 12, 2013

😃
