
Large query to /executions fails in mongodb

See original GitHub issue

The indexes added in 2.1 helped with a similar error, but now I’m getting the following error with production data; it did not reproduce with 2k test executions.

My use case is that I need to query all the parent executions (status, action name, arguments, start/stop time) in the past 24 hours where the action name matches a substring. This allows me to throttle/deny additional requests from users based on a set of policies.
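
For context, that API call boils down to a raw MongoDB query roughly like the pymongo sketch below. The st2 database name, the action_execution_d_b collection name, and the substring pattern are assumptions on my part; the time filter, projection, and sort are taken from the request logged underneath, while the action-name substring match is from the use case described above rather than the logged request.

# Rough pymongo equivalent of the /executions request below -- a sketch, not StackStorm code.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["st2"]["action_execution_d_b"]  # database/collection names are assumptions

since = datetime.datetime.utcnow() - datetime.timedelta(hours=24)
cursor = (
    coll.find(
        {
            "start_timestamp": {"$gt": since},           # timestamp_gt=<24h ago>
            "parent": None,                              # parent=null -> top-level executions only
            "action.ref": {"$regex": "some-substring"},  # substring match on the action name (hypothetical)
        },
        {"result": 0, "trigger_instance": 0},            # exclude_attributes=result,trigger_instance
    )
    .sort([("start_timestamp", 1), ("action.ref", 1)])   # order_by=+start_timestamp,action.ref
    .limit(2000)
)
for doc in cursor:
    pass  # inspect status, action.ref, parameters, and timestamps here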

2016-12-13 22:03:54,425 140362311938768 INFO hooks [-] e02fb208-4202-4fee-bd77-1e4d7fe356d8 -  GET /executions with filters={u'limit': u'2000', u'exclude_attributes': u'result,trigger_instance', u'timestamp_gt': u'2016-12-12T22:03:54.403458Z', u'parent': u'null'} (remote_addr='127.0.0.1',method='GET',filters={u'exclude_attributes': u'result,trigger_instance', u'limit': u'2000', u'parent': u'null', u'timestamp_gt': u'2016-12-12T22:03:54.403458Z'},request_id='e02fb208-4202-4fee-bd77-1e4d7fe356d8',path='/executions')
[..]
2016-12-13 22:03:54,430 140362311938768 INFO resource [-] GET all /executions with filters={'start_timestamp__gt': datetime.datetime(2016, 12, 12, 22, 3, 54, 403458, tzinfo=tzutc()), 'order_by': ['+start_timestamp', 'action.ref'], 'parent': u'null'} (offset=0,limit=2000,filters={'start_timestamp__gt': datetime.datetime(2016, 12, 12, 22, 3, 54, 403458, tzinfo=tzutc()), 'order_by': ['+start_timestamp', 'action.ref'], 'parent': u'null'},sort=[])
2016-12-13 22:03:54,578 140362311938768 ERROR hooks [-] API call failed: errmsg: "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit."
Traceback (most recent call last):
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/pecan/core.py", line 631, in __call__
    self.invoke_controller(controller, args, kwargs, state)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/pecan/core.py", line 531, in invoke_controller
    result = controller(*args, **kwargs)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/rbac/decorators.py", line 61, in func_wrapper
    return func(*args, **kwargs)
  File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/api/base.py", line 284, in callfunction
    raise e
OperationFailure: errmsg: "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit." (_exception_data={'_OperationFailure__code': 96, '_OperationFailure__details': {u'ok': 0.0, u'code': 96, u'waitedMS': 0L, u'errmsg': u'errmsg: "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit."'}},_exception_class='OperationFailure',_exception_message='errmsg: "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit."')
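
For context on the failure: 33554432 bytes is MongoDB's default 32 MB cap on in-memory (blocking) sorts, and the sort on +start_timestamp, action.ref is not backed by an index here, so the candidate documents are sorted in RAM. The generic remedy the error message points at is a compound index that covers the sort order; a minimal sketch of what that could look like is below (the collection name is assumed, and whether this is the right index for StackStorm's schema is exactly what this issue is asking).

# Sketch only: a compound index matching the sort keys from the query above.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["st2"]["action_execution_d_b"]  # names are assumptions

# With an index on (start_timestamp, action.ref), MongoDB can walk the index in
# sort order instead of buffering up to 2000 large documents in memory.
coll.create_index([("start_timestamp", ASCENDING), ("action.ref", ASCENDING)])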

Is it worth increasing internalQueryExecMaxBlockingSortBytes, or is there a missing index that would help here? Or is there an index that is not being updated often enough?

Thanks!
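
On the first half of that question: the parameter is internalQueryExecMaxBlockingSortBytes (default 33554432, i.e. 32 MB), and it can be raised at runtime with setParameter. A minimal pymongo sketch, assuming you are willing to let mongod spend more RAM per blocking sort rather than add an index:

# Sketch: raise the blocking-sort memory limit to 128 MB at runtime.
# Note: a runtime setParameter does not survive a mongod restart; the same
# parameter would also need to be set in the mongod configuration to persist.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
client.admin.command(
    "setParameter", 1,
    internalQueryExecMaxBlockingSortBytes=128 * 1024 * 1024,
)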

Issue Analytics

  • State: open
  • Created: 7 years ago
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

2 reactions
JohnWelborn commented, Dec 13, 2016

I set it to 128 MB and it is working. But this error could come back, so I’m curious whether there is a solution with indexes, or whether this points to the need for a new index.
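
For anyone applying the same workaround, the effective value can be read back with getParameter; a small pymongo sketch, assuming access to the admin database:

# Sketch: confirm the new blocking-sort limit is in effect.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
result = client.admin.command(
    "getParameter", 1,
    internalQueryExecMaxBlockingSortBytes=1,
)
print(result["internalQueryExecMaxBlockingSortBytes"])  # 134217728 after raising it to 128 MB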

0 reactions
stale[bot] commented, Mar 12, 2019

Thanks for contributing to this issue. As it has been 90 days since the last activity, we are automatically marking it as stale. If this issue is no longer relevant or applicable (for example, the problem has been fixed in a newer version), please close the issue or let us know so we can close it. If the issue is still relevant, there is nothing you need to do, but if you have any additional details or context that would help us when working on this issue, please add them as a comment.

Read more comments on GitHub >

Top Results From Across the Web

Cause of poor performance - 1.7 million records - MongoDB
The response time of queries for that amount of data is too much. 200 seconds for 1.7 million of records. Here the explain...
Read more >
How to correctly execute find query with comparing on long ...
By first query I mean the repository method execution in my app integration test,by second - execution the query in mongo shell. The...
Read more >
[SERVER-54710] Large number of $or clauses can create ...
When the profiler entry is too large it can trigger the "BSONObjectTooLarge" error and causing such a query fail to execute. Error: error:...
Read more >
Monitoring MongoDB Performance Metrics (WiredTiger)
MongoDB creates a checkpoint only every 60 seconds, which means that, without journaling turned on, a failure may cause data loss of up...
Read more >
Troubleshoot query issues when using the Azure Cosmos DB ...
The number of unique shardKeyRangeId values is the number of physical partitions where the query needed to be executed. In this example, the ......
Read more >
