
GroupBy array based result rows


Motivation

GroupBy queries internally represent result rows as MapBasedRow objects, which have the following two fields:

private final DateTime timestamp;        // bucketed row timestamp
private final Map<String, Object> event; // field name -> value, for dimensions and aggregators

As a result, we need to do relatively expensive Map put and get operations (typically these are HashMaps or LinkedHashMaps) at many points: when rows are first generated after each segment scan, when they are merged on historicals, when they are serialized and deserialized, and then when they are merged again on the broker.
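
To make the cost concrete, here is a minimal, hypothetical contrast of the two access patterns; the field name and position are invented for the example, and this is not actual Druid code:

import java.util.HashMap;
import java.util.Map;

public class RowAccessDemo {
  public static void main(String[] args) {
    // Map-based row: every field read pays for hashing and key equality.
    Map<String, Object> event = new HashMap<>();
    event.put("countryName", "US");
    Object fromMap = event.get("countryName");

    // Array-based row: callers know each field's position up front,
    // so a read is a single index into an Object[].
    Object[] row = {"US"};
    int countryNamePosition = 0;
    Object fromArray = row[countryNamePosition];

    System.out.println(fromMap + " / " + fromArray);
  }
}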

The overhead is especially noticeable when the result set of the groupBy query is large.

See also #6389.

Proposed changes

  1. Create a ResultRow class that simply wraps an Object[] and allows position-based access (a minimal sketch follows this list).
  2. Modify the GroupBy query implementation to use ResultRow throughout, rather than Row / MapBasedRow.
  3. Add ObjectMapper decorateObjectMapper(ObjectMapper, QueryType) to QueryToolChest, to aid in implementing the compatibility plan described in “Operational impact” below. QueryResource would use it so it could serialize results into either arrays or maps depending on the value of resultAsArray. DirectDruidClient would use it so it could deserialize results into ResultRow regardless of whether they originated as ResultRows or MapBasedRows. (By the way, the serialized form of a ResultRow would be a simple JSON array.)
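
As a rough illustration of item 1, a positional row wrapper could look something like the sketch below. This is a guess at the shape, not the class as it actually landed in Druid; method names are illustrative.

// Hypothetical sketch of a positional result row.
public final class ResultRow {
  private final Object[] row;

  private ResultRow(Object[] row) {
    this.row = row;
  }

  public static ResultRow of(Object... values) {
    return new ResultRow(values);
  }

  // Positional access replaces name-based Map lookups.
  public Object get(int index) {
    return row[index];
  }

  public void set(int index, Object value) {
    row[index] = value;
  }

  public int length() {
    return row.length;
  }

  // The underlying array doubles as the wire form: a plain JSON array.
  public Object[] getArray() {
    return row;
  }
}

Keeping the wrapper this thin preserves most of the memory benefit of a bare Object[] while making method signatures self-documenting, which is the trade-off weighed under “Rationale” below.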

Rationale

Some other potential approaches that I considered, and did not go with, include:

  • Creating an ArrayBasedRow that implements org.apache.druid.data.input.Row (just like MapBasedRow does). The reason for avoiding this is that the interface is all about retrieving fields by name – getRaw(String dimension), etc – and I wanted to do positional access instead.
  • Using Object[] instead of a wrapper ResultRow around the Object[]. It would have saved a little memory, but I thought the benefits of type-safety (it’s clear what ResultRow means when it appears in method signatures) and a nicer API would be worth it.

Operational impact

The format of data in the query cache would not change.

The wire format of groupBy results would change (this is part of the point of the change) but I plan to do this with no compatibility impact, by adding a new query context flag resultAsArray that defaults to false. If false, Druid would use the array-based result rows for in-memory operations, but then convert them to MapBasedRows for serialization purposes, keeping the wire format compatible. If true, Druid would use array-based result rows for serialization too.

I’d have brokers always set resultAsArray to true on queries they send down to historicals. Since we tell cluster operators to update historicals first, by the time the broker is updated we can assume the historicals will know how to interpret the option. Users would also be able to set resultAsArray once brokers are updated, and receive array-based results themselves.
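
For example, once brokers are updated, a client could opt in through the query context. The query below is a sketch; the datasource, dimension, and aggregator are placeholders:

{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "intervals": ["2019-07-01/2019-07-08"],
  "granularity": "all",
  "dimensions": ["channel"],
  "aggregations": [
    { "type": "longSum", "name": "edits", "fieldName": "count" }
  ],
  "context": { "resultAsArray": true }
}

With granularity "all", each result row would then come back as a plain JSON array such as ["#en.wikipedia", 1234], with no timestamp element (see the ordering rules in the comments below).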

So, due to the above design, there should be no operational impact.

Test plan

Existing unit tests will cover a lot of this. In addition, I plan to test on live clusters, especially the compatibility stuff.


Top GitHub Comments

gianm commented on Jul 21, 2019 (1 reaction):

The idea is that the row order would be determined solely by the granularity, dimensions, aggregators, and post-aggregators, in the following way:

  1. If granularity != ALL then the first element is a row timestamp. Otherwise, the timestamp is omitted.
  2. The next set of elements are each dimension, in order.
  3. Then aggregators.
  4. Then post-aggregators. These might be omitted if they haven’t been computed yet (e.g. they won’t be included in the ResultRow objects that come from the per-segment engine).

There wouldn’t be headers; callers would be expected to know which element is which based on the above rules.
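
A hypothetical illustration of how a caller might resolve positions under those rules; the helper and key names here are invented for the example:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ResultRowPositions {
  // Maps each output column to its index in the row array, following the
  // ordering rules above: optional timestamp, then dimensions, then
  // aggregators, then (possibly absent) post-aggregators.
  static Map<String, Integer> positions(
      boolean granularityIsAll,
      List<String> dimensions,
      List<String> aggregators,
      List<String> postAggregators)
  {
    Map<String, Integer> out = new LinkedHashMap<>();
    int i = 0;
    if (!granularityIsAll) {
      out.put("timestamp", i++); // hypothetical key; present only when granularity != ALL
    }
    for (String name : dimensions) {
      out.put(name, i++);
    }
    for (String name : aggregators) {
      out.put(name, i++);
    }
    for (String name : postAggregators) {
      out.put(name, i++);
    }
    return out;
  }

  public static void main(String[] args) {
    // granularity = ALL, one dimension, one aggregator, no post-aggregators:
    // prints {channel=0, edits=1}
    System.out.println(positions(true, List.of("channel"), List.of("edits"), List.of()));
  }
}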

gianm commented on Jul 21, 2019 (0 reactions):

Would potentially be a nice option for every query type though. I bet for most TopNs granularity is “all” and so the timestamp could be omitted.
