[discuss] Data Transform specs
See the original GitHub issue.
Appendix: Data Transform Research
Plotly Transforms
No data transform libraries have been found for Plotly.
Vega Transforms
- https://github.com/vega/vega/wiki/Data-Transforms (v2)
- https://vega.github.io/vega/docs/transforms/ (v3)
Vega-provided Data Transforms can be used to manipulate datasets before rendering a visualisation. For example, one may need to aggregate or filter a dataset (there are many transform types; see the links above) and display the graph only after that. Another use case is creating a new dataset by applying various calculations to an old one.
Transforms are usually defined in a `transform` array inside a `data` property.
“Transforms that do not filter or generate new data objects can be used within the transform array of a mark definition to specify post-encoding transforms.”
Examples:
Filtering
https://vega.github.io/vega-editor/?mode=vega&spec=parallel_coords
This example keeps only the rows that have both `Horsepower` and `Miles_per_Gallon` fields.
```json
{
  "data": [
    {
      "name": "cars",
      "url": "data/cars.json",
      "transform": [
        {
          "type": "filter",
          "test": "datum.Horsepower && datum.Miles_per_Gallon"
        }
      ]
    }
  ]
}
```
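To make the filter semantics concrete, here is a plain-Python analogue of what this transform computes (this is illustrative only, not Vega code; the data rows are invented). As in the Vega expression, a row is kept when both fields are truthy, so null and zero values are dropped:

```python
# Invented sample rows standing in for data/cars.json.
cars = [
    {"Name": "ford torino", "Horsepower": 140, "Miles_per_Gallon": 17},
    {"Name": "mystery car", "Horsepower": None, "Miles_per_Gallon": 24},
    {"Name": "zero hp", "Horsepower": 0, "Miles_per_Gallon": 30},
]

def vega_filter(rows, test):
    """Keep only rows for which the predicate returns a truthy value."""
    return [datum for datum in rows if test(datum)]

kept = vega_filter(cars, lambda d: d["Horsepower"] and d["Miles_per_Gallon"])
print([d["Name"] for d in kept])  # ['ford torino']
```

Note that, as with the JavaScript expression, a `Horsepower` of `0` is falsy and is filtered out along with nulls.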
Geopath, aggregate, lookup, filter, sort, voronoi and linkpath
https://vega.github.io/vega-editor/?mode=vega&spec=airports
This example uses a lot of transforms: in some cases only a single transform is applied to a dataset, in others a sequence of transforms.
In the first dataset, it applies a `geopath` transform, which maps GeoJSON features to SVG path strings. It uses the `albersUsa` projection type (see the Vega documentation on projections).
In the second dataset, it applies a `sum` operation to the “count” field and outputs the result as a “flights” field.
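A plain-Python sketch of that aggregate step may help (illustrative only, not Vega code; the traffic rows are invented). It groups rows by “origin” and sums “count” into a new “flights” field:

```python
from collections import defaultdict

# Invented sample rows standing in for data/flights-airport.csv.
traffic = [
    {"origin": "SEA", "destination": "SFO", "count": 100},
    {"origin": "SEA", "destination": "LAX", "count": 50},
    {"origin": "JFK", "destination": "SFO", "count": 70},
]

def aggregate_sum(rows, groupby, field, as_field):
    """Group rows by one key and sum one field, like the aggregate transform."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[groupby]] += row[field]
    return [{groupby: k, as_field: v} for k, v in totals.items()]

print(aggregate_sum(traffic, "origin", "count", "flights"))
# [{'origin': 'SEA', 'flights': 150}, {'origin': 'JFK', 'flights': 70}]
```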
In the third dataset:
- It looks up its “iata” field against the “origin” field of the “traffic” dataset; matching values are output as a “traffic” field.
- Next, it filters out all rows whose “traffic” value is null.
- After that, it applies a `geo` transform, with the same projection as the `geopath` transform in the first dataset above.
- Next, it filters out rows whose layout_x or layout_y values are null.
- Then, it sorts the dataset by the traffic.flights field in descending order.
- Finally, it applies a `voronoi` transform to compute a Voronoi diagram based on the “layout_x” and “layout_y” fields.
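The lookup/filter/sort chain described above can be sketched in plain Python (again illustrative only, not Vega code; the rows are invented). Each airport is matched against the aggregated traffic by key, unmatched airports are dropped, and the rest are sorted by flight count:

```python
# Invented sample rows: airports and the aggregated traffic dataset.
airports = [
    {"iata": "SEA", "name": "Seattle-Tacoma"},
    {"iata": "XNA", "name": "Northwest Arkansas"},
]
traffic = [{"origin": "SEA", "flights": 150}]

def lookup(rows, on, on_key, key, as_field):
    """Attach the matching row from `on` (or None) to each input row."""
    index = {r[on_key]: r for r in on}
    return [{**row, as_field: index.get(row[key])} for row in rows]

joined = lookup(airports, traffic, "origin", "iata", "traffic")
# the subsequent filter drops airports with no traffic match
joined = [d for d in joined if d["traffic"] is not None]
# and the sort orders by traffic.flights, descending
joined.sort(key=lambda d: d["traffic"]["flights"], reverse=True)
print([d["iata"] for d in joined])  # ['SEA']
```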
In the last dataset:
- First, it keeps only rows for which there is a signal called “hover” (specified in the Vega spec’s “signals” property) whose “iata” attribute matches the dataset’s “origin” field.
- Next, it looks up its “origin” and “destination” fields against the “airports” dataset’s “iata” field; the output fields are saved as “_source” and “_target”.
- It then keeps only rows whose “_source” and “_target” values are truthy (not null).
- Finally, a `linkpath` transform creates visual links between nodes (see the Vega documentation on linkpath).
```json
{
  "data": [
    {
      "name": "states",
      "url": "data/us-10m.json",
      "format": {"type": "topojson", "feature": "states"},
      "transform": [
        {
          "type": "geopath", "projection": "albersUsa",
          "scale": 1200, "translate": [450, 280]
        }
      ]
    },
    {
      "name": "traffic",
      "url": "data/flights-airport.csv",
      "format": {"type": "csv", "parse": "auto"},
      "transform": [
        {
          "type": "aggregate", "groupby": ["origin"],
          "summarize": [{"field": "count", "ops": ["sum"], "as": ["flights"]}]
        }
      ]
    },
    {
      "name": "airports",
      "url": "data/airports.csv",
      "format": {"type": "csv", "parse": "auto"},
      "transform": [
        {
          "type": "lookup", "on": "traffic", "onKey": "origin",
          "keys": ["iata"], "as": ["traffic"]
        },
        {
          "type": "filter",
          "test": "datum.traffic != null"
        },
        {
          "type": "geo", "projection": "albersUsa",
          "scale": 1200, "translate": [450, 280],
          "lon": "longitude", "lat": "latitude"
        },
        {
          "type": "filter",
          "test": "datum.layout_x != null && datum.layout_y != null"
        },
        { "type": "sort", "by": "-traffic.flights" },
        { "type": "voronoi", "x": "layout_x", "y": "layout_y" }
      ]
    },
    {
      "name": "routes",
      "url": "data/flights-airport.csv",
      "format": {"type": "csv", "parse": "auto"},
      "transform": [
        { "type": "filter", "test": "hover && hover.iata == datum.origin" },
        {
          "type": "lookup", "on": "airports", "onKey": "iata",
          "keys": ["origin", "destination"], "as": ["_source", "_target"]
        },
        { "type": "filter", "test": "datum._source && datum._target" },
        { "type": "linkpath" }
      ]
    }
  ]
}
```
Further research on Vega transforms
https://github.com/vega/vega-dataflow-examples/
It is quite difficult for me to read the code, as there is not enough documentation. I have included the simplest example here:
`vega-dataflow.js` contains Dataflow, all transforms, and Vega’s utilities.
```html
<!DOCTYPE HTML>
<html>
  <head>
    <title>Dataflow CountPattern</title>
    <script src="../../build/vega-dataflow.js"></script>
    <style>
      body { margin: 10px; font-family: Helvetica Neue, Arial; font-size: 14px; }
      textarea { width: 800px; height: 200px; }
      pre { font-family: Monaco; font-size: 10px; }
    </style>
  </head>
  <body>
    <textarea id="text"></textarea><br/>
    <input id="slider" type="range" min="2" max="10" value="4"/>
    Frequency Threshold<br/>
    <pre id="output"></pre>
  </body>
</html>
```
`df` is a Dataflow instance on which we register (`.add`) functions and parameters, as in the script below. Transforms are added the same way, and we can pass different parameters to each transform depending on its requirements. Event handlers can be added using the `.on` method of the Dataflow instance.
```js
var tx = vega.transforms; // all transforms
var out = document.querySelector('#output');
var area = document.querySelector('#text');

area.value = [
  "Despite myriad tools for visualizing data, there remains a gap between the notational efficiency of high-level visualization systems and the expressiveness and accessibility of low-level graphical systems."
].join('\n\n');

var stopwords = "(i|me|my|myself|we|us|our|ours|ourselves|you|your|yours|yourself|yourselves|he|him|his)";
var get = vega.field('data');

function readText(_, pulse) {
  if (this.value) pulse.rem = this.value;
  return pulse.source = pulse.add = [vega.ingest(area.value)];
}

function threshold(_) {
  var freq = _.freq,
      f = function(t) { return t.count >= freq; };
  return (f.fields = ['count'], f);
}

function updatePage() {
  out.innerText = c1.value.slice()
    .sort(function(a, b) {
      return (b.count - a.count)
        || (b.text > a.text ? -1 : a.text > b.text ? 1 : 0);
    })
    .map(function(t) {
      return t.text + ': ' + t.count;
    })
    .join('\n');
}

var df = new vega.Dataflow(), // create a new Dataflow instance
    // then add various operators into the Dataflow instance:
    ft = df.add(4),                     // word frequency threshold
    ff = df.add(threshold, {freq: ft}),
    rt = df.add(readText),
    // add transforms (tx):
    cp = df.add(tx.CountPattern, {field: get, case: 'lower',
        pattern: '[\\w\']{2,}', stopwords: stopwords, pulse: rt}),
    cc = df.add(tx.Collect, {pulse: cp}),
    fc = df.add(tx.Filter, {expr: ff, pulse: cc}),
    c1 = df.add(tx.Collect, {pulse: fc}),
    up = df.add(updatePage, {pulse: c1});

df.on(df.events(area, 'keyup').debounce(250), rt)
  .on(df.events('#slider', 'input'), ft, function(_, e) { return +e.target.value; })
  .run();
```
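What this dataflow computes can be summarised without the Vega API at all. The following plain-Python sketch reproduces the CountPattern → Filter → sort pipeline (tokenise, drop stopwords, count, apply a frequency threshold, sort by count); the stopword list is shortened for illustration:

```python
import re
from collections import Counter

# Shortened, illustrative stopword list (the original uses a longer regex).
STOPWORDS = {"i", "me", "my", "we", "us", "our", "you", "your", "he", "him", "his"}

def count_pattern(text, pattern=r"[\w']{2,}", freq=2):
    """Count word occurrences, drop stopwords, keep words at/above freq,
    and sort by count descending then alphabetically (as updatePage does)."""
    words = re.findall(pattern, text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    kept = [(w, n) for w, n in counts.items() if n >= freq]
    return sorted(kept, key=lambda t: (-t[1], t[0]))

print(count_pattern("the cat saw the cat and the dog", freq=2))
# [('the', 3), ('cat', 2)]
```

The slider in the HTML example corresponds to the `freq` parameter here: raising it prunes rarer words from the output.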
Data Package Pipelines (DPP) transforms
DPP provides a number of transforms that can be applied to a dataset. However, these transforms cannot run inside a browser, as the library requires Python.
Below is a copy-paste from DPP docs:
concatenate
Concatenates a number of streamed resources and converts them to a single resource.
Parameters:
- `sources` - Which resources to concatenate. Same semantics as `resources` in `stream_remote_resources`. If omitted, all resources in the datapackage are concatenated. Resources to concatenate must appear in consecutive order within the data-package.
- `target` - Target resource to hold the concatenated data. Should define at least the following properties:
  - `name` - name of the resource
  - `path` - path in the data-package for this file.

  If omitted, the target resource will receive the name `concat` and will be saved at `data/concat.csv` in the datapackage.
- `fields` - Mapping of fields between the sources and the target, so that the keys are the target field names, and values are lists of source field names. This mapping is used to create the target resource’s schema. Note that the target field name is always assumed to be mapped to itself.
Example:
```yaml
- run: concatenate
  parameters:
    target:
      name: multi-year-report
      path: data/multi-year-report.csv
    sources: 'report-year-20[0-9]{2}'
    fields:
      activity: []
      amount: ['2009_amount', 'Amount', 'AMOUNT [USD]', '$$$']
```
In this example we concatenate all resources that look like `report-year-<year>`, and output them to the `multi-year-report` resource.
The output contains two fields:
- `activity`, which is called `activity` in all sources
- `amount`, which has varying names in different resources (e.g. `Amount`, `2009_amount`, `amount`, etc.)
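The field-mapping rule (each target field maps to itself plus the listed source aliases) can be sketched in plain Python. This is a hedged illustration of the semantics, not DPP’s implementation; the resource rows are invented:

```python
# Target field names mapped to lists of alternative source field names.
fields = {"activity": [], "amount": ["2009_amount", "Amount"]}

# Invented rows from two yearly "resources" with differently named columns.
report_2009 = [{"activity": "roads", "2009_amount": 100}]
report_2010 = [{"activity": "roads", "Amount": 120}]

def concatenate(sources, fields):
    """Stream all source rows into one list, renaming fields per the mapping."""
    rows = []
    for source in sources:
        for row in source:
            out = {}
            for target, aliases in fields.items():
                # a target field is assumed to map to itself, then to aliases
                for name in [target] + aliases:
                    if name in row:
                        out[target] = row[name]
                        break
                else:
                    out[target] = None
            rows.append(out)
    return rows

print(concatenate([report_2009, report_2010], fields))
# [{'activity': 'roads', 'amount': 100}, {'activity': 'roads', 'amount': 120}]
```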
join
Joins two streamed resources.
“Joining” in our case means taking the target resource, and adding fields to each of its rows by looking up data in the source resource.
A special case of the join operation is when there is no target stream, and all unique rows from the source are used to create it. This mode is called deduplication mode: the target resource will be created, and deduplicated rows from the source will be added to it.
Parameters:
- `source` - information regarding the source resource
  - `name` - name of the resource
  - `key` - One of:
    - List of field names which should be used as the lookup key
    - String, which would be interpreted as a Python format string used to form the key (e.g. `{<field_name_1>}:{field_name_2}`)
  - `delete` - delete from data-package after joining (`False` by default)
- `target` - Target resource to hold the joined data. Should define at least the following properties:
  - `name` - as in `source`
  - `key` - as in `source`, or `null` for creating the target resource and performing deduplication
- `fields` - mapping of fields from the source resource to the target resource. Keys should be field names in the target resource. Values can define two attributes:
  - `name` - field name in the source (by default the same as the target field name)
  - `aggregate` - aggregation strategy (how to handle multiple source rows with the same key). Can take the following options:
    - `sum` - sum the aggregated values. For numeric values it’s the arithmetic sum, for strings the concatenation of strings; other types will error.
    - `avg` - calculate the average of aggregated values. For numeric values it’s the arithmetic average; other types will error.
    - `max` - calculate the maximum of aggregated values. For numeric values it’s the arithmetic maximum, for strings the dictionary maximum; other types will error.
    - `min` - calculate the minimum of aggregated values. For numeric values it’s the arithmetic minimum, for strings the dictionary minimum; other types will error.
    - `first` - take the first value encountered
    - `last` - take the last value encountered
    - `count` - count the number of occurrences of a specific key. For this method, specifying `name` is not required. In case it is specified, `count` will count the number of non-null values for that source field.
    - `set` - collect all distinct values of the aggregated field, unordered
    - `array` - collect all values of the aggregated field, in order of appearance
    - `any` - pick any value.

    By default, `aggregate` takes the `any` value.

  If neither `name` nor `aggregate` needs to be specified, the mapping can map to the empty object `{}` or to `null`.
- `full` - Boolean:
  - If `True` (the default), failed lookups in the source will result in “null” values at the source.
  - If `False`, failed lookups in the source will result in dropping the row from the target.
Important: the “source” resource must appear before the “target” resource in the data-package.
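The core join semantics (index the source by key, copy mapped fields into matching target rows, with `full` deciding what happens on failed lookups) can be sketched in plain Python. This is a hedged illustration, not DPP’s implementation; the rows are invented:

```python
# Invented rows standing in for the two resources.
world_population = [{"country_code": "UK", "census_2015": 64715810}]
country_gdp = [{"CC": "UK", "GDP": 1832318}, {"CC": "XX", "GDP": 1}]

def join(source, source_key, target, target_key, fields, full=True):
    """fields maps target field names to source field names."""
    index = {row[source_key]: row for row in source}
    result = []
    for row in target:
        match = index.get(row[target_key])
        if match is None and not full:
            continue  # full=False: drop target rows with failed lookups
        out = dict(row)
        for target_field, source_field in fields.items():
            out[target_field] = match[source_field] if match else None
        result.append(out)
    return result

print(join(world_population, "country_code", country_gdp, "CC",
           {"population": "census_2015"}, full=False))
# [{'CC': 'UK', 'GDP': 1832318, 'population': 64715810}]
```

With `full=True`, the unmatched `XX` row would instead be kept with `population` set to `None`.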
Examples:
```yaml
- run: join
  parameters:
    source:
      name: world_population
      key: ["country_code"]
      delete: yes
    target:
      name: country_gdp_2015
      key: ["CC"]
    fields:
      population:
        name: "census_2015"
    full: true
```
The above example aims to create a package containing the GDP and Population of each country in the world.
We have one resource (`world_population`) with data that looks like:

| country_code | country_name | census_2000 | census_2015 |
|---|---|---|---|
| UK | United Kingdom | 58857004 | 64715810 |
| … | | | |
And another resource (`country_gdp_2015`) with data that looks like:

| CC | GDP (£m) | Net Debt (£m) |
|---|---|---|
| UK | 1832318 | 1606600 |
| … | | |
The `join` command will match rows in both datasets based on the `country_code` / `CC` fields, and then copy the value of the `census_2015` field into a new `population` field.
The resulting data package will have the `world_population` resource removed and the `country_gdp_2015` resource looking like:

| CC | GDP (£m) | Net Debt (£m) | population |
|---|---|---|---|
| UK | 1832318 | 1606600 | 64715810 |
| … | | | |
A more complex example:
```yaml
- run: join
  parameters:
    source:
      name: screen_actor_salaries
      key: "{production} ({year})"
    target:
      name: mgm_movies
      key: "{title}"
    fields:
      num_actors:
        aggregate: 'count'
      average_salary:
        name: salary
        aggregate: 'avg'
      total_salaries:
        name: salary
        aggregate: 'sum'
    full: false
```
This example aims to analyse salaries for screen actors in the MGM studios.
Once more, we have one resource (`screen_actor_salaries`) with data that looks like:

| year | production | actor | salary |
|---|---|---|---|
| 2016 | Vertigo 2 | Mr. T | 15000000 |
| 2016 | Vertigo 2 | Robert Downey Jr. | 7000000 |
| 2015 | The Fall - Resurrection | Jeniffer Lawrence | 18000000 |
| 2015 | Alf - The Return to Melmack | The Rock | 12000000 |
| … | | | |
And another resource (`mgm_movies`) with data that looks like:

| title | director | producer |
|---|---|---|
| Vertigo 2 (2016) | Lindsay Lohan | Lee Ka Shing |
| iRobot - The Movie (2018) | Mr. T | Mr. T |
| … | | |
The `join` command will match rows in both datasets based on the movie name and production year. Notice how we overcome the incompatible fields by using different key patterns.
The resulting dataset could look like:

| title | director | producer | num_actors | average_salary | total_salaries |
|---|---|---|---|---|---|
| Vertigo 2 (2016) | Lindsay Lohan | Lee Ka Shing | 2 | 11000000 | 22000000 |
| … | | | | | |
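The aggregation strategies used here (`count`, `avg`, `sum`, plus the format-string keys) can be sketched in plain Python, reproducing the Vertigo 2 row above. This is an illustration of the semantics, not DPP code; the rows come from the example tables:

```python
# Source and target rows taken from the example tables above.
salaries = [
    {"production": "Vertigo 2", "year": 2016, "salary": 15000000},
    {"production": "Vertigo 2", "year": 2016, "salary": 7000000},
]
movies = [{"title": "Vertigo 2 (2016)"}]

# Build lookup groups using the source key pattern "{production} ({year})".
groups = {}
for row in salaries:
    groups.setdefault("{production} ({year})".format(**row), []).append(row)

# For each target row ("{title}" key), reduce its group with the strategies.
result = []
for movie in movies:
    values = [r["salary"] for r in groups.get(movie["title"], [])]
    result.append({**movie,
                   "num_actors": len(values),                   # aggregate: count
                   "average_salary": sum(values) / len(values), # aggregate: avg
                   "total_salaries": sum(values)})              # aggregate: sum
print(result)
# [{'title': 'Vertigo 2 (2016)', 'num_actors': 2,
#   'average_salary': 11000000.0, 'total_salaries': 22000000}]
```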
Issue Analytics
- State:
- Created: 7 years ago
- Comments: 7 (6 by maintainers)
Top GitHub Comments
hi @ppKrauss
No, this is targeted at specifying data transformations for views on data like visualisations.
For a framework around traceability of data sources (data provenance), please see our Pipelines framework.
@pwalsh rather than close I’ve moved to icebox as this is a genuine issue that I think we will need for the views spec very soon.