ClueWeb22
Dataset Information:
ClueWeb22 is the newest in the Lemur Project’s ClueWeb line of datasets that support research on information retrieval, natural language processing and related human language technologies. This new dataset is being developed by the Lemur Project with significant assistance and support from Microsoft Corporation.
The ClueWeb22 dataset has several novel characteristics compared with earlier ClueWeb datasets.
- It is much larger.
- Documents are of higher quality.
- Documents are provided in several formats (HTML, clean text, screenshots).
- Document page analyses are provided that reveal where on a page text was displayed, and what was near it.
- The dataset includes a large set of crowdsourced queries and shallow relevance assessments (a pseudo search log).
Authors: Arnold Overwijk, Chenyan Xiong (@xiongchenyan), Jamie Callan (@jamiecallan), Cameron VandenBerg, Xiao Lucy Liu
Links to Resources:
- Website: https://lemurproject.org/clueweb22/index.php
- Documents spec: https://lemurproject.org/clueweb22/docspecs.php
- Queries spec: https://lemurproject.org/clueweb22/qryspecs.php
- SIRIP paper: https://doi.org/10.1145/3477495.3536321
Dataset ID(s) & supported entities:
- `clueweb22/a`: 200M docs, queries, qrels, scoreddocs?
- `clueweb22/b`: 2B docs, queries?, qrels?, scoreddocs?
- `clueweb22/l`: 10B docs, queries?, qrels?, scoreddocs?
Checklist
Mark each task once completed. All should be checked prior to merging a new dataset.
- Dataset definition (in `ir_datasets/datasets/clueweb22.py`)
- Tests (in `tests/integration/clueweb22.py`)
- Metadata generated (using the `ir_datasets generate_metadata` command; should appear in `ir_datasets/etc/metadata.json`)
- Documentation (in `ir_datasets/etc/clueweb22.yaml`)
- Documentation generated in https://github.com/seanmacavaney/ir-datasets.com/
- Downloadable content (in `ir_datasets/etc/downloads.json`). Manual download required.
- Download instructions added
- Download verification action (in `.github/workflows/verify_downloads.yml`). Only one needed per top-level dataset ID.
- Any small public files from NIST (or other potentially troublesome files) mirrored in https://github.com/seanmacavaney/irds-mirror/. Mirrored status properly reflected in `downloads.json`.
Additional comments/concerns/ideas/etc.
The dataset is planned to be used for shared tasks in the near future. I also personally think it would be very valuable to have in ir_datasets.
Open Questions
- Where can we get the topic tag mentioned in the paper?
- Is `VDOM-Paragraph` the same as `VDOM-Passage` in the WARC headers?
- What does the `?` in the inlink format's anchor type description mean?
Issue Analytics
- State:
- Created a year ago
- Reactions: 2
- Comments: 9 (4 by maintainers)
Top GitHub Comments
Sean is correct. Each warc.gz file is compressed by record and has a companion offset file. To get the HTML of a specific document, open the appropriate .warc.offset file (can be determined from the ClueWeb docid), fseek to find the byte offsets of the start/end of the document (also determined from the ClueWeb docid), open the .warc.gz file, fseek to the start of the document, read the bytes, and uncompress them.
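This record-at-a-time access pattern can be sketched with a toy archive. Note this is an illustration, not the real ClueWeb22 layout: the one-offset-per-line text format of the companion `.warc.offset` file is an assumption here (consult the document spec for the actual format).

```python
import gzip
import os
import tempfile

# Toy "record-compressed" archive: each record is an independent gzip
# member, with a companion offset file listing the start byte of every
# record plus the final end offset. This mimics (but is not guaranteed
# to match) ClueWeb22's .warc.gz / .warc.offset layout.
records = [b"<html>doc-0</html>", b"<html>doc-1</html>", b"<html>doc-2</html>"]

tmpdir = tempfile.mkdtemp()
warc_path = os.path.join(tmpdir, "sample.warc.gz")
offset_path = os.path.join(tmpdir, "sample.warc.offset")

offsets = []
with open(warc_path, "wb") as f:
    for rec in records:
        offsets.append(f.tell())      # start byte of this record
        f.write(gzip.compress(rec))   # each record is its own gzip member
    offsets.append(f.tell())          # end byte of the last record

# Assumed offset-file layout: plain text, one byte offset per line.
with open(offset_path, "w") as f:
    f.writelines(f"{off}\n" for off in offsets)

def read_record(index: int) -> bytes:
    """Seek directly to one record and decompress it, without scanning
    the rest of the archive."""
    with open(offset_path) as f:
        offs = [int(line) for line in f]
    start, end = offs[index], offs[index + 1]
    with open(warc_path, "rb") as f:
        f.seek(start)
        return gzip.decompress(f.read(end - start))

print(read_record(1))  # b'<html>doc-1</html>'
```

Because each record is a self-contained gzip member, decompression needs only the bytes between two consecutive offsets, which is what makes random access by docid cheap.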
We can provide data samples if you need them.
We are trying to apply this or a similar architecture to all other types of data in the dataset, so that everything can be accessed quickly given a ClueWeb docid.
Best,
Jamie
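Since everything above is keyed by ClueWeb docid, the pieces needed to locate a file and record can be pulled out of the id itself. A minimal sketch, assuming ids follow a `clueweb22-<lang+stream>-<file>-<record>` pattern (e.g. `clueweb22-en0004-52-01234`; the exact field widths used here are assumptions, so check the document spec):

```python
import re

# Assumed ClueWeb22 docid shape: language code, 4-digit stream,
# 2-digit file sequence, 5-digit record number.
DOCID_RE = re.compile(
    r"^clueweb22-(?P<lang>[a-z]+)(?P<stream>\d{4})-(?P<file>\d{2})-(?P<record>\d{5})$"
)

def parse_docid(docid: str) -> dict:
    """Split a ClueWeb22-style docid into the fields that would locate
    its .warc.gz file and the record within it."""
    m = DOCID_RE.match(docid)
    if m is None:
        raise ValueError(f"not a ClueWeb22-style docid: {docid!r}")
    return {k: (v if k == "lang" else int(v)) for k, v in m.groupdict().items()}

print(parse_docid("clueweb22-en0004-52-01234"))
# {'lang': 'en', 'stream': 4, 'file': 52, 'record': 1234}
```

With such a mapping, the `lang`/`stream`/`file` fields select the `.warc.gz` and `.warc.offset` pair, and `record` indexes into the offset file.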
On 10/5/2022 7:09 AM, Sean MacAvaney wrote:
As the categories are subsets of the larger ones, I’ve now also added “views” that can, for example, be used to parse just the plain text from the B category. The keys would be `clueweb22/b/as-l`, `clueweb22/b/as-a`, `clueweb22/b/as-l/en`, `clueweb22/b/as-a/en`, and so on. To avoid cluttering the list of dataset IDs too much, we could also just skip the language-specific versions for the “views”.