
Add example YAML definitions to common dataset Python docstrings


Description

Users often ask for an easy way to look up what the relevant YAML dataset configuration for use in the DataCatalog would look like. We include a series of examples in this section of the documentation, but these are not especially easy to (1) link to, as they aren't under headings, or (2) find via a search engine, for the same reason.

This would be useful for all datasets, but the highest priority are those that drive the most traffic to our documentation website (a sketch of the kind of catalog entry users are looking for follows this list):

  • kedro.extras.datasets.pandas.CSVDataSet
  • kedro.extras.datasets.spark.SparkDataSet
  • kedro.io.PartitionedDataSet
  • kedro.extras.datasets.pandas.ParquetDataSet
  • kedro.extras.datasets.pickle.PickleDataSet
  • kedro.extras.datasets.pandas.ExcelDataSet
  • kedro.extras.datasets.pandas.SQLQueryDataSet
  • kedro.extras.datasets.pandas.GBQTableDataSet
  • kedro.extras.datasets.spark.SparkHiveDataSet
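
For illustration, a minimal catalog.yml entry for the first of these could look like the sketch below (the dataset name cars and the file path are made up; load_args and save_args pass straight through to pandas.read_csv and DataFrame.to_csv):

cars:
  type: pandas.CSVDataSet
  filepath: data/01_raw/cars.csv
  load_args:
    sep: ","
  save_args:
    index: False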

Possible Implementation

def __init__(
        self,
        filepath: str,
        backend: str = "pickle",
        load_args: Dict[str, Any] = None,
        save_args: Dict[str, Any] = None,
        version: Version = None,
        credentials: Dict[str, Any] = None,
        fs_args: Dict[str, Any] = None,
    ) -> None:
        """Creates a new instance of ``PickleDataSet`` pointing to a concrete Pickle
        file on a specific filesystem. ``PickleDataSet`` supports four backends to
        serialize/deserialize objects: `pickle`, `joblib`, `dill`, and `compress_pickle`.

        Example YAML data catalog entry:
        >>> airplanes:
        >>>     type: pickle.PickleDataSet
        >>>     filepath: data/06_models/airplanes.pkl
        >>>     backend: pickle

        Args:
            filepath: Filepath in POSIX format to a Pickle file prefixed with a protocol like
                `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.
                The prefix should be any protocol supported by ``fsspec``.
                Note: `http(s)` doesn't support versioning.
            backend: Backend to use, must be one of ['pickle', 'joblib', 'dill', 'compress_pickle'].
                Defaults to 'pickle'.
        """


Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction

datajoely commented on Oct 6, 2021

@avan-sh I've also hacked something together that might help you draft these; hopefully it's useful:


import inspect

import anyconfig
from kedro.extras.datasets import pandas


def sample_yaml(ds):
    """Generate a stub YAML catalog entry for a dataset class, with type hints as comments."""
    kind = ds.__name__
    params = dict(inspect.signature(ds).parameters)
    # Map each constructor argument to an inline "# <annotation>" hint.
    hints = {k: " # " + inspect.formatannotation(v.annotation) for k, v in params.items()}
    samples = {"type": kind, **{k: "..." for k in params}}
    structure = {"example_" + kind.lower().replace("dataset", ""): samples}
    yaml_lines = anyconfig.dumps(structure, "yaml").split("\n")
    # Attach the matching hint to each rendered YAML line (keyed by line number).
    hint_lookup = {i: hints.get(x.strip().split(":")[0], "") for i, x in enumerate(yaml_lines)}
    return "\n".join(x + hint_lookup[i] for i, x in enumerate(yaml_lines)).replace("'", "")


print(sample_yaml(pandas.CSVDataSet))

Which produces:

example_csv:
  type: CSVDataSet
  filepath: ... # str
  load_args: ... # Dict[str, Any]
  save_args: ... # Dict[str, Any]
  version: ... # kedro.io.core.Version
  credentials: ... # Dict[str, Any]
  fs_args: ... # Dict[str, Any]

1 reaction

avan-sh commented on Oct 5, 2021

Took a stab at this for the CSV dataset, taking some inspiration from the old discussion. The new docs would look as below, and can now be hyperlinked to.

[screenshot of the proposed CSVDataSet documentation]
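
Along the lines of the PickleDataSet proposal above, such a docstring entry for CSVDataSet might read roughly as follows (a hypothetical sketch, not the exact wording from the screenshot; the dataset name and arguments are illustrative):

class CSVDataSet(AbstractVersionedDataSet):
    """``CSVDataSet`` loads/saves data from/to a CSV file using an underlying
    filesystem (e.g. local, S3, GCS). It uses pandas to handle the CSV file.

    Example YAML data catalog entry:
    >>> cars:
    >>>     type: pandas.CSVDataSet
    >>>     filepath: data/01_raw/cars.csv
    >>>     load_args:
    >>>         sep: ","
    """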

Would love to hear any suggestions for changes before I add it to other datasets.
