Test docstrings for parameters are equal
We recently merged a test that checks consistency between parameters in function signatures and their docstrings, using `sklearn.utils.testing.check_docstring_parameters`. I would like to have a function in `sklearn.utils.testing` which similarly makes use of numpydoc to check that parts of docstrings are identical among a set of objects. I would expect this helper to eventually be contributed back to numpydoc.
It might look something like:
```python
def assert_consistent_docs(objects,
                           include_params=None, exclude_params=None,
                           include_attribs=None, exclude_attribs=None,
                           include_returns=None, exclude_returns=None):
    """
    Check that types and descriptions of parameters, etc., are identical
    across objects. ``objects`` may be either ``NumpyDocString`` instances
    or objects (classes, functions, descriptors) with docstrings that can
    be parsed as numpydoc.

    By default, this asserts that any Parameters/Returns/Attributes entries
    sharing a name among the ``objects``' docstrings also have the same
    type specification and description (ignoring whitespace).

    The ``include_*`` and ``exclude_*`` parameters here are mutually
    exclusive, and specify a whitelist or blacklist, respectively, of
    parameter, attribute or return value names. May be '*' to
    include/exclude all from that section.
    """
    ... # do stuff ...
```
Then we could call it with:

```python
assert_consistent_docs([sklearn.metrics.precision_recall_fscore_support,
                        sklearn.metrics.precision_score,
                        sklearn.metrics.recall_score,
                        sklearn.metrics.f1_score,
                        sklearn.metrics.fbeta_score],
                       exclude_returns='*')
```
This will ensure (by making a test fail when the condition is not met) that all these related scoring functions have identical parameter descriptions (whitespace excepted) wherever they have identical parameters. (I’ve not actually checked whether this is or should be true of all these metric functions.) Most importantly, having such an assertion means we can rest assured that when we change the documentation of some parameter or return value, we will be forced to do so consistently.
Issue Analytics
- State:
- Created: 6 years ago
- Reactions: 1
- Comments: 6 (6 by maintainers)
Top GitHub Comments

> I would like to work on this. Will try to get back with a PR in a few days.

> I don’t think so, as we want to force the user to be explicit about either inclusion or exclusion.