
test_train_network.py failing on Travis CI


The builds (#379, #376) are failing for the recent PRs, as test_train_network.py is failing on Travis. Here is the error from the failed test:

=================================== FAILURES ===================================
_______ TestCore.test_train_model_runs_successfully_for_simplified_case ________
self = <test_train_network.TestCore object at 0x7f82f4afb790>
    @pytest.mark.integration
    def test_train_model_runs_successfully_for_simplified_case(self):
        # Note: This test is simply a mock test to ensure that the pipeline
        # runs successfully, and is not a test of the quality of the model
        # itself.
    
        train_model(
            str(self.trainingPath),
            str(self.modelPath),
            self.config_network,
>           debug_mode=True
            )
test/test_train_network.py:135: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
AxonDeepSeg/train_network.py:277: in train_model
    model.save(str(path_model) + "/model.hdf5")
../../../miniconda/envs/ads_venv/lib/python3.7/site-packages/keras/engine/network.py:1090: in save
    save_model(self, filepath, overwrite, include_optimizer)
../../../miniconda/envs/ads_venv/lib/python3.7/site-packages/keras/engine/saving.py:382: in save_model
    _serialize_model(model, f, include_optimizer)
../../../miniconda/envs/ads_venv/lib/python3.7/site-packages/keras/engine/saving.py:114: in _serialize_model
    layer_group[name] = val
../../../miniconda/envs/ads_venv/lib/python3.7/site-packages/keras/utils/io_utils.py:218: in __setitem__
    dataset = self.data.create_dataset(attr, val.shape, dtype=val.dtype)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <Closed HDF5 group>, name = b'cconv-d0-c0/convolution/conv2d_1/kernel:0'
shape = (3, 3, 1, 5), dtype = dtype('float32'), data = None
kwds = {'track_order': False}, group = <Closed HDF5 group>
    def create_dataset(self, name, shape=None, dtype=None, data=None, **kwds):
        """ Create a new HDF5 dataset
    
        name
            Name of the dataset (absolute or relative).  Provide None to make
            an anonymous dataset.
        shape
            Dataset shape.  Use "()" for scalar datasets.  Required if "data"
            isn't provided.
        dtype
            Numpy dtype or string.  If omitted, dtype('f') will be used.
            Required if "data" isn't provided; otherwise, overrides data
            array's dtype.
        data
            Provide data to initialize the dataset.  If used, you can omit
            shape and dtype arguments.
    
        Keyword-only arguments:
    
        chunks
            (Tuple or int) Chunk shape, or True to enable auto-chunking. Integers can
            be used for 1D shape.
    
        maxshape
            (Tuple or int) Make the dataset resizable up to this shape. Use None for
            axes you want to be unlimited. Integers can be used for 1D shape.
        compression
            (String or int) Compression strategy.  Legal values are 'gzip',
            'szip', 'lzf'.  If an integer in range(10), this indicates gzip
            compression level. Otherwise, an integer indicates the number of a
            dynamically loaded compression filter.
        compression_opts
            Compression settings.  This is an integer for gzip, 2-tuple for
            szip, etc. If specifying a dynamically loaded compression filter
            number, this must be a tuple of values.
        scaleoffset
            (Integer) Enable scale/offset filter for (usually) lossy
            compression of integer or floating-point data. For integer
            data, the value of scaleoffset is the number of bits to
            retain (pass 0 to let HDF5 determine the minimum number of
            bits necessary for lossless compression). For floating point
            data, scaleoffset is the number of digits after the decimal
            place to retain; stored values thus have absolute error
            less than 0.5*10**(-scaleoffset).
        shuffle
            (T/F) Enable shuffle filter.
        fletcher32
            (T/F) Enable fletcher32 error detection. Not permitted in
            conjunction with the scale/offset filter.
        fillvalue
            (Scalar) Use this value for uninitialized parts of the dataset.
        track_times
            (T/F) Enable dataset creation timestamps.
        track_order
            (T/F) Track attribute creation order if True. If omitted use
            global default h5.get_config().track_order.
        external
            (Iterable of tuples) Sets the external storage property, thus
            designating that the dataset will be stored in one or more
            non-HDF5 files external to the HDF5 file.  Adds each tuple
            of (name, offset, size) to the dataset's list of external files.
            Each name must be a str, bytes, or os.PathLike; each offset and
            size, an integer.  If only a name is given instead of an iterable
            of tuples, it is equivalent to [(name, 0, h5py.h5f.UNLIMITED)].
        allow_unknown_filter
            (T/F) Do not check that the requested filter is available for use.
            This should only be used with ``write_direct_chunk``, where the caller
            compresses the data before handing it to h5py.
        """
        if 'track_order' not in kwds:
            kwds['track_order'] = h5.get_config().track_order
    
        with phil:
            group = self
            if name:
>               if '/' in name:
E               TypeError: a bytes-like object is required, not 'str'
../../../miniconda/envs/ads_venv/lib/python3.7/site-packages/h5py/_hl/group.py:143: TypeError
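
The failing frame shows the mismatch directly: Keras’s io_utils hands the weight name to h5py as bytes (note name = b'cconv-d0-c0/convolution/conv2d_1/kernel:0' in the locals above), while h5py’s create_dataset tests it against a str literal. That '/' in name check, as far as I can tell, came in with h5py 3.0’s intermediate-group creation. A minimal sketch of the mismatch, reproduced outside h5py:

    # Keras serializes layer weight names as bytes; h5py 3.x's create_dataset
    # compares the name against a str literal, which Python 3 rejects.
    name = b'cconv-d0-c0/convolution/conv2d_1/kernel:0'
    '/' in name  # TypeError: a bytes-like object is required, not 'str'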

However, when I run the tests on my local machine, all of them, including test_train_network.py, pass. I don’t understand this disparity. Does anyone have a clue?

On my local machine, I ran pytest -v and all the tests ran successfully.
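
One quick way to chase this kind of disparity (a diagnostic sketch, not something from the original thread) is to print the resolved dependency versions in both environments and diff them:

    # Run this locally and on Travis; if a transitive dependency such as
    # h5py (pulled in by Keras/TensorFlow) resolves to different versions,
    # that would explain local-vs-CI drift.
    import h5py
    import keras
    print("h5py:", h5py.__version__)
    print("keras:", keras.__version__)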

System Requirements

OS: macOS Mojave

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

2 reactions
mathieuboudreau commented, Nov 2, 2020

That was it @jcohenadad - thanks!!

h5py’s version wasn’t pinned in our requirements file, so we were likely getting the latest release through Keras or TensorFlow’s installation rules. The solution in SCT worked as-is here as well. PR #382
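
For reference, a minimal sketch of that kind of fix, assuming the pin matches the one used in SCT (the exact constraint merged in PR #382 isn’t shown here, so h5py<3.0.0 is an assumption based on the 3.0 behaviour change above):

    # requirements.txt (hypothetical pin): keep h5py on the 2.x series so
    # Keras can save models with bytes-based weight names.
    h5py<3.0.0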

0 reactions
mathieuboudreau commented, Nov 2, 2020

> could it be related to neuropoly/spinalcordtoolbox#2987?

Interesting - thanks for the tip! Trying that now.
