
DeepChem Installation Failure

See the original GitHub issue: https://github.com/deepchem/deepchem/issues/2490

🐛 Bug

DeepChem, when installed and run with the following commands:

conda create -y --name deepchem python=3.8
conda activate deepchem
pip install tensorflow~=2.4
pip install deepchem
conda install -y -c conda-forge rdkit
pip install jupyter
pip install seaborn
git clone git@github.com:deepchem/DeepLearningLifeSciences.git

cd DeepLearningLifeSciences/Chapter11/
jupyter notebook 

Open chapter_11_02_erk2_graph_conv.ipynb
Cell->Run All

fails with the following error:

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-6-1c890c29d1d2> in <module>

      6     model = generate_graph_conv_model()
      7     train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(dataset)
----> 8     model.fit(train_dataset)
      9     train_scores = model.evaluate(train_dataset, metrics, transformers)
     10     training_score_list.append(train_scores["mean-matthews_corrcoef"])

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/deepchem/models/keras_model.py in fit(self, dataset, nb_epoch, max_checkpoints_to_keep, checkpoint_interval, deterministic, restore, variables, loss, callbacks, all_losses)
    318     The average loss over the most recent checkpoint interval
    319    """
--> 320     return self.fit_generator(
    321         self.default_generator(
    322             dataset, epochs=nb_epoch,

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/deepchem/models/keras_model.py in fit_generator(self, generator, max_checkpoints_to_keep, checkpoint_interval, restore, variables, loss, callbacks, all_losses)
    407         inputs = inputs[0]
    408 
--> 409       batch_loss = apply_gradient_for_batch(inputs, labels, weights, loss)
    410       current_step = self._global_step.numpy()
    411 

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    826     tracing_count = self.experimental_get_tracing_count()
    827     with trace.Trace(self._name) as tm:
--> 828       result = self._call(*args, **kwds)
    829       compiler = "xla" if self._experimental_compile else "nonXla"
    830       new_tracing_count = self.experimental_get_tracing_count()

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    869       # This is the first call of __call__, so we have to initialize.
    870       initializers = []
--> 871       self._initialize(args, kwds, add_initializers_to=initializers)
    872     finally:
    873       # At this point we know that the initialization is complete (or less

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    723     self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    724     self._concrete_stateful_fn = (
--> 725         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    726             *args, **kwds))
    727 

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2967       args, kwargs = None, None
   2968     with self._lock:
-> 2969       graph_function, _ = self._maybe_define_function(args, kwargs)
   2970     return graph_function
   2971 

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3359 
   3360           self._function_cache.missed.add(call_context_key)
-> 3361           graph_function = self._create_graph_function(args, kwargs)
   3362           self._function_cache.primary[cache_key] = graph_function
   3363 

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3194     arg_names = base_arg_names + missing_arg_names
   3195     graph_function = ConcreteFunction(
-> 3196         func_graph_module.func_graph_from_py_func(
   3197             self._name,
   3198             self._python_function,

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    988         _, original_func = tf_decorator.unwrap(python_func)
    989 
--> 990       func_outputs = python_func(*func_args, **func_kwargs)
    991 
    992       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    632             xla_context.Exit()
    633         else:
--> 634           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    635         return out
    636 

~/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    975           except Exception as e:  # pylint:disable=broad-except
    976             if hasattr(e, "ag_error_metadata"):
--> 977               raise e.ag_error_metadata.to_exception(e)
    978             else:
    979               raise

NotImplementedError: in user code:

    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/deepchem/models/keras_model.py:474 apply_gradient_for_batch  *
        grads = tape.gradient(batch_loss, vars)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/backprop.py:1080 gradient  **
        flat_grad = imperative_grad.imperative_grad(
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/imperative_grad.py:71 imperative_grad
        return pywrap_tfe.TFE_Py_TapeGradient(
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/eager/backprop.py:162 _gradient_function
        return grad_fn(mock_op, *out_grads)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/ops/math_grad.py:473 _UnsortedSegmentSumGrad
        return _GatherDropNegatives(grad, op.inputs[1])[0], None, None
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/ops/math_grad.py:439 _GatherDropNegatives
        array_ops.ones([array_ops.rank(gathered)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args, **kwargs)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:3120 ones
        output = _constant_if_small(one, shape, dtype, name)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2804 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:5 prod
        
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3030 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/numpy/core/fromnumeric.py:87 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /home/pwalters/anaconda3/envs/deepchem/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:852 __array__
        raise NotImplementedError(

    NotImplementedError: Cannot convert a symbolic Tensor (gradient_tape/private__graph_conv_keras_model/graph_gather/sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
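The traceback ends in TensorFlow's gradient code calling np.prod on a symbolic tensor shape. One quick check (not part of the original report, and only a guess at the cause) is to print the versions the notebook kernel actually imports, since this particular NotImplementedError is often associated with a NumPy build newer than the one TensorFlow 2.4 was tested against:

# Sanity check (not from the original report): show which versions the
# running kernel actually imports, since a mismatch between the NumPy
# loaded at runtime and the one pip reports is a common suspect here.
import numpy as np
import tensorflow as tf
import deepchem as dc

print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)
print("deepchem:", dc.__version__)

If the printed versions disagree with the environment listed below, the kernel is picking up a different installation than the one described in this report.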

The full environment is given by:

name: deepchem
channels:
  - conda-forge
  - soumith
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=1_gnu
  - boost=1.74.0=py38hc10631b_3
  - boost-cpp=1.74.0=hc6e9bd1_2
  - bzip2=1.0.8=h7f98852_4
  - ca-certificates=2020.12.5=ha878542_0
  - cairo=1.16.0=h6cf1ce9_1008
  - certifi=2020.12.5=py38h578d9bd_1
  - cycler=0.10.0=py_2
  - fontconfig=2.13.1=hba837de_1005
  - freetype=2.10.4=h0708190_1
  - gettext=0.19.8.1=h0b5b191_1005
  - greenlet=1.0.0=py38h709712a_0
  - icu=68.1=h58526e2_0
  - jpeg=9d=h36c2ea0_0
  - kiwisolver=1.3.1=py38h1fd1430_1
  - lcms2=2.12=hddcbb42_0
  - ld_impl_linux-64=2.33.1=h53a641e_7
  - libblas=3.9.0=8_openblas
  - libcblas=3.9.0=8_openblas
  - libffi=3.3=he6710b0_2
  - libgcc-ng=9.3.0=h2828fa1_19
  - libgfortran-ng=9.3.0=hff62375_19
  - libgfortran5=9.3.0=hff62375_19
  - libglib=2.68.1=h3e27bee_0
  - libgomp=9.3.0=h2828fa1_19
  - libiconv=1.16=h516909a_0
  - liblapack=3.9.0=8_openblas
  - libopenblas=0.3.12=pthreads_h4812303_1
  - libpng=1.6.37=h21135ba_2
  - libstdcxx-ng=9.3.0=h6de172a_19
  - libtiff=4.2.0=hdc55705_0
  - libuuid=2.32.1=h7f98852_1000
  - libwebp-base=1.2.0=h7f98852_2
  - libxcb=1.13=h7f98852_1003
  - libxml2=2.9.10=h72842e0_4
  - lz4-c=1.9.3=h9c3ff4c_0
  - matplotlib-base=3.4.1=py38hcc49a3a_0
  - ncurses=6.2=he6710b0_1
  - olefile=0.46=pyh9f0ad1d_1
  - openjpeg=2.4.0=hf7af979_0
  - openssl=1.1.1k=h7f98852_0
  - pcre=8.44=he1b5a44_0
  - pillow=8.1.2=py38ha0e1e83_1
  - pip=21.0.1=py38h06a4308_0
  - pixman=0.40.0=h36c2ea0_0
  - pthread-stubs=0.4=h36c2ea0_1001
  - pycairo=1.20.0=py38h323dad1_1
  - pyparsing=2.4.7=pyh9f0ad1d_0
  - python=3.8.8=hdb3f193_4
  - python-dateutil=2.8.1=py_0
  - python_abi=3.8=1_cp38
  - pytz=2021.1=pyhd8ed1ab_0
  - rdkit=2021.03.1=py38hf8acc3d_0
  - readline=8.1=h27cfd23_0
  - reportlab=3.5.66=py38hadf75a6_0
  - setuptools=52.0.0=py38h06a4308_0
  - six=1.15.0=pyh9f0ad1d_0
  - sqlalchemy=1.4.7=py38h497a2fe_0
  - sqlite=3.35.4=hdfb4753_0
  - tk=8.6.10=hbc83047_0
  - tornado=6.1=py38h497a2fe_1
  - wheel=0.36.2=pyhd3eb1b0_0
  - xorg-kbproto=1.0.7=h7f98852_1002
  - xorg-libice=1.0.10=h7f98852_0
  - xorg-libsm=1.2.3=hd9c2040_1000
  - xorg-libx11=1.7.0=h7f98852_0
  - xorg-libxau=1.0.9=h7f98852_0
  - xorg-libxdmcp=1.1.3=h7f98852_0
  - xorg-libxext=1.3.4=h7f98852_1
  - xorg-libxrender=0.9.10=h7f98852_1003
  - xorg-renderproto=0.11.1=h7f98852_1002
  - xorg-xextproto=7.3.0=h7f98852_1002
  - xorg-xproto=7.0.31=h7f98852_1007
  - xz=5.2.5=h7b6447c_0
  - zlib=1.2.11=h7b6447c_3
  - zstd=1.4.9=ha95c52a_0
  - pip:
    - absl-py==0.12.0
    - argon2-cffi==20.1.0
    - astunparse==1.6.3
    - async-generator==1.10
    - attrs==20.3.0
    - backcall==0.2.0
    - bleach==3.3.0
    - cachetools==4.2.1
    - cffi==1.14.5
    - chardet==4.0.0
    - decorator==5.0.6
    - deepchem==2.5.0
    - defusedxml==0.7.1
    - entrypoints==0.3
    - flatbuffers==1.12
    - gast==0.3.3
    - google-auth==1.28.1
    - google-auth-oauthlib==0.4.4
    - google-pasta==0.2.0
    - grpcio==1.32.0
    - h5py==2.10.0
    - idna==2.10
    - ipykernel==5.5.3
    - ipython==7.22.0
    - ipython-genutils==0.2.0
    - ipywidgets==7.6.3
    - jedi==0.18.0
    - jinja2==2.11.3
    - joblib==1.0.1
    - jsonschema==3.2.0
    - jupyter==1.0.0
    - jupyter-client==6.1.12
    - jupyter-console==6.4.0
    - jupyter-core==4.7.1
    - jupyterlab-pygments==0.1.2
    - jupyterlab-widgets==1.0.0
    - keras-preprocessing==1.1.2
    - markdown==3.3.4
    - markupsafe==1.1.1
    - mistune==0.8.4
    - nbclient==0.5.3
    - nbconvert==6.0.7
    - nbformat==5.1.3
    - nest-asyncio==1.5.1
    - notebook==6.3.0
    - numpy==1.19.5
    - oauthlib==3.1.0
    - opt-einsum==3.3.0
    - packaging==20.9
    - pandas==1.2.4
    - pandocfilters==1.4.3
    - parso==0.8.2
    - pexpect==4.8.0
    - pickleshare==0.7.5
    - prometheus-client==0.10.1
    - prompt-toolkit==3.0.18
    - protobuf==3.15.8
    - ptyprocess==0.7.0
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycparser==2.20
    - pygments==2.8.1
    - pyrsistent==0.17.3
    - pyzmq==22.0.3
    - qtconsole==5.0.3
    - qtpy==1.9.0
    - requests==2.25.1
    - requests-oauthlib==1.3.0
    - rsa==4.7.2
    - scikit-learn==0.24.1
    - scipy==1.6.2
    - seaborn==0.11.1
    - send2trash==1.5.0
    - tensorboard==2.4.1
    - tensorboard-plugin-wit==1.8.0
    - tensorflow==2.4.1
    - tensorflow-estimator==2.4.0
    - termcolor==1.1.0
    - terminado==0.9.4
    - testpath==0.4.4
    - threadpoolctl==2.1.0
    - traitlets==5.0.5
    - typing-extensions==3.7.4.3
    - urllib3==1.26.4
    - wcwidth==0.2.5
    - webencodings==0.5.1
    - werkzeug==1.0.1
    - widgetsnbextension==3.5.1
    - wrapt==1.12.1
prefix: /home/pwalters/anaconda3/envs/deepchem

I’m relaying a bug report from @PatWalters in this issue.

To Reproduce

Steps to reproduce the behavior: run the commands listed above and execute all cells of chapter_11_02_erk2_graph_conv.ipynb.

Expected behavior

The notebook runs all cells to completion without errors.

Environment

  • OS: Linux (conda linux-64 environment)
  • Python version: 3.8.8
  • DeepChem version: 2.5.0
  • RDKit version (optional): 2021.03.1
  • TensorFlow version (optional): 2.4.1
  • PyTorch version (optional): not installed
  • Any other relevant information: NumPy 1.19.5; the full conda environment is listed above

Additional context

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 9 (8 by maintainers)

Top GitHub Comments

1 reaction
PatWalters commented, May 13, 2021

MCC is only valid for binary classification. What about converting the class labels back to integers for MCC and throwing an error if there are more than 2 classes?

On Thu, May 13, 2021 at 4:05 PM Peter Eastman @.***> wrote:

It’s related to the changes in metrics handling, but I’m not certain what the correct solution is. The dataset contains binary class labels: each label is either a 0 or a 1. When evaluating the metric, normalize_labels_shape() converts them to one-hot. But matthews_corrcoef() requires labels, not one-hot encoded values. How should this work?

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/deepchem/deepchem/issues/2490#issuecomment-840801228, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAVCVTLL7P2WBWTX2IVXAUDTNQWG5ANCNFSM43G6I7SA .
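A minimal sketch of the conversion suggested above (an illustration only, not DeepChem's actual fix; the helper name is hypothetical): collapse one-hot rows back to integer class labels before computing the metric, and reject inputs with more than two classes as the comment proposes.

# Hypothetical sketch, not DeepChem's implementation: undo one-hot encoding
# before calling sklearn's matthews_corrcoef, and reject non-binary tasks
# as suggested in the comment above.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_from_one_hot(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # If the labels arrive one-hot encoded with shape (n_samples, n_classes),
    # collapse them back to integer class indices.
    if y_true.ndim == 2:
        if y_true.shape[1] > 2:
            raise ValueError("expected binary class labels, got more than 2 classes")
        y_true = np.argmax(y_true, axis=1)
    if y_pred.ndim == 2:
        y_pred = np.argmax(y_pred, axis=1)
    return matthews_corrcoef(y_true, y_pred)

For example, mcc_from_one_hot([[1, 0], [0, 1]], [[0.9, 0.1], [0.2, 0.8]]) returns 1.0, since both samples are predicted correctly after the one-hot rows are collapsed to labels.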

0 reactions
peastman commented, Jul 14, 2021

I think everything in this issue is now fixed?
