
Error on running Bambi on Colab

See original GitHub issue

Greetings all,

I am trying to run Bambi on Colab; however, I ran into some issues.

The first problem is as follows:

```
NoSectionError                            Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/aesara/configparser.py in fetch_val_for_key(self, key, delete_key)
    236             try:
--> 237                 return self._aesara_cfg.get(section, option)
    238             except InterpolationError:

15 frames
NoSectionError: No section: 'blas'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
KeyError: 'blas__ldflags'

During handling of the above exception, another exception occurred:

ModuleNotFoundError                       Traceback (most recent call last)
ModuleNotFoundError: No module named 'mkl'

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/aesara/link/c/cmodule.py in check_mkl_openmp()
   2683 you set this flag and don't set the appropriate environment or make
   2684 sure you have the right version you *will* get wrong results.
-> 2685 """
   2686         )
   2687 

RuntimeError: 
Could not import 'mkl'.  If you are using conda, update the numpy
packages to the latest build otherwise, set MKL_THREADING_LAYER=GNU in
your environment for MKL 2018.

If you have MKL 2017 install and are not in a conda environment you
can set the Aesara flag blas__check_openmp to False.  Be warned that if
you set this flag and don't set the appropriate environment or make
sure you have the right version you *will* get wrong results.
```
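As the `RuntimeError` itself suggests, one possible workaround (a sketch based on the error message, not a guaranteed fix) is to set `MKL_THREADING_LAYER=GNU` before Aesara is imported:

```python
import os

# Following the advice printed in the RuntimeError above. The variable
# must be set before Aesara (or anything that imports it, such as Bambi)
# is loaded for the first time in the session.
os.environ["MKL_THREADING_LAYER"] = "GNU"

# import bambi as bmb  # do the import only after the variable is set
```

In a Colab notebook this cell has to run before any cell that imports Bambi; if Aesara was already imported, the runtime must be restarted first.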

Then I updated mkl via `!pip install mkl -U` and got the second issue:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-7-e7c20f48c217> in <module>()
----> 1 import bambi as bmb

6 frames
/usr/local/lib/python3.7/dist-packages/aesara/tensor/nnet/opt.py in <module>()
    492 
    493 # Register Cpu Optimization
--> 494 conv_groupopt = aesara.graph.optdb.LocalGroupDB()
    495 conv_groupopt.__name__ = "conv_opts"
    496 register_specialize_device(conv_groupopt, "fast_compile", "fast_run")

AttributeError: module 'aesara' has no attribute 'graph'
```

Then I updated TensorFlow with `!pip install -U tensorflow-gpu` and got stuck on a third:

```
ValueError                                Traceback (most recent call last)
<ipython-input-9-e7c20f48c217> in <module>()
----> 1 import bambi as bmb

8 frames
/usr/local/lib/python3.7/dist-packages/aesara/graph/optdb.py in register(self, name, optimizer, use_db_name_as_tag, *tags, **kwargs)
     69 
     70         if name in self.__db__:
---> 71             raise ValueError(f"The tag '{name}' is already present in the database.")
     72 
     73         if use_db_name_as_tag:

ValueError: The tag 'local_inplace_sparse_block_gemv' is already present in the database.
```
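Errors like the last two (`AttributeError: module 'aesara' has no attribute 'graph'` and the duplicate-tag `ValueError`) often come from mixing incompatible package releases in one environment. One way to narrow that down is to check which versions actually ended up installed; a small diagnostic sketch (the package list here is an assumption about Bambi's dependencies):

```python
import importlib.metadata as md

def installed_versions(pkgs):
    """Map each package name to its installed version, or None if absent."""
    versions = {}
    for pkg in pkgs:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions(["aesara", "pymc", "bambi", "numpy"]))
```

After upgrading any of these in Colab, the runtime also has to be restarted, since the half-updated modules stay cached in the running interpreter.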

Does anybody have the same problem? I am not expert enough to proceed further; can somebody explain what is actually wrong?

Thanks.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 12

Top GitHub Comments

2 reactions

canyon289 commented, Jul 23, 2022

To scope this ticket: it sounds like the error for running Bambi on Colab has been resolved, right? We have a short-term fix, and I assure you the long-term fix is actively being worked on. If the issue now is slow sampling on Colab, that's a very different thing, requiring either model debugging, data thinning, or frankly just paying for Colab Pro to get more computational speed.

Let me know if my assessment here is correct.

2 reactions

hans-ekbrand commented, Jul 23, 2022

Now I am trying to run sampling on Google Colab (standard) with a GPU:

`warmup: 6%|▋ | 699/11000 [37:59<9:42:17, 3.39s/it, 1023 steps of size 2.09e-03. acc. prob=0.93]`

I am not sure how I should proceed, and I am open to suggestions.

Collect samples semi-manually: set "draws" to a number low enough that the run finishes before the session is interrupted, save the ArviZ object to disk (with pickle), transfer the file to a persistent storage medium (Colab is wiped automatically when you get disconnected), disconnect the Colab instance, wait until your Colab quota is high enough to acquire a new GPU, and repeat. Do the posterior analysis only once you have collected enough draws.

EDIT: there is no reason to use "parallel" on Colab; you only have one GPU there.
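The semi-manual workflow described above can be sketched roughly like this (the model, draw count, and file names are placeholders; ArviZ's `InferenceData` also has a `to_netcdf()` method, which avoids pickle entirely):

```python
import pickle

def save_draws(idata, path):
    """Persist a fitted object so it survives a Colab disconnect."""
    with open(path, "wb") as f:
        pickle.dump(idata, f)

def load_draws(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical usage (model and paths are placeholders):
# idata = model.fit(draws=500)            # small enough to finish in one session
# save_draws(idata, "draws_part1.pkl")    # then copy the file to Drive
```

Each session's file can then be loaded back with `load_draws()` and the partial posteriors combined once enough draws have accumulated.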


