Issue with installing using Docker on macOS (Big Sur, Intel): volume syntax and GPUs
Describe the bug
I was following the Docker Hub installation notes from this page: Docker Hub Install
To Reproduce
The page says to use:
docker run -it --rm --gpus all --ipc=host --net=host -v ~:/workspace/ projectmonai/monailabel:latest bash
That throws two errors, one related to the ‘~:/’ volume syntax and another related to the GPU:
docker: Error response from daemon: create ~: volume name is too short, names should be at least two alphanumeric characters.
initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
Expected behavior
I expected the container to start up.
Environment
macOS Big Sur on an Intel iMac (Late 2014); graphics: AMD Radeon R9 M295X 4 GB
My solution for now was to use:
docker run -it --rm --ipc=host --net=host -v ${PWD}:/workspace/ projectmonai/monailabel:latest bash
which fixes the volume syntax and simply omits the GPU request. That at least gets it running.
Wondering if there is a way to get it to run on that Mac, or do I need to use a Linux host with a compatible GPU?
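For reference, here is a sketch of how the published command should look on a Linux host with an NVIDIA GPU, assuming the NVIDIA driver and the NVIDIA Container Toolkit are installed (those are what `--gpus all` depends on, and their absence is why the `libnvidia-ml.so.1` error appears on a Mac). Spelling out the home directory avoids the volume-name error, because the shell does not tilde-expand the literal `~:/workspace/` form:

```bash
# Sketch for a Linux host with an NVIDIA GPU (assumes the NVIDIA driver and
# nvidia-container-toolkit are installed on the host).
# "${HOME}" replaces the literal "~", which the shell leaves unexpanded in
# "-v ~:/workspace/" and which Docker then rejects as a too-short volume name.
docker run -it --rm --gpus all --ipc=host --net=host \
  -v "${HOME}:/workspace/" projectmonai/monailabel:latest bash
```

On the Intel iMac itself this cannot work: the AMD Radeon is not visible to Docker Desktop's Linux VM, and the NVIDIA runtime only supports NVIDIA GPUs on Linux hosts, so CPU-only mode is the expected behavior there.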
Also, the server apparently runs on port 8000 inside the container. Can I just expose/map that port to get access to the server from the host? Is there a pre-made docker-compose file instead of having to use the command line?
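Publishing the port with `-p` should work for this, and is also the more portable choice, since `--net=host` only behaves as expected on Linux hosts. I am not aware of an official docker-compose file shipped by the project, but a minimal one is easy to sketch (the compose file below is an assumption, not something MONAI Label provides):

```bash
# Map the container's port 8000 to the host instead of using --net=host
# (host networking is Linux-only; -p also works on Docker Desktop for Mac).
docker run -it --rm --ipc=host -p 8000:8000 \
  -v "${PWD}:/workspace/" projectmonai/monailabel:latest bash

# A minimal docker-compose.yml sketch (hypothetical, not shipped by the
# project); "docker compose run monailabel" then opens the same shell.
cat > docker-compose.yml <<'EOF'
services:
  monailabel:
    image: projectmonai/monailabel:latest
    ipc: host
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace/
    stdin_open: true
    tty: true
    command: bash
EOF
```

With the port mapped, the server started inside the container should be reachable from the host at http://127.0.0.1:8000 (assuming it listens on 8000, as noted above).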
================================
Printing MONAI config...
================================
MONAI version: 1.0.1
Numpy version: 1.22.2
Pytorch version: 1.13.0a0+d0d6b1f
MONAI flags: HAS_EXT = True, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 8271a193229fe4437026185e218d5b06f7c8ce69
MONAI __file__: /opt/monai/monai/__init__.py
Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.0.1
Tensorboard version: 2.10.0
gdown version: 4.5.3
TorchVision version: 0.14.0a0
tqdm version: 4.64.1
lmdb version: 1.3.0
psutil version: 5.9.2
pandas version: 1.4.4
einops version: 0.5.0
transformers version: 4.21.3
mlflow version: 1.30.0
pynrrd version: 0.4.3
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
================================
Printing system config...
================================
System: Linux
Linux version: Ubuntu 20.04.5 LTS
Platform: Linux-5.15.49-linuxkit-x86_64-with-glibc2.10
Processor: x86_64
Machine: x86_64
Python version: 3.8.13
Process name: python
Command: ['python', '-c', 'import monai; monai.config.print_debug_info()']
Open files: []
Num physical CPUs: 8
Num logical CPUs: 8
Num usable CPUs: 8
CPU usage (%): [67.1, 1.9, 1.7, 24.6, 1.4, 1.2, 3.1, 1.2]
CPU freq. (MHz): 3988
Load avg. in last 1, 5, 15 mins (%): [5.6, 3.5, 3.8]
Disk usage (%): 54.7
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 7.8
Available memory (GB): 4.9
Used memory (GB): 2.2
================================
Printing GPU config...
================================
Num GPUs: 0
Has CUDA: False
cuDNN enabled: True
cuDNN version: 8600
Top GitHub Comments
Yeah, a medium-sized GPU, say 8 GB to 12 GB, should help to see the basic e2e workflows… either on AWS/cloud, or even a laptop with an NVIDIA GPU is good to try.
On smaller GPUs you may not be able to run heavy training jobs… but they are good enough for a sanity test.
I believe you have resolved the data/CUDA-related setup issue on your env… feel free to reopen the issue if you are not able to use MONAI Label and run basic infer/train examples.