Docker setup improvements: Give access to CSI cam for Jetson nano
Since JetPack 4.3, NVIDIA provides an official base image for Jetsons: nvcr.io/nvidia/l4t-base.
Our current setup to "dockerize" OpenDataCam on Jetson is pretty hacky, because we don't have access to the CUDA stack inside Docker and we need to:
- Compile darknet and OpenCV outside Docker and copy the builds into the image at build time
- When running the image, manually mount all the CUDA dependencies from the base OS, relying on CUDA being properly installed on the Jetson we launch OpenDataCam from (see the runtime sketch below)
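For illustration, here is a minimal sketch of what that runtime mounting looks like; the image name and flag set are placeholders, but the paths are the standard JetPack locations for the CUDA toolkit and the Tegra driver libraries:

# Illustrative only, not the exact OpenDataCam launch command:
docker run -it \
  -v /usr/local/cuda:/usr/local/cuda \
  -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra \
  opendatacam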
This new Docker base image provided by NVIDIA should enable us to:
- have almost the same Dockerfile for Ubuntu CUDA machines and Jetsons, by using CUDA to compile darknet and OpenCV directly when building the image
- have access to the CSI cam (e.g. the Raspberry Pi cam) from Docker, which was a limitation on platforms like the Jetson Nano (see the pipeline sketch below)
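As a quick smoke test for CSI access from inside a container, a GStreamer pipeline along these lines should run once the camera is reachable (a sketch, assuming the standard nvarguscamerasrc element that L4T ships):

# Grab 100 frames from the CSI cam and discard them; if this runs, the camera is reachable.
gst-launch-1.0 nvarguscamerasrc num-buffers=100 ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
  nvvidconv ! fakesink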
I'm working on updating our Docker process and I successfully managed to get OpenCV compiled inside the Docker build, but I'm still struggling to compile darknet when building the image.
I've posted a repro on the NVIDIA forums: https://devtalk.nvidia.com/default/topic/1072782/jetson-nano/-compiling-darknet-in-l4t-base-r32-3-1-docker-image-get-error-cicc-not-found/
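For what it's worth, cicc lives in the CUDA toolkit, and docker build does not use the NVIDIA container runtime by default, so the toolkit never gets mounted into the build container. One workaround to try (an assumption on my part, not something confirmed in that thread) is to make nvidia the default Docker runtime on the Jetson:

# /etc/docker/daemon.json: make the nvidia runtime the default so CUDA is
# also available during `docker build`, then restart the daemon.
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker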
I'll pick this up later, and for now stick to our current pipeline to provide some testable builds for OpenDataCam v3, as from a user's point of view this doesn't change much.
For reference, here is the current work-in-progress Dockerfile for Jetsons:
FROM nvcr.io/nvidia/l4t-base:r32.3.1
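# Note: with the NVIDIA container runtime, l4t-base mounts CUDA in from the
# host at run time, so the toolkit itself is not baked into this image.
# The GStreamer packages below are what let OpenCV read the CSI cam through
# nvarguscamerasrc pipelines.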
RUN apt-get update -y && apt-get install -y \
libgstreamer1.0-0 \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
gstreamer1.0-doc \
gstreamer1.0-tools \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev
RUN apt-get update -y && apt-get install -y pkg-config \
zlib1g-dev libwebp-dev \
libtbb2 libtbb-dev \
libgtk2.0-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
cmake
RUN apt-get install -y \
autoconf \
autotools-dev \
build-essential \
gcc \
git
RUN apt-get update -y && apt-get install -y ffmpeg
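# Build OpenCV ${OPENCV_RELEASE_TAG} from source with GStreamer, V4L and
# FFmpeg enabled, so cv::VideoCapture can open nvarguscamerasrc (CSI) pipelines.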
ENV OPENCV_RELEASE_TAG 4.1.1
RUN git clone --depth 1 -b ${OPENCV_RELEASE_TAG} https://github.com/opencv/opencv.git /var/local/git/opencv
RUN mkdir -p /var/local/git/opencv/build && \
cd /var/local/git/opencv/build && \
cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D CMAKE_BUILD_TYPE=Release -D WITH_GSTREAMER=ON -D WITH_GSTREAMER_0_10=OFF -D WITH_CUDA=OFF -D WITH_TBB=ON -D WITH_LIBV4L=ON -D WITH_FFMPEG=ON -D OPENCV_GENERATE_PKGCONFIG=ON ..
RUN cd /var/local/git/opencv/build && \
make install
RUN git clone --depth 1 -b uselib https://github.com/tdurand/darknet /var/local/darknet
WORKDIR /var/local/darknet
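# Patch the darknet Makefile: enable GPU and OpenCV support, and build the
# shared library (LIBSO=1) in addition to the binary.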
RUN sed -i -e 's/GPU=0/GPU=1/' Makefile
# RUN sed -i -e 's/CUDNN=0/CUDNN=1/' Makefile
RUN sed -i -e 's/OPENCV=0/OPENCV=1/' Makefile
RUN sed -i -e 's/LIBSO=0/LIBSO=1/' Makefile
# Uncomment the ARCH line for the target board: line 42 = Nano, line 45 = TX2, line 33 = Xavier
RUN sed -i '42 s/^#//g' Makefile
# ERROR here: darknet doesn't compile (cicc not found, see the forum link above)
RUN make
cc @b-g
Top GitHub Comments
Updates from the marvelous world 🌈️🌈️🌈️🌈️🌈️🌈️ of Jetson + Jetpack + Docker + Nvidia 😋️😋️
I tried to update our Docker image & process to use the latest base image
nvcr.io/nvidia/l4t-base:r32.4.2
It works (we still need to compile darknet outside and copy it into the image at build time), and it greatly simplifies all the hacks we were doing at runtime to mount CUDA etc.
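For the record, that interim approach boils down to something like this in the Dockerfile (the path is hypothetical, just to show the shape of it):

# Prebuilt on the host with the Jetson toolchain, then copied in at build time:
COPY ./darknet /var/local/darknet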
But I still can't access the Raspberry Pi cam on the Jetson Nano; I've posted an issue on the NVIDIA forums: https://forums.developer.nvidia.com/t/access-to-raspberry-cam-v2-fails-from-docker-container-using-l4t-32-4-2-using-nvarguscamerasrc/121512
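One thing that might be worth trying (an assumption on my side, not something confirmed in that thread): nvarguscamerasrc talks to the Argus daemon on the host through a Unix socket, so that socket and the camera device node have to be shared with the container, along these lines:

# Share the Argus socket and the camera device node with the container:
docker run -it --runtime nvidia \
  --device /dev/video0 \
  -v /tmp/argus_socket:/tmp/argus_socket \
  opendatacam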
Here is the work-in-progress Dockerfile
Thanks, will do! I’m aiming to release a new beta with this setup in the coming days