ParaView in Docker with OpenGL support

After installing various packages and programs like vtk, tvtk, ParaView, mayavi, … on my system, I ended up with totally broken global packages. For instance, currently I'm not able to run mayavi for more than a few seconds before it crashes without any message. The problem is that every library needs a different version of its dependencies (notably Qt4 vs. Qt5), you sometimes need to build the software manually to enable certain non-standard features (ParaView with Python support), and so on. The result is a total mess.
Therefore, I decided to build ParaView in Docker to isolate the software. I definitely need the Python scripting capabilities of ParaView, which are not enabled in the default Ubuntu repository package. Here is the result of my work. I was inspired by this repository; however, it has certain drawbacks, notably no Python or MPI support, and it is a fork of the official ParaView repo.
So I used it as a starting point and created a new repository. It is an Ubuntu image with all necessary packages; ParaView is built with MPI and Python support. See the README for how to build and run it. If anyone is interested, I can push the image to Docker Hub. Note that the user on the host machine needs to have uid 1000, otherwise the X server tunnel won't work correctly. This can, however, be easily fixed.
So, the issue is the following. When I run ParaView, I see this error message:
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
Obviously, there is no OpenGL acceleration. Does anyone know how to enable OpenGL support in Docker? I know of this repository; however, I don't like the VNC-based solution. Is there any other way to achieve the same? I'm not familiar with OpenGL, so any help is much appreciated.

You may try these steps:
install mesa-utils in your image
add your container user to the video group.
Then you should be able to use software-rendered OpenGL.
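As a minimal sketch of those two steps in a Dockerfile, assuming an Ubuntu base image and a container user named paraview (the user name is an assumption here):
# install the Mesa software OpenGL stack and utilities such as glxinfo/glxgears
RUN apt-get update && apt-get install -y mesa-utils && rm -rf /var/lib/apt/lists/*
# add the (hypothetical) container user to the video group
RUN usermod -aG video paraview
Inside the running container, glxinfo | grep renderer should then report the llvmpipe software renderer.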
Sharing the X unix socket from the host can have some caveats. You can use mviereck/x11docker to run your image on a second X server instead. Software-rendered OpenGL works fine there; hardware rendering is experimental and still in development.
In your GitHub repo example you are using the host display :0, sharing $DISPLAY and the X unix socket:
docker run -ti -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix paraview
If you share all files in /dev/dri with your container (especially /dev/dri/card0), you will most probably get hardware acceleration. If you get some rendering glitches, you can use the docker run option --ipc=host. Depending on the X setup, you may also need to share ~/.Xauthority and $XAUTHORITY, or set xhost +SI:localuser:root on the host if the container user is root.
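A rough sketch of such a command, assuming the container user's home is /home/paraview (an assumption on my part; adjust to your image):
docker run -ti \
  --device=/dev/dri \
  --ipc=host \
  -e DISPLAY=unix$DISPLAY \
  -e XAUTHORITY=/home/paraview/.Xauthority \
  -v $HOME/.Xauthority:/home/paraview/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  paraview
Here --device=/dev/dri passes the GPU device nodes through, and --ipc=host addresses MIT-SHM-related rendering glitches.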
CAUTION: This setup breaks down container isolation! (For better isolation, check out x11docker.)

In addition to what @mviereck proposed, on an nvidia-docker container I needed to use docker run --privileged. Obviously this isn't an ideal solution, but it is sufficient for many local use cases where isolation isn't a major concern. My entire docker run command looks like this:
CMD="${DOCKER} run --detach=true \
--privileged \
--group-add ${DOCKER_GROUP_ID} \
--env HOME=${HOME_DIR} \
--env DISPLAY \
--interactive \
--name DevContainer \
--net=host \
--rm \
--tty \
--user=${USER_ID}:${GROUP_ID} \
--volume $HOME_DIR_HOST:${HOME_DIR} \
--volume $WORK_DIR:${WORK_DIR} \
--volume /tmp/.X11-unix:/tmp/.X11-unix \
--volume /var/run/docker.sock:/var/run/docker.sock \
${IDEA_IMAGE}"
Many of those options are superfluous for OpenGL, but useful for certain applications that need extended access.
Since I'm using an nvidia docker container, $DOCKER is actually nvidia-docker in my case.
I also added my host user to the video group, though I'm not sure if that mattered.
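For reference, adding the host user to the video group is typically done like this (you need to log out and back in for the new group membership to take effect):
sudo usermod -aG video $USER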

Related

AWS Batch Failing to launch Dockerfile - standard_init_linux.go:219: exec user process caused: exec format error

I am attempting to use AWS Batch to launch a Linux server, which will in essence perform the fetch-and-go example included with AWS (download a .sh file from S3 and run it).
Does AWS Batch work at all for anyone?
The AWS fetch_and_go example always fails, even after following someone else's guide online which mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD instructions.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
I thought at first this was related to access rights within the amazonlinux image, so I have played with chmod 777, chmod +x, etc. on the .sh file.
The final nail in the coffin: my current Dockerfile is literally:
FROM ubuntu:20.04
I launch this using AWS Batch, with no command or parameters passed through, and it still fails with the same error. This almost hints to me that there is either a setup issue with my AWS Batch configuration (I'm using the default wizard settings, except changing to an a1.medium instance) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles? Could you share your examples and/or setup parameters?
Thank you in advance.
A1 instances use ARM-based first-generation Graviton CPUs. It is highly likely the image you are trying to run expects an x86 CPU (Intel or AMD). Any instance class with a "g" in it (such as "c6g" or "m6g") is Graviton2, which is also ARM-based and will not work for the default examples.
You can test whether a specific container will run by launching an A1 instance yourself and running the container (after installing docker). My guess is that you will get the same error. Running on Intel or AMD instances should work.
To leverage Batch with ARM, your containerized application will need to work on ARM. If you point me to the exact example, I can give more details on how to adjust it to run on A1 or Graviton2 instances.
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing if this is your case.
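For example (the image tag myimage:latest and the build context . are placeholders), the build command would look something like:
docker build --platform linux/amd64 -t myimage:latest .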
In addition to the other comment: you can create multi-arch images yourself, which will provide the correct architecture for each platform.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
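A hedged sketch of such a multi-arch build using docker buildx (youruser/yourapp:latest is a placeholder image name) might be:
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t youruser/yourapp:latest --push .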

TensorFlow Serving using Docker

I have created a neural network regression model and I wish to deploy it using AWS.
I am using TensorFlow Serving and have gone as far as saving the model.
Now I am trying to deploy it in a container using Docker on Windows 10 Home.
As an example, I tried to use multiple tutorials but when it comes to this command, no matter what I do, it just doesn't work for me:
docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
Every time I change something, I get a different error. I am totally at a loss. Please direct me to some tutorials that are simple but complete for a novice like me. I have already read the TensorFlow documentation, but the errors persist.
Any help would REALLY oblige me greatly, since I have been stuck for about a month now.
The easiest tutorial I found was https://www.tensorflow.org/tfx/serving/docker#serving_example
Also, Docker Toolbox has trouble with the mounts, as you have to manually specify the path. So if you can afford it, upgrade to Windows Pro, which will simplify dockerization: that way you get Docker Desktop, which is much simpler.
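As an illustrative sketch only: on Windows the host side of the -v mount usually has to be an absolute path. Assuming the tensorflow/serving repository was cloned to C:\Users\me\serving (a hypothetical location), the tutorial command could be written for cmd.exe roughly as:
docker run -t --rm -p 8501:8501 ^
  -v "C:\Users\me\serving\tensorflow_serving\servables\tensorflow\testdata\saved_model_half_plus_two_cpu:/models/half_plus_two" ^
  -e MODEL_NAME=half_plus_two tensorflow/serving
With the legacy Docker Toolbox, paths generally need the /c/Users/... form instead, and only directories under C:\Users are shared with the VM by default.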

Can I load additional libraries in Gitpod without creating my own Docker image?

I have recently tried out Gitpod, which seems to be a quite cool tool.
For testing purposes, I have opened some C++ GitHub repository of mine that uses Boost (among other libraries). Unfortunately, Boost does not seem to be installed in the Docker image, so my code does not compile.
I know about the possibility of creating my own Docker images, but I was wondering if there are alternative, easier options as well. Does Gitpod provide any Environment Modules-like option to dynamically load/unload certain "commonly used" libraries, or do I always have to provide my own Docker image in this case?
I work on Gitpod. Thank you for trying it and the compliment :)
We didn't want to invent yet another module system for Gitpod.
Instead, we decided to support Dockerfiles and build them on-demand, because Dockerfiles allow using all those amazing module systems that are already out there: Debian's packages, Alpine's packages, Node Version Manager (NVM), Ruby Version Manager (RVM), SDKman, etc. Basically any Linux-compatible package manager down to simple wget.
You can also use your own Docker images, but I find Dockerfiles more convenient because you can check them into git and thereby version them together with your source code. It's dev-environment-as-code and should be shared across the team. Also, you don't need to bother with building and pushing them to a registry (e.g. hub.docker.com).
What Gitpod does offer, however, is a selection of Docker images (src). The most prominent one is gitpod/workspace-full, which is Gitpod's default image.
To get back to your question about the easiest way to get the right modules into your Gitpod development environment:
1) Inheriting from gitpod/workspace-full is very convenient.
2) If you don't want (1), copy'n'pasting sections from gitpod/workspace-full is convenient.
3) Often, putting RUN apt-get update && apt-get install -y libboost-all-dev into your Dockerfile is enough. This uses APT to install the package libboost-all-dev (see the sketch after this list).
4) Most software projects have documentation on how to build them under Linux. These instructions usually work in Dockerfiles, too.
5) Search on hub.docker.com for useful Docker images. You can inherit from those images or find their Dockerfiles and copy'n'paste sections from there.
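As a minimal sketch of 1) and 3), assuming the usual Gitpod convention of a .gitpod.yml that points at a custom Dockerfile (file names per Gitpod's docs; libboost-all-dev stands in for whatever your project actually needs):
# .gitpod.yml
image:
  file: .gitpod.Dockerfile
# .gitpod.Dockerfile
FROM gitpod/workspace-full
# the gitpod user in this image has passwordless sudo
RUN sudo apt-get update \
 && sudo apt-get install -y libboost-all-dev \
 && sudo rm -rf /var/lib/apt/lists/*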

Unable to bring up docker project

I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day, the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660 and owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop; but on anything like a production system it deserves some real consideration of its security implications.
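For completeness, with the caveats above in mind, joining the docker group typically looks like this (a new login is needed for the group change to take effect):
sudo usermod -aG docker $USER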

Run docker from within toolbox

I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
As for docker-compose, this doesn't come pre-installed. However, you can install it, and other relevant tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment (where you automatically gain root privileges).
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) You can run the following to retrieve and install docker-compose on the machine (substitute the docker-compose version number for the latest version you retrieved in step 2), then mark it executable:
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the filesystem doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This would let you provide access to the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open the file /etc/default/toolbox.
3) The TOOLBOX_BIND line specifies the paths from the rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND setting, for example --bind=/usr/bin/docker:/usr/bin/docker (see the sketch after this list).
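As an illustrative sketch only (the default contents of /etc/default/toolbox vary between Container-Optimized OS releases, so treat the exact line as an assumption), the edited setting might look like:
# keep any existing --bind entries on this line and append the docker-related ones
TOOLBOX_BIND="... --bind=/usr/bin/docker:/usr/bin/docker --bind=/var/run/docker.sock:/var/run/docker.sock"
Binding /var/run/docker.sock as well is what lets the docker client inside the toolbox talk to the daemon on the host, which is what the question about docker.sock was getting at.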
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available in the toolbox environment, certain docker commands still generate additional errors, because the docker version that comes pre-installed on the machine is configured to use certain configuration files and directories. Although it may be possible to make all of the required locations accessible from within the toolbox environment as well, it may be simpler to install docker within the toolbox by following the instructions for installing docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type), although the suitability of this depends on your use case.