How can OpenAI Gym's visualizations work within Docker?

I'd like to get OpenAI Gym working with the rendered OpenGL visualizations within a docker container.
It's straightforward to get OpenAI Gym running within Docker. However, it's not immediately clear how to get the rendered environment to display in a window on my OS X laptop when env.render() is called on an OpenAI Gym environment inside the Docker container.
How do I go about this?

You can try sharing your X11 socket file with your container... That way your container can write to it and it will show on your machine:
Something like this:
docker run --privileged=true --rm \
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
...
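On an OS X host the socket mount alone usually isn't enough, since the Mac has no native X11 socket. A rough sketch of the common XQuartz-based variant, assuming XQuartz is installed with "Allow connections from network clients" enabled, a recent Docker for Mac that resolves host.docker.internal, and placeholder image/script names:
open -a XQuartz
xhost + 127.0.0.1    # allow local clients to reach the X server
# host.docker.internal resolves to the Mac from inside the container
docker run --rm -e DISPLAY=host.docker.internal:0 \
  my-gym-image python my_agent.py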

Related

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I didn't catch the causal relationship here. In fact, you just need two additions to your Django setup:
Follow "Install client binaries on Linux" to download the prebuilt docker client binary; your Django image will then have the docker command.
When starting the Django container, add a /var/run/docker.sock bind mount. This allows the Django container to talk directly to the docker daemon on the host machine and start the data-analysis container there. Because the analysis container is started by the host's daemon rather than inside the Django container, the two can have separate system resources; the analysis container's resources do not depend on those assigned to the Django container.
A sample using a docker image that already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
As you can see, although the first container has no access to the host's /dev folder, the container it launches through the host's daemon does; the two containers really do have separate resources.
If the above is what you need, then this is the right solution for you. Otherwise, you will have to install the analysis tool directly in your Django image.
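For illustration, a minimal sketch of the two steps; the client version, download URL, and image name are placeholders to pin yourself:
# In the Django image's Dockerfile: install only the docker client binary
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xzf - -C /usr/local/bin --strip-components=1 docker/docker
# When starting the Django container: bind-mount the host daemon's socket
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-django-image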

How to easily debug Redis and Django/Gunicorn when developing using Docker?

I'm not referring to sophisticated debugging techniques, but to how to get access to the same kind of error messages that are normally directed to terminal tabs.
Basically, I'm adopting Docker in a Django project that also uses Redis.
In the old way of working I opened a linux terminal tab for gunicorn like this: gunicorn --reload --bind 0.0.0.0:8001 myapp.wsgi:application
And this tab kept running Gunicorn and any Python error was shown in this tab so I could see the problem and fix it.
I could also open a second tab for the celery worker: celery -A myapp worker --pool=solo -l info
The same thing happened, the tab was occupied by Celery and any Python error in a task was shown in the tab and I could see the problem and correct the code.
My question is: using Docker, is there a way to make each container direct the errors that previously went to the screen into log files, so that I can debug my code when a Python error occurs?
What is the correct way to handle simple debugging during development using Docker containers?
After looking into this further in the Docker documentation, I found a link that solves the problem: View logs for a container or service.
Basically, the command docker logs CONTAINER_ID shows on the screen exactly what you would see in the terminal running the application.
It works perfectly for viewing the Django, Redis and Angular logs.
Just type:
docker logs CONTAINER_ID
Replace CONTAINER_ID with the real id of the container whose logs you want.
To find the id type:
docker ps
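As a rough day-to-day workflow (service and container names are illustrative):
docker logs -f --tail 100 web    # follow the last 100 lines, much like the old terminal tab
docker-compose logs -f           # or follow every service of a compose project at once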

Qt app UI elements randomly rendered as blank/black in Docker

I prepared a Dockerfile to build a Docker image of my Qt application. To run the application I use X11: I enable access to the X server (xhost +local:root), then I use the following command to run it:
docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" mindforger:latest mindforger
The problem is that some Qt UI elements (menu items, dialogs, ...) are randomly rendered blank or black.
I'm on Ubuntu 16.04.5 with Docker 18.06 and Qt 5.01.
I had the same problem and couldn't solve it properly, but we found a workaround that hides the error:
In our case, we have four QComboBoxes in a window. The problem was that after starting the app, the second (sometimes the first) combo box you clicked displayed a black popup. So we initialized the window with two dummy combo boxes, called their showPopup method, and then hid both the popups and the combo boxes, so the user never sees the error. I hope you can do something similar with your app.
I had the same problem and found a solution that you apply after starting the container. Once the container is up, run the following command inside it:
export QT_GRAPHICSSYSTEM="native"
Once this is executed within the container's terminal session, run the desired Qt application and the black-box problem should go away. You could even add the export to the container's .bashrc if you don't want to run it manually every time.
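If you'd rather not enter the container at all, the same variable can be passed when starting it; a minimal sketch (image name is hypothetical):
docker run -e QT_GRAPHICSSYSTEM=native my-qt-image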
None of the posted solutions solved the identical problem that I encountered. However, this did fix it:
QT_GRAPHICSSYSTEM=raster
In my case (a Qt5 application) I resolved this by adding the parameter --shm-size 128M; mounting /dev/shm:/dev/shm should work too.
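For reference, a minimal sketch of both variants (image name is hypothetical):
docker run --shm-size 128M my-qt-image        # enlarge the container's shared memory
docker run -v /dev/shm:/dev/shm my-qt-image   # or share the host's /dev/shm instead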
I ran into this problem trying to get Google Earth Pro to work in Docker -- several dialog boxes and a few menus would be black-on-black or random pixels.
That specific graphics issue was solved by adding --ipc host to the create (or run) command.
Here is the distilled create command that I used; I didn't have to disable or configure shm (and none of the shm suggestions I tried helped):
docker create \
--ipc host \
-e DISPLAY -e XAUTHORITY \
-h "$HOSTNAME" \
-u "$(id -u):$(id -g)" \
--device /dev/dri/card0 \
-v /dev/dri/card0:/dev/dri/card0 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v "$XAUTHORITY:$XAUTHORITY" \
my-qt-app
A small break-down:
--ipc host - This is what fixed the "black menus/dialogs"
-v "$XAUTHORITY:$XAUTHORITY", -v /tmp/.X11-unix:/tmp/.X11-unix and -e XAUTHORITY/-e DISPLAY is required or the app doesn't launch and complains about not finding a display server and complete failure without the -v $XAUTHORITY:XAUTHORITY
-v /dev/card0:/dev/card0 (in my script, I find those somewhat naively using findfind /dev/dri -maxdepth 1 -type c) is required to avoid Mesa errors, however, the application still works (poorly).
-u ... My implementation "hides" docker from the user, so it creates the container as the user, sets it up with a containerized "fake" version of their user account and uses it to launch the app, which the docker container is configured to use when it runs it. ...avoiding root GUI apps.
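Note that docker create only prepares the container; it still has to be started afterwards, roughly like this:
docker start <container-id>      # id printed by the create command above
docker start -a <container-id>   # or start attached to the app's stdout/stderr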

How do I provide credentials to the docker awslogs driver using Docker for Mac?

I'm trying to use the docker awslogs driver and getting the following error:
"docker: Error response from daemon: Failed to initialize logging
driver: NoCredentialProviders: no valid providers in chain.
Deprecated."
According to this GitHub comment, I need to set the AWS_SHARED_CREDENTIALS_FILE environment variable for the docker daemon, but I'm not sure how to do that when using Docker for Mac.
The command I'm using to start the container is:
docker run -d \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=my-log-group \
my-image
Version information:
Docker for Mac 1.12.1-rc1-beta23 build 11375
OS X El Capitan 10.11.6
but I'm not sure how to do that when using Docker for Mac.
With boot2docker, you would need to modify /var/lib/boot2docker/profile in order to add this variable.
See "Docker daemon config file on boot2docker".
It persists across reboots of the TinyCore-based VM, and the docker daemon will then take it into account.
With the new xhyve-based Docker for Mac, the idea should be the same.
/var/lib/boot2docker/profile does exist as well, as shown in this answer.
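A rough sketch of that boot2docker/docker-machine-era approach (the machine name "default" and the credentials path are assumptions):
docker-machine ssh default
# inside the VM: persist the variable in the daemon's profile, then restart it
sudo sh -c 'echo "export AWS_SHARED_CREDENTIALS_FILE=/path/to/credentials" >> /var/lib/boot2docker/profile'
sudo /etc/init.d/docker restart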
The official docker daemon doc points to:
--config-file=/etc/docker/daemon.json Daemon configuration file
So try and modify this file.
By default, the comments mention:
~/Library/Containers/com.docker.docker/Data/database/com.docker.driver.amd64-linux/etc/docker/daemon.json
Using information taken from this answer: Docker daemon config path under Mac OS.
You can connect to the VM layer that runs the docker daemon using:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
And you can modify /etc/docker/daemon.json to add the needed variables there.
Once you make your changes, you can just run:
service docker restart
from within the moby terminal to restart the docker daemon.
Note that if you restart Docker from your Mac, the changes will not persist.
On a side note, if you encounter a login screen when connecting with the screen command, try username: root to access the system.
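Condensed, the workflow described above looks roughly like this (paths as of Docker for Mac ~1.12):
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
# inside the Moby VM (log in as root if prompted):
vi /etc/docker/daemon.json     # add the needed settings
service docker restart         # restart the daemon so they take effect
# detach from the screen session with Ctrl-a d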

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser, and also from the Docker command line via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again, it just exits immediately:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First I would suggest using docker-compose up, that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure whether one exists for ember.
If no such flag exists, it's likely that ember server is just checking whether stdin/stdout are connected to a tty, which by default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
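For reference, a hypothetical docker-compose.yml along those lines; the image name and ember flags are assumptions, but publishing port 4200 (and binding the server to 0.0.0.0) is what makes the app reachable from the Windows host:
ember:
  image: my-ember-image                            # hypothetical image name
  command: ember server --host 0.0.0.0 --port 4200
  ports:
    - "4200:4200"
  stdin_open: True
  tty: True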
OK, finally resolved it. The issue with module resolution may have been long-file-name resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then, from the terminal window, I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully; I was then able to access the Ember page served at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/
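To verify reachability from the host side, something like this should work from the Docker Quickstart terminal ("default" is the usual machine name created by Quickstart):
docker-machine ip default              # prints the VM's IP, typically 192.168.99.100
curl -I http://192.168.99.100:4200/    # a response here means ember server is reachable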