Qt app UI elements randomly rendered as blank/black in Docker - c++

I prepared a Dockerfile to build a Docker image of my Qt application. To run the application I use X: first I allow local connections to the X server (xhost +local:root), then I start the container with the following command:
docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" mindforger:latest mindforger
The problem is that some Qt UI elements (menu items, dialogs, ...) are randomly rendered as blank/black areas - check the screenshot below:
I'm on Ubuntu 16.04.5 with Docker 18.06 and Qt 5.01.

I had the same problem and couldn't solve it properly. But we found a workaround that hides the error:
In our case, a window contains four QComboBoxes. The problem was that after starting the app, the second (sometimes the first) combo box you clicked displayed a black popup. So what we did was initialize the window with two dummy combo boxes, call their showPopup method, and then hide both the popups and the combo boxes, so the user can't notice the error. I hope you can do something similar in your app.

I had the same problem and found a solution that you have to apply after starting the Docker container. Once the container is up, run the following command inside it:
export QT_GRAPHICSSYSTEM="native"
Once this is executed within the container's terminal session, run the desired Qt application and the black-box problem should go away. You could even add this export to the container's .bashrc if you don't want to run it manually every time.
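If you build the image yourself, an equivalent approach (a sketch, assuming you control the Dockerfile) is to bake the variable into the image so it is set for every container:

```dockerfile
# Set the variable image-wide instead of per-session
ENV QT_GRAPHICSSYSTEM=native
```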

None of the posted solutions solved the identical problem that I encountered. However, this did fix it:
QT_GRAPHICSSYSTEM=raster

In my case (a Qt5 application) I resolved this by adding the parameter --shm-size 128M; mounting /dev/shm:/dev/shm should work too.

I ran into this problem trying to get Google Earth Pro to work in Docker -- several dialog boxes and a few menus would be black-on-black or random pixels.
That specific graphics issue was solved by adding --ipc host to the create (or run) command.
Here is the distilled create command that I used; I didn't have to disable or configure shm (and none of the shm suggestions I tried helped):
docker create \
--ipc host \
-e DISPLAY -e XAUTHORITY \
-h "$HOSTNAME" \
-u "$(id -u):$(id -g)" \
--device /dev/dri/card0 \
-v /dev/dri/card0:/dev/dri/card0 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v "$XAUTHORITY:$XAUTHORITY" \
my-qt-app
A small break-down:
--ipc host - This is what fixed the "black menus/dialogs"
-v "$XAUTHORITY:$XAUTHORITY", -v /tmp/.X11-unix:/tmp/.X11-unix and -e XAUTHORITY / -e DISPLAY are required; without them the app doesn't launch and complains about not finding a display server, and it fails completely without -v "$XAUTHORITY:$XAUTHORITY".
--device /dev/dri/card0 / -v /dev/dri/card0:/dev/dri/card0 (in my script, I find those somewhat naively using find /dev/dri -maxdepth 1 -type c) is needed to avoid Mesa errors; without it the application still runs, just poorly.
-u "$(id -u):$(id -g)" - My implementation "hides" Docker from the user: it creates the container as the invoking user, sets up a containerized "fake" version of their user account, and configures the container to launch the app under that account, avoiding GUI apps running as root.
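For reference, the "somewhat naive" device discovery mentioned above can be wrapped in a small shell helper (the helper name is mine, not from the original script); it just emits a --device flag for every character device directly under a directory:

```shell
# Emit "--device <node>" for every character device directly under $1.
# Naive on purpose: mirrors `find /dev/dri -maxdepth 1 -type c`.
device_args() {
  find "$1" -maxdepth 1 -type c 2>/dev/null |
    while read -r dev; do printf -- '--device %s ' "$dev"; done
}

# Usage (unquoted on purpose so each flag splits into its own word):
# docker create $(device_args /dev/dri) ...
```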

Related

Running a Qt GUI in a docker container

So, I have a C++ GUI based on Qt5 which I want to run from inside a Docker container.
When I try to start it with
docker run --rm -it my_image
this results in the error output
qt.qpa.xcb: could not connect to display localhost:10.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
So I searched for how to do this. I found GUI Qt Application in a docker container, and based on that called it with
QT_GRAPHICSSYSTEM="native" docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image
which resulted in the same error.
Then I found Can you run GUI applications in a Docker container?.
The accepted answer in there seems to be specific to certain applications such as Firefox?
Scrolling further down I got a solution that tells me to set X11UseLocalhost no in sshd_config and then call it like
docker run -v $HOME:/hosthome:ro -e XAUTHORITY=/hosthome/.Xauthority -e DISPLAY=$(echo $DISPLAY | sed "s/^.*:/$(hostname -i):/") my_image
this produces a slight variation of the error above:
qt.qpa.xcb: could not connect to display 127.0.1.1:13.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
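As an aside, the sed expression in that docker run line simply replaces everything up to the last colon in $DISPLAY with the host's IP. A standalone illustration (the IP is a made-up stand-in for $(hostname -i)):

```shell
# Everything up to the last ':' in DISPLAY is swapped for the host IP.
DISPLAY_IN="localhost:10.0"
HOST_IP="192.168.1.5"          # made-up stand-in for $(hostname -i)
NEW_DISPLAY=$(echo "$DISPLAY_IN" | sed "s/^.*:/$HOST_IP:/")
echo "$NEW_DISPLAY"            # -> 192.168.1.5:10.0
```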
Following another answer, I added ENV DISPLAY :0 to my Dockerfile and called it with
xhost +
XSOCK=/tmp/.X11-unix/X0
docker run -v $XSOCK:$XSOCK my_image
This time, the first line of my error was qt.qpa.xcb: could not connect to display :0.
Then I tried another answer, added
RUN export uid=0 gid=0 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d/ && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
to my Dockerfile and called docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image, again same error.
I also tried several of the ways described in http://wiki.ros.org/docker/Tutorials/GUI, same error.
Am I doing something wrong? Note that I'm working on a remote machine via SSH, with X11 forwarding turned on of course (and the application works just fine outside of Docker). Also note that what I'm writing is a client-server application; the server part, which needs no GUI elements but shares most of the source code, works just fine from its container.
I hope for a solution that doesn't require me to change the system as the reason I use Docker in the first place is for users of my application to get it running without much hassle.
You have multiple errors that are masking each other. First of all, make sure you have the correct libraries installed. If your Docker image is Debian-based, the install step usually looks like this:
RUN apt-get update && \
apt-get install -y libqt5gui5 && \
rm -rf /var/lib/apt/lists/*
ENV QT_DEBUG_PLUGINS=1
Note the environment variable QT_DEBUG_PLUGINS. This will make the output much more helpful and name any missing libraries. In the now very verbose output, look for something like this:
Cannot load library /usr/local/lib/python3.9/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
The part in parentheses (libxcb-icccm.so.4 above) is the missing library file; you can find the package that provides it with your distribution's package manager (e.g. dpkg -S libxcb-icccm.so.4 on Debian).
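If several plugins fail, it can help to script the lookup. This hypothetical helper (mine, not part of Qt) extracts the missing library name from a QT_DEBUG_PLUGINS error line so it can be passed straight to dpkg -S:

```shell
# Pull the "(libfoo.so.N: cannot open shared object file ...)" part out
# of a QT_DEBUG_PLUGINS error line and print just the library name.
missing_lib() {
  sed -n 's/.*(\([^:]*\): cannot open shared object file.*/\1/p'
}

echo 'Cannot load library /usr/local/lib/python3.9/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)' \
  | missing_lib
# -> libxcb-icccm.so.4
# then, e.g.: dpkg -S libxcb-icccm.so.4
```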
Next, start docker like this (can be one line, separated for clarity):
docker run \
-e "DISPLAY=$DISPLAY" \
-v "$HOME/.Xauthority:/root/.Xauthority:ro" \
--network host \
YOUR_APP_HERE
Make sure to replace /root with the guest user's HOME directory.
Advanced graphics (e.g. games, 3D modelling etc.) may also require mounting of /dev with -v /dev:/dev. Note that this will give the guest root access to your hard disks though, so you may want to copy/recreate the devices in a more fine-grained way.
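As a sketch of that finer-grained approach (the device paths are typical examples, not guaranteed; check your own /dev/dri):

```shell
# Pass through only the GPU nodes instead of all of /dev
docker run \
  -e "DISPLAY=$DISPLAY" \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  --network host \
  --device /dev/dri/card0 \
  --device /dev/dri/renderD128 \
  YOUR_APP_HERE
```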
On the host system, allow X connections from Docker: xhost +local:root
Then start your container:
docker run -it \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--name my_app \
--rm \
<your_image>

How to use screen in google cloud startup script?

So I decided to follow this Minecraft server guide, and I'm stuck on the part linked "Automate startup and shutdown procedures". It doesn't start the server. I have tried replacing the code with a simple mkdir, which works, so I know the script is being executed.
Yet I can't find the screen session with screen -list (both as sudo and as my own user). I checked the /run/screen/ folder and there's no other user.
So it's safe to say that the screen command fails, rather than something being set up wrong.
The code I am using is an exact copy of the guide's, minus the mounting and backup parts.
#!/bin/bash
sudo su
cd /home/minecraft
screen -d -m -S mc java -Xms3G -Xmx7G -d64 -jar paper.jar nogui
Shouldn't this work?
EDIT: It works, and I pasted the exact code I used. NOTE: I do use PaperMC and an upgraded machine.
I tried the Minecraft server guide myself and it worked properly.
At first, I didn't find my mc screen with screen -list, but then I remembered that GCE always executes startup scripts as root, once the network is available.
So my next step was simply to switch to the root user with sudo su, and from that point my mc screen was available via screen -list.
Note that you can also use sudo screen -list.
I hope that helps; if not, what is the output when you run this command in your shell?
screen -d -m -S mc java -Xms1G -Xmx3G -d64 -jar server.jar nogui

How do I get a podman/buildah container to run under CentOS on GCE?

1. Summarize the problem
I am following this simple tutorial from Developers RedHat to get a simple node/express container working.
I cannot get a container to run under a CentOS 7 VM on GCE.
I have a CentOS 7 GCE virtual machine, where I have Docker installed.
I am able to successfully build and run Docker containers and push them to Google's container registry with no problem.
Now I am trying to build podman/buildah containers, and do the same.
I have buildah/podman installed. When I run this:
podman build -t hello-world-nodejs .
I get the following error message:
cannot clone: Invalid argument
user namespaces are not enabled in /proc/sys/user/max_user_namespaces
Error: could not get runtime: cannot re-exec process
any ideas?
Additionally, any guides on getting this image into Google's container registry and running it under Cloud Run would be greatly appreciated.
Ultimately the destination for some containers is a cloud service.
2. Provide background including what you've already tried
I have tried doing a web search for a solution, nothing found that has solved the problem so far.
3. Show some code
podman build -t hello-world-nodejs .
4. Describe expected and actual results including any error messages
I can create and run docker images/containers on this GCE VM, I am trying to do the same with buildah/podman.
The following solved this issue for me:
sudo bash -c 'echo 10000 > /proc/sys/user/max_user_namespaces'
sudo bash -c "echo $(whoami):110000:65536 > /etc/subuid"
sudo bash -c "echo $(whoami):110000:65536 > /etc/subgid"
And then, if you encounter errors related to lchown, run the following:
sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}
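One caveat (my addition): writing to /proc/sys this way does not survive a reboot. A sysctl drop-in along these lines should make the setting persistent (the file name is arbitrary):

```shell
# Persist the user-namespace limit across reboots
echo 'user.max_user_namespaces=10000' | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system
```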
I spun up a CentOS 7 VM on GCE and got the same issue. The issue is caused by user namespaces not being enabled in the kernel by default. You have two options: either run podman as root (or use sudo), or enable user namespaces in your CentOS VM (the hard way).
According to the post here, rootless containers rely on user namespaces and on the allocation of the UIDs and GIDs required to make them work securely in your environment.
Stack Overflow is probably not the best place to ask this question. It's better to ask on the Server Fault site, since it's a server problem rather than a coding problem.

Can't sign out of google cloud datalab

I am using Google Cloud datalab, although really I'm just getting started.
I need to log out and sign in as a new user, but when I click sign out, it does not sign me out. I check the drop-down at the top right and it still shows me as logged in.
That was from the notebook directory screen. When I try the same from a notebook the effect is the same, except that it first warns me that I'm leaving the page.
This is the same on my local machine and on cloud compute.
How can I sign out on datalab? Is this a bug?
Update
Problem recreated on separate machine, again running locally.
Update 2
I've since found that the application has signed out successfully, but it doesn't indicate this to be the case. It still shows that I'm signed in with my email. Now when trying to run a query it returns "No application credentials found. Perhaps you should sign in."
Update 3
Command used to start datalab:
docker run -d -it -p "127.0.0.1:8081:8080" -v "${HOME}:/content" -e "PROJECT_ID={project-id}" datalab bash
I managed to get the folks working on the project to respond here
Multi-login is not currently supported, but there is a work around which by their own words is:
Run this command from a cell:
!rm /content/datalab/.config/*
I assume that requires a %%bash before the ! to run. But I couldn't actually get this to work. I logged into a terminal and ran:
rm -r /content/datalab/.config/*
After this you may have to change projects which you can do with:
%datalab project set -p project_id

How can OpenAI Gym's visualizations work within Docker?

I'd like to get OpenAI Gym working with the rendered OpenGL visualizations within a docker container.
It's straightforward to get OpenAI Gym running within Docker. However, it's not immediately clear how to get the rendered environment to display in a window on my OS X laptop when env.render() is called on an OpenAI environment inside the Docker container.
How do I go about this?
You can try sharing your X11 socket file with your container. That way your container can write to it and the rendering will show up on your machine.
Something like this:
docker run --privileged=true --rm \
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
...