Is it possible to use OpenGL in Azure App Service Linux? - opengl

We are building an ASP.NET API running on .NET 5 that uses SkiaSharp to dynamically create and return images. We've noticed that using the GPU gives a dramatic increase in performance. We know that in order to use the GPU we need an OpenGL context instantiated, but with it come some requirements. Our tests work well in our environments, Mac and Windows, but don't work when deployed to the Linux Azure App Service using a P1v2 VM.
The error message is Unable to create GL context: Unable to load shared library 'libX11'. Doing some research I realized the container doesn't have OpenGL installed, and installing it through apt-get is not possible because of lack of permissions.
I went the route of running the KuduLite container locally on my machine and installing libgl1-mesa-glx and mesa-utils, but running glxinfo results in the error Error: unable to open display. I found this blog post that explains the requirements for hardware-accelerated OpenGL support in Docker. The blog post is from 2014 so I'm not sure if it is still valid, but if it is, there are quite a few requirements, and before I try to satisfy them locally on my machine I would like to know whether they are even possible in an Azure App Service container.
So, is it even possible to have hardware-accelerated OpenGL support in an Azure App Service Docker container?

The problem you're running into is that the machine you're running this on is headless and doesn't run an X11 display server. Most application frameworks meant for use with OpenGL assume that they'll be running in some interactive graphical environment, i.e. having either an X11 server (which is configured to use the GPU) or a Wayland compositor around.
glxinfo has nothing to do with this, BTW. It's just a little tool to query what kind of OpenGL capabilities a given X11 display (server) has. If you don't run X11 in the first place, you don't need it.
Up until a few years ago, having such a display server was in fact the only way to get GPU acceleration on Linux. Luckily those days are long gone. These days one can obtain fully headless, offscreen OpenGL contexts using EGL. Nvidia has a nice blog post about how to do it:
https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
Then there's this Github repo:
https://github.com/eduble/gl
You'll get the idea: instead of opening a window you get a so-called "surface" and draw on that.
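To make that a bit more concrete, here is a rough sketch of the EGL route (my own illustration, not taken from the linked posts). It assumes a Linux machine with an EGL-capable driver such as Mesa or the NVIDIA driver, uses the default display, and drops most error handling:

// Minimal sketch: create a headless OpenGL context via EGL and render
// into a pbuffer surface. Assumes an EGL implementation is installed;
// error handling is mostly omitted. Link with -lEGL -lGL.
#include <EGL/egl.h>
#include <GL/gl.h>

int main() {
    // On a headless box there is no X11 display; EGL_DEFAULT_DISPLAY
    // lets the driver pick a device-backed display for us.
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numCfgs;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);

    // The offscreen "surface": a 1024x768 pbuffer instead of a window.
    const EGLint pbAttribs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 768, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // From here on regular OpenGL calls work; read the result back with
    // glReadPixels and encode/serve it however you like.
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglTerminate(dpy);
    return 0;
}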
With Vulkan it's even more straightforward, because you don't even have to bother setting up a surface suitable for pushing to a display if your goal is rendering an image to a buffer that you wrap up in a file or send out over a network (look at the offscreen sample in Sascha Willems' examples: https://www.saschawillems.de/creations/vulkan-examples/).

Related

How can I run Parsec streaming service on a GCP instance without a GPU? (Error Code 15000)

I am attempting to run Android Studio on a GCP n1-standard-4 instance following this article. Everything works fine until it comes to accessing the instance. However, Chrome RDP gives poor resolution, and I would prefer to use something better, namely Parsec. When I try to connect to the instance, I get error 15000, 'The host encoder failed to initialize'. I do not have a GPU attached to this instance, so could this be the problem?
Have a look at the Parsec documentation:
Error Codes - 22006 and 15000 (Unable To Initialize Encoder On Your Server): This is an issue on the host's PC. Below are the things that can cause it.
Check that your GPU is supported! This is the most common issue. If your GPU is not supported, none of these solutions will help you, and you will require a new GPU or PC to be able to host. If your GPU is supported, continue on below.
As you can see, you should use a supported GPU with supported drivers for Parsec.
To solve your issue you should use a supported GPU, and also be sure that you're using a supported OS and drivers.
In addition, have a look at the documentation GPUs on Compute Engine and Adding or removing GPUs if you decide to attach a specific GPU.

MPI CUDA error code 38 cudaErrorNoDevice on Windows [duplicate]

I'm using Python/NumbaPro to use my CUDA-compliant GPU on a Windows box. I use Cygwin as my shell, and from within a Cygwin console it has no problem finding my CUDA device. I test with the simple command
numbapro.check_cuda()
But when I connect to the box over OpenSSH (as part of my Cygwin setup), I get the following error:
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init:
Call to cuInit results in CUDA_ERROR_NO_DEVICE:
How to fix this?
The primary cause of this is Windows service session 0 isolation. When you run any application via a service which runs in session 0 (sshd, or Windows Remote Desktop, for example), the machine's native display driver is unavailable. For CUDA applications, this means that you get a "no device available" error at runtime, because the sshd you use to log in is running as a service and there is no available CUDA driver.
There are a few workarounds:
Run the sshd as a process rather than a service.
If you have a compatible GPU, use the TCC driver rather than the GPU display driver.
As for the secondary problem, the Python runtime error you are seeing comes from the multiprocessing module. From this question it appears that the root cause is probably the NUMBER_OF_PROCESSORS environment variable not being set. You can use one of the workarounds in that thread to get around that problem.

glCreateShader Fails over remote connection

I'm trying to run an OpenGL program on an Amazon EC2 instance. When run on local computers it works fine, but when run through the remote desktop the program crashes, and I've narrowed it down to the glCreateShader(GL_VERTEX_SHADER) call.
I researched this previously when running over remote desktop on a computer in the local network, and the solution I found was to use a batch script that disconnected the session and started the OpenGL exe: tscon 1 /dest:console. Then when you logged back on it was fine.
Unfortunately now this seems not to work when trying to run on the Amazon instance. Does anyone have any experience with OpenGL issues over remote connections?
glCreateShader is one of the functions whose location must be obtained at runtime using a …gl…GetProcAddress call. This call will give a valid pointer only if the function is actually supported by the installed OpenGL driver. Also, even if the function is supported by the driver, the actual feature accessed by the function may not be supported by the device/OpenGL context you're using.
It's mandatory that you check the validity of the function address (assert(glCreateShader);) and that the feature is actually supported (OpenGL version >= 2.0, or GL_ARB_vertex_shader and GL_ARB_fragment_shader in the list of extensions).
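For illustration, such a check could look roughly like the sketch below. It assumes a Windows/wgl setup where you load the entry point yourself rather than via a loader library; the typedef and function names are mine, and the version test is deliberately crude (it relies on the major version being a single digit, which holds for desktop OpenGL):

// Sketch: load glCreateShader at runtime and verify it is actually usable.
// Assumes a current OpenGL context has already been created on Windows.
#include <windows.h>
#include <GL/gl.h>
#include <cassert>
#include <cstring>

typedef GLuint (APIENTRY *PFN_MY_GLCREATESHADER)(GLenum type);

bool shadersUsable() {
    PFN_MY_GLCREATESHADER pglCreateShader =
        (PFN_MY_GLCREATESHADER)wglGetProcAddress("glCreateShader");

    // A NULL pointer means the driver/context doesn't expose the function
    // (typical for the default Windows software implementation, GL 1.1).
    assert(pglCreateShader != nullptr);

    // Even with a valid pointer, confirm the feature is really available:
    // either the context is OpenGL >= 2.0 ...
    const char* version = (const char*)glGetString(GL_VERSION);
    // ... or the ARB shader extensions are both listed.
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    bool hasArbShaders = ext
        && std::strstr(ext, "GL_ARB_vertex_shader")
        && std::strstr(ext, "GL_ARB_fragment_shader");

    // Only then is pglCreateShader(GL_VERTEX_SHADER) safe to call.
    return (version && version[0] >= '2') || hasArbShaders;
}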
I'm trying to run an OpenGl program on an Amazon EC2 instance.
Virtual machines normally don't have a GPU available, and the functionality you're requesting is not available without a GPU in a standard Windows installation. As a workaround, albeit with greatly reduced performance, you can build and install the Mesa3D opengl32.dll software rasterizer alongside your program's .exe (do not install it in the system path!).

Scripting Virtualbox to create networks

I know that VirtualBox offers many ways to control it: SDK, API, COM, web service, etc.
What I'd like to do is have a GUI to simply create VMs connected through networks, but I have to know: what is the best solution, the frontends [1], the web service, COM, or the API? Or perhaps libvirt?
As an example, a use case could be: I put 3 VMs on my GUI, choose their respective OSes, create one or more network connections for each, and connect these VMs to create networks.
Python, C++, etc.; the implementation language doesn't matter.
[1] http://www.virtualbox.org/manual/ch01.html#frontends
My qualification for answering this is that I created Vagrant in early 2010 and have maintained it since. Here are my general opinions of each of the available frontends for scripting VirtualBox:
vboxwebsrv is the VirtualBox web service, which provides an API to control VirtualBox. The pro of this is that web services are easy to program against nowadays. The main con is that you must handle starting and stopping this web service manually (or check to make sure it is already running). Historically, the web service has not been fully up to date with the latest APIs available in each version of VirtualBox, but I'm not sure what the status of that is today.
COM or C API. VirtualBox provides an XPCOM-based API on non-Windows platforms and an MSCOM-based API on Windows. If you can't use C++, you can also use the C API on Linux (but it is not available/exported on Windows). I used this API for over a year. Pros: fast and complete. Since it is a C API it is very fast, communicating with the VirtualBox process directly. It is also complete, since this is the same API that the VirtualBox GUI itself uses internally. The main con is that XPCOM is not easy, and the C API is not available on Windows, meaning you either have to struggle through XPCOM, or you need to handle both C and MSCOM. I chose the latter and it turned out to be a compatibility nightmare. Almost every minor release of VirtualBox (3.1, 3.2, etc.) changes the API in a slightly backwards-incompatible way, and with a major release (3.0, 4.0, etc.) you can forget about compatibility completely. This makes handling older versions of VirtualBox... tricky. This is definitely an advanced use case.
VBoxManage is the CLI-based frontend for VirtualBox. Under the covers VBoxManage is of course just using the COM-based API, but it provides a much more user-friendly layer on top of it. I've found that for 99% of use cases, VBoxManage can cover it. VBoxManage also handles all error handling, returns a proper exit status (0 for success, non-zero for everything else), etc. After 1.5 years of the C API I've switched back to VBoxManage because it's simply easier to use and does what I need to do. The downside is you must use a subprocess to talk to VBoxManage (see the sketch below). The upside is VBoxManage changes relatively infrequently, and as such it makes it very easy to support many versions of VirtualBox.
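To illustrate the subprocess approach, a minimal sketch might look like the following. The VM and network names are made up, and you should double-check the exact VBoxManage subcommands and flags against your VirtualBox version's manual:

// Sketch: script VirtualBox by shelling out to VBoxManage.
// The VM name and internal-network name here are purely illustrative.
#include <cstdlib>
#include <string>

// Run a VBoxManage command and report success on a zero exit status.
static bool vbox(const std::string& args) {
    return std::system(("VBoxManage " + args).c_str()) == 0;
}

int main() {
    // Create and register a VM, then attach its first NIC to an
    // internal network so several VMs can be wired together.
    bool ok =
        vbox("createvm --name demo-vm1 --register") &&
        vbox("modifyvm demo-vm1 --memory 512 --nic1 intnet --intnet1 demo-net") &&
        vbox("startvm demo-vm1 --type headless");
    return ok ? 0 : 1;
}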
I hope this helps!

Offscreen rendering to a texture in a win32 service

I'm trying to write a C++ Windows service that can render to a texture. I've got the code working as a regular console app, but when run as a service, wglGetProcAddress() returns NULL.
Can anyone tell me if this is possible, and if so, what do I need to do to make OpenGL work inside a service process?
Edit:
I still haven't got this to work under Vista, but it does work under XP.
You can get a fully capable software renderer by using Mesa3D.
Simply build Mesa3D and put the opengl32.dll built there alongside your application.
This should enable you to use OpenGL 2.1 and extensions.
We use this for testing OpenGL applications in a Windows service.
Services run in non-interactive desktops. These desktops do not connect to the physical display device of the computer but rather to logical display devices. The logical display devices are very basic generic VGA devices set to 1024 X 768 with no bells and whistles.
Services can use most GDI functions but no advanced graphics functions such as DirectX or OpenGL. So you can create windows, create or retrieve device contexts and do some fairly complex drawing and rendering but you can't use anything but straightforward GDI (and some GDI+).
If you check GetLastError after wglGetProcAddress returns NULL you should get the reason for the failure.
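A small sketch of what that check might look like is below; the helper name is mine, and whether wglGetProcAddress leaves a useful last-error code is driver-dependent, so treat this as a diagnostic aid rather than a guarantee:

// Sketch: report why wglGetProcAddress failed inside the service.
#include <windows.h>
#include <cstdio>

void* loadGlEntryPoint(const char* name) {
    void* fn = (void*)wglGetProcAddress(name);
    if (fn == nullptr) {
        DWORD err = GetLastError();   // reason for the failure, if any
        char msg[256] = {0};
        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, nullptr, err, 0,
                       msg, sizeof(msg), nullptr);
        std::fprintf(stderr, "wglGetProcAddress(%s) failed: %lu %s\n",
                     name, err, msg);
    }
    return fn;
}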
OpenGL needs desktop access to create a render context, and a service by default doesn't have desktop access.
You need to run the service in interactive mode. To do that, go to the service properties in Administrative Tools. Where you set the service's log-on user, you will have an option to run the service in interactive mode, or something similar such as "Allow service to interact with desktop". You can also try logging the service on as another user.
If you are working through a .NET IIS application, you will also have to force the managed part of the server to log on as another user.
EDIT:
I forgot to say: a user must currently be logged on to the hardware-accelerated desktop and the machine must not be locked. That sucks, but that's the only way I made it work before. We had a dirty script that logged a user on as soon as the machine started.
As a side note, we were using DirectX, so it might not apply to OpenGL.