glCreateShader Fails over remote connection - opengl

I'm trying to run an OpenGL program on an Amazon EC2 instance. It works fine when run on local computers, but when run through Remote Desktop the program crashes, and I've narrowed it down to the glCreateShader(GL_VERTEX_SHADER) call.
I researched this previously when running over Remote Desktop to a computer on the local network, and the solution I found was a batch script that disconnects the session back to the console and starts the OpenGL exe; when you logged back on, everything was fine: tscon 1 /dest:console
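A script along these lines (the session number and exe path here are placeholders; query session lists the actual session IDs):

    tscon 1 /dest:console
    start "" "C:\Path\To\MyOpenGLApp.exe"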
Unfortunately this no longer seems to work when running on the Amazon instance. Does anyone have experience with OpenGL issues over remote connections?

glCreateShader is one of the functions whose location must be obtained at runtime using a …GetProcAddress call (wglGetProcAddress on Windows, glXGetProcAddress on X11). This call gives a valid pointer only if the function is actually supported by the installed OpenGL driver. And even if the driver exports the function, the feature it exposes may not be supported by the device/OpenGL context you're using.
It's mandatory that you check the validity of the function address, e.g. assert(glCreateShader);, and that the functionality is actually supported (OpenGL version >= 2.0, or GL_ARB_vertex_shader and GL_ARB_fragment_shader in the list of extensions).
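A minimal sketch of such a check on Windows, assuming an OpenGL context has already been created and made current (link against opengl32.lib; the typedef and function names here are local helpers, not part of any loader library):

    #include <windows.h>
    #include <GL/gl.h>
    #include <string.h>

    /* Local typedef for the entry point we want to resolve at runtime. */
    typedef GLuint (APIENTRY *CREATESHADERPROC)(GLenum type);

    /* Returns 1 if shader objects are usable on the current context, 0 otherwise. */
    static int shaders_usable(void)
    {
        CREATESHADERPROC pglCreateShader =
            (CREATESHADERPROC)wglGetProcAddress("glCreateShader");
        if (!pglCreateShader)
            return 0;  /* driver does not export the entry point */

        const char *version    = (const char *)glGetString(GL_VERSION);
        const char *extensions = (const char *)glGetString(GL_EXTENSIONS);

        if (version && version[0] >= '2')        /* crude major-version check: >= 2.0 */
            return 1;
        if (extensions &&
            strstr(extensions, "GL_ARB_vertex_shader") &&
            strstr(extensions, "GL_ARB_fragment_shader"))
            return 1;                            /* shaders advertised via the ARB extensions */
        return 0;
    }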
I'm trying to run an OpenGL program on an Amazon EC2 instance.
Virtual machines normally don't have a GPU available, and the functionality you're requesting is not available without a GPU on a standard Windows installation. As a workaround, albeit with greatly reduced performance, you can build the Mesa3D opengl32.dll software rasterizer and place it alongside your program's .exe (do not install it in the system path!).
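A quick way to confirm which implementation was actually picked up on the instance is to dump the GL strings once a context is current; a minimal sketch (the Mesa software rasterizer typically reports llvmpipe in its renderer string):

    #include <windows.h>
    #include <GL/gl.h>
    #include <stdio.h>

    /* Call with a context current, e.g. right after wglMakeCurrent succeeds. */
    static void dump_gl_strings(void)
    {
        printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    }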

Related

Is it possible to use OpenGL in Azure App Service Linux?

We are building an ASP.NET API running on .NET 5 that uses SkiaSharp to dynamically create and return images. We've noticed that using the GPU gives a dramatic increase in performance. We know that in order to use the GPU we need an OpenGL context instantiated, but with it come some requirements. Our tests work well in our environments (Mac and Windows), but they don't work when deployed to the Linux Azure App Service using a P1v2 VM.
The error message is Unable to create GL context: Unable to load shared library 'libX11'. Doing some research I realized the container doesn't have OpenGL installed, and trying to install it through apt-get is not possible because of lack of permissions.
I went the route of running the KuduLite container locally on my machine and installing libgl1-mesa-glx and mesa-utils, but running glxinfo results in the error Error: unable to open display. I found this blog post that explains the requirements for running hardware-accelerated OpenGL support in Docker. The blog post is from 2014, so I'm not sure whether it is still valid, but if it is, there are quite a few requirements, and before I try to satisfy them locally on my machine I would like to know whether they are even possible in an Azure App Service container.
So, is it even possible to have hardware-accelerated OpenGL support in an Azure App Service Docker container?
The problem you're running into is that the machine you're running this on is headless and doesn't run an X11 display server. Most application frameworks used with OpenGL assume that they'll be running in some interactive graphical environment, i.e. that either an X11 server (configured to use the GPU) or a Wayland compositor is around.
glxinfo has nothing to do with this, BTW. It's just a little tool to query what kind of OpenGL capabilities a given X11 display (server) has. If you don't run X11 in the first place, you don't need it.
Up until a few years ago, going through X11 was in fact the only way to get GPU acceleration on Linux. Luckily those days are long gone. These days one can obtain fully headless, offscreen OpenGL contexts using EGL. Nvidia has a nice blog post about how to do it:
https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
Then there's this GitHub repo:
https://github.com/eduble/gl
You'll get the idea: instead of opening a window you get a so-called "surface" and draw on that.
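A minimal sketch of headless context creation via EGL, loosely following the Nvidia blog post above, assuming a driver that exposes EGL_EXT_platform_device (error checks omitted for brevity; link with -lEGL):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <stdio.h>

    int main(void)
    {
        /* Enumerate EGL devices instead of talking to an X11 display. */
        PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
            (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

        EGLDeviceEXT devices[8];
        EGLint numDevices = 0;
        eglQueryDevicesEXT(8, devices, &numDevices);

        /* Open a display on the first device and initialize EGL on it. */
        EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
        eglInitialize(dpy, NULL, NULL);

        /* Pick a config and create a small pbuffer surface to render into. */
        static const EGLint cfgAttribs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint numCfg = 0;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

        static const EGLint pbAttribs[] = { EGL_WIDTH, 256, EGL_HEIGHT, 256, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

        /* Create and bind an OpenGL context; regular GL calls work from here on,
           and results can be read back with glReadPixels. */
        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        eglMakeCurrent(dpy, surf, surf, ctx);

        printf("headless EGL context created: %p\n", (void *)ctx);
        return 0;
    }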
With Vulkan it's even more straightforward, because you don't even have to bother setting up a surface suitable for presenting to a display if your goal is rendering an image into a buffer that you write to a file or send out over a network (look at the offscreen sample in Sascha Willems' examples: https://www.saschawillems.de/creations/vulkan-examples/).

How can I run Parsec streaming service on a GCP instance without a GPU? (Error Code 15000)

I am attempting to run Android Studio on a GCP n1-standard-4 instance, following this article. Everything works fine until it comes to accessing the instance: Chrome RDP gives poor resolution, and I would prefer to use something better, namely Parsec. Once I try to connect to the instance, I get error 15000, 'The host encoder failed to initialize'. I do not have a GPU attached to this instance, so could this be the problem?
Have a look at the Parsec documentation:
Error Codes - 22006 and 15000 (Unable To Initialize Encoder On Your Server)
This is an issue on the host's PC. Below are the things that can cause it.
Check that your GPU is supported! This is the most common issue. If your GPU is not supported, none of these solutions will help you, and you will require a new GPU or PC to be able to host. If your GPU is supported, continue on below.
As you can see, Parsec requires a supported GPU with supported drivers.
To solve your issue you should attach a supported GPU, and also make sure that you're using a supported OS and drivers.
In addition, have a look at the documentation GPUs on Compute Engine and Adding or removing GPUs if you decide to use a specific GPU.
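As a rough illustration only (instance name, zone, accelerator type and image are placeholders; check which GPU types your zone actually offers), creating a Windows instance with an attached GPU looks roughly like this:

    gcloud compute instances create parsec-host \
        --zone=us-central1-a \
        --machine-type=n1-standard-4 \
        --accelerator=type=nvidia-tesla-t4,count=1 \
        --maintenance-policy=TERMINATE \
        --image-family=windows-2019 \
        --image-project=windows-cloud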

MPI CUDA error code 38 cudaErrorNoDevice on Windows [duplicate]

I'm using Python/NumbaPro to use my CUDA-compliant GPU on a Windows box. I use Cygwin as my shell, and from within a Cygwin console it has no problems finding my CUDA device. I test with the simple command
numbapro.check_cuda()
But when I'm connecting to the box over OpenSSH (as part of my Cygwin setup), I get the following error:
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init:
Call to cuInit results in CUDA_ERROR_NO_DEVICE:
How to fix this?
The primary cause of this is Windows service session 0 isolation. When you run any application via a service which runs in session 0 (so sshd, or Windows Remote Desktop, for example), the machine's native display driver is unavailable. For CUDA applications, this means that you get a no-device-available error at runtime, because the sshd you use to log in is running as a service and there is no available CUDA driver.
There are a few workarounds:
Run the sshd as a process rather than a service.
If you have a compatible GPU, use the TCC driver rather than the GPU display driver.
As for the secondary problem, the Python runtime error you are seeing comes from the multiprocessing module. From this question it appears that the root cause is probably the NUMBER_OF_PROCESSORS environment variable not being set. You can use one of the workarounds in that thread to get around that problem.

Running an MS unittest that requires a GPU from TFS Build

We have a series of unit tests that require an NVIDIA GPU for execution. These tests currently fail (I think) because TFS Build runs as a Windows service, and Windows (Windows 7 and later) does not allow Windows services to access GPUs. Any ideas on how to get around this problem?
You are correct in that the MS Test Execution Engine on a build server does run as a service (just like the MSBuild process) and that services by default cannot access the GPU because of the "Session 0 Isolation" concept that was introduced in Windows Vista.
From what I've researched, the only official way to get around this is to purchase an Nvidia Tesla card and run it in "Tesla Compute Cluster" (TCC) mode, which allows services to access the GPU for compute work (like CUDA). There is indirect evidence that some Quadro cards also support TCC mode, but I have not found anything official stating which ones do.
I have a question up on Nvidia's forums asking about an inexpensive card for this exact scenario but it does not have any replies as of yet.
EDIT:
I just acquired an Nvidia Quadro K2200 and can confirm that it does indeed support TCC mode and works great for running CUDA unit tests on my build server during the build process.
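If your card supports it, the driver model can be switched with nvidia-smi, roughly as follows (the GPU index is a placeholder and the exact option syntax may vary by driver version, so check nvidia-smi -h; it requires administrator rights and usually a reboot):

    nvidia-smi -i 0 -dm 1

where 1 selects the TCC driver model and 0 selects WDDM.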

Offscreen rendering to a texture in a win32 service

I'm trying to write a C++ Windows service that can render to a texture. I've got the code working as a regular console app, but when run as a service, wglGetProcAddress() returns NULL.
Can anyone tell me if this is possible, and if so, what do I need to do to make OpenGL work inside a service process?
Edit:
I still haven't got this to work under Vista, but it does work under XP.
You can get a fully capable software renderer by using Mesa3D.
Simply build Mesa3D and put the opengl32.dll it produces alongside your application.
This should enable you to use OpenGL 2.1 and extensions.
We use this for testing OpenGL applications in a Windows service.
Services run in non-interactive desktops. These desktops do not connect to the physical display device of the computer, but rather to logical display devices. The logical display devices are very basic generic VGA devices set to 1024 x 768 with no bells and whistles.
Services can use most GDI functions but no advanced graphics functions such as DirectX or OpenGL. So you can create windows, create or retrieve device contexts and do some fairly complex drawing and rendering but you can't use anything but straightforward GDI (and some GDI+).
If you check GetLastError after wglGetProcAddress returns NULL, you should get the reason for the failure.
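A minimal sketch of that check, assuming the service has already tried to create and bind a context (logging goes through OutputDebugString since a service has no console):

    #include <windows.h>
    #include <stdio.h>

    /* Call this right after wglGetProcAddress(name) has returned NULL,
       before any other API call can overwrite the error code. */
    static void report_wgl_failure(const char *name)
    {
        char msg[160];
        sprintf(msg, "wglGetProcAddress(\"%s\") failed, GetLastError() = %lu\n",
                name, (unsigned long)GetLastError());
        OutputDebugStringA(msg);   /* visible in a debugger or DebugView */
    }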
OpenGL needs desktop access to create a render context, and a service by default doesn't have desktop access.
You need to run the service in interactive mode. To do that, go to the service's properties in Administrative Tools. Where you set the service's log-on user, you will have an option to run the service in interactive mode, or something similar such as "Allow service to interact with desktop". You can also try logging the service on as another user.
If you are working through a .NET IIS application, you will also have to force the managed part of the server to log on as another user.
EDIT:
I forgot to say: a user must currently be logged on to the hardware-accelerated desktop and the machine must not be locked. That sucks, but that's the only way I got it to work before. We had a dirty script that logged a user on as soon as the machine started.
As a side note, we were using DirectX, so this might not apply to OpenGL.