We have a series of unit tests that require an Nvidia GPU for execution. These tests currently fail (I think) because TFSBuild runs as a Windows service, and Windows (Windows 7 and later) does not allow Windows services to access GPUs. Any ideas on how to get around this problem?
You are correct that the MS Test Execution Engine on a build server does run as a service (just like the MSBuild process), and that services by default cannot access the GPU because of the "Session 0 Isolation" concept that was introduced in Windows Vista.
From what I've researched, the only official way to get around this is to purchase an Nvidia Tesla card and run it in "Tesla Compute Cluster" (TCC) mode, which allows services to access the GPU for compute work (like CUDA). There is indirect evidence that some Quadro cards also support TCC mode, but I have not found anything official stating which ones do.
I have a question up on Nvidia's forums asking about an inexpensive card for this exact scenario but it does not have any replies as of yet.
EDIT:
I just acquired an Nvidia Quadro K2200 and can confirm that it does indeed support TCC mode and works great running CUDA unit tests on my build server during the build process.
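For reference, the driver model can be switched with Nvidia's nvidia-smi tool (e.g. nvidia-smi -i 0 -dm TCC on a supported card), and the tests can guard against accidentally running on a WDDM device with a runtime check. A minimal sketch using the CUDA runtime API (device index 0 is an assumption):

    // Sketch: query whether device 0 runs the TCC driver via the CUDA runtime.
    // cudaDeviceProp::tccDriver is 1 when the TCC driver is active, which is
    // what makes the GPU usable from a Windows service.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            std::printf("No CUDA device visible from this session\n");
            return 1;
        }
        std::printf("%s: tccDriver=%d\n", prop.name, prop.tccDriver);
        return prop.tccDriver ? 0 : 1;
    }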
We are building an ASP.NET API on .NET 5 that uses SkiaSharp to dynamically create and return images. We've noticed that using the GPU dramatically increases performance. We know that in order to use the GPU we need an OpenGL context instantiated, but with it come some requirements. Our tests work well in our environments, Mac and Windows, but fail when deployed to the Linux Azure App Service using a P1v2 VM.
The error message is Unable to create GL context: Unable to load shared library 'libX11'. Doing some research I realized the container doesn't have OpenGL installed, and installing it through apt-get is not possible because of a lack of permissions.
I went the route of running the KuduLite container locally on my machine and installing libgl1-mesa-glx and mesa-utils, but running glxinfo results in the error Error: unable to open display. I found this blog post that explains the requirements for hardware-accelerated OpenGL support in Docker. The blog post is from 2014, so I'm not sure if it is still valid, but if it is, there are quite a few requirements, and before I try to satisfy them locally on my machine I would like to know if they are even possible in an Azure App Service container.
So, is it even possible to have hardware-accelerated OpenGL support in an Azure App Service Docker container?
The problem you're running into is that the machine you're running this on is headless and doesn't run an X11 display server. Most application frameworks for use with OpenGL assume that they'll be running in some interactive graphical environment, i.e. having either an X11 server (which is configured to use the GPU) or a Wayland compositor around.
glxinfo doesn't have anything to do with this, BTW. It's just a little tool to query what kind of OpenGL capabilities a given X11 display (server) has. If you don't run X11 in the first place, you don't need it.
Up until a few years ago, running an X server was in fact the only way to get GPU acceleration on Linux. Luckily those days are long gone. These days one can obtain fully headless, offscreen OpenGL contexts using EGL. Nvidia has a nice blog post about how to do it:
https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
Then there's this GitHub repo:
https://github.com/eduble/gl
You'll get the idea: instead of opening a window you get a so-called "surface" and draw on that.
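In outline the EGL route looks like this (a minimal sketch with error handling trimmed; for a fully headless setup on Nvidia's driver you would obtain the display via eglGetPlatformDisplayEXT with EGL_PLATFORM_DEVICE_EXT, as the blog post above describes):

    // Sketch: create an offscreen OpenGL context without any X11 server,
    // rendering into a 256x256 pbuffer "surface".
    #include <EGL/egl.h>

    int main() {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);

        const EGLint cfgAttribs[] = {
            EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint numCfg;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

        const EGLint pbAttribs[] = { EGL_WIDTH, 256, EGL_HEIGHT, 256, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, surf, surf, ctx);

        // ... issue normal OpenGL calls here, then read back with glReadPixels ...

        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglDestroyContext(dpy, ctx);
        eglDestroySurface(dpy, surf);
        eglTerminate(dpy);
        return 0;
    }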
With Vulkan it's even more straightforward, because you don't even have to bother setting up a surface suitable for presenting to a display if your goal is rendering an image into a buffer that you write to a file or send out over a network. Look at the offscreen sample in Sascha Willems' examples: https://www.saschawillems.de/creations/vulkan-examples/
I am attempting to run Android Studio on a GCP n1-standard-4 instance, following this article. Everything works fine until it comes to accessing the instance: Chrome RDP gives poor resolution, and I would prefer to use something better, namely Parsec. Once I try to connect to the instance, I get error 15000, 'The host encoder failed to initialize'. I do not have a GPU attached to this instance, so could this be the problem?
Have a look at the Parsec documentation:
Error Codes - 22006 and 15000 (Unable To Initialize Encoder On Your Server): This is an issue on the host's PC. Below are the things that can cause it.
Check that your GPU is supported! This is the most common issue. If your GPU is not supported, none of these solutions will help you, and you will require a new GPU or PC to be able to host. If your GPU is supported, continue on below.
As you can see, Parsec requires a supported GPU with supported drivers. To solve your issue, use a supported GPU, and also make sure you're running a supported OS and drivers.
In addition, have a look at the documentation GPUs on Compute Engine and Adding or removing GPUs if you decide to attach a specific GPU.
I am trying to run the Carla simulator in a Google Cloud Linux VM instance (Ubuntu 20.04 with an NVIDIA Tesla P100 Virtual Workstation GPU). I use NoMachine to connect remotely to the instance.
All the installation steps completed without problems, but when I run the Carla simulator it shows the error below.
If I run the vulkaninfo command in the NoMachine session, an exception is thrown:
build/vulkan-tools-KEbD_A/vulkan-tools-1.2.131.1+dfsg1/vulkaninfo/vulkaninfo.h:477: failed with ERROR_INITIALIZATION_FAILED
However, if I run vulkaninfo over an SSH connection, it completes correctly.
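For what it's worth, the difference can be reproduced without vulkaninfo itself; a minimal sketch of the same initial check (assuming the Vulkan SDK headers and loader are installed) behaves the same way in the two sessions:

    // Sketch: the first thing vulkaninfo does, reduced to a few lines.
    #include <vulkan/vulkan.h>
    #include <cstdio>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
            std::printf("vkCreateInstance failed (cf. ERROR_INITIALIZATION_FAILED)\n");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::printf("physical devices: %u\n", count);
        vkDestroyInstance(instance, nullptr);
        return 0;
    }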
My guess is that because there is no physical display for the Google VM instance, NoMachine cannot detect one (I even tried the NoMachine Workstation version already).
So I just wonder: is it possible to attach a graphical display to a Google Linux instance? Or is there a better way to get a visual remote connection than NoMachine? I would appreciate any suggestions.
Thank you very much in advance
It seems to be an issue with NoMachine and Vulkan, as per this other thread.
You could try using Teradici for PCoIP, as suggested in GCP's documentation for "Creating a virtual GPU-accelerated Linux workstation". A free trial can be requested here.
I'm trying to run an OpenGL program on an Amazon EC2 instance. When run on local computers it works fine, but when run through the remote desktop the program crashes, and I've narrowed it down to the glCreateShader(GL_VERTEX_SHADER) call.
I researched this previously when running over remote desktop on a computer in the local network, and the solution I found was a batch script that disconnected the RDP session back to the console and then started the OpenGL exe: tscon 1 /dest:console. When you logged back on, everything was fine.
Unfortunately, this no longer works when running on the Amazon instance. Does anyone have any experience with OpenGL issues over remote connections?
glCreateShader is one of the functions whose location must be obtained at runtime using a …gl…GetProcAddress call. This call will return a valid pointer only if the function is actually supported by the installed OpenGL driver. And even if the function is supported by the driver, the actual feature accessed through it may not be supported by the device/OpenGL context you're using.
It's mandatory to check the validity of the function address (assert(glCreateShader);) and that the feature is actually supported (OpenGL version >= 2.0, or GL_ARB_vertex_shader and GL_ARB_fragment_shader in the list of extensions).
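Something along these lines (a sketch for Windows, assuming an OpenGL context is already current on the calling thread; on other platforms substitute glXGetProcAddress or eglGetProcAddress):

    // Sketch: validate glCreateShader before using it.
    #include <windows.h>
    #include <GL/gl.h>
    #include <cassert>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC_)(GLenum type);

    bool checkShaderSupport() {
        // 1. The function pointer must resolve to something non-null...
        PFNGLCREATESHADERPROC_ pglCreateShader =
            (PFNGLCREATESHADERPROC_)wglGetProcAddress("glCreateShader");
        assert(pglCreateShader);

        // 2. ...and the context must actually expose the feature.
        const char* version = (const char*)glGetString(GL_VERSION); // "major.minor ..."
        const char* exts    = (const char*)glGetString(GL_EXTENSIONS);
        bool supported =
            (version && std::atoi(version) >= 2) ||
            (exts && std::strstr(exts, "GL_ARB_vertex_shader")
                  && std::strstr(exts, "GL_ARB_fragment_shader"));
        std::printf("shader support: %s\n", supported ? "yes" : "no");
        return pglCreateShader != nullptr && supported;
    }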
I'm trying to run an OpenGl program on an Amazon EC2 instance.
Virtual machines normally don't have a GPU available, and the functionality you're requesting is not available without a GPU in a standard Windows installation. As a workaround, albeit with greatly reduced performance, you can build the Mesa3D opengl32.dll software rasterizer and place it alongside your program's .exe (do not install it in the system path!).
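Once that DLL is picked up, you can confirm which implementation the process actually loaded by logging the renderer strings; with Mesa3D's software rasterizer the renderer string typically contains "llvmpipe" or "softpipe". A small sketch (call it with a context current):

    // Sketch: print which OpenGL implementation is in use.
    #include <GL/gl.h>
    #include <cstdio>

    void logGlImplementation() {
        std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    }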
I'm trying to write an application that connects to my company's wireless network automatically on Windows XP.
I've found the Wireless LAN API, but it requires a hotfix to be installed on the machine, and you need SP2 or higher (there are machines with SP1, and I'm required to support any XP machine).
I've tried to find some samples about Wireless Zero Configuration on MSDN, but with no luck; the only samples I've found are for WinCE, and I think Microsoft stopped supporting it. In addition, I couldn't find where to download the DLL and header file for working with WZC.
There must be a way to do it that works on any service pack, because I've found Zwlancfg by ENGL.
Consider that any change you'll have to introduce to these old XP machines will be similar in magnitude to the SP2 update, except that (1) you don't have the insight into the network stack that Microsoft has, (2) you don't have the experience in Windows development that Microsoft collectively has, and (3) you don't have the testing resources (including beta testers) that Microsoft has. So your change will be riskier and less stable than the SP2 update.
Couldn't you just set up the wireless password and tell XP to auto-join when it sees the network?
Maybe I'm missing something, but that happens automatically, so I don't see why you need to code an app to do this.
I would encourage you to advocate for upgrading those XP machines at least to Service Pack 2, as it was a major upgrade in terms of functionality and security. It's also been at least five years since it was rolled out, so I can't imagine you'd have compatibility issues with third-party software.
That being said: wireless support in XP was seriously reworked with Service Pack 2, and the Wireless Network Policy was introduced, which allows you to push policy out to all machines on your network via the Group Policy MMC.
You should try the Native Wifi API, but it only works with XP SP2 and later. There is a WlanConnect() function; with it you will be able to connect to a network from your program.
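A minimal sketch of that call (assuming the Native Wifi API from wlanapi.h and an existing wireless profile; the profile name "CorpWifi" and the choice of the first wireless interface are placeholders):

    // Sketch: connect to a wireless network via an existing profile using
    // the Native Wifi API. Link against wlanapi.lib.
    #include <windows.h>
    #include <wlanapi.h>
    #include <cstdio>

    int main() {
        HANDLE hClient = nullptr;
        DWORD negotiated = 0;
        // Client version: 1 for XP SP2/SP3, 2 for Vista and later.
        if (WlanOpenHandle(1, nullptr, &negotiated, &hClient) != ERROR_SUCCESS)
            return 1;

        PWLAN_INTERFACE_INFO_LIST ifList = nullptr;
        if (WlanEnumInterfaces(hClient, nullptr, &ifList) != ERROR_SUCCESS ||
            ifList->dwNumberOfItems == 0) {
            WlanCloseHandle(hClient, nullptr);
            return 1;
        }

        WLAN_CONNECTION_PARAMETERS params = {};
        params.wlanConnectionMode = wlan_connection_mode_profile;
        params.strProfile = L"CorpWifi";  // placeholder: your profile name
        params.dot11BssType = dot11_BSS_type_infrastructure;

        DWORD rc = WlanConnect(hClient,
                               &ifList->InterfaceInfo[0].InterfaceGuid,
                               &params, nullptr);
        std::printf("WlanConnect returned %lu\n", rc);

        WlanFreeMemory(ifList);
        WlanCloseHandle(hClient, nullptr);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }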