I'm using Python/NumbaPro to drive my CUDA-compliant GPU on a Windows box. I use Cygwin as my shell, and from within a Cygwin console it has no problem finding my CUDA device. I test with the simple command
numbapro.check_cuda()
But when I connect to the box over OpenSSH (as part of my Cygwin setup), I get the following error:
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init:
Call to cuInit results in CUDA_ERROR_NO_DEVICE:
How to fix this?
The primary cause of this is Windows service session 0 isolation. When you run any application via a service running in session 0 (sshd, or Windows Remote Desktop, for example), the machine's native display driver is unavailable. For CUDA applications this means you get a "no device available" error at runtime, because the sshd you use to log in runs as a service and no CUDA driver is available in that session.
There are a few workarounds:
Run the sshd as a process rather than a service.
If you have a compatible GPU, use the TCC driver rather than the GPU display driver.
As for the secondary problem, the Python runtime error you are seeing comes from the multiprocessing module. From this question it appears that the root cause is probably the NUMBER_OF_PROCESSORS environment variable not being set. You can use one of the workarounds in that thread to get around the problem.
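A minimal sketch of that environment-variable workaround, under the assumption that the failure really is multiprocessing reading NUMBER_OF_PROCESSORS (the variable name comes from the linked thread; falling back to os.cpu_count() is my assumption for sessions that did not inherit it):

```python
import os

# Sketch: make sure NUMBER_OF_PROCESSORS is defined *before* anything
# imports multiprocessing. The fallback value is an assumption for
# SSH sessions that did not inherit the variable from the system.
os.environ.setdefault("NUMBER_OF_PROCESSORS", str(os.cpu_count() or 1))

import multiprocessing

n = multiprocessing.cpu_count()
print(n)
```

Set the variable at the very top of your entry-point script (or in the sshd session's environment) so it is in place before multiprocessing is first imported.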
Related
We are building an ASP.NET API running on .NET 5 that uses SkiaSharp to dynamically create and return images. We've noticed that using the GPU dramatically increases performance. We know that in order to use the GPU we need an OpenGL context instantiated, but with it come some requirements. Our tests work well in our environments, Mac and Windows, but they fail when deployed to a Linux Azure App Service on a P1v2 VM.
The error message is Unable to create GL context: Unable to load shared library 'libX11'. Doing some research I realized the container doesn't have OpenGL installed, and trying to install it through apt-get is not possible because of a lack of permissions.
I went the route of running the KuduLite container locally on my machine and installing libgl1-mesa-glx and mesa-utils, but running glxinfo results in the error Error: unable to open display. I found this blog post that explains the requirements for running hardware-accelerated OpenGL support in Docker. The blog post is from 2014, so I'm not sure it is still valid, but if it is, there are quite a few requirements, and before I try to satisfy them locally on my machine I would like to know whether they are even possible in an Azure App Service container.
So, is it even possible to have hardware-accelerated OpenGL support in an Azure App Service Docker container?
The problem you're running into is that the machine you're running this on is headless and doesn't run an X11 display server. Most application frameworks intended for use with OpenGL assume they'll be running in some interactive graphical environment, i.e. with either an X11 server (configured to use the GPU) or a Wayland compositor around.
glxinfo doesn't have anything to do with this, BTW. It's just a little tool to query what kind of OpenGL capabilities a given X11 display (server) has. If you don't run X11 in the first place, you don't need it.
Up until a few years ago, this in fact was the only way to get GPU acceleration on Linux. Luckily those days are long gone. These days one can create fully headless, offscreen OpenGL contexts using EGL. Nvidia has a nice blog post about how to do it:
https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
Then there's this Github repo:
https://github.com/eduble/gl
You'll get the idea: instead of opening a window you get a so-called "surface" and draw on that.
With Vulkan it's even more straightforward, because you don't even have to bother setting up a surface suitable for presenting to a display if your goal is rendering an image into a buffer that you wrap up in a file or send out over a network (look at the offscreen sample in Sascha Willems' examples: https://www.saschawillems.de/creations/vulkan-examples/).
The problem that I have is that after I have built my Unity project using Microsoft's Mixed Reality Toolkit and the Windows SDK 10.0.18362.0, I try to deploy it using the HoloLens 2 emulator (version 10.0.18362.1019). The result is that even though the emulator opens, my Unity application does not get deployed, and the following error is shown in Visual Studio's error list:
Please ensure that target device has developer mode enabled. Could not
obtain a developer license on 192.168.9.57 due to error 80004005
I found several articles online describing the same problem; they referred either to resetting the HoloLens device (which I do not need to do, since it is an emulator) or to enabling Developer Mode on the host machine (in my case a fully updated Windows 10 Enterprise computer), which I have already done. Nevertheless, the error persists.
I just hope that there will be a way to get rid of this error and manage to deploy my Unity application onto the HoloLens emulator.
It seems that the solution is very simple: if you run Visual Studio as an Administrator, the application is successfully deployed onto the emulator.
I am trying to create an object using GetObject("winmgmts:\\.\root\cimv2") and I get an "invalid access to memory location" message. Any suggestions?
The script runs fine in the main Windows OS environment; however, the same script fails from the Windows Recovery Environment.
The Windows Recovery Environment is based on WinPE, and WinPE does not natively include WMI support. You can update that environment to support WMI.
You would probably need to add the "WinPE-WMI.cab" package:
https://technet.microsoft.com/en-us/library/hh824926.aspx
This article's instructions apply to any of the newer Windows releases (8, 8.1, 10).
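A sketch of how adding that package might look with the Windows ADK's DISM tooling; the mount directory and ADK install path below are assumptions taken from Microsoft's documented defaults, so adjust them to your setup:

```shell
rem Mount the WinPE boot image (example paths, not necessarily yours)
Dism /Mount-Image /ImageFile:"C:\WinPE_amd64\media\sources\boot.wim" /Index:1 /MountDir:"C:\WinPE_amd64\mount"

rem Add WMI support plus its matching language pack to the mounted image
Dism /Add-Package /Image:"C:\WinPE_amd64\mount" /PackagePath:"C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\WinPE_OCs\WinPE-WMI.cab"
Dism /Add-Package /Image:"C:\WinPE_amd64\mount" /PackagePath:"C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\WinPE_OCs\en-us\WinPE-WMI_en-us.cab"

rem Commit the changes back into boot.wim
Dism /Unmount-Image /MountDir:"C:\WinPE_amd64\mount" /Commit
```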
I'm trying to run an OpenGL program on an Amazon EC2 instance. When run on local computers it works fine, but when run through remote desktop the program crashes, and I've narrowed it down to the glCreateShader(GL_VERTEX_SHADER) call.
I researched this previously when running over remote desktop on a computer on the local network, and the solution I found was to use a batch script that disconnected the session and started the OpenGL exe; when you logged back on it was fine: tscon 1 /dest:console
Unfortunately now this seems not to work when trying to run on the Amazon instance. Does anyone have any experience with OpenGL issues over remote connections?
glCreateShader is one of the functions whose location must be obtained at runtime using a …gl…GetProcAddress call. This call gives a valid pointer only if the function is actually supported by the installed OpenGL driver. Also, even if the function is supported by the driver, the actual feature accessed by the function may not be supported by the device/OpenGL context you're using.
It's mandatory that you check the validity of the function address, assert(glCreateShader);, and that the function is actually supported (OpenGL version >= 2.0, or GL_ARB_vertex_shader and GL_ARB_fragment_shader in the list of extensions).
I'm trying to run an OpenGL program on an Amazon EC2 instance.
Virtual machines normally don't have a GPU available, and without a GPU the functionality you're requesting is not available in a standard Windows installation. As a workaround, though with greatly reduced performance, you can build the Mesa3D software rasterizer and place its opengl32.dll alongside your program's .exe (do not install it in the system path!).
I am developing an application in Ubuntu to access another system remotely through Qt. Both systems are running some Qt applications. I want to check / make changes to the other system remotely using Qt programming.
I want to add a push button (as a quit screen) on the remote system that should be enabled only if the system is being remotely accessed, so that I can use it to close the remote-access screen.
Is there any way, through programming, to get the status whenever the system is remotely accessed?
I came across some solutions on forums, but they are specifically for Windows. I am looking for a solution for Linux.
Please provide suggestions/links so that I can overcome this issue.
Thanks in Advance
If you are using the remote display abilities of the X11 protocol, you could check the value of the DISPLAY environment variable. For a local connection it usually starts with :0; for a remote connection it contains the hostname of the displaying server. For a connection through ssh -X it could be localhost:10, and ssh also sets the SSH_CLIENT and SSH_CONNECTION environment variables.
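Those heuristics can be checked from code. A rough sketch in Python for brevity (in a Qt application the same checks translate directly to qgetenv); looks_like_remote_session is a hypothetical helper, and the rules are exactly the ones above, a DISPLAY that carries a hostname, or the SSH_* variables being present:

```python
import os

def looks_like_remote_session(environ=os.environ):
    """Heuristic: was this process started from a remote connection?"""
    display = environ.get("DISPLAY", "")
    # A local X11 display usually starts with ":" (e.g. ":0"); a remote
    # one carries a hostname, e.g. "otherhost:0" or "localhost:10" (ssh -X).
    if display and not display.startswith(":"):
        return True
    # OpenSSH sets these for sessions it spawns.
    if "SSH_CLIENT" in environ or "SSH_CONNECTION" in environ:
        return True
    return False

# Demonstration with synthetic environments:
local = looks_like_remote_session({"DISPLAY": ":0"})
ssh_x = looks_like_remote_session({"DISPLAY": "localhost:10.0"})
ssh_plain = looks_like_remote_session({"SSH_CONNECTION": "10.0.0.5 52422 10.0.0.7 22"})
print(local, ssh_x, ssh_plain)  # False True True
```

Note that ssh -X forwarding through localhost is the one case where the DISPLAY rule alone is not enough, which is why the SSH_* variables are checked as well.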
Otherwise, you should define better what remote access means for you (i.e. explain your application in more detail). Your Qt application may also be, e.g., some TCP/IP server. Perhaps the getpeername(2) syscall might be relevant.
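For the TCP-server case, the getpeername(2) idea can be sketched like this (Python for brevity; peer_is_loopback is a hypothetical helper, and in a Qt server you would use QTcpSocket::peerAddress for the same check):

```python
import ipaddress
import socket

def peer_is_loopback(conn: socket.socket) -> bool:
    """True when the accepted connection comes from this machine itself."""
    host = conn.getpeername()[0]
    return ipaddress.ip_address(host).is_loopback

# Demonstration against a throwaway local server:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()
is_local = peer_is_loopback(conn)
print(is_local)  # the 127.0.0.1 client is local, so True
for s in (conn, cli, srv):
    s.close()
```

A connection whose peer address is not a loopback address would be your cue to enable the remote-session quit button.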
If you are just interested in what remote connections flow into your box (independently of a particular application), you could read (e.g. using popen) the output of the command netstat -a -n, or use the /proc/net/ directory.