How do I fake OpenGL in an Azure Virtual Machine?

I'd like to run some programs in my Azure VM (Windows Server 2008), that require OpenGL 2.0.
However, the VM has no video hardware :). How can I trick the programs into believing I have a good enough video card?
How am I supposed to get to the point of doing all development in the cloud if I can't have virtual video cards? :)

You could place a Mesa softpipe (software rasterizer) build of opengl32.dll beside your program's executable. Heck, on a machine without a proper graphics card it would even be acceptable to replace the system opengl32.dll (though this is not recommended).
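If you want to verify which implementation actually got loaded, you can create a dummy context and print the renderer strings. A minimal sketch, assuming a plain Win32 build (the class name is a placeholder):

    // Build (MSVC): cl probe.cpp opengl32.lib user32.lib gdi32.lib
    #include <windows.h>
    #include <GL/gl.h>
    #include <cstdio>

    int main() {
        // A hidden dummy window is enough for a legacy OpenGL context.
        WNDCLASSA wc = {};
        wc.lpfnWndProc   = DefWindowProcA;
        wc.hInstance     = GetModuleHandleA(nullptr);
        wc.lpszClassName = "gl_probe";
        RegisterClassA(&wc);
        HWND hwnd = CreateWindowA("gl_probe", "", 0, 0, 0, 32, 32,
                                  nullptr, nullptr, wc.hInstance, nullptr);
        HDC dc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

        HGLRC rc = wglCreateContext(dc);
        wglMakeCurrent(dc, rc);

        // With Mesa's opengl32.dll beside the .exe this should report a
        // software renderer (e.g. "softpipe"/"llvmpipe"), not "GDI Generic".
        std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));

        wglMakeCurrent(nullptr, nullptr);
        wglDeleteContext(rc);
        return 0;
    }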

Check the OpenGL section here... and make sure you are using the 64-bit build of OpenSCAD.


Current state and solutions for OpenGL over Windows Remote [closed]

OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
An RDP session of Windows does not expose the graphics card, at least not directly. For instance, you cannot change the desktop resolution, and graphics card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The suggestion often given in support IRCs, "Don't use Windows Remote", is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-Coder workarounds
You can start the OpenGL program locally, allowing it to see the graphics card and create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. This can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the program already running. (Source)
Using VNC or similar to remote into the machine, start the program, and then switch to Windows Remote. (Simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works under Windows Remote flawlessly and even has a software-rendering fallback built into DX11 if something fails.
Use the ANGLE project to do this at run-time. This is what Qt officially suggests you do and how Chrome and Firefox implement WebGL. (Source)
Switch to software rendering as a fallback. Some CAD software like 3dsMax does this, for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer (Source); see the sketch after this list.
Under GLFW, version 3.3 will ship OSMesa support (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE, but I personally still struggle to get that running (Source)
Directly use Mesa's llvmpipe for a fast software OpenGL implementation. (Source)
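For the SDL2 route, here is a minimal hedged sketch of SDL_CreateSoftwareRenderer drawing into the window's surface; the window title, colors and rectangle are illustrative:

    #include <SDL.h>
    #include <cstdio>

    int main(int, char**) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }
        SDL_Window* win = SDL_CreateWindow("software fallback",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);

        // Render into the window's surface with the pure software
        // rasterizer; no GPU or hardware OpenGL context is involved.
        SDL_Surface*  surf = SDL_GetWindowSurface(win);
        SDL_Renderer* ren  = SDL_CreateSoftwareRenderer(surf);

        SDL_SetRenderDrawColor(ren, 32, 32, 32, 255);
        SDL_RenderClear(ren);
        SDL_Rect box = {270, 190, 100, 100};
        SDL_SetRenderDrawColor(ren, 200, 80, 80, 255);
        SDL_RenderFillRect(ren, &box);

        // With a software renderer you present by updating the window
        // surface rather than via SDL_RenderPresent.
        SDL_UpdateWindowSurface(win);
        SDL_Delay(2000);

        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }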
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote.
Apparently there is middleware that allows even OpenGL 4 over Windows Remote, but it's part of a bigger package and is a commercial solution. (Source)
Any other solutions or corrections are greatly appreciated.
[10] NVIDIA: https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article it seems that now RDP handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but by default it is disabled by Group Policy.
I suppose that for performance reasons, using a hardware graphics card is disabled, and RDP uses a software-emulated graphics card driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker CURA over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura shouted "cannot initialize OpenGL 2.0 context". I also noticed that Repetier Host's "preview" window runs terribly slow, and Repetier detects only an OpenGL 1.1 card. Pretty much fits the "only baseline features" description.
By running gpedit.msc then navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and changing the value of
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to successfully run Ultimaker Cura with no issues, Repetier-Host now displays OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this policy is kept in the HKLM registry hive under
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn ON GPU use in RDP.
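If you prefer to flip the same policy bit programmatically, here is a minimal sketch using the Win32 registry API; it must run elevated, and the value name is the one quoted above:

    // Build (MSVC): cl setpolicy.cpp advapi32.lib  (run from an elevated prompt)
    #include <windows.h>
    #include <cstdio>

    int main() {
        DWORD enable = 1;  // 1 = enumerate hardware GPUs before the software driver
        LSTATUS rc = RegSetKeyValueA(
            HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Policies\\Microsoft\\Windows NT\\Terminal Services",
            "bEnumerateHWBeforeSW",
            REG_DWORD, &enable, sizeof(enable));
        if (rc == ERROR_SUCCESS) std::printf("Policy set.\n");
        else                     std::printf("Failed (error %ld).\n", (long)rc);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }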
OpenGL works great over RDP with professional NVIDIA cards, without anything like virtual machines or RemoteFX. For Quadro cards (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM mode (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). There may be some licensing problems in this last case.
Some people recommend using "tscon.exe" if you can: https://stackoverflow.com/a/45723167/32453 or using a scheduler to do it on native hardware: https://stackoverflow.com/a/41839102/32453 or creating a group policy:
https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
Maybe copy a software-rendering opengl32.dll to your executable's dir (the DLL is named opengl32.dll even for 64-bit builds): https://blender.stackexchange.com/a/73014 and a newer version of the dll: https://fdossena.com/?p=mesa/index.frag
Remote Desktop and OpenGL do not play very well together. When you connect to a Windows box, the OpenGL driver is unloaded and you end up with software emulation of OpenGL.
When you disconnect from the Windows box, the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other remote desktop access software, because if it is used for logging in remotely, the currently loaded set of drivers may be messed up.
Install NoMachine
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware acceleration of compression (video of desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections especially if the client and server have the necessary hardware for compression of the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
Currently free for personal and commercial use. Do check the licence in case it has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely. It is, however, in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.

Is there any good way to get an indication of whether a computer can run a specific program/software?

Is there any good way to get an indication of whether a computer is capable of running a program/software without any performance problems, using pure JavaScript (Google V8) or C++ (Windows, Mac OS & Linux), while requiring as little information as possible from the software creator (like a CPU score or GPU score)?
That way I can give my users a good indication of whether their computer is good enough to run the software, so they don't need to download and install it in the first place if they won't be able to run it anyway.
I'm thinking of something like "score"-based indications:
CPU: 230 000 (generic processor score)
GPU: 40 000 (generic GPU score)
+ Network/File I/O read/write requirements
That way I would only have to calculate those scores on the user's computer and then compare them, as long as I'm using the same algorithm, but I have no clue about any such algorithm that would be sufficient for real-world desktop software.
I would suggest testing for the existence of specific libraries and environment features (OS version, video card presence, working sound drivers, DirectX, OpenGL, Gnome, KDE). Assign priorities to these and make the comparison using the priorities; e.g. video card presence is more important than KDE presence.
The problem is that even outdated hardware can run most software without issues (just slower), while the newest hardware cannot run some software without installing its requirements first.
For example, I can run Firefox 11 on my Pentium III Coppermine (using FreeBSD and an X server), but if you install Windows XP on the newest hardware with a six-core i7 and an NVIDIA GTX 640, it still cannot run DirectX 11 games.
This method requires no assistance from the software creator, but is not 100% accurate.
If you want 90+% accurate information, make the software creator check 5-6 checkboxes before uploading. Example:
My application requires DirectX/OpenGL/3D acceleration
My application requires sound
My application requires Windows Vista or later
My application requires a [high bandwidth] network connection
then you can test specific applications using the information from these checkboxes; a sketch of this matching idea follows below.
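A minimal sketch of how such checkbox matching could look; all field names and the example values are made up for illustration:

    #include <cstdio>

    struct Requirements {          // what the creator ticks when uploading
        bool needs3DAcceleration;
        bool needsSound;
        bool needsVistaOrLater;
        bool needsHighBandwidth;
    };

    struct Capabilities {          // filled in by platform-specific detection
        bool has3DAcceleration;
        bool hasSound;
        bool isVistaOrLater;
        bool hasHighBandwidth;
    };

    // The app can run if every required feature is actually present.
    bool canRun(const Requirements& r, const Capabilities& c) {
        return (!r.needs3DAcceleration || c.has3DAcceleration)
            && (!r.needsSound          || c.hasSound)
            && (!r.needsVistaOrLater   || c.isVistaOrLater)
            && (!r.needsHighBandwidth  || c.hasHighBandwidth);
    }

    int main() {
        Requirements req{true, true, true, false};  // from the upload form
        Capabilities cap{true, true, true, true};   // from local detection
        std::printf("can run: %s\n", canRun(req, cap) ? "yes" : "no");
    }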
Edit:
I think additional checks could be:
video/audio codecs
pixel/vertex/geometry shader version, GPU physics acceleration (may be crucial for games)
not so much related anymore: processor extensions (SSE2, MMX, etc.)
third party software such as pdf, flash, etc
system libraries (libpng, libjpeg, svg)
system version (Service Pack number, OS edition (Premium, Professional, etc.))
window manager (some apps on OS X require X11 to function, some apps on Linux work only on KDE, etc.)
These are actual requirements I (and many others) have seen when installing different software.
As for old hardware, if the computer satisfies hardware requirements (pixel shader version, processor extensions, etc), then there's a strong reason to believe the software will run on the system (possibly slower, but that's what benchmarks are for if you need them).
For GPUs I do not think getting a score is usable/possible without running some code on the machine to test whether it is up to spec.
With GPUs this typically means checking which shader models the card can use, and either defaulting to a lower shader model (so the application renders at lower quality) or telling the user they have no hope of running the code and quitting. A sketch of such a check follows.
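A minimal sketch, assuming an OpenGL context is already current on the calling thread; the tier names and the 3.3 cutoff are invented for illustration:

    #include <GL/gl.h>
    #include <cstdlib>

    // Older Windows GL headers stop at 1.1, so define the enum if missing.
    #ifndef GL_SHADING_LANGUAGE_VERSION
    #define GL_SHADING_LANGUAGE_VERSION 0x8B8C
    #endif

    enum class Tier { Fixed11, Basic, Full };  // illustrative quality tiers

    Tier pickTier() {
        const char* glsl = reinterpret_cast<const char*>(
            glGetString(GL_SHADING_LANGUAGE_VERSION));
        if (!glsl) return Tier::Fixed11;   // pre-2.0 context: no shaders at all
        double v = std::atof(glsl);        // e.g. "4.60 NVIDIA" -> 4.6
        return v >= 3.3 ? Tier::Full : Tier::Basic;
    }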

CUDA program on VMware

I wrote a CUDA program and I am testing it on Ubuntu in a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need to use a Linux operating system for testing.
My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code be faster if I run it under my primary operating system than in a virtual machine?
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8 GB thumb drive with persistent mode enabled.
That gave me 4GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.
Unfortunately the virtual machine simulates a graphics device and as such you won't have access to the real GPU. This is because of the way the virtualisation handles multiple VMs accessing the same device - it provides a layer in between to share the real device.
It is possible to get true access to the hardware, but only if you have the right combination of software and hardware, see the SLI Multi-OS site for details.
So you're probably out of luck with the virtualisation route. If you really can't run your app in Windows then you're limited to the following (a quick probe to confirm what the VM exposes is sketched after this list):
Unrealistic: Install Linux instead
Unrealistic: Install Linux alongside (not an option)
Boot into a live CD, you could prepare a disk image with CUDA and mount the image each time
Setup (or beg/borrow) a separate box with Linux and access it remotely
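As a quick sanity check for any of these setups, here is a hedged sketch using the CUDA runtime API to see whether the OS (virtualized or not) actually exposes a CUDA-capable device:

    // Build with: nvcc probe.cu -o probe
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);
        if (err != cudaSuccess || n == 0) {
            // Inside a typical VM this fails: only an emulated GPU is visible.
            std::printf("No CUDA device visible (%s).\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < n; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            std::printf("Device %d: %s, compute capability %d.%d\n",
                        i, p.name, p.major, p.minor);
        }
        return 0;
    }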
I just heard a talk at NVIDIA's GPU Technology Conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an open-source "virtual CUDA" driver. See the following link:
http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but sounds like it might do what you want.
As of CUDA 3.1, its virtualization capabilities are limited, so the only usable approach is to run CUDA programs directly on the target hardware and software.
Use rCUDA to add a virtual GPU to your VM.

How to get started with driver programming under Windows

I want to start learning driver programming under Windows.
I have never programmed drivers, and I am looking for information on how to get started.
Any tutorials, links, book recommendations, and which development kit should I start with? (Would WDF be a good one?)
I really want to program the following clock: link text
Thanks for your help.
I would start by downloading the Windows Driver Kit (WDK).
Afterwards, you decide which kind of driver you want. A file system driver? (Probably not.) An RS-232 driver? A USB driver? They all follow different rules and have different quirks.
The WDK comes with example drivers for most kinds of drivers and should get you on track fast.
To interact with USB hardware you would be best served by looking at WinUSB or the User-Mode Driver Framework. User-mode drivers are orders of magnitude easier, as you can use a C++/COM (kind of) framework and a normal debugging environment.
Writing kernel-mode drivers should be reserved for things like video card, disk, and other latency/throughput-sensitive drivers.
An even easier method would be to use libusb-win32, which is a C library that makes talking to a USB endpoint almost as easy as writing data to a file.
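To give a flavor of that, here is a hedged sketch using the modern libusb-1.0 API (libusb-win32 itself exposes an older 0.1-style API, but the idea is the same); the VID/PID and the endpoint address are placeholders for your device:

    #include <libusb-1.0/libusb.h>
    #include <cstdio>

    int main() {
        libusb_context* ctx = nullptr;
        libusb_init(&ctx);

        // Hypothetical vendor/product IDs: replace with your device's.
        libusb_device_handle* dev =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!dev) { std::fprintf(stderr, "device not found\n"); return 1; }

        libusb_claim_interface(dev, 0);

        unsigned char msg[] = "hello";
        int sent = 0;
        // Write to bulk OUT endpoint 0x01: almost like writing to a file.
        libusb_bulk_transfer(dev, 0x01, msg, sizeof(msg) - 1, &sent, 1000);
        std::printf("wrote %d bytes\n", sent);

        libusb_release_interface(dev, 0);
        libusb_close(dev);
        libusb_exit(ctx);
        return 0;
    }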
A must-see resource for Windows driver development, of course in addition to the WDK mentioned by Eric.

OpenCV on Embedded Platform

Can someone suggest a test/development embedded platform to use with OpenCV?
I would like to develop an embedded video analytics solution, but I don't know where to start.
Any suggestions/ideas/HW starter kits?
Maybe some PC-104 solutions with an Intel Atom? Has anyone run performance tests on this platform or any other embedded platform?
Thanks
An OpenCV application built for a Pentium-class PC will run unmodified on any Atom platform with the same OS, because Atoms natively run Pentium executables.
If you are looking for a more embedded solution, there are OpenCV ports for the BeagleBoard. Since OpenCV is portable code, it can be compiled for most systems that provide a C/C++ compiler. I have successfully used OpenCV on ARM, MIPS and XScale processors.
As for mobile platforms, there are ports to the iPhone, Android and various Windows CE/Mobile/Embedded versions.
If you're looking for a very small option, I strongly recommend the Gumstix Overo series. I use them for my Computer Vision research, and they work really well. There are a couple of options for processors, I'd recommend the Overo Tide module, which has 512 MB of RAM, and an onboard DSP for offloading some CV operations. Combine this with a Tobi expansion board and a few cables, and you've got a full embedded computer vision research platform for ~$350. They also sell a small camera, which I'm still getting around to trying out. What's nice about the Gumstix is you can just build OpenCV onboard, which saves you some of the headaches with BitBake type solutions.
I'd personally recommend TI OMAP platforms - Beagleboard xM and PandaBoard.
Those boards have embedded video input, run Linux, and have more than enough performance to run OpenCV. They are also extremely portable and have good community support.
Do you mean OpenCV, the computer vision library originally developed by Intel? I would be inclined to start with Moblin, Intel's embedded Linux, at moblin.org, and for hardware use a netbook or any PC that Moblin supports. Hook up a supported webcam from the list at www.qbik.ch/usb/devices/search_res.php?pattern=webcam .
There is a Wikipedia entry that might help. Your project sounds like fun!
cheers -- Rick
You can use the Blackfin kit from Analog Devices. Analog Devices have created a library similar to OpenCV for the Blackfin DSP processor.
You can use the Symbian simulator for this; Nokia has an OpenCV port for Symbian. For hardware testing you have to drop them a mail and they will provide you access to the hardware through telnet for a given period of time.
OpenCV does not need any "special" hardware to function. You can use it fully with images from normal files (e.g. JPEG), as in the sketch below.
Have you looked at some of the tutorials/code? Do they require something specific that you do not have?
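For instance, this minimal sketch processes a plain image file with no camera or GPU involved ("input.jpg" is a placeholder path):

    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
        cv::Mat img = cv::imread("input.jpg");
        if (img.empty()) {
            std::fprintf(stderr, "could not read input.jpg\n");
            return 1;
        }
        cv::Mat gray, edges;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);   // simple edge detection
        cv::imwrite("edges.png", edges);
        return 0;
    }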
Vision Components seem to support OpenCV in their smart cameras (see this article).
I guess I am late to answer.
I have recently used OpenCV 3.4.6 with PC-104 boards (PCM3365) for an industrial application.
The only thing to note is that when I start a web camera using cv::VideoCapture, it takes a long time to open (around 30-40 seconds); otherwise everything is fine.
Good Luck
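For what it's worth, the long open delay is often the backend auto-probe. A hedged sketch that times the open and forces an explicit backend (cv::CAP_V4L2 here is an assumption for a Linux board; try cv::CAP_DSHOW on Windows):

    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <cstdio>

    int main() {
        auto t0 = std::chrono::steady_clock::now();
        // In OpenCV 3.x the backend is selected by adding its constant to
        // the device index; OpenCV 4.x has a separate apiPreference argument.
        cv::VideoCapture cap(0 + cv::CAP_V4L2);
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - t0).count();
        std::printf("open took %lld ms, opened=%d\n",
                    (long long)ms, cap.isOpened() ? 1 : 0);
        return cap.isOpened() ? 0 : 1;
    }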