Where is OpenGL located (GPU software or OS)? [duplicate] - c++

This question already has answers here:
How does OpenGL work at the lowest level? [closed]
I am writing an x86 OS and a question popped into my mind:
If I wanted to create a simple OpenGL game on my OS, would I be able to do that without reinventing OpenGL?
So what I am asking is: is OpenGL included in, e.g., the Nvidia drivers, or is it located in the GPU firmware?
If it were in the GPU, I could simply port/create an OpenGL wrapper, right?
Can someone elaborate on this?
Thanks

OpenGL is the API of the GPU driver. Taking Nvidia as an example, they release closed-source drivers for the operating systems they support. There are also open-source drivers (the nouveau project) that reverse engineer Nvidia graphics cards and implement an open-source driver for them. The same is true for other vendors to some extent.
So in your scenario, you would either implement an ABI compatibility layer in your OS matching a widely supported OS, so that you can run the closed-source drivers, or port the open-source community drivers to your OS.

The GPU hardware executes specific code. Some of this code is programmable, meaning you write special code that runs inside the GPU.
The instructions to pass this special code (shaders in OpenGL parlance) and the data it handles are the graphics API (OpenGL, DirectX). There are more instructions for the GPU as well, and they are also handled by the API.
This API lives in the graphics card driver.
First, an app asks the OS for the function pointers to the API commands. These pointers are retrieved from the driver. The app then uses these pointers to communicate with the GPU (via the driver).
Two details: retrieving pointers is not needed on macOS, where the functions are exposed directly like any other C function. This is also true on Windows, but only for OpenGL 1.1.
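As an illustration of that pointer retrieval on Windows, here is a minimal sketch; it assumes an OpenGL context is already current, and the pointer typedef is hand-written here rather than taken from an official header:

```cpp
// Minimal sketch: anything beyond OpenGL 1.1 must be fetched from the
// installed driver at run time via wglGetProcAddress.
#include <windows.h>
#include <GL/gl.h>

// Hand-written pointer type for glCreateShader (glext.h provides official ones).
typedef unsigned int (APIENTRY *PFNGLCREATESHADER)(unsigned int shaderType);

PFNGLCREATESHADER loadGlCreateShader()
{
    // Asks the vendor's driver for the entry point; returns nullptr if the
    // current context does not expose the function.
    return reinterpret_cast<PFNGLCREATESHADER>(
        wglGetProcAddress("glCreateShader"));
}
```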
The drivers for Windows and Mac are proprietary software.
On Linux, Nvidia, AMD and Intel provide their own drivers (mostly closed source). There are also open-source drivers for Linux, which some developers wrote on their own.
Finally, there is a software implementation of the OpenGL API, done by Mesa. Mesa is also one of the projects that writes open-source drivers for Linux.

Related

Current state and solutions for OpenGL over Windows Remote [closed]

OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
An RDP session of Windows does not expose the graphics card, at least not directly. For instance, you cannot change the desktop resolution, and graphics card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The advice "Don't use Windows Remote", often given especially in support IRCs, is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-Coder workarounds
You can start the OpenGL program while it can still see the graphics card, let it create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. This can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the program already running. (Source)
Using VNC or another tool to remote into the machine, start the program, and then switch to Windows Remote. (Simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works flawlessly under Windows Remote and even has a software rendering fallback built into DX11 if something fails.
Use the ANGLE Project to do this at run time. This is what Qt officially suggests you do and how Chrome and Firefox implement WebGL; see the sketch after this list. (Source)
Switch to software rendering as a fallback. Some CAD software, like 3ds Max, does this for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer. (Source)
Under GLFW, version 3.3 will ship OSMesa support (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE, but I personally still struggle to get that running. (Source)
Directly use Mesa's llvmpipe driver for a fast software OpenGL implementation. (Source)
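For the Qt/ANGLE route, a minimal sketch (the two attributes are real Qt 5 flags; the surrounding main() is just an assumed skeleton):

```cpp
// Minimal sketch: choose the rendering path before QApplication is created.
// AA_UseOpenGLES routes OpenGL through ANGLE (OpenGL ES -> Direct3D);
// AA_UseSoftwareOpenGL falls back to a software rasterizer instead.
#include <QApplication>

int main(int argc, char *argv[])
{
    QCoreApplication::setAttribute(Qt::AA_UseOpenGLES);          // ANGLE path
    // QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL); // software path

    QApplication app(argc, argv);
    // ... create your windows here ...
    return app.exec();
}
```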
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote; see the sketch after this list.
Apparently there is middleware that allows even OpenGL 4 over Windows Remote, but it is part of a bigger package and is a commercial solution. (Source)
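To detect at run time whether you landed on that built-in software OpenGL 1.1, a minimal sketch (assumes a context is already current):

```cpp
// Minimal sketch: Windows' built-in software implementation reports itself
// as "GDI Generic"; checking GL_RENDERER lets an app fall back gracefully.
#include <windows.h>
#include <GL/gl.h>
#include <cstdio>
#include <cstring>

bool onSoftwareGl11()
{
    const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));
    const char *version  = reinterpret_cast<const char *>(glGetString(GL_VERSION));
    if (!renderer || !version)
        return true;                          // no usable context at all
    std::printf("GL_RENDERER: %s, GL_VERSION: %s\n", renderer, version);
    return std::strstr(renderer, "GDI Generic") != nullptr;
}
```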
Any other solutions or corrections are greatly appreciated.
Nvidia provides OpenGL-accelerated Remote Desktop for GeForce: https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article it seems that now RDP handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but by default it is disabled by Group Policy.
I suppose that for performance reasons, using a hardware graphics card is disabled, and RDP uses a software-emulated graphics card driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker CURA over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura shouted "cannot initialize OpenGL 2.0 context". I also noticed that Repetier Host's "preview" window runs terribly slow, and Repetier detects only an OpenGL 1.1 card. Pretty much fits the "only baseline features" description.
By running gpedit.msc then navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and changing the value of
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to run Ultimaker Cura with no issues, Repetier-Host now reports OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this policy is stored in the HKLM registry hive, under
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn on GPU use in RDP sessions.
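If you prefer setting that value from code rather than gpedit.msc or regedit, a minimal sketch (Windows, must be run elevated; link against advapi32):

```cpp
// Minimal sketch: create/set the policy value that enables hardware GPUs
// for RDP sessions, equivalent to the registry edit described above.
#include <windows.h>

int main()
{
    HKEY key = nullptr;
    DWORD enable = 1;
    LSTATUS status = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
        L"SOFTWARE\\Policies\\Microsoft\\Windows NT\\Terminal Services",
        0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr);
    if (status != ERROR_SUCCESS)
        return 1;                             // usually means: not elevated

    RegSetValueExW(key, L"bEnumerateHWBeforeSW", 0, REG_DWORD,
                   reinterpret_cast<const BYTE *>(&enable), sizeof(enable));
    RegCloseKey(key);
    return 0;
}
```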
OpenGL works great over RDP with professional Nvidia cards, without anything like virtual machines or RemoteFX. For Quadro (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM mode (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). There may be some licensing problems in this last case.
Some people recommend using "tscon.exe" if you can: https://stackoverflow.com/a/45723167/32453 or using a scheduler to do it on native hardware: https://stackoverflow.com/a/41839102/32453 or creating a group policy:
https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
Or copy opengl32.dll (or opengl64.dll) into your executable's directory: https://blender.stackexchange.com/a/73014 (a newer version of the DLL: https://fdossena.com/?p=mesa/index.frag)
Remote Desktop and OpenGL do not play very well together. When you connect to a Windows box, the OpenGL driver is unloaded and you end up with software emulation of OpenGL.
When you disconnect from the Windows box, the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other remote-desktop software, because if it is used for logging in remotely, the currently loaded set of drivers may get messed up.
Install NoMachine
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware acceleration of compression (video of desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections especially if the client and server have the necessary hardware for compression of the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
Currently free for personal and commercial use; do check the licence in case it has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely. It is, however, in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.

Getting Started with OpenCL with NVIDIA graphics cards and Ubuntu Linux

I am looking to start programming using OpenCL. I currently have a laptop running Ubuntu Linux. (More specifically it's Linux Mint however they are similar in many respects and I will be changing back to Xubuntu shortly, so I am hoping any info will work for both.)
This laptop is a "difficult" laptop because it has both an on-chip Intel graphics processor (side by side with the CPU) and a dedicated NVIDIA Graphics Card. (I believe it is a GTX 670?) I say difficult because it was pretty complicated to install the drivers to allow me to develop using OpenGL... Even now I get confused sometimes when I run my program and it explodes because I didn't run it using 'optirun'.
Anyway, back to the question at hand: I researched the required software and keep being pointed at NVIDIA's site to download their OpenCL drivers / toolkits. However, I would prefer to use Khronos OpenCL rather than NVIDIA's CUDA. I don't fully understand what the difference is, however*, and online info is either limited or cryptic.
The actual programming / problem vectorization I have already done, I'm just a bit lost at the moment as to what software I should / must install and how to go about doing so.
*Edit: I find the OpenCL syntax more intuitive.
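As a sanity check once the drivers are installed, a minimal sketch that just enumerates the available OpenCL platforms (assumes the OpenCL headers and an ICD loader are installed, e.g. the ocl-icd / NVIDIA OpenCL packages on Ubuntu; link with -lOpenCL):

```cpp
// Minimal sketch: list OpenCL platforms to verify the driver setup works.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_uint count = 0;
    clGetPlatformIDs(0, nullptr, &count);             // how many platforms?

    cl_platform_id platforms[16] = {};
    cl_uint n = count < 16 ? count : 16;
    clGetPlatformIDs(n, platforms, nullptr);

    for (cl_uint i = 0; i < n; ++i) {
        char name[256] = {};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, nullptr);
        std::printf("Platform %u: %s\n", i, name);    // e.g. "NVIDIA CUDA"
    }
    return 0;
}
```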

How to set a pixel in C++? [closed]

I am attempting to create my own operating system and I am just wondering if there is a way to tell the BIOS to set a VGA pixel on my screen in C++.
C as a language does not provide any built-in graphics capabilities. If you want graphics, you have to use some OS-specific library.
Aside from that, modern operating systems generally don't allow any old program to poke around in memory however it wants to. Instead, they use intermediates called drivers and, yes, graphics libraries and APIs such as OpenGL.
If you really want to do it yourself, get a copy of MS-DOS, dig up some old VGA specs, and start from there.
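For flavour, a minimal sketch of what that looks like under real-mode DOS (VGA mode 13h, 320x200, 256 colours; it uses the old Turbo/Borland dos.h helpers and only works where the program owns the hardware, i.e. DOS or your own OS kernel):

```cpp
// Minimal sketch: set VGA mode 13h via the BIOS, then poke a pixel directly
// into video memory at segment 0xA000.
#include <dos.h>   // int86, MK_FP (16-bit DOS compilers such as Turbo C++)

void setMode13h()
{
    union REGS regs;
    regs.h.ah = 0x00;            // BIOS "set video mode" service
    regs.h.al = 0x13;            // mode 13h: 320x200, 256 colours
    int86(0x10, &regs, &regs);   // video BIOS interrupt
}

void putPixel(int x, int y, unsigned char color)
{
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    vga[y * 320 + x] = color;    // one byte per pixel in mode 13h
}
```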
You can turn on a given pixel, but this requires platform-specific code and may not be portable when the OS or platform changes.
My understanding is that you want direct access to the screen buffer with nothing in between stopping you. Here's the way to go.
Common Research
On your platform, find out the graphics controller, brand name and model name, if you are using one. Search the web for the data sheets on the graphics controller chip. Most likely, the screen memory is inside the chip and not directly accessible by the CPU.
Next, find out how to access the board that the Graphics Controller resides on. You may be able to access the Graphics Controller chip directly by I/O ports or memory addresses; or you may have to use an interrupt system. Research the hardware.
Linux
Download a source distribution of the Linux kernel. Find the graphics driver. Study the code in the graphics driver to see how the graphics controller is manipulated.
For Linux, you will have to write your own graphics driver and rebuild the kernel. Next you will need to write a program that accesses your driver and turns on the pixel. Research "Linux driver API". There are books available on writing Linux drivers and the standard API that they use.
Windows
Windows uses the same concept of drivers. You will have to write your own Windows driver and let the OS know you want to use it. Your driver will talk to the Graphics Controller. There are books available about writing Windows drivers. After writing the driver, you will need to write a demo program that uses your driver.
Embedded Systems
Embedded systems range from simple to complex as far as displays go. The simplest embedded systems use memory that the display reads directly; any writes to this memory are immediately reflected on the display.
The more complex embedded systems use Graphic Controllers to control the display. You would need to get the data sheets on the Graphic Controller, figure out how to set it up, then how to turn on a pixel.
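As a concrete illustration of such a memory-mapped display, here is a minimal sketch using Linux's framebuffer device as a stand-in (assumes /dev/fb0 exists and a 32-bits-per-pixel mode with packed rows; the real row stride should come from FBIOGET_FSCREENINFO):

```cpp
// Minimal sketch: map the framebuffer into the process and store one pixel.
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    struct fb_var_screeninfo info;
    ioctl(fd, FBIOGET_VSCREENINFO, &info);             // resolution and depth

    size_t size = (size_t)info.yres * info.xres * (info.bits_per_pixel / 8);
    uint8_t *fb = (uint8_t *)mmap(nullptr, size, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { close(fd); return 1; }

    // Set the pixel at (100, 50) to white, assuming a 32 bpp BGRA layout.
    size_t offset = (50 * (size_t)info.xres + 100) * 4;
    fb[offset + 0] = 0xFF;   // blue
    fb[offset + 1] = 0xFF;   // green
    fb[offset + 2] = 0xFF;   // red

    munmap(fb, size);
    close(fd);
    return 0;
}
```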
Driver Writers
Drivers are not an easy thing to write. Most drivers are written by teams of experts and take months to produce. Graphics controller chips are becoming more and more complex as new features are added. The driver must be able to support the new features as well as the older models. Not an easy task.
Summary
If you really want to access a pixel directly, go ahead. It may require more research and effort than using an off-the-shelf (OTS) library. Most people in the industry use OTS libraries or frameworks (such as Qt, wxWidgets and the X Window System). Drivers are only rewritten or modified for performance reasons or to support new graphics hardware. Driver writing is not a simple task and requires a quality development process as well as a verification strategy.
Good luck on writing your pixel. I hope your library has something better to offer than the many graphic libraries already in existence.

What is Linux’s native GUI API?

Both Windows (Win32 API) and OS X (Cocoa) have their own APIs to handle windows, events and other OS stuff. I have never really gotten a clear answer as to what Linux's equivalent is.
I have heard some people say GTK+, but GTK+ is cross-platform. How can it be native?
In Linux the graphical user interface is not a part of the operating system. The graphical user interface found on most Linux desktops is provided by software called the X Window System, which defines a device independent way of dealing with screens, keyboards and pointer devices.
X Window defines a network protocol for communication, and any program that knows how to "speak" this protocol can use it. There is a C library called Xlib that makes it easier to use this protocol, so Xlib is kind of the native GUI API. Xlib is not the only way to access an X Window server; there is also XCB.
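To give a flavour of what "native" programming against Xlib looks like, a minimal sketch (assumes libX11 and a running X server; compile with -lX11):

```cpp
// Minimal sketch: connect to the X server, create a window, pump events.
#include <X11/Xlib.h>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);        // connect to the X server
    if (!dpy) return 1;

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 320, 240, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);                        // show the window

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);                    // blocks until an event
        if (ev.type == KeyPress) break;          // quit on any key press
    }

    XCloseDisplay(dpy);
    return 0;
}
```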
Toolkit libraries such as GTK+ (used by GNOME) and Qt (used by KDE), built on top of Xlib, are used because they are easier to program with. For example they give you a consistent look and feel across applications, make it easier to use drag-and-drop, provide components standard to a modern desktop environment, and so on.
How X draws on the screen internally depends on the implementation. X.org has a device independent part and a device dependent part. The former manages screen resources such as windows, while the latter communicates with the graphics card driver, usually a kernel module. The communication may happen over direct memory access or through system calls to the kernel. The driver translates the commands into a form that the hardware on the card understands.
As of 2013, a new window system called Wayland is starting to become usable, and many distributions have said they will at some point migrate to it, though there is still no clear schedule. This system is based on OpenGL/ES API, which means that in the future OpenGL will be the "native GUI API" in Linux. Work is being done to port GTK+ and QT to Wayland, so that current popular applications and desktop systems would need minimal changes. The applications that cannot be ported will be supported through an X11 server, much like OS X supports X11 apps through Xquartz. The GTK+ port is expected to be finished within a year, while Qt 5 already has complete Wayland support.
To further complicate matters, Ubuntu has announced they are developing a new system called Mir because of problems they perceive with Wayland. This window system is also based on the OpenGL/ES API.
Linux is a kernel, not a full operating system. There are different windowing systems and GUIs that run on top of Linux to provide windowing. Typically, X11 is the windowing system used by Linux distros.
Wayland is also worth mentioning, as it is mostly referred to as a "future X11 killer".
Also note that Android and some other mobile operating systems don't include X11 although they have a Linux kernel, so in that sense X11 is not native to all Linux systems.
Being cross-platform has nothing to do with being native. Cocoa has also been ported to other platforms via GNUstep, but it is still native to OS X / macOS.
Strictly speaking, the API of Linux consists of its system calls. These are all of the kernel functions that can be called by a user-mode (non-kernel) program. This is a very low-level interface that allows programs to do things like open and read files. See http://en.wikipedia.org/wiki/System_call for a general introduction.
A real Linux system will also have an entire "stack" of other software running on it, in order to provide a graphical user interface and other features. Each element of this stack will offer its own API.
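To make the distinction concrete, here is a minimal sketch of using that lowest-level Linux API, a raw system call, through glibc's syscall() helper:

```cpp
// Minimal sketch: invoke write(2) by its syscall number instead of the
// usual libc wrapper, illustrating the kernel's real "API".
#include <sys/syscall.h>
#include <unistd.h>

int main()
{
    const char msg[] = "hello from a raw syscall\n";
    // fd 1 is stdout; SYS_write is the architecture's number for write(2).
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}
```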
To aid in what has already been mentioned there is a very good overview of the Linux graphics stack at this blog: http://blog.mecheye.net/2012/06/the-linux-graphics-stack/
This explains X11/Wayland etc. and how it all fits together. In addition to what has already been mentioned, I think it's worth adding a bit about the following APIs you can use for graphics in Linux:
Mesa - "Mesa is many things, but one of the major things it provides that it is most famous for is its OpenGL implementation. It is an open-source implementation of the OpenGL API."
Cairo - "cairo is a drawing library used either by applications like Firefox directly, or through libraries like GTK+, to draw vector shapes."
DRM (Direct Rendering Manager) - I understand this the least, but it's basically the kernel interface that lets you write graphics directly to the framebuffer without going through X.
I suppose the question is more like "What is linux's native GUI API".
In most cases X (aka X11) will be used for that: http://en.wikipedia.org/wiki/X_Window_System.
You can find the API documentation here
XWindows is probably the closest to what could be called 'native' :)
The Linux kernel's graphical operations are in include/linux/fb.h as struct fb_ops. Eventually this is what add-ons like X11, Wayland, or DRM appear to reference. As these operations are only for video cards, not vector or raster hardcopy or tty-oriented terminal devices, their usefulness as a GUI is limited; it's just not entirely true that you need those add-ons to get graphical output, if you don't mind using some assembler to bypass syscalls as necessary.
Wayland
As you may have heard, Wayland is the featured choice of many distros these days, because its protocol is simpler than X's.
Toolkits of Wayland
Toolkits or GUI libraries that Wayland suggests are:
QT 5
GTK+
LSD
Clutter
EFL
The closest thing to Win32 in Linux would be libc, since you mention not only the UI but also events and "other OS stuff".
A GUI is a high-level abstraction of capability, so almost everything from the X.Org server to OpenGL is ported cross-platform, including to Windows. But if by GUI API you mean the *nix graphics API, then what you are probably looking for is the Direct Rendering Infrastructure.

How to get started with driver programming under Windows

I want to start learning driver programming under Windows.
I have never programmed drivers, and I am looking for information on how to get started.
Any tutorials, links, book recommendations, and which development kit should I start with? (Would WDF be a good one?)
I really want to program the following clock: link text
Thanks for your help.
I would start by downloading the Windows Driver Kit (WDK).
Afterwards, decide which kind of driver you want. A file-system driver? (Probably not.) An RS-232 driver? A USB driver? They all follow different rules and have their own quirks.
The WDK comes with example drivers for most kinds of drivers and should get you on track fast.
To interact with USB hardware you would be best served by looking at WinUSB or the User-Mode Driver Framework. User-mode drivers are orders of magnitude easier, since you can use a C++/COM(-ish) framework and a normal debugging environment.
Writing kernel-mode drivers should be reserved for things like video card, disk, and other latency/throughput-sensitive drivers.
An even easier method is to use libusb-win32, a C library that makes talking to a USB endpoint almost as easy as writing data to a file.
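To show how little code that user-mode route needs, here is a minimal sketch using libusb-1.0 (the modern sibling of the libusb-win32 API mentioned above; the vendor/product IDs 0x1234/0x5678 are placeholders, and you link with -lusb-1.0):

```cpp
// Minimal sketch: open a USB device entirely from user mode and read its
// device descriptor, with no kernel driver code written at all.
#include <libusb-1.0/libusb.h>
#include <cstdio>

int main()
{
    libusb_context *ctx = nullptr;
    if (libusb_init(&ctx) != 0) return 1;

    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678); // placeholders
    if (dev) {
        libusb_device_descriptor desc;
        libusb_get_device_descriptor(libusb_get_device(dev), &desc);
        std::printf("bcdUSB: %04x, idVendor: %04x\n", desc.bcdUSB, desc.idVendor);
        libusb_close(dev);
    } else {
        std::printf("device not found\n");
    }

    libusb_exit(ctx);
    return 0;
}
```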
A must-see resource for Windows driver development, of course as an addition to the WDK mentioned by Eric.