Can I use what I'd call "Raw OpenGL"? [duplicate] - c++

This question already has answers here:
How does OpenGL work at the lowest level? [closed]
I was wondering about OpenGL's main interface. Simply put, how does the OpenGL DLL call graphics functions? Is there some secret, hidden rendering code in C++? If a DLL can call the GPU, it should be possible from any C++ program. If so, could I make an API of my own for my programs? Can someone shed some light on this subject? Thanks in advance!

First and foremost: Modern OpenGL is not a library, and on Windows the DLL doesn't contain an OpenGL implementation that talks to the hardware. opengl32.dll merely acts as a placeholder into which the graphics driver hooks its own implementation (the ICD, or Installable Client Driver).
I answered it in detail here: https://stackoverflow.com/a/6401607/524368

The OpenGL DLLs communicate with Ring 0 like any other application module does, with calls like DeviceIoControl. The exact details of the data passed in those calls are not publicly documented, and that's not likely to change: the GPU manufacturers simply aren't willing to hand that information out. So while it's possible in principle to create your own API, the details needed to talk to the driver are not readily available.
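A minimal sketch of what "talking to Ring 0" looks like from user mode, assuming nothing about any real driver: the device path \\.\HypotheticalGpuDevice and the control code HYPOTHETICAL_IOCTL_SUBMIT below are made up, and only CreateFileW and DeviceIoControl themselves are real Win32 calls. A real ICD uses undocumented, vendor-specific codes and buffer layouts.

// Sketch: how a user-mode component can hand data to a kernel-mode driver on Windows.
#include <windows.h>
#include <cstdio>

int main() {
    // Hypothetical device name; a real ICD opens the device object exposed
    // by its own vendor's kernel-mode driver.
    HANDLE device = CreateFileW(L"\\\\.\\HypotheticalGpuDevice",
                                GENERIC_READ | GENERIC_WRITE,
                                0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (device == INVALID_HANDLE_VALUE) {
        std::printf("Could not open device (error %lu)\n", GetLastError());
        return 1;
    }

    // Hypothetical IOCTL code and payload; the actual codes and structures
    // are private to each vendor's driver.
    const DWORD HYPOTHETICAL_IOCTL_SUBMIT = 0x800;  // placeholder value
    unsigned char commandBuffer[256] = {};          // would hold GPU commands
    DWORD bytesReturned = 0;

    BOOL ok = DeviceIoControl(device, HYPOTHETICAL_IOCTL_SUBMIT,
                              commandBuffer, sizeof(commandBuffer),
                              nullptr, 0, &bytesReturned, nullptr);
    std::printf("DeviceIoControl %s\n", ok ? "succeeded" : "failed");

    CloseHandle(device);
    return 0;
}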

In a general sense the answer is "yes", but to make it viable it must necessarily be somewhat hardware dependent.
What you call "graphics functions" (the things you suppose OGL is built on) depend, at the very bottom level, on how the hardware structures image frames internally and communicates with the processor.
There is hardware that is just a plain framebuffer, and hardware capable of managing the rasterization of a vector scene on its own.
There are operating system APIs that are plain 2D vector and imaging support (like GDI), and even three-dimensional modeling systems (like Direct3D).
OGL is just an API: it defines a consistent set of function prototypes to accomplish a task (describing a 3D scene). The rendering process is implemented in DLLs that differ depending on the nature of the system they have to work with.
Some of them just operate on their own buffers, which they treat as raw bitmap data to be blitted to the screen via the OS's native API (see BitBlt); others translate the OGL calls into specific op-codes sent to specific I/O ports of the hardware device.
Due to the popularity of OGL, there are also manufacturers standardizing the "language" between the library and the devices. So things are not as "linear" as they may seem...
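As a rough illustration of that "own buffer + blit" path, here is a sketch of a software renderer presenting its frame through GDI's StretchDIBits (a close cousin of BitBlt). The function name presentSoftwareFrame and the assumption that a valid HWND already exists are mine; a real implementation is of course far more involved.

// Sketch: the implementation draws into its own pixel buffer in system memory,
// then hands that buffer to the OS with a native blit call.
#include <windows.h>
#include <vector>
#include <cstdint>

void presentSoftwareFrame(HWND hwnd, int width, int height) {
    // The "frame buffer" owned by the software renderer: 32-bit BGRA pixels.
    std::vector<std::uint32_t> pixels(static_cast<size_t>(width) * height,
                                      0xFF202020); // dark grey clear color

    // ... a software rasterizer would write its triangles into `pixels` here ...

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height: top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hwnd);
    // Blit the private buffer onto the window via the OS API.
    StretchDIBits(hdc, 0, 0, width, height,
                  0, 0, width, height,
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, hdc);
}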

Writing directly to hardware registers is how graphics programming was done before OpenGL and other standardised graphics APIs were introduced.
Generally speaking, it was a nightmare to write for, and almost impossible to debug. Higher level APIs were invented for a reason.
The closest you can get to the hardware these days is on the consoles, where you still have much lower level access than on the PC, but even that access is more abstracted than it was in the past.
If you really want to do it, you can, but you'll basically be writing your own driver (if not your own OS as well), and you won't find much publicly available documentation on modern GPUs.

Related

How to create my own opengl binding or library

I am relatively new to graphics programming, so I wanted to start from the very basics. I see there are libraries like PyOpenGL which provide bindings to the OpenGL API itself. Now, I really want to create something like PyOpenGL on my own so I can understand how everything works in the process.
Is it possible for me to create a library like PyOpenGL or GLFW? If so, please give me some general tips on what I should do.
If not, please explain why I can't create my own binding, and I apologize if my question sounds absurd.
PyOpenGL is a fairly thin wrapper that, for the most part, simply turns Python function calls into calls to native machine-code functions of the same name. There are a few small details like calling conventions in the mix, but those are fairly mundane. The fact is that (as far as OpenGL is concerned) the source code you write in Python with PyOpenGL looks almost identical to the source code you'd write in C. There are a few "smart" things PyOpenGL does, like providing a way to pass NumPy arrays to OpenGL calls that take a data-pointer parameter, but that's just housekeeping.
And when you make OpenGL calls in C or – even more extreme – assembly language (perfectly possible), that's the lowest level you can go (with OpenGL), short of writing your own GPU device driver. And writing a GPU device driver is extremely hard work; it takes literally millions of lines of C code (NVIDIA's OpenGL implementation is said to consist of roughly 40M LoC, and the open source drivers for AMD and Intel GPUs each run to millions of lines as well).
If you're interested in some middle ground, have a look at the Vulkan API. If writing higher level wrappers for graphics is your thing, I'd suggest you implement a higher level API / renderer on top of Vulkan and interface it to Python. That is likely to be much more rewarding as a learning experience (IMHO).
The OpenGL API lives in the driver for the graphics card; all the OpenGL functions are there. You only need to know how to get at them. As Spektre said, the process is:
1) Create an OpenGL context. This is a job for the OS; each OS has its own way and its own issues. Read https://www.khronos.org/opengl/wiki/Load_OpenGL_Functions
2) Define function pointers the way glext.h does and then extract them from the driver. Apart from the standard OpenGL functions, vendors add their own, called "extensions". You can see how GLEW does this job. If you want to load all functions and extensions, write a script that parses glext.h, because there are about a thousand of them (a minimal hand-written example follows below). You can download glext.h from https://www.opengl.org/registry/
3) Doing something like GLFW additionally requires, on top of the previous two points, knowing how to create a window and handle its messages for keyboard and mouse. Again this is OS dependent. On Windows there is one way; on Linux it depends on the windowing system and toolkit used: GTK+ for example, or X11 directly, or...
Anyhow, my best advice is to read how GLEW and GLFW do it by looking into their code. But don't spend too much time on it; prefer getting experience with OpenGL itself and leave that digging for later.
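To make point 2 concrete, here is a minimal hand-written sketch of loading one entry point on Windows with wglGetProcAddress. It assumes a context has already been created and made current (point 1); the typedef mirrors what glext.h declares, and GLEW/GLAD simply automate this for the roughly one thousand functions mentioned above.

// Sketch: manually pulling one OpenGL entry point out of the driver on Windows.
#include <windows.h>
#include <GL/gl.h>
#include <cstdio>

// Function-pointer type, declared the way glext.h does for glCreateShader.
typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC)(GLenum type);

int main() {
    // ... context creation (wglCreateContext + wglMakeCurrent) assumed to have happened here ...

    PFNGLCREATESHADERPROC myglCreateShader =
        reinterpret_cast<PFNGLCREATESHADERPROC>(
            wglGetProcAddress("glCreateShader"));

    if (!myglCreateShader) {
        std::printf("glCreateShader not exposed by this driver/context\n");
        return 1;
    }

    // From here on the pointer is used exactly like a normal GL call, e.g.:
    // GLuint shader = myglCreateShader(GL_VERTEX_SHADER);
    return 0;
}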

Is OpenGL typed the same regardless of platform?

I'm interested in picking up OpenGL and I know that it is defined as cross-platform. Does this mean that I would type the code the same as I would on Windows, a Mac or Linux?
//example pseudocode to make a circle with a radius of 500 pixels and 5 pixels wide
createCircle(500,5);
If it is typed the same does this mean that OpenGL has API sets stored for Windows and Mac and based on the platform the program is executed on it calls the appropriate one? If it is not, what is the process that goes on here?
For the 3D drawing aspects, yes. But practically, unless there is middleware in between, the APIs and usage needed for a complete OpenGL application vary. Specifically, in three areas:
1) OS dependency - The way the graphics get rendered on the screen depends on the display drivers, which makes graphics setup OS dependent. So you will use different APIs for creating the connection to the display adapter.
2) Window system dependency - Again dependent on the OS. For example on Linux you can have Xorg, Wayland, or a plain framebuffer, etc. Depending on these, the way you create a surface for drawing changes (see the sketch below).
3) Platform-specific extensions - Some high-performance extensions rely on specific OS behaviour and are not cross-platform. These have names prefixed like GL_ARB_* or GL_OES_*, plus the window-system-specific WGL_* / GLX_* extensions.
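As a small illustration of points 1 and 2, here is a hedged sketch of how context creation differs by platform while the GL drawing calls stay the same. wglCreateContext and glXCreateContext are the real entry points, but the helper name createContext is made up, and pixel-format/visual setup and error handling are omitted.

// Sketch: same GL calls everywhere, different context-creation code per platform.
#if defined(_WIN32)
  #include <windows.h>
  HGLRC createContext(HDC hdc) {
      // Windows: the context comes from WGL and is tied to a GDI device context.
      return wglCreateContext(hdc);
  }
#else
  #include <GL/glx.h>
  GLXContext createContext(Display* dpy, XVisualInfo* vis) {
      // X11/Linux: the context comes from GLX and is tied to a Display and visual.
      return glXCreateContext(dpy, vis, nullptr, True);
  }
#endif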
OpenGL will display in the same way across platforms, and the calls are typed the same on each platform. At build time your program is compiled and linked against that platform's OpenGL headers and libraries, so the correct platform-dependent implementation ends up behind the same function calls in the final executable.
From the official documentation:
https://www.opengl.org/documentation/implementations/
Yes, it seems so. Its API bindings for most languages are largely the same. However, the behaviour for certain languages may still differ, so you will have to check the reference manual.
For a language such as Java or Python, which runs on a virtual machine or interpreter like the JVM, the bindings should be the same on any system, as that was part of the design decision.
OpenGL is not a library, it's a specification. The API is the same on all platforms, but it adheres to the host OS's calling conventions and base types.
So the function void glPixelStorei(GLenum, GLint) is named the same on all platforms, but the exact underlying types of GLenum and GLint may depend on the target platform.
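For illustration, a sketch of what those platform headers roughly contain. The typedefs shown are the common choices found in gl.h, not a guarantee for every platform.

// Sketch: the prototype is identical everywhere; only the typedefs behind it may differ.
typedef unsigned int GLenum;   // usually a 32-bit unsigned integer
typedef int          GLint;    // usually a 32-bit signed integer

void glPixelStorei(GLenum pname, GLint param);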

Native graphic card function

If I understand correctly, graphics cards are programmed to display 2D and 3D graphics, and these cards have native functions, but because these functions are obsolete and hard to use, we now use drivers that make the programmer's life easier.
Now, my question is whether there are any tutorials for these native graphics card functions, and whether they are universal - working on every graphics card - or differ from one card to another like ASM dialects do. And if such tutorials exist, can I use C or C++, or do I need ASM knowledge?
The way GPUs are programmed (at least for the advanced functions) is typically through either MMIO (an address in virtual memory corresponds to a register on the GPU instead of actual DRAM) or, more often, through command buffers (a chunk of memory is used to store commands for the GPU, which the GPU reads sequentially).
Now, those commands and registers are very hardware dependent (even within a single hardware vendor): see for example ATI R600 registers. They are not "universal" at all.
Those types of interfaces are what driver developers use to implement the DirectX and OpenGL APIs that typical programs use.
Probably the best source of "tutorials" for that level of programming is the open source drivers on Linux.
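Purely as an illustration of the MMIO idea, here is a sketch of the kind of register write a driver might do. Every name, offset and value is hypothetical; real register maps are chip-specific (see the R600 documentation mentioned above), and code like this only makes sense inside a kernel-mode driver that has mapped the GPU's register aperture.

// Sketch: hypothetical MMIO register write. Nothing here matches any real GPU.
#include <cstddef>
#include <cstdint>

void pokeHypotheticalGpuRegister(volatile std::uint32_t* mmioBase) {
    // `mmioBase` is assumed to point at the start of the GPU's mapped register
    // aperture. Writes go straight to hardware, not to DRAM, hence volatile.
    const std::size_t HYPOTHETICAL_SCANOUT_ENABLE = 0x0040 / sizeof(std::uint32_t);
    mmioBase[HYPOTHETICAL_SCANOUT_ENABLE] = 0x1;  // hypothetical "enable" bit
}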
There's a good reason there are now more standardised ways of talking to the graphics subsystem in computers. Unless you have a specific platform in mind, I'd suggest you stick to the standard APIs, i.e. go through OpenGL or DirectX.
"If I understand correctly, graphics cards are programmed to display 2D and 3D graphics, and these cards have native functions, but because these functions are obsolete and hard to use, we now use drivers that make the programmer's life easier."
In a sense yes, although "obsolete" is not quite the right word; it is all about abstraction.
There are several tutorials on the web: for OpenGL, for instance, there is nehe.gamedev.net, and DirectX also has a number of tutorials (just use your favorite search engine), although OpenGL has the big advantage of being portable.
Generally you can use either C or C++; you do not need to know any ASM unless you have some extreme requirement.

Inner Workings of C++ Graphics Libraries

As you probably know, C++ has no standard graphics library. Most games use DirectX or OpenGL.
But how do those libraries actually work? In other words, how can the third-party libraries draw graphics if there's no mechanism for it in C++? Are they just written in another language?
Specifically DirectX and OpenGL work by calling the operating system and/or the video hardware driver. The driver, in turn, does the actual drawing by interacting with the graphical device. The exact details of interaction vary from one video card to another.
Back in the DOS days, C++ programmers could work with the hardware directly. This was accomplished in two ways. First, by writing to/reading from a special memory block ("framebuffer") where the pixels or text were stored. It was a span of memory at a known address, so to work with it you had to cast an integer constant to a pointer and work with that pointer - a purely C++ mechanism. The other way of interaction was reading from/writing to I/O ports. Now, this is a mechanism that is not directly supported by C, unless you count inline assembly or compiler intrinsics. There were two library functions that wrapped these operations, inp() and outp() (literally wrappers around the IN and OUT CPU instructions), but most programmers would just use a one-line inline assembly snippet instead.
Even now, most video hardware interaction boils down to these two pathways - framebuffer and I/O ports (DMA and interrupts are typically not used for graphics). But we application-level programmers don't get to see those. This is driver stuff.
One modern caveat has to do with protected mode; in protected mode, C pointers are not the same as the underlying physical addresses. Simply typecasting 0xA0000 to a pointer won't get you to a framebuffer, even in kernel mode. However, kernel-level code (i.e. a driver) can request that the memory manager give it a pointer that corresponds to a specific physical address.
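For illustration, a sketch of the framebuffer technique described above as it looked in VGA mode 13h (320x200, 256 colors). It is purely historical: under a modern protected-mode OS this cast simply faults, exactly as the caveat above explains.

// Sketch: DOS-era direct framebuffer write (real mode only; faults under a modern OS).
#include <cstdint>

void putPixelMode13h(int x, int y, std::uint8_t color) {
    // 0xA0000 was the physical address of the VGA framebuffer in mode 13h.
    std::uint8_t* framebuffer = reinterpret_cast<std::uint8_t*>(0xA0000);
    framebuffer[y * 320 + x] = color;  // one byte per pixel, 320 pixels per row
}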
They transfer their calls to the driver, which sends them on to the graphics card.
DirectX and OpenGL operate at a pretty low level. That is, you say "draw a triangle, now draw a square". Programmers then normally wrap these calls in C++ classes that provide higher-level functionality, such as "draw a model, draw a tree". This is known as an engine.
I'd recommend taking a look at the NeHe tutorials for more info: http://nehe.gamedev.net/
So, for example, a call to draw a triangle in (legacy, immediate-mode) OpenGL looks like this:
glBegin(GL_TRIANGLES);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(10.0f, 10.0f, 0.0f);
glVertex3f(-10.0f, 10.0f, 0.0f);
glEnd();
And like I said, see the above link to the NeHe tutorials; they are all in C/C++.
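To illustrate the "engine" layering described above, here is a tiny hedged sketch: the class and method names are invented, and the body reuses the legacy immediate-mode calls from the snippet above.

// Sketch: application code asks for a high-level thing; the wrapper issues the low-level GL calls.
#if defined(_WIN32)
  #include <windows.h>
#endif
#include <GL/gl.h>

class SimpleRenderer {
public:
    // High-level call the rest of the program uses.
    void drawTriangle(float x, float y) {
        // Low-level legacy OpenGL underneath.
        glBegin(GL_TRIANGLES);
        glVertex3f(x +  0.0f, y +  0.0f, 0.0f);
        glVertex3f(x + 10.0f, y + 10.0f, 0.0f);
        glVertex3f(x - 10.0f, y + 10.0f, 0.0f);
        glEnd();
    }
};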

Internal workings of OpenGL

How does OpenGL work, internally?
We will use OpenGL for our 2D game project, and think that it is important for us to first find out more about how OpenGL actually works before diving right into it.
What we need isn't a getting-started tutorial, but rather basic information on how OpenGL internally handles textures, draws, interacts with the graphics card, and so on.
We have already searched for a while but couldn't find anything suitable.
OpenGL is just an interface. How it works depends on the implementation, that is, on the drivers and hardware. For example, if the hardware doesn't support some feature, the implementation is free to implement it on the client side (CPU) rather than on the GPU. There are even software-only implementations.
In general you can think of it as sending commands to the graphics card; they are buffered somewhere and executed on the graphics card with some ordering constraints (a conceptual sketch follows below).
Note: your question is too general.
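As a purely conceptual sketch of that mental model (not any real driver's code): API calls get recorded into a buffer of commands that is later flushed to the GPU and executed in order. All names here are invented.

// Sketch: the "commands get buffered, then executed in order" mental model.
#include <cstdint>
#include <vector>

enum class Cmd : std::uint32_t { BindTexture, DrawTriangles, Present };

struct Command {
    Cmd op;
    std::uint32_t arg0;
    std::uint32_t arg1;
};

class CommandBuffer {
public:
    void record(Cmd op, std::uint32_t a0 = 0, std::uint32_t a1 = 0) {
        commands_.push_back({op, a0, a1});  // API calls just get buffered
    }
    void flush() {
        // A real implementation would hand `commands_` to the kernel driver,
        // which schedules it on the GPU; ordering within the buffer is preserved.
        commands_.clear();
    }
private:
    std::vector<Command> commands_;
};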
You might be interested in Mesa. It is an open source OpenGL implementation. Most implementations are trade secrets, so you will never know how ATI/NVIDIA implemented anything beyond what you can infer from the results of interacting with their implementations. You might find Intel's drivers informative as well, as they are also open source.
If by "internally" you mean the work OpenGL does with what you draw, then what you want to read about is the graphics pipeline.