Threads in an OpenGL application

When running a simple OpenGL application on Windows there are two unknown threads. I want to know what these threads are in the application. Is there any documentation about them? Our application crashes in one of these threads, so as a first step I want to know what these threads are.
And this is the dump of nvoglv64:

Those threads are not something specific to OpenGL; OpenGL doesn't know anything about threads, because technically it's just a piece of text, namely the specification.
However, in your case it's very likely that those threads are created by the OpenGL implementation (i.e. your graphics driver). As you can see, those threads seem to be tasked with copying some data, which suggests they crash because you either give OpenGL
some invalid pointer
or invalid metrics for the pointer (size of the buffer, stride, etc.)
or you're deallocating / freeing memory in a different thread while OpenGL still accesses it from the OpenGL context thread.
In either case it's not the threads' fault that the program crashes; the fault lies in not supplying OpenGL with valid data, or in not properly locking/synchronizing with OpenGL so that you don't invalidate the buffers mid-operation.
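To make the last point concrete, here is a minimal C sketch of the kind of mistake described above. It uses legacy client-side vertex arrays for brevity; the vertex count and data are purely illustrative and not taken from the question:

#include <stdlib.h>
#include <GL/gl.h>

/* Hypothetical example: client-side vertex array freed before the draw call. */
static void draw_broken(void)
{
    GLfloat *verts = malloc(3 * 3 * sizeof(GLfloat));   /* 3 vertices, xyz each      */
    for (int i = 0; i < 9; ++i) verts[i] = 0.0f;         /* pretend this is real data */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    free(verts);                        /* BUG: memory released too early            */
    glDrawArrays(GL_TRIANGLES, 0, 3);   /* the driver's worker threads may now copy
                                           vertex data out of freed memory           */
}

The same failure mode appears with a wrong stride or buffer size: the driver's copy threads faithfully read whatever range you described, and crash when that range is not valid.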
Update
And the fact that this crash happens with Application Verifier suggests that Application Verifier messes with memory that is used in some way by OpenGL. This is very likely a bug in Application Verifier, but I think the best course of action would be to inform NVidia of the problem, so that they can address it with a workaround in their drivers.

Related

Is it OK NOT to do glDeleteBuffers and other OpenGL (3.3) cleanups?

I sometimes forget to do cleanup and am afraid that their resources remain in GPU memory.
Things I use: shader programs, vertex array objects and buffer objects, textures
All OpenGL resources are supposed to be automatically released on destruction of the OpenGL context (of all shared contexts, to be precise). So practically, there should be no risk of leaking GPU memory when closing an OpenGL context with some unreleased objects, as long as your application does not trigger some video driver bug.
The system will also take care of releasing all resources of a closed application, even if some OpenGL contexts were never destroyed. Otherwise, debugging 3D applications would be a total nightmare if the GPU kept resources allocated after an application crash.
To prove the idea, just write a simple test application allocating large portions of GPU memory (textures/VBOs) and track memory usage via external tools. Luckily, the Task Manager in Windows 10 has been significantly improved and shows detailed GPU memory statistics.
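A rough C sketch of such a test; the texture size and count are arbitrary, the point is only to allocate GPU memory you can watch in Task Manager (or nvidia-smi and the like) while the program runs and after it exits:

#include <GL/gl.h>

/* Allocate roughly 4 GiB of texture storage and deliberately never delete it. */
static void allocate_and_leak_textures(void)
{
    for (int i = 0; i < 64; ++i) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* 4096 x 4096 RGBA8 is roughly 64 MiB per texture */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4096, 4096, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        /* intentionally no glDeleteTextures(1, &tex) */
    }
}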
From a design point of view, however, tolerating incomplete clean-ups sounds like a bad idea, since the same lax release habits carried over into other renderer code will cause real problems.
All resources you create for an OpenGL context are associated with that context (and any other context that you share resources with). If you destroy that context, all resources associated with it will be freed.
If you don't destroy the context when your program exits, then the OS will clean up after you.
That being said, destroying resources when you're done with them is a good habit to get into.
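In practice that habit amounts to a small, explicit teardown pass. A minimal C sketch, assuming the GL 3.x delete entry points have been loaded by whatever loader you use; the parameter names are just placeholders for the objects listed in the question:

#include <GL/gl.h>

/* Explicit clean-up of the object types mentioned above. */
static void destroy_gl_resources(GLuint vao, GLuint vbo, GLuint ibo,
                                 GLuint tex, GLuint program)
{
    glDeleteVertexArrays(1, &vao);   /* vertex array object   */
    glDeleteBuffers(1, &vbo);        /* vertex buffer object  */
    glDeleteBuffers(1, &ibo);        /* index buffer object   */
    glDeleteTextures(1, &tex);       /* texture               */
    glDeleteProgram(program);        /* linked shader program */
}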

Is OpenGL 4.3 API and glsl language safe?

I'm developing a Java graphical application with jogl and OpenGL on Linux. My application contains over 30 shaders and they work fine in most cases. But about once a week there is a driver (amdgpu pro) error (SIGSEGV).
Please tell me, is OpenGL a safe API? Is the driver protected from errors in the application program, or can incorrect actions by the application damage the driver's memory (writing to someone else's memory, or a data race)? Where should I look for the cause of the SIGSEGV: in the faulty driver (amdgpu pro) or in errors of the application itself? (glGetError shows that everything is fine at each application step.)
In general, there are going to be plenty of ways to crash your program when you are using the OpenGL API. Buggy drivers are an unfortunate reality that you cannot avoid completely, and misuse of the API in creative ways can cause crashes instead of errors. In fact, I have personally caused computers to completely hang (unresponsive) on multiple platforms and different GPU vendors, even when using WebGL which is supposedly "safe".
So the only possible answer is "no, OpenGL is not safe."
Some tips for debugging OpenGL:
Don't use glGetError; use KHR_debug instead (unless it's not available). A setup sketch follows these tips.
Create a debug context with GL_CONTEXT_FLAG_DEBUG_BIT.
Use a less buggy OpenGL implementation when you are testing. In my experience, the Mesa implementation is very stable.
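For reference, a minimal sketch of hooking up KHR_debug / GL 4.3 debug output in C. It assumes you already created a debug context and that glDebugMessageCallback and the GL_DEBUG_* declarations come from a loader header such as glad or GLEW:

#include <stdio.h>
#include <GL/gl.h>

/* Callback invoked by the driver with human-readable error/warning messages. */
static void APIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar *message, const void *user)
{
    (void)source; (void)type; (void)id; (void)length; (void)user;
    fprintf(stderr, "GL debug (severity 0x%X): %s\n", severity, message);
}

static void enable_gl_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);     /* report errors on the offending call */
    glDebugMessageCallback(on_gl_debug, NULL);
}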
Is OpenGL 4.3 "safe"? Absolutely not. There are many things you can do that can crash the program. Having a rendering operation read from past the boundaries of a buffer, for example. 4.3 has plenty of ways of doing that.
Indeed, simply writing a shader that executes for too long can cause a GPU failure.
You could in theory read GPU memory that was written by some other application, just by reading from an uninitialized buffer.
There is no simple way to tell whether a particular crash was caused by a driver bug or by user error. You have to actually debug it and have a working understanding of the OpenGL specification to know for sure.

Accessing video RAM with mmap() knowing OpenGL context and visual ID

is it possible to learn the allocated memory range of an OpenGL context? Supposedly this memory range should then be accessed with mmap() from another process. Can this technique work, or are there fundamental problems with it?
Update: We're using a GNU/Linux system with a modern X11 installation and can pick the video card manufacturer whose drivers support such a trick.
Well, there are innumerable reasons why it won't work.
First, the "allocated memory range of an OpenGL context" is always changing. An OpenGL context allocates new memory and deallocates it as it decides to.
Second, I would not trust an OpenGL driver to survive under memory mapped conditions like this. Multiple OpenGL contexts can coexist, but only because they all know about each other and the driver can therefore compensate for them. It is highly unlikely that a context can assimilate changes made by another context.
Third, GPUs often work with graphics memory. Even if you can use mmap on GPU memory (which itself is unlikely), you're probably going to lose a lot of performance when you do. And GPU memory gets shuffled around a lot more than CPU memory.
You appear to be trying to do IPC-based graphics. Your best bet would be to have the graphics system be its own process that you communicate with via IPC methods, rather than trying to talk to OpenGL via IPC.
It depends on the OS and driver. It's possible with an X server, although the combination of the X server, the display driver and OpenGL means the memory for a particular object may be moved around on the card whenever it is drawn.
An easier way is probably to use an OpenGL pixel/vertex buffer and get the buffer pointer.
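A hedged C sketch of what that can look like, using a pixel buffer object as the read-back target and glMapBuffer to obtain a CPU-visible pointer; width/height are placeholders, error handling is omitted, and the GL 2.1+ entry points are assumed to be loaded:

#include <GL/gl.h>

/* Read the current framebuffer into a PBO, then map it for CPU access. */
static void *read_back_into_pbo(GLuint *pbo_out, int width, int height)
{
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)width * height * 4,
                 NULL, GL_STREAM_READ);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);  /* offset 0 into the PBO */

    *pbo_out = pbo;
    return glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);           /* unmap when done */
}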
is it possible to learn the allocated memory range of an OpenGL context?
I think you're asking about accessing the memory where an OpenGL context keeps its objects and the render output.
No. The OpenGL context is an abstract construct and may have its memory on an entirely different machine and/or architecture.
In addition to that, there is no standard or even common memory layout for the contents of an OpenGL context. If you're interested in the rendering outcome only, you could tap the framebuffer device (/dev/fb…), though the performance will be inferior to just reading back the framebuffer contents with glReadPixels. The same goes for tapping the PCI memory range, which is virtually the same as tapping the framebuffer device.

SDL / OpenGL: Implementing a "Loading thread"

I'm currently trying to implement a "loading thread" for a very basic game engine, which takes care of loading e.g. textures or audio while the main thread keeps rendering a proper message/screen until the operation is finished, or even renders regular game scenes while smaller objects are loaded in the background.
Now, I am by far no OpenGL expert, but as I implemented such a loading mechanism I quickly found out that OpenGL really doesn't like the rendering context being accessed from a thread other than the one it was created on. I googled around and the solution seems to be:
"Create a second rendering context on the thread and share it with the context of the main thread"
The problem with this is that I use SDL to take care of my window management and context creation, and as far as I can tell from inspecting the API there is no way to tell SDL to let two contexts share objects with each other :(
I came to the conclusion that the best solutions for my case are:
Approach A) Alter the SDL library to support context sharing with the platform specific functions (wglShareLists() and glXCreateContext() I assume)
Approach B) Let the "loading thread" only load the data into memory and process it into an OpenGL-friendly format, then pass it to the main thread, which e.g. takes care of uploading the texture to the graphics adapter. This, of course, only applies to data that needs a valid OpenGL context to be processed.
The first solution is the least efficient one I guess. I don't really want to mangle with SDL and beside that I read that context sharing is not a high-performance operation. So my next take would be on the second approach so far.
EDIT: Regarding the "high-performance operation": I read the article wrong, it actually isn't that performance intensive. The article suggested shifting the CPU intensive operations to the second thread with a second context. Sorry for that
After all this introduction I would really appreciate if anyone could give me some hints and comments to the following questions:
1) Is there any way to share contexts with SDL and would it be any good anyway to do so?
2) Is there any other more "elegant" way to load my data in the background that I may have missed or didn't think about?
3) Can my intention of going with approach B be considered a good choice? There would still be slight overhead from the OpenGL operations on my main thread which blocks rendering, or is it so small that it can be ignored?
Is there any way to share contexts with SDL
No.
Yes!
You have to get the current context, using platform-specific calls. From there, you can create a new context and make it shared, also with platform-specific calls.
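On Windows that boils down to something like the following C sketch (the question already guessed wglShareLists; on X11 the sharing happens via the shareList argument of glXCreateContext instead). Error handling is omitted and this is only meant to illustrate the platform-specific calls, not a drop-in solution:

#include <windows.h>
#include <GL/gl.h>

/* Create a second context that shares objects with the context SDL made current. */
static HGLRC create_shared_loader_context(void)
{
    HDC   dc       = wglGetCurrentDC();        /* DC of the SDL window          */
    HGLRC main_ctx = wglGetCurrentContext();   /* the context SDL created       */
    HGLRC load_ctx = wglCreateContext(dc);     /* fresh context for the loader  */

    wglShareLists(main_ctx, load_ctx);         /* share textures, buffers, ...  */
    return load_ctx;  /* the loading thread calls wglMakeCurrent(dc, load_ctx)  */
}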
Is there any other more "elegant" way to load my data in the background that I may have missed or didn't think about?
Not really. You enumerated the options quite well: hack SDL to get the data you need, or load data inefficiently.
However, you can load the data into mapped buffer objects and transfer the data to OpenGL that way. You can only do the mapping/unmapping on the OpenGL thread, but the pointer you get when you map can be used on any thread. So map a buffer and pass the pointer to the worker thread. It loads data into the mapped memory and flips a switch. The GL thread then unmaps the buffer (the worker thread should forget about the pointer now) and uploads the texture data.
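A rough C sketch of that scheme for a texture upload. The "switch" is shown as a plain flag (a real program needs an atomic or condition variable), and load_image_into() is a placeholder for whatever decoding the worker actually does:

#include <GL/gl.h>

static GLuint        pbo;          /* pixel unpack buffer used as staging memory  */
static void         *staging;      /* mapped pointer handed to the worker thread  */
static volatile int  worker_done;  /* placeholder for real synchronization        */

/* GL thread: create and map the buffer, then hand `staging` to the worker. */
static void begin_async_load(GLsizeiptr bytes)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, NULL, GL_STREAM_DRAW);
    staging = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
}

/* Worker thread: no GL calls at all, it only writes into the mapped memory. */
static void worker_thread(void)
{
    load_image_into(staging);   /* illustrative decode/copy step */
    worker_done = 1;
}

/* GL thread, once worker_done is set: unmap and source the texture from the PBO. */
static void finish_async_load(GLuint tex, GLsizei w, GLsizei h)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);   /* offset 0 into the bound PBO */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}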
Can my intention of going with approach B be considered a good choice?
Define "good"? There's no way to answer this without knowing more about your problem domain.

OpenGL GPU Memory cleanup, required?

Do I have to clean up all DisplayLists, Textures, (Geometry-)Shaders and so on by hand via the glDelete* functions, or does the GPU mem get freed automagically when my Program exits/crashes?
Note: GPU mem refers to dedicated memory on a dedicated Graphics card, not CPU memory.
Free the context, everything else is local to the context (unless you enabled display list sharing) and will go away along with it.
As others mentioned, your OS (in collaboration with the driver resource manager) should release the resources. That's what OSes are for. It's worth noting that this has nothing to do with OpenGL; it's part of the charter of well-behaved OSes and their associated drivers. The OS is there to handle all the system resources, and OpenGL ones are just a subset of them, no different from, say, a file handle. Now, to get more concrete, you should specify which OS you care about.
BTW, this is where I take exception with ChrisF's answer. It should not be up to the driver to decide whether it needs to do cleanup. OS driver models have a clear interface between the user-mode OpenGL driver (which shouldn't do actual gfx resource allocation, since resources are shared across the machine), the OS (which provides the equivalent of system calls to allocate resources) and the kernel-mode driver (which is merely there to execute the OS's orders in a way that is compatible with the GPU). This is at least the case with the WIN2K and WDDM models.
So... if your process crashes or otherwise terminates, in those models it's the OS's responsibility to call the kernel-mode driver to free all the resources that were associated with the process.
Now, whether you should or not is really something that is a little like asking tabs-or-spaces in source code. Different people have different beliefs here. "the OS will do it anyways, quitting immediately is a better end-user experience" vs "I want to know whether I'm leaking memory because if my program is long-running, I really don't want it to hit OOM errors. Best way to do that is to be leak-free throughout" are the 2 main lines of thought I'm aware of.
When your program exits (or crashes) then any memory it currently has allocated should be freed eventually in the same way that main memory is usually freed when a program exits. It may be some time before the GPU "realises" that the memory is available for use again.
However, you shouldn't rely on this behaviour as it may depend on how the graphics card drivers have been implemented. It's far better to make explicit clean up calls when you (as programmer) know that you won't need that memory again.
All your GPU resources will be released when your program exits. An easy way to test is to not delete things, and run your app repeatedly to see if it fails the allocations after a few iterations.
OpenGL itself does not store the drawing information anywhere. When an OpenGL program runs, the draw-frame method is called repeatedly; if you draw a line or a circle, it is drawn again at the specified place every time that method runs, but OpenGL does not keep that line in memory. It only draws, and the line you see on screen is simply the result of the most recent frame.
Example: in Android OpenGL ES 2.0, the lines, circles, etc. are drawn inside the renderer class's onDrawFrame method. I used this OpenGL ES 2.0 approach in Android AutoCAD-style app development. If you want to clear the drawn lines, call these methods inside the renderer's onDrawFrame method:
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);                              // set the clear color (opaque black)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);                               // clear the color buffer only
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);  // or clear color and depth in one call