I have a library that does OpenGL rendering and receives streams from the network.
I am writing it on a Mac, but I plan to use it on Linux as well,
so the window is created in Objective-C.
I start drawing in a separate thread; another thread receives and decodes the data.
I get a crash (EXC_BAD_ACCESS) in OpenGL calls, even if I use them only in a single thread.
My code is below.
The GLUT version of main:
int main(int argc, const char * argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    int win = glutGetWindow();
    glutInitWindowSize(800, 600);
    glutCreateWindow("OpenGL lesson 1");
    client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    return 0;
}
Or the Objective-C version:
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format {
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        NSOpenGLPixelFormatAttribute attributes[] = {
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAFullScreen,
            NSOpenGLPFAScreenMask,
            CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
            (NSOpenGLPixelFormatAttribute) 0
        };
        _pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
        if (!_pixelFormat) {
            return nil;
        }
        //_pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(_surfaceNeedsUpdate:)
                                                     name:NSViewGlobalFrameDidChangeNotification
                                                   object:self];
        _openGLContext = [self openGLContext];
        client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    }
    return self;
}
The client_init code:
// pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, dh_tmp);
pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, NULL);

void* ShowThread(struct drawhandle * dh) {
    //glViewport(0, 0, dh->swidth, dh->sheight); // EXC_BAD_ACCESS
    glViewport(0, 0, 1280, 720); // EXC_BAD_ACCESS
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //gluOrtho2D(0, dh->swidth, 0, dh->sheight);
    gluOrtho2D(0, 1280, 0, 720);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    ...
    return 0;
}
I think the problem is that the OpenGL context has not been created.
How do I create it on macOS / Linux?
This thread has no current OpenGL context. Even if you did create a context earlier in the program (it is not visible in your snippet), it will not be current in the thread you launch.
An OpenGL context is always, without exception, "current" in exactly one thread at a time. By default that is the thread that created the context. Any thread that calls OpenGL must make a context current first.
You must either create the context in this thread, or call glXMakeCurrent (Unix/Linux), aglMakeCurrent (Mac) or wglMakeCurrent (Windows) inside ShowThread, before doing anything else related to OpenGL.
(This is probably not the reason for the crash, though; see datenwolf's answer for the likely cause of the crash. Nevertheless, it is still wrong.)
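For the Linux build you mention, a minimal sketch of that could look like the following. The glx_args struct, its fields, and how they reach the thread are assumptions made for this illustration; the Display, Window and GLXContext would have to be created by the main thread first and handed over.

#include <GL/glx.h>
#include <GL/gl.h>

/* Hypothetical bundle of what the render thread needs; the main thread
   fills it in before starting the thread. */
struct glx_args {
    Display    *dpy;
    GLXDrawable win;
    GLXContext  ctx;
};

void *ShowThread(void *arg)
{
    struct glx_args *args = (struct glx_args *)arg;

    /* Bind the context to THIS thread before any other GL call. */
    if (!glXMakeCurrent(args->dpy, args->win, args->ctx))
        return NULL;

    glViewport(0, 0, 1280, 720);
    /* ... rest of the drawing code from the question ... */

    /* Release the context when this thread is done with it. */
    glXMakeCurrent(args->dpy, None, NULL);
    return NULL;
}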
OpenGL and multithreading are on difficult terms. It can be done, but it requires some care. First and foremost, an OpenGL context can be current in only one thread at a time. And on some systems, like Windows, extension function pointers are per context, so with different contexts in different threads you may end up with different extension function pointers, which must be provisioned for.
So there is problem number one: you probably have no OpenGL context current on this thread. That alone should not crash on a non-extension function, though; the call would simply do nothing.
If it really crashes on the line you indicated, then the dh pointer is invalid, for sure. It is the only explanation. A pointer in C is just a number that is interpreted in a special way. If you pass pointers around, especially as a parameter to a callback or thread function, then the object the pointer points to must stay valid until it is certain the pointer can no longer be accessed. That means you must not do this with objects created on the stack, i.e. with C automatic storage.
This will break:
void foo(void)
{
    pthread_t posixThreadID;   /* added so the example stands on its own */
    struct drawhandle dh_tmp;
    pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, &dh_tmp);
}
Why? Because the moment foo returns, the object dh_tmp becomes invalid. But &dh_tmp (the pointer to it) is just a number, and that number does not "magically" turn to zero the moment dh_tmp becomes invalid.
You must allocate the object on the heap for this to work. Of course, that raises the question of when to free the memory again.
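A minimal sketch of the heap-based variant, reusing the names from the question (the drawhandle members beyond swidth/sheight are assumed). Here one simple answer to the "when to free" question is that the thread owns the struct and frees it itself once it no longer needs it:

#include <pthread.h>
#include <stdlib.h>

struct drawhandle {
    int swidth;
    int sheight;
    /* ... whatever else the renderer needs ... */
};

void *ShowThread(void *arg)
{
    struct drawhandle *dh = (struct drawhandle *)arg;
    /* ... make a context current, then use dh->swidth / dh->sheight ... */
    free(dh);   /* the thread owns the struct and releases it */
    return NULL;
}

int start_show_thread(int swidth, int sheight)
{
    pthread_t tid;
    struct drawhandle *dh = (struct drawhandle *)malloc(sizeof *dh);
    if (dh == NULL)
        return -1;
    dh->swidth  = swidth;
    dh->sheight = sheight;
    /* heap storage stays valid after this function returns */
    return pthread_create(&tid, NULL, ShowThread, dh);
}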
Related
In my Unity game, I have to modify a lot of graphics resources, like textures and vertex buffers, via native code to keep good performance.
The problems start when the code calls ID3D11DeviceContext::Map several times in a very short time (very short meaning: called from different threads running in parallel). There is no pattern to whether a mapping succeeds or fails. The calls look like this:
ID3D11DeviceContext* sU_m_D_context;

void* BeginModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    D3D11_MAPPED_SUBRESOURCE mapped;
    HRESULT res = sU_m_D_context->Map(d3dbuf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    assert(mapped.pData);
    return mapped.pData;
}

void FinishModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    sU_m_D_context->Unmap(d3dbuf, 0);
}

std::mutex sU_m_D_locker;

void Mesh::ApplyBuffer()
{
    sU_m_D_locker.lock();
    // map buffer
    VBVertex* mappedBuffer = (VBVertex*)BeginModifyingVBO(this->currentBufferPtr);
    memcpy(mappedBuffer, this->mainBuffer, this->mainBufferLength * sizeof(VBVertex));
    // unmap buffer
    FinishModifyingVBO(this->currentBufferPtr);
    sU_m_D_locker.unlock();
    this->markedAsChanged = false;
}
where d3dbuf is a dynamic vertex buffer. I don't know why, but sometimes the result is E_OUTOFMEMORY, even though there is plenty of free memory. I tried surrounding the code with mutexes, with no effect.
Is this really a memory problem, or maybe something less obvious?
None of the device context methods are thread safe. If you are going to use them from several threads, you will need to either manually synchronize all the calls, or use multiple (deferred) contexts, one per thread. See Introduction to Multithreading in Direct3D 11.
Error checking should also be better: always check the returned HRESULT values, because in case of failure something like assert(mapped.pData); may still pass.
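As an illustration of the "manually synchronize all the calls" option, a minimal sketch reusing the globals from the question could look like this; every other place that touches the immediate context would need to take the same lock, and the assert is replaced by an actual HRESULT check:

#include <d3d11.h>
#include <mutex>

extern ID3D11DeviceContext* sU_m_D_context;   // from the question
extern std::mutex           sU_m_D_locker;    // from the question

void* BeginModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    D3D11_MAPPED_SUBRESOURCE mapped = {};

    std::lock_guard<std::mutex> lock(sU_m_D_locker);   // serialize access to the immediate context
    HRESULT res = sU_m_D_context->Map(d3dbuf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    if (FAILED(res))
        return nullptr;   // surface the failure instead of asserting on pData
    return mapped.pData;
}

void FinishModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    std::lock_guard<std::mutex> lock(sU_m_D_locker);
    sU_m_D_context->Unmap(d3dbuf, 0);
}

With the locking moved inside the helpers, the outer lock()/unlock() in Mesh::ApplyBuffer would have to be removed (or the mutex made recursive), otherwise the code would deadlock.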
I am trying to parallelize a program I have made in OpenGL. I have fully tested the single threaded version of my code and it works. I ran it with valgrind and things were fine, no errors and no memory leaks, and the code behaved exactly as expected in all tests I managed to do.
In the single-threaded version, I am sending a bunch of cubes to be rendered. I do this by creating the cubes in a data structure called "world", sending the OpenGL information to another structure called "Renderer" by appending it to a queue, and then finally iterating through the queue and rendering every object.
Since the single threaded version works I think my issue is that I am not using the multiple OpenGL contexts properly.
These are the 3 functions that pipeline my entire process:
The main function, which initializes the global structures and threads.
int main(int argc, char **argv)
{
    //Init OpenGL
    GLFWwindow* window = create_context();

    Rendering_Handler = new Renderer();

    int width, height;
    glfwGetWindowSize(window, &width, &height);
    Rendering_Handler->set_camera(new Camera(mat3(1),
        vec3(5*CHUNK_DIMS,5*CHUNK_DIMS,2*CHUNK_DIMS), width, height));

    thread world_thread(world_handling, window);

    //Render loop
    render_loop(window);

    //cleanup
    world_thread.join();
    end_rendering(window);
}
The world handling, which should run as its own thread:
void world_handling(GLFWwindow* window)
{
    GLFWwindow* inv_window = create_inv_context(window);
    glfwMakeContextCurrent(inv_window);

    World c = World();

    //TODO: this is temporary, implement this correctly
    loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        c.send_render_data(Rendering_Handler);
        openGLerror();
    }
}
And the render loop, which runs in the main thread:
void render_loop(GLFWwindow* window)
{
    //Set default OpenGL values for rendering
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glPointSize(10.f);

    //World c = World();
    //loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();
        Rendering_Handler->update(window);

        //c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        //c.send_render_data(Rendering_Handler);

        Rendering_Handler->render();
        openGLerror();
    }
}
Notice the comments in the third function: if I uncomment those and comment out the multi-threading statements in the main function (i.e. single-thread my program), everything works.
I don't think this is caused by a race condition, because the queue where the OpenGL info is put before rendering is always locked before being used (i.e. whenever a thread needs to read or write to the queue, it locks a mutex, reads or writes to the queue, then unlocks the mutex).
Does anybody have an intuition on what I could be doing wrong? Is it the OpenGL context?
I am creating an OpenGL application, and it worked well before I used multithreading. The original code is shown below:
void Controller::BeginLoop()
{
    while (!glfwWindowShouldClose(_windowHandle)) {
        /* Render here */
        Render();

        /* Swap front and back buffers */
        glfwSwapBuffers(_windowHandle);

        /* Poll for and process events */
        glfwPollEvents();
    }
}

int main()
{
    //do some initialization
    g_controller->BeginLoop();
}
The above code works well. However, when I tried to put the event polling and the rendering into two different threads, OpenGL would not draw anything in the window. Below is the multithreaded code I used:
void Controller::BeginLoop()
{
    while (!glfwWindowShouldClose(_windowHandle)) {
        glfwMakeContextCurrent(_windowHandle);

        /* Render here */
        Render();

        /* Swap front and back buffers */
        glfwSwapBuffers(_windowHandle);
    }
}

void Render(int argc, char **argv)
{
    ::g_controller->BeginLoop();
}

int main(int argc, char **argv)
{
    std::thread renderThread(Render, argc, argv);

    while (true) {
        glfwPollEvents();
    }

    renderThread.join();
    return 0;
}
In the Render function, I do some physics and draw the result points onto the window.
I have absolutely no idea what is going wrong.
After creating a GLFW window, the OpenGL context created along with it is made current in the thread that created the window. Before you can make an OpenGL context current in another thread, it must be released (made un-current) in the thread currently holding it. So the thread holding the context must call glfwMakeContextCurrent(NULL) before the new thread calls glfwMakeContextCurrent(windowHandle) – either before launching the new thread, or by using a synchronization object (mutex, semaphore).
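A minimal sketch of that hand-off, assuming the same g_controller / BeginLoop layout as in the question and a hypothetical GetWindowHandle() accessor on the controller:

#include <GLFW/glfw3.h>
#include <thread>

int main()
{
    // ... GLFW init and window creation happen here; at this point the
    // context is current in the main thread ...

    glfwMakeContextCurrent(nullptr);   // release the context in the main thread

    std::thread renderThread([] {
        GLFWwindow* window = ::g_controller->GetWindowHandle();  // hypothetical accessor
        glfwMakeContextCurrent(window);  // now this thread owns the context
        ::g_controller->BeginLoop();     // render + swap buffers in this thread only
    });

    // Event processing has to stay on the main thread.
    while (!glfwWindowShouldClose(::g_controller->GetWindowHandle()))
        glfwWaitEvents();

    renderThread.join();
    return 0;
}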
BTW: symbols starting with an underscore _ at global scope are reserved for the compiler, so either make sure _windowHandle is a class member variable, or use underscore-prefixed names only for function parameters and locals.
I am creating an SDL-OpenGL application in D. I am using the Derelict SDL binding to accomplish this.
When I am finished running my application, I want to unload SDL. To do this I run the following function:
public ~this() {
    SDL_GL_DeleteContext(renderContext);
    SDL_DestroyWindow(window);
}
For some reason, however, that gives me a vague segmentation fault (no backtrace in GDB) and the program returns -11. Can I not destroy SDL in a destructor? Do I even have to destroy SDL after use?
My constructor:
window = SDL_CreateWindow("TEST", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720,
                          SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN_DESKTOP);
if (window == null) {
    string error = to!string(SDL_GetError());
    throw new Exception(error);
}

renderContext = SDL_GL_CreateContext(window);
if (renderContext == null) {
    string error = to!string(SDL_GetError());
    throw new Exception(error);
}
Class destructors may run in a different thread than the thread where the class was created. The crash may occur because OpenGL or SDL may not handle cleanup from a different thread properly.
Destructors for heap-allocated (GC-managed) objects are not a good way to perform cleanup, because their invocation is not guaranteed. Instead, move the code to a cleanup function, or use a deterministic way to finalize the object (reference counting, or manual memory management).
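A sketch of the deterministic-cleanup idea, written here in C++ against the same SDL calls purely for illustration (a D/Derelict version would look analogous): the context is torn down by an explicit call made from the thread that created it, not by a GC-run destructor.

#include <SDL.h>

class RenderWindow {
public:
    bool Init() {
        window = SDL_CreateWindow("TEST",
                                  SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                  1280, 720,
                                  SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN_DESKTOP);
        if (!window)
            return false;
        renderContext = SDL_GL_CreateContext(window);
        return renderContext != nullptr;
    }

    // Called explicitly, on the same thread that ran Init().
    void Shutdown() {
        if (renderContext) {
            SDL_GL_DeleteContext(renderContext);
            renderContext = nullptr;
        }
        if (window) {
            SDL_DestroyWindow(window);
            window = nullptr;
        }
    }

private:
    SDL_Window*   window        = nullptr;
    SDL_GLContext renderContext = nullptr;
};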
I want to pass some data around threads but want to refrain from using global variables if I can manage it. The way I wrote my thread routine has the user passing in a separate function for each "phase" of a thread's life cycle: For instance this would be a typical usage of spawning a thread:
void init_thread(void *arg) {
    graphics_init();
}

void process_msg_thread(message *msg, void *arg) {
    if (msg->ID == MESSAGE_DRAW) {
        graphics_draw();
    }
}

void cleanup_thread(void *arg) {
    graphics_cleanup();
}

int main() {
    threadCreator factory;
    factory.createThread(init_thread, 0, process_msg_thread, 0, cleanup_thread, 0);
    // the even-indexed arguments are the args to be passed into their respective functions;
    // this is why each of those functions must have a fixed signature: so they can be
    // passed to the factory this way
}
// Behind the scenes: in the newly spawned thread, the first argument given to
// createThread() is called, then a message pumping loop which will call the third
// argument is entered. Upon receiving a special exit message via another function
// of threadCreator, the fifth argument is called.
The most straightforward way to do it is with globals. I'd like to avoid that, though, because it is bad programming practice and generates clutter.
A certain problem arises when I try to refine my example slightly:
void init_thread(void *arg) {
    GLuint tex_handle[50];       // suppose I've got 50 textures to deal with
    graphics_init(&tex_handle);  // fill up the array during graphics init, which loads my textures
}

void process_msg_thread(message *msg, void *arg) {
    if (msg->ID == MESSAGE_DRAW) {  // this message indicates which texture my thread was told to draw
        graphics_draw_this_texture(tex_handle[msg->texturehandleindex]);  // send back the handle so it knows what to draw
    }
}

void cleanup_thread(void *arg) {
    graphics_cleanup();
}
I am greatly simplifying the interaction with the graphics system here, but you get the point. In this example tex_handle is an automatic variable, and all its values are lost when init_thread completes, so it will not be available when process_msg_thread needs to reference it.
I could fix this with globals, but that means I couldn't have (for instance) two of these threads running simultaneously, since they would trample on each other's texture handle list by using the same one.
I could use thread-local globals, but is that a good idea?
I came up with one last idea: I can allocate storage on the heap in my parent thread and send a pointer to it to the children to work with. Then I can just free it when the parent thread is done, since I intend for it to clean up its child threads before it exits anyway. So, something like this:
void init_thread(void *arg) {
    GLuint *tex_handle = (GLuint*)arg;  // my storage space passed as arg
    graphics_init(tex_handle);
}

void process_msg_thread(message *msg, void *arg) {
    GLuint *tex_handle = (GLuint*)arg;  // same thing here
    if (msg->ID == MESSAGE_DRAW) {
        graphics_draw_this_texture(tex_handle[msg->texturehandleindex]);
    }
}

int main() {
    threadCreator factory;
    GLuint *tex_handle = new GLuint[50];
    factory.createThread(init_thread, tex_handle, process_msg_thread, tex_handle, cleanup_thread, 0);

    // do stuff, wait etc
    ...

    delete[] tex_handle;
}
This looks more or less safe, because the values live on the heap: my main thread allocates the storage, then lets the children use it as they wish. The children can use the storage freely, since the pointer is given to every function that needs access.
That got me thinking: why not just make it an automatic variable?
int main() {
    threadCreator factory;
    GLuint tex_handle[50];
    factory.createThread(init_thread, &tex_handle, process_msg_thread, &tex_handle, cleanup_thread, 0);

    // do stuff, wait etc
    ...

}  // tex_handle automatically cleaned up at this point
This means the child threads directly access the parent's stack. I wonder if this is kosher.
I found this on the internet: http://software.intel.com/sites/products/documentation/hpc/inspectorxe/en-us/win/ug_docs/olh/common/Problem_Type__Potential_Privacy_Infringement.htm
It seems Intel Inspector XE detects this behavior, so maybe I shouldn't do it? Is it simply a warning of potential privacy infringement, as the URL suggests, or are there other potential issues that I am not aware of?
P.S. After thinking through all this, I realize that maybe this architecture of splitting a thread into a bunch of functions that get called independently wasn't such a great idea. My intention was to remove the complexity of requiring a message-handling loop to be coded for each thread that gets spawned. I had anticipated possible problems, and if I had a generalized thread implementation that always checked for messages (like my custom one that tells the thread to terminate), then I could guarantee that a future user could not accidentally forget to check for that condition in each and every message loop of theirs.
The problem with my solution is that those individual functions are now separate and cannot communicate with each other. They can do so only via globals and thread-local globals. I guess thread-local globals may be my best option.
P.P.S. This got me thinking about RAII and how the concept of a thread, at least as I have ended up representing it, has a certain similarity to that of a resource. Maybe I could build an object that represents a thread more naturally than the traditional ways... somehow. I think I will sleep on it.
Put your thread functions into a class. Then they can communicate using instance variables. This requires your thread factory to be changed, but it is the cleanest way to solve your problem.
Your idea of using automatic variables will work too, as long as you can guarantee that the function whose stack frame contains the data never returns before your child threads exit. That is not really easy to achieve; even after main() returns, child threads can still be running.
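A minimal sketch of the class-based approach. The message type, the graphics_* functions and the threadCreator factory are taken from the question as-is; the static trampolines are one way to keep the existing function-pointer-plus-void* interface while the real state lives in instance variables:

#include <GL/gl.h>   // for GLuint

class GraphicsThread {
public:
    // Static trampolines with the fixed signatures the factory expects;
    // the void* argument carries the object.
    static void InitThread(void *arg)                    { self(arg)->init(); }
    static void ProcessMsgThread(message *msg, void *arg){ self(arg)->process(msg); }
    static void CleanupThread(void *arg)                 { self(arg)->cleanup(); }

private:
    static GraphicsThread* self(void *arg) { return static_cast<GraphicsThread*>(arg); }

    void init()                { graphics_init(tex_handle); }
    void process(message *msg) {
        if (msg->ID == MESSAGE_DRAW)
            graphics_draw_this_texture(tex_handle[msg->texturehandleindex]);
    }
    void cleanup()             { graphics_cleanup(); }

    GLuint tex_handle[50];     // instance state shared by all three phases
};

int main() {
    threadCreator factory;
    GraphicsThread state;      // must outlive the thread, i.e. the thread is stopped before this goes out of scope
    factory.createThread(GraphicsThread::InitThread,       &state,
                         GraphicsThread::ProcessMsgThread, &state,
                         GraphicsThread::CleanupThread,    &state);
    // wait for the thread to finish before `state` is destroyed
}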