SDL2 Threading Seg Fault - c++

I am working on a rendering library for SDL2 and I am running into a segfault when I try to do anything with the renderer. Through debugging I have determined that the renderer is created correctly from the window. However, I cannot figure out why SDL2 segfaults when I call SDL_RenderClear(data->renderer).
The main thread calls:
int RenderThread::start(std::string title, int x, int y, int w, int h, Uint32 flags) {
    data.window = SDL_CreateWindow(title.c_str(), x, y, w, h, flags);
    if (data.window == NULL) return -2;
    data.renderer = SDL_CreateRenderer(data.window, -1, 0);
    if (data.renderer == NULL) return -3;
    data.rlist->setRenderer(data.renderer);
    data.run = true;
    if (thread == NULL)
        thread = SDL_CreateThread(renderThread, "RenderThread", (void*)(&data));
    else
        return 1;
    return 0;
}
Then the actual thread is:
int RenderThread::renderThread(void* d) {
    RenderData* data = (RenderData*)d;
    data->rlist->render(true);
    SDL_SetRenderDrawColor(data->renderer, 0xFF, 0xFF, 0xFF, 0xFF);
    SDL_RenderClear(data->renderer);
    while (data->run) {
        data->rlist->render();
        SDL_RenderPresent(data->renderer);
        SDL_Delay(data->interval);
    }
    return 0;
}
If you need to see more of the code, it is all on GitHub.

Some platforms (e.g. Windows) don't allow interacting with windows from threads other than the one that created them.
The documentation explicitly says this:
NOTE: You should not expect to be able to create a window, render, or receive events on any thread other than the main one.
From a design perspective, rendering from another thread is a source of many problems. For instance:
Is it desirable to (unpredictably) update an object more than once per frame? What's preventing a logic thread from trying to make many updates that can't be rendered?
Is it desirable to risk re-rendering without having the chance to update an object?
Will you lock the entire scene while the update happens? Or will each object get its own lock, so you don't try to render an object that's in the middle of an update? Is it desirable for the frame rate to be unpredictable, due to other threads locking the objects?
Not to mention the costs of synchronization primitives.

Related

Xlib : Segmentation fault on multithreading

My aim is to write an Xlib wrapper that implements triple buffering, so that a user only needs to compute their display matrix and forward it to the API for display. I have two separate threads, one handling events and one handling display. The events thread seems to execute without any issue; however, the display thread causes a segmentation fault of an unknown nature when it uses standard Xlib functions such as XDrawRectangle, XFillArc, XSetForeground, etc.
This is my thread execution part :
int startx() {
    pthread_t eventsThread, displayThread;
    char msg1[15] = "Events Thread", msg2[15] = "Display Thread";
    int pid1, pid2;
    pid1 = pthread_create(&eventsThread, NULL, eventsHandler, (void*)msg1);
    pid2 = pthread_create(&displayThread, NULL, displayHandler, (void*)msg2);
    pthread_join(eventsThread, NULL);
    pthread_join(displayThread, NULL);
    return 0;
}
This is my displayHandler :
void *displayHandler(void* args) {
    cout << connectionNumber << endl;
    Color c(50, 50, 250);
    int width = 40, height = 60, x = 500, y = 100;
    for (int i = 0; i < 1300; i++) {
        XSetForeground(display, xgraphics, c.decimal);
        XDrawRectangle(display, mainWindow, xgraphics, x, y, width, height);
        XFlush(display);
    }
}
The eventsThread seems to be executing without error. Also, I have tried making the display function a part of the main program, with the same results.
If somebody could tell me an alternative/correct method to paint the window using matrices, it would be most appreciated.
Note: Color is a self-made class for ease of colour computation.
This crashes for me before the howdy line. Uncommenting the return NULL; line makes it work.
#include <iostream>
#include <pthread.h>

void *displayHandler(void* args) {
    char* txt = reinterpret_cast<char*>(args);
    std::cout << txt << "\n";
    // return NULL;
}

int startx() {
    pthread_t displayThread;
    char msg2[15] = "Display Thread";
    int pid2;
    pid2 = pthread_create(&displayThread, NULL, displayHandler, (void*)msg2);
    pthread_join(displayThread, NULL);
    return 0;
}

int main() {
    startx();
    std::cout << "howdy\n";
}
As Ted Lyngmo points out, the problem lay with the fact that Xlib has no thread safety implemented for writing to the display, so guarding those calls with a mutex presented a solution.
If any of the event masks trigger writes to the screen, separate threads for events and display become pointless; making the masks toggle variables instead allows them to work simultaneously.

SFML thread synchronization?

I'm new to SFML and have been trying to build a multi-threaded game system (all of the game logic on the main thread, and the rendering in a dedicated thread using sf::Thread; mainly for practicing with threads) as explained in this page ("Drawing with threads" section).
Unfortunately my program has a long processing time during its update(), which makes the rendering completely out of control, showing some frames painted and others completely empty. If it isn't obvious: my rendering thread is trying to paint something that hasn't even been calculated yet, producing this flickering effect.
What I'm looking for is to allow the thread to render only when the main logic has been calculated. Here's what I got so far:
void renderThread()
{
    while (window->isOpen())
    {
        // some other gl stuff
        // window clear
        // window draw
        // window display
    }
}

void update()
{
    while (window->isOpen() && isRunning)
    {
        while (window->pollEvent(event))
        {
            if (event.type == sf::Event::Closed || sf::Keyboard::isKeyPressed(sf::Keyboard::Escape))
            {
                isRunning = false;
            }
            else if (event.type == sf::Event::Resized)
            {
                glViewport(0, 0, event.size.width, event.size.height);
            }
        }
        // really resource intensive process here
        time = m_clock.getElapsedTime();
        m_clock.restart().asSeconds();
    }
}
Thanks in advance.
I guess the errors happen because you manipulate elements while they are being rendered in parallel. You need to look into mutexes.
A mutex locks the element you want to manipulate (or draw in the other thread) for as long as the manipulation takes, and frees it afterwards.
While the element is locked, it cannot be accessed by another thread.
Example in pseudo-code:
updateThread() {
    renderMutex.lock();
    globalEntity.manipulate();
    renderMutex.unlock();
}

renderThread() {
    renderMutex.lock();
    window.draw(globalEntity);
    renderMutex.unlock();
}

Sharing opengl resources (OpenGL ES 2.0 Multithreading)

I have developed an OpenGL ES 2.0 win32 application, that works fine in a single thread. But I also understand that UI thread and a rendering thread should be separate.
Currently my game loop looks something like this:
done = 0;
while (!done)
{
    msg = GetMessage(..);     // getting messages from OS
    if (msg == QUIT)          // the window has been closed
    {
        done = 1;
    }
    DispatchMessage(msg, ..); // calling KeyDown, KeyUp events to handle user input
    DrawCall(...);            // render a frame
    Update(..);               // update
}
Please view it as pseudo code, because I don't want to bother you with details at this point.
So my next step was to turn done into an std::atomic_int and create a function
RenderThreadMain()
{
    while (!done.load())
    {
        Draw(...);
    }
}
and create a std::unique_ptr<std::thread> m_renderThread variable. As you can guess, nothing has worked for me so far, so I made my code as simple as possible to make sure I don't break anything with the order in which I call methods. Right now my game loop works like this:
done.store(0);
bool created = false;
while (!done)
{
    msg = GetMessage(..);     // getting messages from OS
    if (msg == QUIT)          // the window has been closed
    {
        done.store(1);
    }
    DispatchMessage(msg, ..); // calling KeyDown, KeyUp events to handle user input
    // to make sure that my problem is not related to rendering too early
    if (!created)
    {
        m_renderThread = std::make_unique<std::thread>(RenderThreadMain, ...);
        created = true;
    }
    Update(..); // update
}
But this doesn't work. On every draw call, when I try to access or use my buffers, textures, or anything else, I get the GL_INVALID_OPERATION error code.
So my guess would be that the problem is that I call glGenBuffers(mk_bufferNumber, m_bufferIds); in the main thread during initialization and then call glBindBuffer(GL_ARRAY_BUFFER, m_bufferIds[0]); in the render thread during the draw call (the same applies to every OpenGL object I have).
But I don't know if I'm right or wrong.
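That guess is consistent with how GL contexts work: a context can be current on only one thread at a time, and every gl* call operates on the calling thread's current context. A hedged sketch of the usual fix, assuming a classic wgl setup (the hDC/hRC/Draw/done names mirror the code above; with EGL the equivalent call would be eglMakeCurrent):

```
// Main/UI thread, after creating the window, context, and GL objects:
wglMakeCurrent(NULL, NULL);   // release the context from this thread
// ...start m_renderThread...

// Render thread, before ANY gl* call (including glBindBuffer):
wglMakeCurrent(hDC, hRC);     // now this thread owns the context
while (!done.load())
{
    Draw(...);
}
wglMakeCurrent(NULL, NULL);   // release before the thread exits
```

Objects created while the context was current on the main thread remain valid; what matters is that the context is current on exactly one thread, and that all GL calls happen there.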

Exception in application when threading is used to save images simultaneously from multiple IP cameras

I am working on an application that connects to and displays multiple video feeds from IP cameras. I am able to get the video feed, which is very laggy (I am working on a solution to remove the lag). The application has a button which, when clicked, takes 50 pictures from all of the connected IP cameras. But the code I have implemented throws an exception when threading is used; without threading it works fine. Here's the code for the button event:
void CDialogBasedVideoDlg::OnBnClickedButtonTakePic()
{
    int nIndex = m_CamListBox.GetCurSel();
    CStaticEx *objStaticEx = (CStaticEx*)m_StaticArray.GetAt(nIndex);
    objStaticEx->StartThreadToCaptureUSBCam(); // threading implementation gives exception
    //objStaticEx->CapturePicture();           // this function works fine (without threading)
    // TODO: Add your control notification handler code here
}
I have an overridden static class that dynamically creates a picture control and displays the live video feed; threading is implemented in this class, where the images are saved. Here's the code for capturing images and the threading functions:
void CStaticEx::CapturePicture(void)
{
    CString csFileDir;
    CString csFileName;
    csFileDir.Format(DIR_USB_CAM_NAME, m_IpAddr);
    if (IsDirExist(csFileDir) == false) {
        CreateDirectory(csFileDir, NULL);
    }
    CString csStr = csFileDir;
    csStr += RANDOM_FILE_SEARCH;
    int nNoOfFile = CountFileNumInDir((char*)csStr.GetBuffer());
    csFileDir += DBL_SLASH;
    int i = 0;
    do {
        csFileName.Format(FILE_NAME, csFileDir, (m_nCamID + 1));
        CString csCount;
        csCount.Format(_T("%d"), (nNoOfFile + 1));
        csFileName += csCount;
        csFileName += JPG;
        m_pFrameImg = cvQueryFrame(m_pCamera); // <---- exception comes at this point
        if (m_pFrameImg) {
            cvSaveImage(csFileName, m_pFrameImg);
            i++;
            nNoOfFile++;
            csFileName = _T("");
        }
    } while (i < 50);
}
Threading control functions:
void CStaticEx::StartThreadToCaptureUSBCam() {
    THREADSTRUCT *_param = new THREADSTRUCT;
    _param->_this = this;
    AfxBeginThread(StartThread, _param);
}

UINT CStaticEx::StartThread(LPVOID param)
{
    THREADSTRUCT* ts = (THREADSTRUCT*)param;
    //AfxMessageBox("Thread Started");
    ts->_this->CapturePicture();
    return 1;
}
The exception thrown is as follows:
Windows has triggered a breakpoint in DialogBasedVideo.exe.
This may be due to a corruption of the heap, which indicates a bug in DialogBasedVideo.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while DialogBasedVideo.exe has focus.
The output window may have more diagnostic information.
How do I get rid of this exception? All the experts out there, please help me. I am using VS2010, Windows 7, and OpenCV 2.4.6. Thanks in advance.

OpenGL VBO not updating properly

I am having an OpenGL VBO problem. I downloaded the old VBO example called Lesson45 from NeHe and I modified it to check something.
My end result is to create about 9 tiles, one of them being the origin. Then as the player moves on the screen, the top/bottom rows/columns update the data. But for now I want something basic:
I create one VBO and then I want to update the data in another thread. While the data is being uploaded, I do not want to draw the VBO because that would cause problems.
Here I create the VBO:
glGenBuffersARB( 1, &m_nVBOVertices );
glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_nVBOVertices);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, m_nVertexCount*3*sizeof(float), m_pVertices, GL_DYNAMIC_DRAW_ARB);
I create a thread, I set up an OpenGL context, I share lists. Then I process the data, when the user presses "R" on the keyboard:
while (TerrainThreadRun)
{
    // look for R
    if (window.keys->keyDown[82] == TRUE && keyactivated == false)
    {
        keyactivated = true;
        window.keys->keyDown[82] = FALSE;
    }
    if (keyactivated)
    {
        for (int i = 0; i < g_pMesh->m_nVertexCount; i++)
        {
            g_pMesh->m_pVertices[i].y = 800.0f;
        }
        while (!wglMakeCurrent(window.hDCThread, window.hRCThread)) // this was removed
            Sleep(5);                                               // this was removed
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices);
        glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0, g_pMesh->m_nVertexCount*3*sizeof(float), g_pMesh->m_pVertices);
        keyactivated = false;
    }
}
To draw the data:
if (!keyactivated)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices);
    glVertexPointer(3, GL_FLOAT, 0, (char*)NULL);
    glDrawArrays(GL_TRIANGLES, 0, g_pMesh->m_nVertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
}
I know that using ARB extensions is not recommended, but this is just for a quick basic example.
The problem is that when I first press "R", the data does not get updated; the VBO draws the same. The second time that I press "R", it updates the data. What can I do to force the update? Am I doing something wrong?
Does the data need to be forced to the video card? Am I missing something?
Update: I looked over my code and now I use wglMakeCurrent only once, when the context is initialized. In the thread, I use it after sharing the lists and on the main thread as soon as the lists are shared, like this:
window->hRC = wglCreateContext(window->hDC);
if (window->hRC == 0)
{
    // failed
}
TerrainThreadRun = true;
TerrainThread = CreateThread(NULL, NULL, (LPTHREAD_START_ROUTINE)TerrainThreadProc, 0, NULL, NULL);
while (!sharedContext)
    Sleep(100);
if (wglMakeCurrent(window->hDC, window->hRC) == FALSE)
And in the thread:
if (!(window.hRCThread = wglCreateContext(window.hDCThread)))
{
    // error
}
while (wglShareLists(window.hRC, window.hRCThread) == 0)
{
    DWORD err = GetLastError();
    Sleep(5);
}
sharedContext = true;
int cnt = 0;
while (!wglMakeCurrent(window.hDCThread, window.hRCThread))
    Sleep(5);
while (TerrainThreadRun)
{
    // look for R
Second update: I tried using glMapBuffer instead of glBufferSubData, but the application behaves the same. Here is the code:
void *ptr = (void*)glMapBuffer(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
if (ptr)
{
    memcpy(ptr, g_pMesh->m_pVertices, g_pMesh->m_nVertexCount*3*sizeof(float));
    glUnmapBuffer(GL_ARRAY_BUFFER_ARB);
}
Update three:
I was doing some things wrong, so I modified them, but the problem remains the same. Here is how I do everything now:
When the application loads, I create two windows, each with its own HWND. Based on them, I create two device contexts.
Then I share the lists between them:
wglShareLists(window.hRC, window.hRCThread);
This is done only once when I initialize.
After that I show the OGL window, which renders; I make the context active. Then I load the function pointers and create the VBO.
After the main rendering OGL is done, I create the thread. When the thread is loaded, I make its device context active.
Then we do normal stuff.
So my question is: Do I need to update the function pointers for each device context? Could this be my problem?
As an update, if I run my test app in gDEBugger and I first press "R" and then pause, it doesn't display correctly. I take a look at the memory (Textures, Buffers and Image Viewers) and GLContext1(I think the main rendering thread) device context has the OLD data. While GLContext2 (Shared-GL1) (I think the thread context) has the correct data.
The odd part: if I look back at GLContext1, with the program still in pause mode, it now displays the new data, as if it somehow "refreshed" it. And then if I press play, it starts drawing correctly.
I found the solution: I need to call glFinish() in the worker thread after calling glUnmapBuffer. This solves the problem and everything renders just fine.
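Putting the fix in context, a sketch based on the mapping code above (glFinish blocks until every previously issued GL command in this context has completed, so the shared context is guaranteed to see the finished upload):

```
// Worker thread, after writing the new vertex data:
void *ptr = (void*)glMapBuffer(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
if (ptr)
{
    memcpy(ptr, g_pMesh->m_pVertices, g_pMesh->m_nVertexCount*3*sizeof(float));
    glUnmapBuffer(GL_ARRAY_BUFFER_ARB);
    glFinish(); // wait until the upload is complete and visible to the shared context
}
keyactivated = false; // only now let the main thread draw again
```

Without the glFinish, the upload may still be queued in the worker thread's command stream when the main thread's context draws, which matches the one-frame-late behaviour described above.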