OpenGL calls segfault when called from OpenMP thread - c++

Let me start by trying to specify what I want to do:
Given a greyscale image, I want to create 256 layers (assuming 8-bit images), where layer i is the image thresholded at grey value i (so i = 0..255). For all of these layers I want to compute various other things which are not very relevant to my problem, but this should explain the structure of my code.
The problem is that I need to execute this code very often, so I want to speed things up as much as possible with little development effort (so, simple speedup tricks only). I therefore figured I could use OpenMP, as I have a quad core and everything is CPU-based at the moment.
This brings me to the following code, which executes fine (at least, it looks fine :) ):
#pragma omp parallel for private(i,out,tmp,cc)
for(i = 0; i < numLayers; i++) {
    cc  = new ConnectedComponents(255);
    out = (unsigned int *) malloc(in->dimX() * in->dimY() * sizeof(unsigned int));
    tmp = (*in).dupe();
    if(!tmp) { printf("Could not allocate enough memory\n"); exit(-1); }  // check before dereferencing
    tmp->threshold((float) i);
    cc->connected(tmp->data(), out, tmp->dimX(), tmp->dimY(), std::equal_to<unsigned int>(), true);
    free(out);
    delete tmp;
    delete cc;
}
ConnectedComponents is just a library implementing the two-pass flood fill; it's only there for illustration and not really part of the problem.
This code finishes fine with 2, 3, 4, and 8 threads (I didn't test any other number).
So, now the weird part. I wanted to add some visual feedback to help me debug. The object tmp has a method called saveToTexture(), which basically does all the work for me and returns the texture ID. This method works fine single-threaded, and also works fine with 2 threads. However, as soon as I go beyond 2 threads, it causes a segmentation fault.
Even with #pragma omp critical around it (just in case saveToTexture() is not thread-safe), or executing it only once, it still crashes. This is the code I added to the previous loop:
if(i == 100) {
    #pragma omp critical
    {
        tmp->saveToTexture();
    }
}
which is only executed once, since i is the iterator and it is a critical section. Still, the code ALWAYS segfaults at the first OpenGL call (verified by brute-force tracing with printf() and fflush(stdout)).
So, just to make sure I am not leaving out relevant information, here is the saveToTexture function:
template <class T> GLuint FIELD<T>::saveToTexture() {
    unsigned char *buf = (unsigned char*) malloc(dimX() * dimY() * 3 * sizeof(unsigned char));
    if(!buf) { printf("Could not allocate memory\n"); exit(-1); }
    float m, M, avg;
    minmax(m, M, avg);
    const float* d = data();
    int j = 0;
    for(int i = dimY() - 1; i >= 0; i--) {
        for(const float *s = d + dimX() * i, *e = s + dimX(); s < e; s++) {
            // normalize to [0,1], clamp, and write out as grey RGB
            float v = ((*s) - m) / (M - m);
            v = (v > 0) ? v : 0;
            v = (v < 1) ? v : 1;
            unsigned char c = (unsigned char)(int)(255 * v);
            buf[j++] = c;  // r
            buf[j++] = c;  // g
            buf[j++] = c;  // b
        }
    }
    GLuint texid;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glDisable(GL_TEXTURE_3D);
    glEnable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &texid);
    printf("TextureID: %u\n", texid);
    fflush(stdout);
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dimX(), dimY(), 0, GL_RGB, GL_UNSIGNED_BYTE, buf);
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
    free(buf);
    return texid;
}
It is good to note here that T is ALWAYS a float in my program.
So, I do not understand why this program works fine when executed with 1 or 2 threads (executed ~25 times, 100% success) but segfaults with more threads (executed ~25 times, 0% success), and always at the first OpenGL call (e.g. if I remove glPixelStorei(), it segfaults at glDisable()).
Am I overlooking something really obvious, am I encountering a weird OpenMP bug, or... what is happening?

You can only make OpenGL calls from one thread at a time, and that thread has to have the context current; a context can be current on only one thread at once.

An OpenGL context can only be used by one thread at a time (a limitation imposed by wglMakeCurrent/glXMakeCurrent).
However, you said you're using layers. I think you could use a different context for each layer, with the WGL_ARB_create_context extension (I think there's an equivalent for Linux too) and the WGL_CONTEXT_LAYER_PLANE_ARB parameter. Then you could have a different context per thread, and things should work out.
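For what it's worth, a minimal sketch of the context-per-thread idea on Windows (using plain wglCreateContext for brevity rather than the extension mentioned above; hdc and mainCtx are assumed to already exist, and error checking is omitted):

#include <windows.h>
#include <GL/gl.h>

extern HDC   hdc;      // assumed: device context of the window
extern HGLRC mainCtx;  // assumed: the main rendering context

void workerUploadTexture(const unsigned char* pixels, int w, int h)
{
    // One context per worker thread; share texture objects with the main context.
    HGLRC threadCtx = wglCreateContext(hdc);
    wglShareLists(mainCtx, threadCtx);   // texture IDs become visible to both
    wglMakeCurrent(hdc, threadCtx);      // bind the context to THIS thread

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    wglMakeCurrent(NULL, NULL);          // release before the thread exits
    wglDeleteContext(threadCtx);
}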

Thank you very much for all the answers! Now that I know why it fails, I have decided to simply store everything in one big 3D texture (because this was an even easier solution) and send all the data to the GPU at once. That works fine in this case.
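For completeness, a rough sketch of that 3D-texture approach, assuming all thresholded layers have been packed into one contiguous byte buffer on the CPU first (names are illustrative; glTexImage3D requires OpenGL 1.2+):

GLuint uploadLayers(const unsigned char* allLayers, int dimX, int dimY, int numLayers)
{
    GLuint texid;
    glGenTextures(1, &texid);
    glBindTexture(GL_TEXTURE_3D, texid);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // one upload for all layers; every GL call stays on the main thread
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, dimX, dimY, numLayers, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, allLayers);
    glBindTexture(GL_TEXTURE_3D, 0);
    return texid;
}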

Related

Pointer "this" of a class turns into null when calling a method of that class C++

I know this question has been asked before, but I didn't see a proper solution to my problem. When I create a pointer to a class (PAGTexture), everything is normal; but then that pointer is passed to another class (PAGRevolutionObject) as a variable, and when I call a method of PAGTexture from there, it throws an exception. Debugging, I realized that it actually enters the method, but the "this" pointer is null ("0x00000000").
This is the cpp of PAGTexture:
#include "PAGTexture.h"
PAGTexture::PAGTexture()
{}
PAGTexture::~PAGTexture()
{
}
void PAGTexture::loadTexture(char * path_img, GLuint min_filter, GLuint mag_filter)
{
int imgWidth, imgHeight;
unsigned char *image;
image = SOIL_load_image(path_img,
&imgWidth,
&imgHeight,
0,
SOIL_LOAD_RGBA);
if (imgWidth == 0) {
std::cout << "Failed to load image." << std::endl;
}
GLuint id_img;
glGenTextures(1, &id_img);
glBindTexture(GL_TEXTURE_2D, id_img);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, min_filter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mag_filter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight,
0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D);
ep = id_img;
id_imgs.push_back(id_img);
SOIL_free_image_data(image);
}
void PAGTexture::applyTexture(GLuint id, int pos)
{
glActiveTexture(id);
glBindTexture(GL_TEXTURE_2D, id_imgs.at(pos));
}
It crashes when "applyTexture" is called because it tries to get the id from id_imgs.at(pos), and "this" is null at that point.
And this is the part where I call "applyTexture":
GLuint t = 33984;  // GL_TEXTURE0
if (id_revol.size() != 0) {
    for (int i = 0; i < id_revol.size(); i++) {
        shader_used->setUniform("TexSamplerColor", i);
        textures->applyTexture(t + i, id_revol.at(i));
    }
}
Notice that textures is not a null pointer, because I can access "applyTexture" from there.
Here is an image of what I got while debugging: [image: debugging the process]
I also tried to set a data breakpoint to find which part of my code corrupts that pointer, following this "tutorial", but I can't set that breakpoint in my VS version (VS2015).
To be honest, I think my problem is that I'm overwriting memory from another class (my project is pretty big), but even so I wanted to ask in case there is something I'm missing.
By the way, I'm using Visual Studio 2015 in debug mode (not release).
Thanks in advance for any answer.
EDIT1:
As requested, here is where I define PAGTexture:
PAGTexture *texturesPack = new PAGTexture();
texturesPack->loadTexture("Textures/check.png", GL_LINEAR, GL_LINEAR);
texturesPack->loadTexture("Textures/HeadSpider.png", GL_LINEAR, GL_LINEAR);
texturesPack->loadTexture("Textures/wood.jpg", GL_LINEAR, GL_LINEAR);
objeto2->drawLowerBodySpiderAndHead(texturesPack);
FINAL EDIT:
Thanks everybody for answering. My problem was that I was binding that pointer in an inline function, and it seems you can't do that (I still don't know why). Anyway, I have learned some things I think are important, so thanks again :D
The assumption: "textures is not null pointer because i can access to "applyTexture" from there." is wrong. applyTexture is not a function pointer, it's a class method.
The compiler actually converts your object notation call into:
PAGTexture::applyTexture(textures, t + i, id_revol.at(i));
passing nullptr doesn't bother the compiler like at all. But when entering the method, this==nullptr and it crashes when you access a member.
It's different with data members (would probably have crashed at once), hence your confusion.
To play it safe, you could do things like:
void PAGTexture::loadTexture(char * path_img, GLuint min_filter, GLuint mag_filter)
{
    assert(this != nullptr);
    // ...
This will abort with a failed assertion if the method is called through a null pointer (note: it only works with optimizations turned off; otherwise the compiler assumes this cannot be nullptr and may drop the check, even at -O1).
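For illustration, a minimal self-contained sketch of the mechanism described above (this is undefined behavior, but it typically plays out exactly like the asker's crash):

#include <cstdio>
#include <vector>

struct Tex {
    std::vector<unsigned> ids;
    void hello() { std::printf("entered fine\n"); }    // touches no member: usually "works"
    void apply() { std::printf("%u\n", ids.at(0)); }   // touches a member: crashes here
};

int main() {
    Tex* t = nullptr;
    t->hello();  // undefined behavior, but in practice the call is entered normally
    t->apply();  // segfaults inside the method, with this == nullptr
}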

Generating a very large array of objects freezes program

When I generate a huge array of 13k+ tiles that I want to render as textures onto the screen, it crashes and I have no idea why. This is the method that is causing the issue:
public ArrayList<Tile> getNewChunk(int width, int height)
{
    int amountOfTilesY = height * 32;
    int amountOfTilesX = width * 32;
    // note: this works out to width + height, not the actual tile count (width * height)
    int amountOfTiles = (amountOfTilesX + amountOfTilesY) / 32;
    ArrayList<Tile> tiles = new ArrayList<Tile>(amountOfTiles);
    for (int i = 0; i < amountOfTilesX; i += 32)
    {
        for (int j = 0; j < amountOfTilesY; j += 32)
        {
            Tile tempTile = new DirtTile(i, j, "res/tile/DirtTile.png");
            tiles.add(tempTile);
        }
    }
    return tiles;
}
So if you could please help :D
The engine I am using to render the game is LWJGL 2, using OpenGL.
I can provide more code if needed.
Based on what you provided, the issue could come from multiple things. #javec is correct in that loading the texture once per object would drastically reduce performance. You should load the texture a single time and distribute its single ID throughout the list.
Also, it is unclear from what you provided whether you are generating buffers for each tile separately here. If so, you should generate a single buffer and share it in the same way.
Depending on what you plan on using these tiles for, you may consider a different method. If the tiles are connected as a terrain, you can always scale your mesh appropriately and tile your texture using something like
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
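To make the "load once, share the ID" advice concrete, a hedged sketch (written C++-style; the GL calls map one-to-one onto LWJGL's GL11 methods, and loadTexture/Tile here are placeholders for your own code):

// Load the dirt texture ONCE, before the loops...
GLuint dirtTexId = loadTexture("res/tile/DirtTile.png");  // hypothetical helper

for (int i = 0; i < amountOfTilesX; i += 32) {
    for (int j = 0; j < amountOfTilesY; j += 32) {
        // ...and hand every tile the same texture ID instead of a file path,
        // so nothing is re-read from disk or re-uploaded per tile
        tiles.push_back(Tile(i, j, dirtTexId));
    }
}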

Strange Texture Behavior C++ OpenGL

I'm currently taking a C++ Game Libraries class, and for this class it's been our quarter long project to build a renderer that supports a variety of things. For the current lab our instructor gave us a tutorial on loading in a bmp into OpenGL manually, and applying it to our geometries.
Tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
After following this tutorial step by step my textures are having some interesting behaviors. I've gone to other classmates, upperclassmen, and multiple instructors. None of them have an idea of what is happening. Considering that practically every one's code is identical for this Lab, and I'm the only one having this problem, I can't help but be confused.
I'm using the following OBJ, and texture. I convert the OBJ into a binary file in an OBJ converter that I built myself. My renderer takes in this binary file and sends down the data to OpenGL vertex buffers.
OBJ and Texture: http://www.models-resource.com/playstation_2/ratchetclank/model/6662/
My friend and I have the same binary file structure, so I gave him a copy of my binary file to check whether the UVs were correct. His renders a perfectly textured chicken, while mine renders a chicken that looks like the texture was squished horizontally to 1/16th the length of the model, then repeated a bunch of times. I would post images, but I'm new here and don't have enough reputation to do so. Over the weekend I'll do my best to increase my reputation, because I really think it would help to see my problem visually.
I would post my source code, however this project is approaching about 16,000 lines of code, and I doubt anyone is willing to search through that to find a stranger's problem.
Any suggestions would be helpful, I'm primarily curious on common mistakes that can be made when working with OpenGL textures, or .bmps in general.
Thank Ya.
//-----Edit One-----//
[image: my friend's result]
[image: my result]
I'm afraid that I'm not allowed to use other libraries. I probably should have mentioned that in my initial post.
Here is the code where I am loading in the bmp. I heard from one of the upperclassmen at my school that I was ignoring something called bit depth. I know the tutorial is pretty bad, and I'd rather learn to do it right than just barely scrape by. If anyone has a good resource on this subject, I would greatly appreciate being pointed in that direction.
unsigned char header[54];   // BMP files begin with a 54-byte header
unsigned int dataPos;       // offset where the pixel data starts
unsigned int width, height;
unsigned int imageSize;
unsigned char * data;

FILE * file = fopen(filePath, "rb");
if (!file)
{
    printf("\nImage could not be opened");
    exit(1);
}
if (fread(header, 1, 54, file) != 54) {
    printf("\nNot a correct BMP file");
    exit(1);
}
if (header[0] != 'B' || header[1] != 'M') {
    printf("\nNot a correct BMP file");
    exit(1);
}
dataPos   = *(int*)&(header[0x0A]);
imageSize = *(int*)&(header[0x22]);
width     = *(int*)&(header[0x12]);
height    = *(int*)&(header[0x16]);
if (imageSize == 0) imageSize = width * height * 3;
if (dataPos == 0) dataPos = 54;

data = new unsigned char[imageSize];
fread(data, 1, imageSize, file);
fclose(file);

glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
I am currently using shaders: both a fragment and a vertex shader, identical to the ones described in the tutorial. I verify both of them and make sure they compile.
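On the bit-depth remark: the BMP header stores bits-per-pixel in a 16-bit field at offset 0x1C, so a minimal sanity check for the loader above could look like this (a sketch, assuming the 54-byte header has already been read into header):

// biBitCount lives at offset 0x1C; the loader above silently assumes 24-bit BGR data
unsigned short bitDepth = *(unsigned short*)&(header[0x1C]);
if (bitDepth != 24) {
    printf("\nUnsupported BMP bit depth: %u (expected 24)", bitDepth);
    exit(1);
}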
//-----Edit Two-----//
So I took durhass' suggestion and set my color equal to vec3(0.0, uv.x, uv.y), where uv is a vec2 that holds my texture coordinates, and this is what I get.
So I think I can see the root of the problem: I am not storing my UVs correctly in my GL buffer. I don't think it's a problem with the binary file's UVs, considering it works fine in my friend's engine. I'll look into this; thank you for the suggestion, this just might lead to a fix!
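For later readers: the usual place this goes wrong is the stride/offset passed to glVertexAttribPointer for the UV attribute. A hedged sketch, assuming an interleaved layout of position (3 floats), normal (3 floats), uv (2 floats) and attribute locations 0/1/2:

const GLsizei stride = 8 * sizeof(float);  // bytes per interleaved vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                   // position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float))); // normal
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float))); // uv

A wrong stride or offset here reads the wrong floats as texture coordinates, which produces exactly this kind of squished, repeating pattern.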

Error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'

I was just modifying the code after reinstalling Windows and VS 2012 Ultimate. The code (shown below) worked perfectly fine before, but when I try to run it now, it gives the following error:
Error 1 error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'
Code:
void CreateTexture(GLuint textureArray[], LPSTR strFileName, int textureID)
{
    AUX_RGBImageRec *pBitmap = NULL;

    if (!strFileName)  // Return from the function if no file name was passed in
        return;

    pBitmap = auxDIBImageLoad(strFileName);  // <- Error in this line. Load the bitmap and store the data
    if (pBitmap == NULL)  // If we can't load the file, quit!
        exit(0);

    // Generate a texture with the associative texture ID stored in the array
    glGenTextures(1, &textureArray[textureID]);

    // This sets the alignment requirements for the start of each pixel row in memory.
    // glPixelStorei (GL_UNPACK_ALIGNMENT, 1);

    // Bind the texture to the texture arrays index and init the texture
    glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);

    // Build Mipmaps (builds different versions of the picture for distances - looks better)
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);

    // Lastly, we need to tell OpenGL the quality of our texture map. GL_LINEAR is the smoothest.
    // GL_NEAREST is faster than GL_LINEAR, but looks blotchy and pixelated. Good for slower computers though.
    // Read more about the MIN and MAG filters at the bottom of main.cpp
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Now we need to free the bitmap data that we loaded since OpenGL stored it as a texture
    if (pBitmap)  // If we loaded the bitmap
    {
        if (pBitmap->data)  // If there is texture data
        {
            free(pBitmap->data);  // Free the texture data, we don't need it anymore
        }
        free(pBitmap);  // Free the bitmap structure
    }
}
I tried the solutions from several related questions, but I'm still getting the error.
This function is used after initialization as:
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, "building1.bmp", 0);
CreateTexture(g_Texture, "clock.bmp", 0);
//list goes on
Can you help me out?
Change "LPSTR strFileName" to "LPCWSTR strFileName", "building1.bmp" to L"building1.bmp and "clock.bmp" to L"clock.bmp".
Always be careful because LPSTR is ASCII and LPCWSTR is Unicode. So if the function needs a Unicode variable (like this: L"String here") you can't give it a ASCII string.
The solutions are either:
Change your function prototype to take wide strings (LPCWSTR rather than LPWSTR, so that wide string literals can be passed):
void CreateTexture(GLuint textureArray[], LPCWSTR strFileName, int textureID)
//...
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, L"building1.bmp", 0);
CreateTexture(g_Texture, L"clock.bmp", 0);
or
Don't change your function prototype, but call the A version of the API function:
pBitmap = auxDIBImageLoadA(strFileName);
Recommended: Stick to wide strings and use the correct string types.

Visual Studio Fallback error - Programming C++

I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code runs perfectly on my PC.
I've searched a long time to solve this problem but couldn't find anything. I hope someone here can help me.
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"

using namespace Tmpl8;

void Game::Init()
{
    // put your initialization code here; will be executed once
}

void Game::Tick( float a_DT )
{
    m_Screen->Clear( 0 );
    m_Screen->Print( "hello world", 2, 2, 0xffffff );
    m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from the OpenGL forums, and seeing that you're using OpenGL, I may have an idea.
You say the code works fine on your computer but not on your notebook. So there is a possible hardware difference (different video cards) and software difference (different OpenGL version/support).
What may be happening is that the OpenGL feature you want to use is not supported on your notebook. Also, you are creating a texture without data (the NULL in the last parameter), which may cause problems later.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that lists all features available on your hardware/driver; it generates a file of the same name in the same path as the executable. For power-of-two textures, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with the texture having a size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two; in this case, it would become 1024x512. Remember that the data you supply to glTexImage2D MUST have these dimensions.
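If you'd rather check this in code than with the glewinfo tool, a minimal sketch with GLEW (assuming glewInit() has already succeeded after context creation):

#include <GL/glew.h>

// GLEW exposes each extension as a boolean flag after glewInit()
bool npotTexturesSupported()
{
    return GLEW_ARB_texture_non_power_of_two != 0;
}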
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could return GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and work out which one caused the issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size up to the nearest power of two should probably fix the issue.
EDIT2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture of non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting level to 0 (keeping the power-of-two size in place).
BUT DANG, you don't have data for your image? It's set to NULL. You need to load the image data into memory and pass it to the function to create the texture, and you aren't doing that. Read up on how to load images from a file, or how to render to a texture, whichever is relevant to you.
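A small helper for the power-of-two rounding suggested above (a sketch; with it, 640x480 becomes 1024x512):

// round v up to the next power of two, e.g. 640 -> 1024, 480 -> 512, 256 -> 256
unsigned int nextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}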
This is to give you a better answer as a fresh post. First you need this helper function to load a bmp file into memory.
unsigned int LoadTex(std::string Image)
{
    unsigned int Texture;
    FILE* img = NULL;
    img = fopen(Image.c_str(), "rb");

    unsigned long bWidth = 0;
    unsigned long bHeight = 0;
    DWORD size = 0;

    fseek(img, 18, SEEK_SET);   // width and height live at offsets 18 and 22 of the header
    fread(&bWidth, 4, 1, img);
    fread(&bHeight, 4, 1, img);
    fseek(img, 0, SEEK_END);
    size = ftell(img) - 54;     // pixel data size = file size minus the 54-byte header
    unsigned char *data = (unsigned char*)malloc(size);
    fseek(img, 54, SEEK_SET);   // image data
    fread(data, size, 1, img);
    fclose(img);

    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    if (data)
        free(data);
    return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you need to call it in your code likewise:
unsigned int texture = LoadTex("example_tex.bmp");