I was modifying this code after reinstalling Windows and VS 2012 Ultimate. The code (shown below) worked perfectly fine before, but when I try to build it now, it gives the following error:
Error 1 error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'
Code:
void CreateTexture(GLuint textureArray[], LPSTR strFileName, int textureID)
{
AUX_RGBImageRec *pBitmap = NULL;
if (!strFileName) // Return from the function if no file name was passed in
return;
pBitmap = auxDIBImageLoad(strFileName); //<-Error in this line // Load the bitmap and store the data
if (pBitmap == NULL) // If we can't load the file, quit!
exit(0);
// Generate a texture with the associative texture ID stored in the array
glGenTextures(1, &textureArray[textureID]);
// This sets the alignment requirements for the start of each pixel row in memory.
// glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
// Bind the texture to the texture arrays index and init the texture
glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);
// Build Mipmaps (builds different versions of the picture for distances - looks better)
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);
// Lastly, we need to tell OpenGL the quality of our texture map. GL_LINEAR is the smoothest.
// GL_NEAREST is faster than GL_LINEAR, but looks blotchy and pixelated. Good for slower computers though.
// Read more about the MIN and MAG filters at the bottom of main.cpp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
// glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
// Now we need to free the bitmap data that we loaded since openGL stored it as a texture
if (pBitmap) // If we loaded the bitmap
{
if (pBitmap->data) // If there is texture data
{
free(pBitmap->data); // Free the texture data, we don't need it anymore
}
free(pBitmap); // Free the bitmap structure
}
}
I tried this link, this one, and also this one, but I am still getting the error.
This function is used after initialization as:
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, "building1.bmp", 0);
CreateTexture(g_Texture, "clock.bmp", 0);
//list goes on
Can you help me out?
Change "LPSTR strFileName" to "LPCWSTR strFileName", "building1.bmp" to L"building1.bmp and "clock.bmp" to L"clock.bmp".
Always be careful because LPSTR is ASCII and LPCWSTR is Unicode. So if the function needs a Unicode variable (like this: L"String here") you can't give it a ASCII string.
The solutions are either:
Change your function prototype to take wide strings:
void CreateTexture(GLuint textureArray[], LPCWSTR strFileName, int textureID)
//...
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, L"building1.bmp", 0);
CreateTexture(g_Texture, L"clock.bmp", 0);
or
Don't change your function prototype, but call the A version of the API function:
pBitmap = auxDIBImageLoadA(strFileName);
Recommended: Stick to wide strings and use the correct string types.
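For background, the glaux headers follow the usual Win32 A/W convention: the generic name auxDIBImageLoad is a preprocessor macro that expands to the ANSI or the wide variant depending on whether UNICODE is defined for the project. A rough sketch of that pattern (illustrative only; the real declarations live in glaux.h):
// Sketch of the Win32 A/W naming convention behind this error
// (not the literal contents of glaux.h).
AUX_RGBImageRec *auxDIBImageLoadA(LPCSTR  szFile);   // ANSI variant, takes narrow strings
AUX_RGBImageRec *auxDIBImageLoadW(LPCWSTR szFile);   // wide variant, takes Unicode strings

#ifdef UNICODE
#define auxDIBImageLoad auxDIBImageLoadW   // Unicode build: expects LPCWSTR
#else
#define auxDIBImageLoad auxDIBImageLoadA   // ANSI/MBCS build: expects LPCSTR
#endif
A likely reason the code built before the reinstall: the old project probably used the multi-byte character set, while the recreated project defaults to "Use Unicode Character Set", so auxDIBImageLoad now resolves to the W variant and rejects the LPSTR argument.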
I know this question has been asked before, but I didn't find a proper solution to my problem. My problem is that when I create a pointer to a class (PAGTexture), everything is normal, but that pointer is then passed to another class (PAGRevolutionObject) as a member variable, and when I call a method of PAGTexture from that class, it throws an exception. Debugging, I realized that it actually enters the method, but the "this" pointer is null ("0x00000000").
This is my cpp of PAGTexture:
#include "PAGTexture.h"
PAGTexture::PAGTexture()
{}
PAGTexture::~PAGTexture()
{
}
void PAGTexture::loadTexture(char * path_img, GLuint min_filter, GLuint mag_filter)
{
int imgWidth, imgHeight;
unsigned char *image;
image = SOIL_load_image(path_img,
&imgWidth,
&imgHeight,
0,
SOIL_LOAD_RGBA);
if (imgWidth == 0) {
std::cout << "Failed to load image." << std::endl;
}
GLuint id_img;
glGenTextures(1, &id_img);
glBindTexture(GL_TEXTURE_2D, id_img);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, min_filter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mag_filter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight,
0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D);
ep = id_img;
id_imgs.push_back(id_img);
SOIL_free_image_data(image);
}
void PAGTexture::applyTexture(GLuint id, int pos)
{
glActiveTexture(id);
glBindTexture(GL_TEXTURE_2D, id_imgs.at(pos));
}
The "this" pointer shows up as null when "applyTexture" is called, and it crashes when it tries to get the id from id_imgs.at(pos).
And this is the part where I call "applyTexture":
GLuint t = 33984; // 33984 == GL_TEXTURE0
if (id_revol.size() != 0) {
for (int i = 0; i < id_revol.size(); i++) {
shader_used->setUniform("TexSamplerColor", i);
textures->applyTexture(t + i, id_revol.at(i));
}
}
Notice that textures is not a null pointer, because I can access "applyTexture" through it.
And this is an image of what I got while debugging:
(Image: debugging the process)
I also tried to set a data breakpoint to find out which part of my code is corrupting that pointer, following this "tutorial", but I can't set that breakpoint in my VS version (VS2015).
To be honest, I think my problem is that I'm overwriting memory from another class (my project is pretty big), but even so I wanted to ask in case there is something I'm missing.
By the way, I'm using Visual Studio 2015 in debug mode (not release).
Thanks in advance for any answer.
EDIT1:
As requested, here is where I create the PAGTexture:
PAGTexture *texturesPack = new PAGTexture();
texturesPack->loadTexture("Textures/check.png", GL_LINEAR, GL_LINEAR);
texturesPack->loadTexture("Textures/HeadSpider.png", GL_LINEAR, GL_LINEAR);
texturesPack->loadTexture("Textures/wood.jpg", GL_LINEAR, GL_LINEAR);
objeto2->drawLowerBodySpiderAndHead(texturesPack);
FINAL EDIT:
Thanks everybody for answering. My problem was that I was binding that pointer in an inline function, and it seems you can't do that (I still don't know why). Anyway, I have learned some points that I think are important, so thanks again :D
The assumption: "textures is not null pointer because i can access to "applyTexture" from there." is wrong. applyTexture is not a function pointer, it's a class method.
The compiler actually converts your object notation call into:
PAGTexture::applyTexture(textures, t + i, id_revol.at(i));
Passing nullptr doesn't bother the compiler at all. But when the method is entered, this == nullptr, and it crashes as soon as you access a member.
It would be different with a direct data member access (that would probably have crashed at once), hence your confusion.
To play it safe, you could do things like:
void PAGTexture::loadTexture(char * path_img, GLuint min_filter, GLuint mag_filter)
{
    assert(this != nullptr);   // requires #include <cassert>
    // ...
That will abort with a failed assertion if the method is called through a null pointer. (Note: this only works reliably with optimizations turned off; when optimizing, even at -O1, the compiler is allowed to assume this is never nullptr and may remove the check.)
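To illustrate the point with a minimal, hypothetical example (not the original project code): calling a method through a null pointer compiles and may even appear to run, and only blows up once the method touches a member:
#include <iostream>
#include <vector>

struct Tex {
    std::vector<unsigned int> ids;
    void hello() {
        // No member access: often appears to "work" even when this == nullptr
        // (still undefined behaviour, of course).
        std::cout << "hello\n";
    }
    void apply(int pos) {
        // Crashes here when called through a null pointer, because
        // 'ids' lives inside *this and this is null.
        std::cout << ids.at(pos) << "\n";
    }
};

int main() {
    Tex *textures = nullptr;   // simulate the never-assigned / corrupted pointer
    textures->hello();         // compiles, may seem fine
    textures->apply(0);        // dereferences this -> access violation
}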
I would like to create a DevIL image from raw texture data, but I can't seem to find a way to do it. The proper way seems to be ilLoadL with IL_RAW, but I can't get it to work. The documentation here says that there should be a 13-byte header in the data, so I just put meaningless data there. If I pass 0 as the "size" parameter of ilLoadL, I get a black texture no matter what; otherwise my program refuses to draw anything. ilIsImage returns true, and I can create an OpenGL texture from it just fine. The code works if I load the texture from a file.
It's not much, but here's my code so far:
//Loading:
ilInit();
iluInit();
ILuint ilID;
ilGenImages(1, &ilID);
ilBindImage(ilID);
ilEnable(IL_ORIGIN_SET);
ilOriginFunc(IL_ORIGIN_LOWER_LEFT);
//Generate 13-byte header and fill it with meaningless numbers
for (int i = 0; i < 13; ++i){
data.insert(data.begin() + i, i);
}
//This fails.
if (!ilLoadL(IL_RAW, &data[0], size)){
std::cout << "Fail" << std::endl;
}
Texture creation:
ilBindImage(ilId[i]);
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
glBindTexture(textureTarget, id[i]);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(textureTarget, GL_TEXTURE_MIN_FILTER, filters[i]);
glTexParameterf(textureTarget, GL_TEXTURE_MAG_FILTER, filters[i]);
glTexImage2D(textureTarget, 0, GL_RGBA,
ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
If an image format has a header, you can generally assume it contains some important information necessary to correctly read the rest of the file. Filling it with "meaningless data" is inadvisable at best.
Since there is no actual struct in DevIL for the .raw header, let us take a look at the implementation of iLoadRawInternal () to figure out what those first 13 bytes are supposed to be.
// Internal function to load a raw image
ILboolean iLoadRawInternal()
{
if (iCurImage == NULL) {
ilSetError(IL_ILLEGAL_OPERATION);
return IL_FALSE;
}
iCurImage->Width = GetLittleUInt(); /* Bytes: 0-3 {Image Width} */
iCurImage->Height = GetLittleUInt(); /* Bytes: 4-7 {Image Height} */
iCurImage->Depth = GetLittleUInt(); /* Bytes: 8-11 {Image Depth} */
iCurImage->Bpp = (ILubyte)igetc(); /* Byte: 12 {Bytes per-pixel} */
NOTE: The /* comments */ are my own
GetLittleUInt () reads a 32-bit unsigned integer in little-endian order and advances the read location appropriately. igetc () does the same for a single byte.
This is equivalent to the following C structure (minus the byte order consideration):
struct RAW_HEADER {
uint32_t width;
uint32_t height;
uint32_t depth; // This is depth as in the number of 3D slices (not bit depth)
uint8_t bpp; // **Bytes** per-pixel (1 = Luminance, 3 = RGB, 4 = RGBA)
};
If you read the rest of the implementation of iLoadRawInternal () in il_raw.c, you will see that without proper values in the header DevIL will not be able to calculate the correct file size. Filling in the correct values should help.
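Based on that layout, here is a sketch of how the lump could be assembled before handing it to ilLoadL. The helper name makeRawLump is an assumption, and you should adjust bpp to match your pixel data (untested against a specific DevIL version):
#include <cstdint>
#include <cstring>
#include <vector>

// Prepend the 13-byte .raw header DevIL expects to raw pixel data.
// On a little-endian machine (x86), memcpy of the uint32_t values already
// produces the little-endian layout that GetLittleUInt() reads.
std::vector<unsigned char> makeRawLump(uint32_t width, uint32_t height, uint8_t bpp,
                                       const std::vector<unsigned char> &pixels)
{
    std::vector<unsigned char> lump(13 + pixels.size());
    uint32_t depth = 1;                              // one 2D slice
    std::memcpy(lump.data() + 0,  &width,  4);       // bytes 0-3
    std::memcpy(lump.data() + 4,  &height, 4);       // bytes 4-7
    std::memcpy(lump.data() + 8,  &depth,  4);       // bytes 8-11
    lump[12] = bpp;                                  // byte 12 (bytes per pixel)
    std::memcpy(lump.data() + 13, pixels.data(), pixels.size());
    return lump;
}

// Usage sketch:
//   std::vector<unsigned char> lump = makeRawLump(w, h, 4, rgbaPixels);
//   if (!ilLoadL(IL_RAW, lump.data(), (ILuint)lump.size())) { /* handle error */ }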
I have a single texture working, but I cannot figure out how to switch between 2, or how glBindTexture actually works.
I copied this from somewhere and it works, and I believe I understand most of it. The problem is that I can uncomment glBindTexture(GL_TEXTURE_2D, texture[0].texID); and it works, which I don't understand. This code shouldn't be the problem; I think it's something simple I am missing.
bool LoadTGA(TextureImage *texture, char *filename) // Loads A TGA File Into Memory
{
GLubyte TGAheader[12]={0,0,2,0,0,0,0,0,0,0,0,0}; // Uncompressed TGA Header
GLubyte TGAcompare[12]; // Used To Compare TGA Header
GLubyte header[6]; // First 6 Useful Bytes From The Header
GLuint bytesPerPixel; // Holds Number Of Bytes Per Pixel Used In The TGA File
GLuint imageSize; // Used To Store The Image Size When Setting Aside Ram
GLuint temp; // Temporary Variable
GLuint type=GL_RGBA; // Set The Default GL Mode To RGBA (32 BPP)
system("cd");
FILE *file = fopen(filename, "r"); // Open The TGA File
if( file==NULL || // Does File Even Exist?
fread(TGAcompare,1,sizeof(TGAcompare),file)!=sizeof(TGAcompare) || // Are There 12 Bytes To Read?
memcmp(TGAheader,TGAcompare,sizeof(TGAheader))!=0 || // Does The Header Match What We Want?
fread(header,1,sizeof(header),file)!=sizeof(header)) // If So Read Next 6 Header Bytes
{
if (file == NULL) // Did The File Even Exist? *Added Jim Strong*
{
perror("Error");
return false; // Return False
}
else
{
fclose(file); // If Anything Failed, Close The File
perror("Error");
return false; // Return False
}
}
texture->width = header[1] * 256 + header[0]; // Determine The TGA Width (highbyte*256+lowbyte)
texture->height = header[3] * 256 + header[2]; // Determine The TGA Height (highbyte*256+lowbyte)
if( texture->width <=0 || // Is The Width Less Than Or Equal To Zero
texture->height <=0 || // Is The Height Less Than Or Equal To Zero
(header[4]!=24 && header[4]!=32)) // Is The TGA 24 or 32 Bit?
{
fclose(file); // If Anything Failed, Close The File
return false; // Return False
}
texture->bpp = header[4]; // Grab The TGA's Bits Per Pixel (24 or 32)
bytesPerPixel = texture->bpp/8; // Divide By 8 To Get The Bytes Per Pixel
imageSize = texture->width*texture->height*bytesPerPixel; // Calculate The Memory Required For The TGA Data
texture->imageData=(GLubyte *)malloc(imageSize); // Reserve Memory To Hold The TGA Data
if( texture->imageData==NULL || // Does The Storage Memory Exist?
fread(texture->imageData, 1, imageSize, file)!=imageSize) // Does The Image Size Match The Memory Reserved?
{
if(texture->imageData!=NULL) // Was Image Data Loaded
free(texture->imageData); // If So, Release The Image Data
fclose(file); // Close The File
return false; // Return False
}
for(GLuint i=0; i<int(imageSize); i+=bytesPerPixel) // Loop Through The Image Data
{ // Swaps The 1st And 3rd Bytes ('R'ed and 'B'lue)
temp=texture->imageData[i]; // Temporarily Store The Value At Image Data 'i'
texture->imageData[i] = texture->imageData[i + 2]; // Set The 1st Byte To The Value Of The 3rd Byte
texture->imageData[i + 2] = temp; // Set The 3rd Byte To The Value In 'temp' (1st Byte Value)
}
fclose (file); // Close The File
// Build A Texture From The Data
glGenTextures(1, &texture[0].texID); // Generate OpenGL texture IDs
//glBindTexture(GL_TEXTURE_2D, texture[0].texID); // Bind Our Texture
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Linear Filtered
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
if (texture[0].bpp==24) // Was The TGA 24 Bits
{
type=GL_RGB; // If So Set The 'type' To GL_RGB
}
glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height, 0, type, GL_UNSIGNED_BYTE, texture[0].imageData);
return true;
}
Now when I draw I have this:
glEnable(GL_TEXTURE_2D);
//glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);
glColor4f(1, 1, 1, 1);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 0.0f);
glVertex4f(-50, 0, 50, 1);
glTexCoord2f(50.0f, 0.0f);
glVertex4f(-50, 0, -50, 1);
glTexCoord2f(50.0f, 50.0f);
glVertex4f(50, 0, -50, 1);
glTexCoord2f(0.0f, 50.0f);
glVertex4f(50, 0, 50, 1);
glEnd();
glDisable(GL_TEXTURE_2D);
And this at the start of the program:
LoadTGA(&texturesList[0], "\snow.tga");
LoadTGA(&texturesList[1], "\snow2.tga");
So after they are loaded, texturesList contains 2 textures with IDs 1 and 2.
So do I not call glBindTexture(GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Because I have to tell glTexCoord2f what to operate on?
It works perfectly if I never call glBind in my draw, but if I do, nothing shows up. What I am more confused about is that glBind doesn't need to be called for it to work.
But the last texture I create gets shown (snow2.tga).
If I can clear anything up let me know.
So do I not call glBindTexture (GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Because I have to tell glTexCoord2f what to operate on?
glTexCoord2f (...) operates at the per-vertex level. It is independent of what texture you have loaded, that is actually the whole point. You can map any texture you want simply by changing which texture is bound when you draw.
It works perfectly if I never call glBind in my draw, but if I do, nothing shows up. What I am more confused about is that glBind doesn't need to be called for it to work.
You need to bind your texture in LoadTGA (...) because "generating" a name alone is insufficient.
All that glGenTextures (...) does is return one or more unused names from the list of names OpenGL has for textures and reserve them so that a subsequent call does not give out the same name.
It does not actually create a texture; the name returned does not become a texture until you bind it. Until that time the name is merely in a reserved state. Commands such as glTexParameterf (...) and glTexImage2D (...) operate on the currently bound texture, so in addition to generating a texture name you must also bind one before making those calls.
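Concretely, the texture-creation part of LoadTGA should look something like this (a sketch using the same names as your code), and the draw code should bind whichever texture the next polygon needs:
// In LoadTGA(): generate a name, bind it (this is what creates the texture
// object), then upload parameters and pixel data to the bound texture.
glGenTextures(1, &texture[0].texID);
glBindTexture(GL_TEXTURE_2D, texture[0].texID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, type, texture->width, texture->height,
             0, type, GL_UNSIGNED_BYTE, texture->imageData);

// In the draw code: select the texture for the geometry that follows.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);   // snow.tga
// ... draw the first polygon ...
glBindTexture(GL_TEXTURE_2D, texturesList[1].texID);   // snow2.tga
// ... draw the second polygon ...
glDisable(GL_TEXTURE_2D);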
Now, onto some other serious issues that are not related to OpenGL:
Do whatever you can to get rid of your system("cd"); line. There are much better ways of changing the working directory:
SetCurrentDirectory (...) (Windows)
chdir (...) (Linux/OS X/BSD/POSIX)
Do not use the file name "\snow.tga" as a string literal, because a C compiler may see "\" and interpret whatever comes after it as part of an escape sequence. Consider "\\snow.tga" instead or "/snow.tga" (yes, this even works on Windows - "\" is a terrible character to use as a path separator).
"\s" is not actually a recognized escape sequence by C compilers, but using "\" to begin your path is playing with fire because there are a handful of reserved characters where it will actually matter. "\fire.tga", for instance, is actually shorthand for {0x0c} "ire.tga". The compiler will replace your string literal with that sequence of bytes and will leave you scratching your head trying to figure out what went wrong.
I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code is running perfectly on my PC.
I've searched a long time to solve this problem, but couldn't find anything. I hope some people out here can help me?
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
// put your initialization code here; will be executed once
}
void Game::Tick( float a_DT )
{
m_Screen->Clear( 0 );
m_Screen->Print( "hello world", 2, 2, 0xffffff );
m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from the OpenGL Forums and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your PC but not on your notebook. So there are possible hardware differences (different video cards) and software differences (different OpenGL versions/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without data (the NULL in the last parameter), which will probably give you errors such as a buffer overflow.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that lists all the features available on your hardware/driver. It generates a file with the same name in the same directory as the executable. For the power-of-two issue, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension and with a texture size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two; in this case, it would become 1024x512. Remember that the data you supply to glTexImage2D MUST have these dimensions.
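If you end up rounding the size yourself, a tiny helper like the following works (nextPowerOfTwo is a hypothetical name, not part of the template code):
// Round v up to the next power of two, e.g. 640 -> 1024, 480 -> 512.
unsigned int nextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// e.g. glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
//                   nextPowerOfTwo(SCRWIDTH), nextPowerOfTwo(SCRHEIGHT),
//                   0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);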
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could return GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and make sure which one of them caused this issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size to the nearest power-of-two should probably fix the issue.
EDIT2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture with a non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting level to 0 (keeping the power-of-two size in place).
BUT DANG you don't have data for your image? It's set as NULL. You need to load the image data into memory and pass it to the function to create the texture. And you aren't doing that. Read how to load images from a file or how to render to texture, whichever is relevant to you.
This is to give you a better answer as a fresh post. First you need this helper function to load a bmp file into memory.
unsigned int LoadTex(string Image)
{
unsigned int Texture;
FILE* img = NULL;
img = fopen(Image.c_str(),"rb");
unsigned long bWidth = 0;
unsigned long bHeight = 0;
DWORD size = 0;
fseek(img,18,SEEK_SET);
fread(&bWidth,4,1,img);
fread(&bHeight,4,1,img);
fseek(img,0,SEEK_END);
size = ftell(img) - 54;
unsigned char *data = (unsigned char*)malloc(size);
fseek(img,54,SEEK_SET); // image data
fread(data,size,1,img);
fclose(img);
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
if (data)
free(data);
return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you need to call it in your code likewise:
unsigned int texture = LoadTex("example_tex.bmp");
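After that, bind the returned id before drawing whatever should carry the texture, roughly like this:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);   // the id returned by LoadTex
// ... glBegin()/glTexCoord2f()/glVertex*()/glEnd() ...
glDisable(GL_TEXTURE_2D);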
Let me start by trying to specify what I want to do:
Given a grey scale image, I want to create 256 layers (assuming 8bit images), where each layer is the image thresholded with a grey scale i -- which is also the i'th layer (so, i=0:255). For all of these layers I want to compute various other things which are not very relevant to my problem, but this should explain the structure of my code.
The problem is that I need to execute this code very often, so I want to speed it up as much as possible without spending much time on it (so, simple speedup tricks only). Therefore I figured I could use the OpenMP library, as I have a quad core and everything is CPU-based at the moment.
This brings me to the following code, which executes fine (at least, it looks fine :) ):
#pragma omp parallel for private(i,out,tmp,cc)
for(i=0; i< numLayers; i++){
cc=new ConnectedComponents(255);
out = (unsigned int *) malloc(in->dimX()* in->dimY()*sizeof(int));
tmp = (*in).dupe();
if(!tmp){ printf("Could not allocate enough memory\n"); exit(-1); } // check before using tmp
tmp->threshold((float) i);
cc->connected(tmp->data(),out,tmp->dimX(),tmp->dimY(),std::equal_to<unsigned int>(), true);
free(out);
delete tmp;
delete cc;
}
ConnectedComponents is just some library which implements the 2-pass floodfill, just there for illustration, it is not really part of the problem.
This code finishes fine with 2,3,4,8 threads (didn't test any other number).
So, now the weird part. I wanted to add some visual feedback to help me debug. The object tmp contains a method called saveToTexture(), which basically does all the work for me and returns the texture ID. This function works fine single-threaded, and also works fine with 2 threads. However, as soon as I go beyond 2 threads, the method causes a segmentation fault.
Even with #pragma omp critical around it (just in case saveToTexture() is not thread-safe), or executing it only once, it still crashes. This is the code I added to the previous loop:
if(i==100){
#pragma omp critical
{
tmp->saveToTexture();
}
}
which is only executed once, since i is the iterator, and it is a critical section... Still, the code ALWAYS segfaults at the first openGL call (bruteforce tests with printf(), fflush(stdout)).
So, just to make sure I am not leaving out relevant information, here is the saveToTexture function:
template <class T> GLuint FIELD<T>::saveToTexture() {
unsigned char *buf = (unsigned char*)malloc(dimX()*dimY()*3*sizeof(unsigned char));
if(!buf){ printf("Could not allocate memory\n"); exit(-1); }
float m,M,avg;
minmax(m,M,avg);
const float* d = data();
int j=0;
for(int i=dimY()-1; i>=0; i--) {
for(const float *s=d+dimX()*i, *e=s+dimX(); s<e; s++) {
float r,g,b,v = ((*s)-m)/(M-m);
v = (v>0)?v:0;
if (v>M) { r=g=b=1; }
else { v = (v<1)?v:1; }
r=g=b=v;
buf[j++] = (unsigned char)(int)(255*r);
buf[j++] = (unsigned char)(int)(255*g);
buf[j++] = (unsigned char)(int)(255*b);
}
}
GLuint texid;
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
glDisable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &texid);
printf("TextureID: %d\n", texid);
fflush(stdout);
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dimX(), dimY(), 0, GL_RGB, GL_UNSIGNED_BYTE, buf);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
free(buf);
return texid;
}
It is good to note here that T is ALWAYS a float in my program.
So, I do not understand why this program works fine when executed with 1 or 2 threads (executed ~25 times, 100% success), but segfaults when using more threads (executed ~25 times, 0% success). And ALWAYS at the first openGL call (e.g. if I remove glPixelStorei(), it segfaults at glDisable()).
Am I overlooking something really obvious, am I encountering a weird OpenMP bug, or... what is happening?
You can only make OpenGL calls from one thread at a time, and that thread has to have the context current.
An OpenGL context can only be used by one thread at a time (limitation imposed by wglMakeCurrent/glxMakeCurrent).
However, you said you're using layers. I think you can use different contexts for different layers, with the WGL_ARB_create_context extension (I think there's one for linux too) and setting the WGL_CONTEXT_LAYER_PLANE_ARB parameter. Then you could have a different context per thread, and things should work out.
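Another option that avoids multiple contexts entirely is to keep all GL calls on the thread that owns the context: let the OpenMP workers only fill pixel buffers, then upload them serially afterwards. A rough sketch with hypothetical names (buildLayerPixels stands in for your threshold/convert code):
#include <vector>
#include <GL/gl.h>

struct PendingTexture {
    int width = 0, height = 0;
    std::vector<unsigned char> rgb;   // filled by worker threads, no GL calls here
};

std::vector<PendingTexture> pending(numLayers);

// CPU-only work in parallel; no OpenGL calls inside this loop.
#pragma omp parallel for
for (int i = 0; i < numLayers; i++) {
    pending[i] = buildLayerPixels(i);   // hypothetical: threshold + pack to RGB
}

// GL work serially, on the thread that created the context.
std::vector<GLuint> texids(numLayers);
glGenTextures(numLayers, texids.data());
for (int i = 0; i < numLayers; i++) {
    glBindTexture(GL_TEXTURE_2D, texids[i]);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pending[i].width, pending[i].height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, pending[i].rgb.data());
}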
Thank you very much for all the answers! Now that I know why it fails, I have decided to simply store everything in one big 3D texture (which was an even easier solution) and send all the data to the GPU at once. That works fine in this case.