Can't generate mipmaps with off-screen OpenGL context on Linux - c++

This question is a continuation of the problem I described here. This is one of the weirdest bugs I have ever seen. My engine runs in two modes: display and off-screen. The OS is Linux. I generate mipmaps for the textures, and in display mode it all works fine; in that mode I use GLFW3 for context creation. Now, the funny part: in off-screen mode, for which I create the context manually with the code below, the mipmap generation fails OCCASIONALLY! That is, on some runs the resulting output looks OK, while on others the missing levels are clearly visible, as the frame is full of texture junk data or entirely empty.
At first I thought I had my mipmap generation routine wrong, which goes like this:
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, imageInfo.Width, imageInfo.Height, 0, imageInfo.Format, imageInfo.Type, imageInfo.Data);
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0 );
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I also tried to play with this param:
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, XXX);
including a max-level detection formula:
int numMipmaps = 1 + floor(log2(glm::max(imageInfoOut.width, imageInfoOut.height)));
But none of this worked consistently. Out of 10-15 runs, 3-4 come out with broken mipmaps. What I then found was that switching to GL_LINEAR solved it. Also, in mipmap mode, setting just 1 level worked as well. Finally I started thinking there could be a problem at the context level, because in screen mode it works! I switched context creation to GLFW3 and it works. So I wonder what's going on here? Am I missing something in the Pbuffer setup that breaks mipmap generation? I doubt it, because AFAIK that is done by the driver.
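In case it helps, this is essentially how I combined those attempts, with an error check right after generation (a sketch; the imageInfo fields are the same as in the snippet above):
// Sketch: clamp the mip range explicitly and check for errors right after generation.
int numMipmaps = 1 + (int)floor(log2((double)glm::max(imageInfo.Width, imageInfo.Height)));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numMipmaps - 1);
glGenerateMipmap(GL_TEXTURE_2D);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glGenerateMipmap reported error 0x%x\n", err);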
Here is my custom off-screen context creation setup:
int visual_attribs[] = {
    GLX_RENDER_TYPE, GLX_RGBA_BIT,
    GLX_RED_SIZE, 8,
    GLX_GREEN_SIZE, 8,
    GLX_BLUE_SIZE, 8,
    GLX_ALPHA_SIZE, 8,
    GLX_DEPTH_SIZE, 24,
    GLX_STENCIL_SIZE, 8,
    None
};
int context_attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, vmaj,
    GLX_CONTEXT_MINOR_VERSION_ARB, vmin,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_ROBUST_ACCESS_BIT_ARB
#ifdef DEBUG
        | GLX_CONTEXT_DEBUG_BIT_ARB
#endif
    ,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    None
};

_xdisplay = XOpenDisplay(NULL);
int fbcount = 0;
_fbconfig = NULL;
// _render_context
if (!_xdisplay) {
    throw();
}

/* get framebuffer configs, any is usable (might want to add proper attribs) */
if (!(_fbconfig = glXChooseFBConfig(_xdisplay, DefaultScreen(_xdisplay), visual_attribs, &fbcount))) {
    throw();
}

/* get the required extensions */
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB((const GLubyte *) "glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB((const GLubyte *) "glXMakeContextCurrent");
if (!(glXCreateContextAttribsARB && glXMakeContextCurrentARB)) {
    XFree(_fbconfig);
    throw();
}

/* create a context using glXCreateContextAttribsARB */
if (!(_render_context = glXCreateContextAttribsARB(_xdisplay, _fbconfig[0], 0, True, context_attribs))) {
    XFree(_fbconfig);
    throw();
}

// GLX_MIPMAP_TEXTURE_EXT
/* create temporary pbuffer */
int pbuffer_attribs[] = {
    GLX_PBUFFER_WIDTH, 128,
    GLX_PBUFFER_HEIGHT, 128,
    None
};
_pbuff = glXCreatePbuffer(_xdisplay, _fbconfig[0], pbuffer_attribs);
XFree(_fbconfig);
XSync(_xdisplay, False);

/* try to make it the current context */
if (!glXMakeContextCurrent(_xdisplay, _pbuff, _pbuff, _render_context)) {
    /* some drivers do not support a context without a default framebuffer,
     * so fall back to using the default window.
     */
    if (!glXMakeContextCurrent(_xdisplay, DefaultRootWindow(_xdisplay),
                               DefaultRootWindow(_xdisplay), _render_context)) {
        throw();
    }
}
Almost forgot: my system and hardware:
Kubuntu 13.04 64-bit. GPU: NVIDIA GeForce GTX 680. The engine uses the OpenGL 4.2 API.
Full OpenGL info:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 680/PCIe/SSE2
OpenGL version string: 4.4.0 NVIDIA 331.49
OpenGL shading language version string: 4.40 NVIDIA via Cg compiler
Btw, I also tried older drivers and it makes no difference.
UPDATE:
Seems like my assumption regarding GLFW was wrong. When I compile the engine and run it from the terminal, the same thing happens. BUT - if I run the engine from the IDE (debug or release), there are no issues with the mipmaps. Is it possible the standalone app links against different SOs (shared objects)?
To make it clear, I don't use Pbuffers to render into. I render into custom framebuffers.
UPDATE1:
I have read that auto-generating mipmaps for non-power-of-two textures can be tricky, and that if OpenGL fails to generate all the levels it disables texture usage. Is it possible that's what I am experiencing here? Because once the mipmapped texture goes wrong, the rest of the textures (non-mipmapped) disappear too. But if that is the case, then why is this behavior inconsistent?
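For reference, this is the kind of diagnostic I plan to run to verify whether all the levels actually exist (a sketch; DumpMipChain is just an illustrative helper name, and it assumes the texture in question is currently bound to GL_TEXTURE_2D):
// Diagnostic sketch: print the size of every expected mip level of the bound
// GL_TEXTURE_2D; a missing level shows up as 0x0.
void DumpMipChain(int baseWidth, int baseHeight)
{
    int levels = 1 + (int)floor(log2((double)(baseWidth > baseHeight ? baseWidth : baseHeight)));
    for (int level = 0; level < levels; ++level) {
        GLint w = 0, h = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &w);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);
        printf("level %d: %dx%d\n", level, w, h);
    }
}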

Uh, why are you using PBuffers in the first place? PBuffers have far too many caveats for there to be any valid reason to use them in a new project.
You want offscreen rendering? Then use Framebuffer Objects (FBOs).
You need a purely off-screen context? Then create a normal window which you simply don't show and create an FBO on it.
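For reference, a minimal FBO setup for off-screen rendering looks roughly like this (a sketch with an illustrative size, assuming a current GL 3.0+ context; error handling omitted except for the completeness check):
// Sketch: render off-screen into an FBO with a color texture and a depth/stencil renderbuffer.
GLuint fbo = 0, colorTex = 0, depthRb = 0;
const int width = 1024, height = 768;   // illustrative size

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}
// ... render into the FBO ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);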

Related

Why is OpenGL simple loop faster than Vulkan one?

I have 2 graphics applications for OpenGL and Vulkan.
OpenGL loop looks something like this:
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
static int test = 0;
// "if" statement here is to ensure that there is no any caching or optimizations
// made by OpenGL driver (if such things exist),
// and commands are re-recorded to the buffer every frame
if ((test = 1 - test) == 0) {
    glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer1);
    glUseProgram(program1);
    glDrawArrays(GL_TRIANGLES, 0, vertices_size);
    glUseProgram(0);
}
else {
    glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer2);
    glUseProgram(program2);
    glDrawArrays(GL_LINES, 0, vertices_size);
    glUseProgram(0);
}
glfwSwapBuffers(window);
And Vulkan:
static uint32_t image_index = 0;
vkAcquireNextImageKHR(device, swapchain, 0xFFFFFFFF, image_available_semaphores[image_index], VK_NULL_HANDLE, &image_indices[image_index]);
vkWaitForFences(device, 1, &submission_completed_fences[image_index], VK_TRUE, 0xFFFFFFFF);
// VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT
vkBeginCommandBuffer(cmd_buffers[image_index], &command_buffer_bi);
vkCmdBeginRenderPass(cmd_buffers[image_index], &render_pass_bi[image_index], VK_SUBPASS_CONTENTS_INLINE);
vkCmdEndRenderPass(cmd_buffers[image_index]);
vkEndCommandBuffer(cmd_buffers[image_index]);
vkResetFences(device, 1, &submission_completed_fences[image_index]);
vkQueueSubmit(graphics_queue, 1, &submit_info[image_index], submission_completed_fences[image_index]);
present_info[image_index].pImageIndices = &image_indices[image_index];
vkQueuePresentKHR(present_queue, &present_info[image_index]);
const static int max_swapchain_image_index = swapchain_image_count - 1;
if (++image_index > max_swapchain_image_index) {
image_index = 0;
}
In the Vulkan loop there are not even any rendering commands, just an empty render pass. Validation layers are disabled.
OpenGL FPS is about 10500, and Vulkan FPS is about 7500 (with 8 swapchain images in use with VK_PRESENT_MODE_IMMEDIATE_KHR; fewer images make the FPS lower).
The code is running on a laptop with Ubuntu 18.04, a discrete Nvidia RTX 2060 GPU, Nvidia driver 450.66, and Vulkan API version 1.2.133.
I know that the OpenGL driver is highly optimized, but I can't imagine what else there is to optimize in the Vulkan loop to make it faster than it is.
Are there some low-level Linux driver issues? Or is the Vulkan performance increase only achievable in much more complex applications (e.g. ones using multithreading)?

Error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'

I was just modifying the code after reinstalling Windows and VS 2012 Ultimate. The code (shown below) worked perfectly fine before, but when I try to run it now, it gives the following errors:
Error 1 error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'
Code:
void CreateTexture(GLuint textureArray[], LPSTR strFileName, int textureID)
{
    AUX_RGBImageRec *pBitmap = NULL;

    if (!strFileName)                       // Return from the function if no file name was passed in
        return;

    pBitmap = auxDIBImageLoad(strFileName); // <- Error in this line. Load the bitmap and store the data
    if (pBitmap == NULL)                    // If we can't load the file, quit!
        exit(0);

    // Generate a texture with the associative texture ID stored in the array
    glGenTextures(1, &textureArray[textureID]);

    // This sets the alignment requirements for the start of each pixel row in memory.
    // glPixelStorei (GL_UNPACK_ALIGNMENT, 1);

    // Bind the texture to the texture arrays index and init the texture
    glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);

    // Build Mipmaps (builds different versions of the picture for distances - looks better)
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);

    // Lastly, we need to tell OpenGL the quality of our texture map. GL_LINEAR is the smoothest.
    // GL_NEAREST is faster than GL_LINEAR, but looks blocky and pixelated. Good for slower computers though.
    // Read more about the MIN and MAG filters at the bottom of main.cpp
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Now we need to free the bitmap data that we loaded since OpenGL stored it as a texture
    if (pBitmap)                            // If we loaded the bitmap
    {
        if (pBitmap->data)                  // If there is texture data
        {
            free(pBitmap->data);            // Free the texture data, we don't need it anymore
        }
        free(pBitmap);                      // Free the bitmap structure
    }
}
I tried this link, this one too, and also tried this one, but I'm still getting the error.
This function is used after initialization as:
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, "building1.bmp", 0);
CreateTexture(g_Texture, "clock.bmp", 0);
//list goes on
Can you help me out?
Change "LPSTR strFileName" to "LPCWSTR strFileName", "building1.bmp" to L"building1.bmp and "clock.bmp" to L"clock.bmp".
Always be careful because LPSTR is ASCII and LPCWSTR is Unicode. So if the function needs a Unicode variable (like this: L"String here") you can't give it a ASCII string.
The solutions are either:
Change your function prototype to take wide strings:
void CreateTexture(GLuint textureArray[], LPWSTR strFileName, int textureID)
//...
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, L"building1.bmp", 0);
CreateTexture(g_Texture, L"clock.bmp", 0);
or
Don't change your function prototype, but call the A version of the API function:
pBitmap = auxDIBImageLoadA(strFileName);
Recommended: Stick to wide strings and use the correct string types.

OpenGL: wglGetProcAddress always returns NULL

I am trying to create an OpenGL program which uses shaders. I tried using the following code to load the shader functions, but wglGetProcAddress always returns 0 no matter what I do.
The rest of the program works as normal when not using the shader functions.
HDC g_hdc;
HGLRC g_hrc;
PFNGLATTACHSHADERPROC glpf_attachshader;
PFNGLCOMPILESHADERPROC glpf_compileshader;
PFNGLCREATEPROGRAMPROC glpf_createprogram;
PFNGLCREATESHADERPROC glpf_createshader;
PFNGLDELETEPROGRAMPROC glpf_deleteprogram;
PFNGLDELETESHADERPROC glpf_deleteshader;
PFNGLDETACHSHADERPROC glpf_detachshader;
PFNGLLINKPROGRAMPROC glpf_linkprogram;
PFNGLSHADERSOURCEPROC glpf_shadersource;
PFNGLUSEPROGRAMPROC glpf_useprogram;
void GL_Init(HDC dc)
{
    //create pixel format
    PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA,
        32,
        0, 0, 0, 0, 0, 0,
        0, 0, 0,
        0, 0, 0, 0,
        32, 0, 0,
        PFD_MAIN_PLANE,
        0, 0, 0, 0
    };

    //choose + set pixel format
    int pixfmt = ChoosePixelFormat(dc, &pfd);
    if (pixfmt && SetPixelFormat(dc, pixfmt, &pfd))
    {
        //create GL render context
        if (g_hrc = wglCreateContext(dc))
        {
            g_hdc = dc;
            //select GL render context
            wglMakeCurrent(dc, g_hrc);

            //get function pointers
            glpf_attachshader = (PFNGLATTACHSHADERPROC) wglGetProcAddress("glAttachShader");
            glpf_compileshader = (PFNGLCOMPILESHADERPROC) wglGetProcAddress("glCompileShader");
            glpf_createprogram = (PFNGLCREATEPROGRAMPROC) wglGetProcAddress("glCreateProgram");
            glpf_createshader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");
            glpf_deleteprogram = (PFNGLDELETEPROGRAMPROC) wglGetProcAddress("glDeleteProgram");
            glpf_deleteshader = (PFNGLDELETESHADERPROC) wglGetProcAddress("glDeleteShader");
            glpf_detachshader = (PFNGLDETACHSHADERPROC) wglGetProcAddress("glDetachShader");
            glpf_linkprogram = (PFNGLLINKPROGRAMPROC) wglGetProcAddress("glLinkProgram");
            glpf_shadersource = (PFNGLSHADERSOURCEPROC) wglGetProcAddress("glShaderSource");
            glpf_useprogram = (PFNGLUSEPROGRAMPROC) wglGetProcAddress("glUseProgram");
        }
    }
}
I know this may be a possible duplicate, but in most of the other posts the error was due to simple mistakes (like calling wglGetProcAddress before wglMakeCurrent). I'm in a bit of a unique situation - any help would be appreciated.
You are requesting a 32-bit Z-Buffer in this code. That will probably throw you onto the GDI software rasterizer (which implements exactly 0 extensions). You can use 32-bit depth buffers on modern hardware, but most of the time you cannot do it using the default framebuffer, you have to use FBOs.
I have seen some drivers accept 32-bit depth only to fallback to the closest matching 24-bit pixel format, while others simply refuse to give a hardware pixel format at all (this is probably your case). If this is in fact your problem, a quick investigation of your GL strings (GL_RENDERER, GL_VERSION, GL_VENDOR) should make it obvious.
24-bit Depth + 8-bit Stencil is pretty much universally supported, this is the first depth/stencil size you should try.
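To illustrate, a pixel format along these lines (a sketch; only the depth/stencil fields differ from the code in the question) should get you a hardware-accelerated format on most drivers, and the GL strings let you verify you are not on the GDI renderer:
// Sketch: request 24-bit depth / 8-bit stencil instead of a 32-bit depth buffer.
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    32,                 // 32-bit color
    0, 0, 0, 0, 0, 0,
    0, 0, 0,
    0, 0, 0, 0,
    24,                 // 24-bit depth
    8,                  // 8-bit stencil
    0,                  // no aux buffers
    PFD_MAIN_PLANE,
    0, 0, 0, 0
};

// After wglMakeCurrent(), check which renderer you actually got:
printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));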
You could use GLEW to do that kind of work for you.
Or maybe you don't want to?
EDIT:
You could maybe use glext.h then, or at least look in it and copy-paste what interests you.
I'm not an expert; I just remember having had that kind of problem and am trying to recall what was connected to it.
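If you do go with GLEW, the setup is roughly this (a sketch; glewInit() has to be called after the context is current, and afterwards the core entry points can be used directly instead of the hand-loaded glpf_* pointers):
#include <GL/glew.h>   // include before <GL/gl.h>

// ... create the window, pixel format and context as in GL_Init() ...
wglMakeCurrent(dc, g_hrc);

GLenum err = glewInit();
if (err != GLEW_OK)
{
    fprintf(stderr, "glewInit failed: %s\n", (const char*)glewGetErrorString(err));
    return;
}

// With GLEW initialized, the shader entry points are available directly:
GLuint program = glCreateProgram();
GLuint shader = glCreateShader(GL_VERTEX_SHADER);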

Visual Studio Fallback error - Programming C++

I have some code I am trying to run on my laptop, but it keeps giving a 'FALLBACK' error. I don't know what it is, but it is quite annoying. It should just print 'Hello world!', but it prints it twice and changes the colours a little bit.
The same code is running perfectly on my PC.
I've searched a long time to solve this problem, but couldn't find anything. I hope some people out here can help me?
Here is my code:
// Template, major revision 3
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
void Game::Init()
{
    // put your initialization code here; will be executed once
}

void Game::Tick( float a_DT )
{
    m_Screen->Clear( 0 );
    m_Screen->Print( "hello world", 2, 2, 0xffffff );
    m_Screen->Line( 2, 10, 66, 10, 0xffffff );
}
Thanks in advance! :-)
Edit:
It gives an error on this line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Maybe this could help?
Looking at this post from the OpenGL forums and seeing that you're using OpenGL, I may have an idea.
You say that the code works fine on your computer but not on your notebook. There you have a possible hardware difference (different video cards) and a software difference (different OpenGL version/support).
What may be happening is that the feature you want to use from OpenGL is not supported on your notebook. Also, you are creating a texture without data (the NULL in the last parameter); this will probably give you errors such as a buffer overflow.
EDIT:
You may take a look at GLEW. It has a tool called "glewinfo" that looks for all the features available on your hardware/driver. It generates a file of the same name in the same path as the executable. For non-power-of-two texture support, look for GL_ARB_texture_non_power_of_two.
EDIT 2:
As you said in the comments, without the GL_ARB_texture_non_power_of_two extension, and with the texture having a size of 640x480, glTexImage2D will give you an error, and all the code that depends on it will likely fail. To fix it, you have to stretch the dimensions of the image to the next power of two. In this case, it would become 1024x512. Remember that the data that you supply to glTexImage2D MUST have these dimensions.
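If you go the stretching route, a small helper for rounding a dimension up to the next power of two could look like this (a sketch; resampling the actual image data is still up to you):
// Sketch: round a texture dimension up to the next power of two (e.g. 640 -> 1024, 480 -> 512).
unsigned int NextPowerOfTwo(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// Usage: the buffer passed to glTexImage2D must actually have these dimensions.
unsigned int texWidth  = NextPowerOfTwo(SCRWIDTH);   // 640 -> 1024
unsigned int texHeight = NextPowerOfTwo(SCRHEIGHT);  // 480 -> 512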
Seeing that the error comes from the line:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, SCRWIDTH, SCRHEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
Here are the reasons why that function could generate GL_INVALID_VALUE. Since I can't check it for sure, you'll have to go through this list and work out which of them caused the issue.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not 1, 2, 3, 4, or one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if non-power-of-two textures are not supported and the width or height cannot be represented as 2^k + 2*border for some integer value of k.
GL_INVALID_VALUE is generated if border is not 0 or 1.
EDIT: I believe it could be the non-power-of-two texture size that's causing the problem. Rounding your texture size to the nearest power-of-two should probably fix the issue.
EDIT 2: To test which of these is causing the issue, let's start with the most common one: trying to create a texture with a non-power-of-two size. Create an image of size 256x256 and call this function with 256 for width and height. If the function still fails, I would try setting the level to 0 (keeping the power-of-two size in place).
BUT DANG, you don't have any data for your image? It's set to NULL. You need to load the image data into memory and pass it to the function to create the texture, and you aren't doing that. Read up on how to load images from a file or how to render to texture, whichever is relevant to you.
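As a quick standalone test of the power-of-two theory, something like this should succeed on any driver (a sketch; it needs <vector>, uploads a solid white 256x256 RGBA image, and then checks for errors):
// Sketch: upload a 256x256 texture with real (dummy) data and check for GL errors.
const int w = 256, h = 256;
std::vector<unsigned char> pixels(w * h * 4, 255);   // solid white RGBA

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glTexImage2D failed with error 0x%x\n", err);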
This is to give you a better answer as a fresh post. First you need this helper function to load a bmp file into memory.
unsigned int LoadTex(string Image)
{
    unsigned int Texture;
    FILE* img = NULL;
    img = fopen(Image.c_str(), "rb");

    unsigned long bWidth = 0;
    unsigned long bHeight = 0;
    DWORD size = 0;

    // Read the width and height from the BMP header.
    fseek(img, 18, SEEK_SET);
    fread(&bWidth, 4, 1, img);
    fread(&bHeight, 4, 1, img);

    // Pixel data size = file size minus the 54-byte header.
    fseek(img, 0, SEEK_END);
    size = ftell(img) - 54;
    unsigned char *data = (unsigned char*)malloc(size);
    fseek(img, 54, SEEK_SET); // image data
    fread(data, size, 1, img);
    fclose(img);

    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bWidth, bHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    if (data)
        free(data);

    return Texture;
}
Courtesy: Post by b0x in Game Deception.
Then you need to call it in your code like so:
unsigned int texture = LoadTex("example_tex.bmp");

OpenGL calls segfault when called from OpenMP thread

Let me start by trying to specify what I want to do:
Given a grey scale image, I want to create 256 layers (assuming 8bit images), where each layer is the image thresholded with a grey scale i -- which is also the i'th layer (so, i=0:255). For all of these layers I want to compute various other things which are not very relevant to my problem, but this should explain the structure of my code.
The problem is that I need to execute the code very often, so I want to speed things up as much as possible, using a short amount of time (so, simple speedup tricks only). Therefore I figured I could use the OpenMP library, as I have a quad core, and everything is CPU-based at the moment.
This brings me to the following code, which executes fine (at least, it looks fine :) ):
#pragma omp parallel for private(i,out,tmp,cc)
for(i=0; i< numLayers; i++){
    cc = new ConnectedComponents(255);
    out = (unsigned int *) malloc(in->dimX() * in->dimY() * sizeof(int));
    tmp = (*in).dupe();
    tmp->threshold((float) i);
    if(!tmp){ printf("Could not allocate enough memory\n"); exit(-1); }
    cc->connected(tmp->data(), out, tmp->dimX(), tmp->dimY(), std::equal_to<unsigned int>(), true);
    free(out);
    delete tmp;
    delete cc;
}
ConnectedComponents is just some library which implements the 2-pass floodfill, just there for illustration, it is not really part of the problem.
This code finishes fine with 2,3,4,8 threads (didn't test any other number).
So, now the weird part. I wanted to add some visual feedback, helping me to debug. The object tmp contains a method called saveAsTexture(), which basically does all the work for me, and returns the texture ID. This function works fine single threaded, and also works fine with 2 threads. However, as soon as I go beyond 2 threads, the method causes a segmentation fault.
Even with #pragma omp critical around it (just in case saveAsTexture() is not thread-safe), or executing it only once, it still crashes. This is the code I have added to the previous loop:
if(i==100){
    #pragma omp critical
    {
        tmp->saveToTexture();
    }
}
which is only executed once, since i is the iterator, and it is a critical section... Still, the code ALWAYS segfaults at the first OpenGL call (brute-force tested with printf() and fflush(stdout)).
So, just to make sure I am not leaving out relevant information, here is the saveAsTexture function:
template <class T> GLuint FIELD<T>::saveToTexture() {
    unsigned char *buf = (unsigned char*)malloc(dimX()*dimY()*3*sizeof(unsigned char));
    if(!buf){ printf("Could not allocate memory\n"); exit(-1); }

    float m,M,avg;
    minmax(m,M,avg);

    const float* d = data();
    int j=0;
    for(int i=dimY()-1; i>=0; i--) {
        for(const float *s=d+dimX()*i, *e=s+dimX(); s<e; s++) {
            float r,g,b,v = ((*s)-m)/(M-m);
            v = (v>0)?v:0;
            if (v>M) { r=g=b=1; }
            else { v = (v<1)?v:1; }
            r=g=b=v;
            buf[j++] = (unsigned char)(int)(255*r);
            buf[j++] = (unsigned char)(int)(255*g);
            buf[j++] = (unsigned char)(int)(255*b);
        }
    }

    GLuint texid;
    glPixelStorei(GL_UNPACK_ALIGNMENT,1);
    glDisable(GL_TEXTURE_3D);
    glEnable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &texid);
    printf("TextureID: %d\n", texid);
    fflush(stdout);
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dimX(), dimY(), 0, GL_RGB, GL_UNSIGNED_BYTE, buf);
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
    free(buf);
    return texid;
}
It is good to note here that T is ALWAYS a float in my program.
So, I do not understand why this program works fine when executed with 1 or 2 threads (executed ~25 times, 100% success), but segfaults when using more threads (executed ~25 times, 0% success). And ALWAYS at the first OpenGL call (e.g. if I remove glPixelStorei(), it segfaults at glDisable()).
Am I overlooking something really obvious, am I encountering a weird OpenMP bug, or... what is happening?
You can only make OpenGL calls from one thread at a time, and the thread has to have the current context active.
An OpenGL context can only be used by one thread at a time (a limitation imposed by wglMakeCurrent/glXMakeCurrent).
However, you said you're using layers. I think you can use different contexts for different layers, with the WGL_ARB_create_context extension (I think there's an equivalent for Linux too) and the WGL_CONTEXT_LAYER_PLANE_ARB parameter. Then you could have a different context per thread, and things should work out.
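Another option is to keep every OpenGL call on the one thread that owns the context: do the CPU work in the parallel loop, then upload on that thread afterwards. A rough sketch of that pattern (the helper names are hypothetical, not code from the question):
// Sketch: do the CPU work in parallel, but keep every GL call on the context thread.
std::vector<std::vector<unsigned char>> layerPixels(numLayers);

#pragma omp parallel for
for (int i = 0; i < numLayers; i++) {
    layerPixels[i] = buildLayerPixels(i);   // hypothetical CPU-only helper, no GL calls here
}

// Back on the single thread that owns the GL context:
std::vector<GLuint> layerTextures(numLayers, 0);
for (int i = 0; i < numLayers; i++) {
    layerTextures[i] = uploadTexture(layerPixels[i]);   // hypothetical wrapper around glTexImage2D etc.
}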
Thank you very much for all the answers! Now that I know why it fails, I have decided to simply store everything in a big 3D texture (because this was an even easier solution) and just send all the data to the GPU at once. That works fine in this case.
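For anyone landing here later, the 3D-texture variant boils down to something like this (a sketch, assuming all layers share the same dimensions and their RGB data is packed into one contiguous buffer; the variable names are illustrative):
// Sketch: upload all layers at once as a single 3D texture (one layer per depth slice).
GLuint tex3d = 0;
glGenTextures(1, &tex3d);
glBindTexture(GL_TEXTURE_3D, tex3d);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, width, height, numLayers,
             0, GL_RGB, GL_UNSIGNED_BYTE, allLayersPixelData);   // width*height*numLayers*3 bytes
glBindTexture(GL_TEXTURE_3D, 0);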