I have some (OpenCV) code that generates images. I'm displaying these using OpenGL. When new images are created I run the following function (each time) with the same texture name and a new image:
void loadCVTexture(GLuint& texture, const cv::Mat_<Vec3f>& image) {
    if (texture != 0) {
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image.cols, image.rows, GL_BGR, GL_FLOAT, image.data);
    } else {
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, image.cols, image.rows, 0, GL_BGR, GL_FLOAT, image.data);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I initialize the first image before glutMainLoop() and it displays correctly. It is given the id 1. When I update the image again the picture does not change. (I have confirmed that the display function is being called, and that the image is different.)
Edit: another clue: I have sub-windows. If I comment out my other window, the code works as expected.
Since it works correctly without "sub-windows", my guess would be that you have multiple OpenGL contexts in your application, and that the updating of the texture happens with the wrong context active.
Try putting the texture uploading into your display function and see if that makes a difference.
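A minimal sketch of that suggestion (names like mainWindowDisplay, newImageAvailable and latestImage are placeholders, not from the question): GLUT makes a window's context current before calling its display callback, so uploading there guarantees the right context is active.
// Display callback of the window that shows the image (sketch).
void mainWindowDisplay() {
    if (newImageAvailable) {                  // flag set wherever OpenCV produces a frame (assumption)
        loadCVTexture(texture, latestImage);  // upload happens with this window's context current
        newImageAvailable = false;
    }
    // ... draw the textured quad as before, then swap buffers ...
}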
Are you trying to show a sequence of new images rather than the existing one?
In that case you only need to update the image data, not create a new texture binding.
I'm creating a 2D Engine and I want to implement docking, so I need to create a viewport and render the screen to a texture.
To render the viewport I'm rendering the scene into a framebuffer object (FBO) and drawing as normal. I used this technique some time ago and it worked with no problems. Here is the draw code:
glBindFramebuffer(GL_FRAMEBUFFER, fbo_msaa_id);
glViewport(0, 0, width, height);
DrawRoomObjects();
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_msaa_id);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo_id);
glBlitFramebuffer(0, 0, width, height,   // src rect
                  0, 0, width, height,   // dst rect
                  GL_COLOR_BUFFER_BIT,   // buffer mask
                  GL_LINEAR);            // scale filter
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, App->moduleWindow->screen_surface->w, App->moduleWindow->screen_surface->h);
I've made sure the DrawRoomObjects() function is working correctly, and that the FBO is initialized correctly.
Here is the code to render the texture created using ImGui library:
glEnable(GL_TEXTURE_2D);
if (ImGui::Begin("Game Viewport", &visible, ImGuiWindowFlags_MenuBar)) {
    ImGui::Image(viewportTexture->GetTextureID());
}
Before this chunk I make some calculations to fit the image to the dock; I'm not using viewportTexture anywhere else in the code.
The problem is a weird artifact that appears when moving the quad, which I don't know what to call; click this link to see a gif of the bug.
It seems the texture is not clearing its data correctly...?
You have to clear the framebuffer before you render the objects to it:
glBindFramebuffer(GL_FRAMEBUFFER, fbo_msaa_id);
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT);
DrawRoomObjects();
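If the FBO also has a depth attachment (not shown in the question, so this is an assumption), clear it as well, and pick whatever clear color you want for the viewport background:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);                // viewport background color (example value)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // depth bit only needed if a depth attachment exists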
I am trying to get DirectX - OpenGL interop to work, with no success so far. In my case rendering is done in OpenGL (by the OSG library), and I would like to have the rendered image as a DirectX Texture2D. What I am trying so far:
Initialization:
ID3D11Device *dev3D;
// init dev3D with D3D11CreateDevice
ID3D11Texture2D *dxTexture2D;
// init dxTexture2D with CreateTexture2D, with D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE
HANDLE hGlDev = wglDXOpenDeviceNV(dev3D);
GLuint glTex;
glGenTextures(1, &glTex);
HANDLE hGLTx = wglDXRegisterObjectNV(hGlDev, (void*) dxTexture2D, glTex, GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);
On every frame rendered by the OSG camera I get a callback. First I call glReadBuffer(GL_FRONT), and everything seems to be OK up to that point, as I am able to read the rendered buffer into memory with glReadPixels. The problem is that I can't copy the pixels into the previously created GL_TEXTURE_2D:
BOOL lockOK = wglDXLockObjectsNV(hGlDev, 1, &hGLTx);
glBindTexture(GL_TEXTURE_2D, glTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
auto err = glGetError();
The last call to glCopyTexImage2D creates an error 0x502 (GL_INVALID_OPERATION), and I can't figure out why. Until this point everything else looks fine.
Any help is appreciated.
Found the problem. Instead of calling glCopyTexImage2D (which defines a whole new texture image), I needed to use glCopyTexSubImage2D:
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
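Putting it together, the per-frame callback would look roughly like this (a sketch under the question's assumptions: the texture storage was created on the D3D side, hGLTx is the registered interop handle, and error handling is omitted):
// Per-frame copy of the OSG-rendered front buffer into the shared D3D/GL texture (sketch).
if (wglDXLockObjectsNV(hGlDev, 1, &hGLTx)) {
    glReadBuffer(GL_FRONT);                      // source: the buffer OSG just rendered
    glBindTexture(GL_TEXTURE_2D, glTex);
    // Update the existing texture storage instead of redefining it:
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    glBindTexture(GL_TEXTURE_2D, 0);
    wglDXUnlockObjectsNV(hGlDev, 1, &hGLTx);     // hand the texture back to Direct3D
}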
I am trying to run some code that uses a raw picture as a texture. The problem is the picture won't load. Where do I need to put the picture so the program can locate it? It is currently in the project folder I am working in. I work in Code::Blocks 12.11 (Win7, MinGW).
bool setup_textures()
{
    RGBIMG img;

    // Create The Textures' Id List
    glGenTextures(TEXTURES_NUM, g_texid);

    // Load The Image From A Disk File
    if (!load_rgb_image("glass_128x128.raw", 128, 128, &img)) return false;

    // Create Nearest Filtered Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img.w, img.h, 0, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Create Linear Filtered Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img.w, img.h, 0, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Create MipMapped Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[2]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.w, img.h, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Finished With Our Image, Free The Allocated Data
    delete img.data;
    return true;
}
The problem is that the line which loads glass_128x128.raw fails, so the function returns false.
Go to:
Project -> Properties -> Build targets -> [name of target] -> Execution working dir
and make sure that's set to the same directory as glass_128x128.raw. Chances are it's running from whatever directory your debug builds get put in, which isn't the same directory as your image is in.
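If you want to double-check this from the program itself, here is a small diagnostic sketch (Windows-specific, using GetCurrentDirectoryA; the main signature is just an assumption about your setup):
#include <windows.h>
#include <cstdio>

int main(int argc, char** argv)
{
    // Diagnostic sketch: print where the program is actually running from.
    char cwd[MAX_PATH];
    GetCurrentDirectoryA(MAX_PATH, cwd);
    std::printf("Working directory: %s\n", cwd);

    // ... rest of the program (GLUT/GL setup, setup_textures(), etc.) ...
}
If the printed directory is your build/output folder rather than the project folder, either move the image there or change the execution working dir as described above.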
This question already has answers here:
How to use GLUT/OpenGL to render to a file?
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render into a variable that holds an image rather than displaying it? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel colors, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with its 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th call are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make its own load of OpenGL calls, you need to clean up your workspace now, to avoid interference with the objects that you have created.
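Before using the FBO it is also worth checking that the driver considers it complete; a small sketch of that check (not in the original answer, but standard practice) is:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // Attachments are inconsistent (size, format, missing attachment, ...);
    // rendering to this FBO would produce undefined results.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);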
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

drawStuff();

if (offscreen)
    saveToFile();
So, if offscreen is true you actually want drawStuff to interfere with fbo, because you want it to render the scene on it.
The saveToFile function is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef); // actually write the PNG to disk
    CFRelease(destRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
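If you are not on macOS, a more portable sketch is to read the pixels straight from the bound FBO with glReadPixels and hand them to whatever image writer your platform provides (the writer itself is left out here):
// Portable read-back sketch: call this while fbo is still bound.
unsigned char *pixels = (unsigned char *)malloc(width * height * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 1);                                   // rows are tightly packed
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// 'pixels' now holds the image bottom-up; flip it vertically and pass it to your image library of choice.
free(pixels);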
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
Hey, I have this code to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;

texture load_texture(std::string fname) {
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if (!tex_surf) {
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, the quad just shows the current drawing color), and when I call it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface

int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two

SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface

texture ret;
glGenTextures(1, &ret);
glBindTexture(GL_TEXTURE_2D, ret);
glTexImage2D(GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
             GL_UNSIGNED_BYTE, newSurface->pixels);
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of these macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
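For example, the call from the question would then read (all other parameters unchanged):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);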
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, Emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have dimensions that are powers of two.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture