I have two textures rendered in the same way. The green texture has the right transparency in the right places, but when I move the pink texture in front, it shows the background color where it should be transparent.
This is a snippet of the paintGL method that renders the textures.
void OpenGLWidget::paintGL()
{
// ...
for (int i = 0; i < lights.size(); i++)
{
glUseProgram(lights[i].program);
setUniform3fv(lights[i].program, "lightPosition", 1, &lights[i].position[0]);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, lights[i].texture);
lights[i].svg.setColor(toColor(lights[i].diffuse));
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, lights[i].svg.width(), lights[i].svg.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, lights[i].svg.toImage().constBits());
glGenerateMipmap(GL_TEXTURE_2D);
glBindVertexArray(lights[i].vertexArray);
glDrawElements(GL_TRIANGLES, lights[i].indices.size(), GL_UNSIGNED_BYTE, nullptr);
}
update();
}
The toImage method of the svg class generates a new QImage from the SVG file, so the texture contents should be updated on every frame.
Where am I going wrong? Thanks!
This probably happens because you have depth testing enabled. Even though parts of the texture are (partly or fully) transparent, OpenGL still writes to the depth buffer, so the pink light's quad appears to obscure the green light. It works the other way round, because the pink light is drawn first, so the green light hasn't been written to the depth buffer at that point.
The usual solution to this is to render transparent textures in back to front order.
You could also write your fragment shader to discard fragments that are transparent. But this produces artifacts on semi-transparent fragments, which you do have because of texture filtering and mipmaps.
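As an illustration, here is a minimal sketch of the back-to-front approach combined with disabled depth writes; the Light type, the distanceToCamera helper and the sort predicate are assumptions for the example, not part of the question's code:
#include <algorithm>
// Sort transparent quads back to front so blending composites correctly,
// and disable depth writes so a nearer transparent quad cannot hide a
// farther one through the depth buffer.
std::sort(lights.begin(), lights.end(),
          [](const Light &a, const Light &b) {
              // distanceToCamera is a hypothetical helper
              return distanceToCamera(a.position) > distanceToCamera(b.position);
          });
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE); // keep depth testing against opaque geometry, but skip depth writes
// ... draw the lights as in paintGL() ...
glDepthMask(GL_TRUE);
The discard alternative is a single line in the fragment shader, e.g. if (texColor.a < 0.5) discard;, where the threshold is an arbitrary choice.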
Here is a description of the problem:
I want to render some VBO shapes (rectangles, circles, etc) to an off screen framebuffer object. This could be any arbitrary shape.
Then I want to draw the result on a simple sprite surface as a texture, but not on the entire screen itself.
I can't seem to get this to work correctly.
When I run the code, I see the shapes being drawn all over the screen, but not on the sprite in the middle, which remains blank. Even though I set up the FBO with one color texture, everything still renders to the screen even when I bind the FBO to the context.
What I want to achieve is these shapes being drawn to an off-screen texture (using an FBO, obviously) and then rendered on the surface of a sprite (or a cube, or whatever) drawn somewhere on the screen. Yet whatever I draw appears on the screen itself.
The tex(tex_object_ID); function is just a shorthand wrapper for OpenGL's standard texture bind; it selects a texture into the current rendering context.
No matter what I try I get this result: The sprite is blank, but all these shapes should appear there, not on the main screen. (Didn't I bind rendering to FBO? Why is it still rendering on screen?)
I think it is just the logistics of setting up the FBO in the right order that I am missing. Can anyone tell me what's wrong with my code?
Not sure why the background is red, as I clear it after I select the FBO. It is the sprite that should get the red background & shapes drawn on it.
/*-- Initialization -- */
GLuint texture = 0;
GLuint Framebuffer = 0;
GLuint GenerateFrameBuffer(int dimension)
{
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dimension, dimension, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR);
glReadBuffer(GL_COLOR);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
return texture;
}
// Store framebuffer texture (should I store texture here or Framebuffer object?)
GLuint FramebufferHandle = GenerateFrameBuffer( 256 );
Standard OpenGL initialization code follows: memory is allocated, VBOs are created and bound, etc. This works correctly and there are no errors in initialization. I can successfully render VBOs, polygons, textured polygons, lines, etc., to the standard double-buffered framebuffer.
Next, in my render loop I do the following:
// Possible problem?
// Should FramebufferHandle be passed here?
// I tried "texture" and "Framebuffer " as well, to no effect:
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
// Mini shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramMini.use();
//Clear frame buffer with blue color
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);// | GL_DEPTH_BUFFER_BIT);
// Set yellow to draw different shapes on the framebuffer
color = {1.0f,1.0f,0.0f};
// Draw several shapes (already correctly stored in VBO objects)
Memory.select(VBO_RECTANGLES); // updates uniforms
glDrawArrays(GL_QUADS, 0, Memory.renderable[VBO_RECTANGLES].indexIndex);
Memory.select(VBO_CIRCLES); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_CIRCLES].indexIndex);
Memory.select(VBO_2D_LIGHT); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_2D_LIGHT].indexIndex);
// Done writing to framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
Model.scale(10.0);
// Select texture shader to draw what was drawn on offscreen Framebuffer / texture
// Standard texture shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramTexture.use();
// This is a wrapper for bind texture to ID, just shorthand function name
tex(texture); // FramebufferHandle; // ? // maybe the mistake in binding to the wrong target object?
color = {0.5f,0.2f,0.0f};
Memory.select(VBO_SPRITE); // Select a square VBO for rendering sprites (works if any other texture is assigned to it)
// finally draw the sprite with Framebuffer's texture:
glDrawArrays(GL_TRIANGLES, 0, Memory.renderable[VBO_SPRITE].indexIndex);
I may have gotten the order of something completely wrong. Or FramebufferHandle/Framebuffer/texture object is not passed to something correctly. But I spent all day, and hope someone more experienced than me can see the mistake.
GL_COLOR is not an accepted value for glDrawBuffer.
See the OpenGL 4.6 API Compatibility Profile Specification, section 17.4.1 Selecting Buffers for Writing, Table 17.4 and Table 17.5, page 628:
Table 17.4 (arguments to DrawBuffer when the context is bound to a default framebuffer, and the buffers they indicate; the same arguments are valid for ReadBuffer, but only a single buffer is selected): NONE, FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, BACK_RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT_AND_BACK, AUXi.
Table 17.5 (arguments to DrawBuffer(s) and ReadBuffer when the context is bound to a framebuffer object, and the buffers they indicate): COLOR_ATTACHMENTi, where i may range from zero to the value of MAX_COLOR_ATTACHMENTS minus one.
This means that glDrawBuffer(GL_COLOR); and glReadBuffer(GL_COLOR); will generate a GL_INVALID_ENUM error. Use GL_COLOR_ATTACHMENT0 instead.
Furthermore, glCheckFramebufferStatus(GL_FRAMEBUFFER) checks the completeness of the framebuffer object that is currently bound to the target.
This means that
glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
has to be done before
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Or you can use the direct state access function (OpenGL 4.5+), which takes the framebuffer object explicitly:
glCheckNamedFramebufferStatus(Framebuffer, GL_FRAMEBUFFER);
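Applied to the question's GenerateFrameBuffer, the corrected tail of the function could look like this (a sketch reusing the question's own names):
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);  // valid enum when an FBO is bound
glReadBuffer(GL_COLOR_ATTACHMENT0);
// Check completeness while this FBO is still bound
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);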
I want to use a grayscale image generated in OpenCV in a GLSL shader.
Based on the question on OpenCV image loading for OpenGL Texture, I've managed to come up with the code that passes RGB image to the shader:
cv::Mat image;
// ...acquire and process image somehow...
//create and bind a GL texture
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, // Type of texture
0, // Pyramid level (for mip-mapping) - 0 is the top level
GL_RGB, // Internal colour format to convert to
image.cols, image.rows, // texture size
0, // Border width in pixels (can either be 1 or 0)
GL_BGR, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
GL_UNSIGNED_BYTE, // Image data type
image.ptr()); // The actual image data itself
glGenerateMipmap(GL_TEXTURE_2D);
and then in the fragment shader I just use this texture:
#version 330
in vec2 tCoord;
uniform sampler2D texture;
out vec4 color;
void main() {
color = texture2D(texture, tCoord);
}
and it all works great.
But now I want to do some grayscale processing on that image, starting with cv::cvtColor(image, image, CV_BGR2GRAY);, doing some more OpenCV stuff to it, and then passing the grayscale to the shaders.
I thought I should use GL_LUMINANCE as the colour format to convert to, and probably as the input image format as well, but all I'm getting is a black screen.
Can anyone please help me with it?
input format
I'd use GL_RED, since the GL_LUMINANCE format has been deprecated
internalFormat
depends on what you want to do in your shader, but you should always specify a sized internal format, e.g. GL_RGBA8, which gives you 8 bits per channel. With GL_RGBA8 the green, blue and alpha channels would be zero anyway, since your input data only has a single channel, so you should use GL_R8 instead. You can also use texture swizzling:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
which will cause all channels to 'mirror' the red channel when you access the texture in the shader.
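Putting that together, a minimal sketch of the upload for a single-channel cv::Mat (CV_8UC1); the glPixelStorei call matters because OpenGL's default unpack alignment is 4 bytes, while a continuous CV_8UC1 Mat has rows of exactly image.cols bytes:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // grayscale rows are not necessarily 4-byte aligned
glTexImage2D(GL_TEXTURE_2D,
             0,                 // mip level
             GL_R8,             // sized single-channel internal format
             image.cols, image.rows,
             0,                 // border (must be 0)
             GL_RED,            // input format: one channel
             GL_UNSIGNED_BYTE,  // input data type
             image.ptr());
glGenerateMipmap(GL_TEXTURE_2D);
Without the swizzle mask, the shader can replicate the channel itself, e.g. color = vec4(vec3(texture2D(texture, tCoord).r), 1.0);.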
If I render a scene in openGL, is it possible to get back the texture coordinates that were used to paint that pixel?
For example, if I render a triangle that has 3 vertices (x,y,z) and 3 tex coords (u,v), and then I select a pixel on the triangle, I can get the color of the triangle and the depth using OpenGL calls, but is it possible to also get the interpolated texture coordinate?
Basically, I want to get the image point on the texture that was used to paint the triangle at a particular pixel.
I am guessing the only real way to do this is to reconstruct the ray that goes from the camera center through the pixel on the image plane, do a ray-triangle intersection to figure out which triangle it was, look up that triangle's texture coordinates in my texture array, and then do my own barycentric interpolation. But I would like to avoid having to do all that if possible.
Edit: The code I currently have didn't appear properly formatted in the bounty request below, so I've put it here. This is what I have right now, I would like to add reading texture coordinates u,v to it, ideally without a shader program if possible.
// First initialize the FBO, I am interested in depth and color
// create a framebuffer object
glGenFramebuffers(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// Create texture to store color info
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color, 0);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// Create render buffer to store depth info
glGenRenderbuffers(1, &depth);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// Attach the renderbuffer to depth attachment point
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// Then later in the code, I use the actual buffer:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
...
//draw model
...
//read color and depth values (want to also read texture coordinate values u and v here too)
...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
If you are determined to do this without using shaders, you could render your scene without lighting and using a single texture for every object. This texture would be filled with two gradients. The red channel would go from 0 to 255 horizontally and the green channel would go from 0 to 255 vertically. Now you have effectively painted the scene using the texture coordinates (assuming they are in the range 0-1). You can use glReadPixels to read back the buffer (or part of the buffer) you have just rendered to and use the red channel to retrieve u and the green channel to retrieve v.
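A minimal sketch of such a gradient texture (the 256x256 size is an arbitrary choice for the example):
#include <vector>
// Red encodes u (left to right), green encodes v (bottom to top).
const int N = 256;
std::vector<unsigned char> data(N * N * 3);
for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x)
    {
        unsigned char *p = &data[(y * N + x) * 3];
        p[0] = static_cast<unsigned char>(x); // red   = u * 255
        p[1] = static_cast<unsigned char>(y); // green = v * 255
        p[2] = 0;
    }
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, N, N, 0, GL_RGB, GL_UNSIGNED_BYTE, data.data());
After rendering every object with this texture, a glReadPixels of the pixel of interest yields roughly (u*255, v*255) in the red and green channels; precision is limited to 8 bits per coordinate.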
Render your scene to an FBO with a two-channel floating-point color attachment (GL_RG32F or similar) and output the u/v coordinates to that attachment in the fragment shader.
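A minimal fragment shader sketch for this approach, assuming the same tCoord varying as in the earlier shader (the names are illustrative):
#version 330
in vec2 tCoord;  // interpolated texture coordinate from the vertex shader
out vec2 uvOut;  // written to the GL_RG32F color attachment
void main() {
    uvOut = tCoord;
}
Reading it back with glReadPixels(x, y, 1, 1, GL_RG, GL_FLOAT, uv) then gives the exact interpolated coordinate at full floating-point precision.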
I have an FBO object with a color and depth attachment which I render to and then read from using glReadPixels() and I'm trying to add to it multisampling support.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The framebuffer object seems to have been created successfully and is reported as complete.
After rendering I'm trying to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling is disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the framebuffer is still constructed OK, but glReadPixels() fails with a GL_INVALID_OPERATION error.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
You can't read the multisample buffer directly with glReadPixels, since that raises a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but then you run into the pixel ownership test, so it is best to make another FBO. Let's assume you made another FBO and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example on the nVidia GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Copy the entire surface, but only the color buffer; no GPU filtering
//is needed since both surfaces have the same dimensions
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample