OpenGL ES can't render to texture - C++

I wrote this code:
First, I generate a texture and a depth buffer, then bind them to a framebuffer.
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
GLint max;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE,&max);
if(max<=esContext->width||max<=esContext->height)
{
printf("Too big!\n");
getchar();
}
glGenFramebuffers(1,&framebuffer1);
glGenTextures(1,&texturel);
glBindTexture(GL_TEXTURE_2D,texturel);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,esContext->width,esContext->height,0,GL_RGBA,GL_UNSIGNED_BYTE,NULL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glGenRenderbuffers(1,&depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER,GL_DEPTH_COMPONENT24,esContext->width,esContext->height);
glGenRenderbuffers(2,renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,renderbuffer[0]);
glRenderbufferStorage(GL_RENDERBUFFER,GL_RGBA8,esContext->width,esContext->height);
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,texturel,0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_RENDERBUFFER,depthbuffer);
Then I render a box to the framebuffer and try to render the texture to my screen:
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glViewport(0,0,esContext->width,esContext->height);
glClearColor(1.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->draw_box(&s);
glBindFramebuffer(GL_FRAMEBUFFER,0);
glClearColor(1.0,1.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->render(texturel,0,0,esContext->width/2,esContext->height/2);
eglSwapBuffers( esContext->eglDisplay, esContext->eglSurface );
At the end, the result looks like random data.
I have tried many ways to render to texture, even copying the code from the OpenGL ES books, but the result is still wrong.

Assuming that you're using ES 2.0, the formats you are using for your texture and renderbuffer are not valid for render targets.
In ES 2.0, the only depth format that is valid for render targets is DEPTH_COMPONENT16.
For color render targets, the only valid formats are RGBA4, RGB5_A1, and RGB565.
Therefore, to get this to work with standard ES 2.0, you can, for example, use:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
esContext->width, esContext->height, 0,
GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
...
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
esContext->width, esContext->height);
There are extensions to ES 2.0 to add support for the formats you are trying to use:
OES_rgb8_rgba8 adds support for RGB8 and RGBA8 as a color render target format.
OES_depth24 adds support for DEPTH_COMPONENT24 as a depth render target format.
But you will have to test for the presence of these extensions before attempting to use these formats.
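The ES 2.0 extension string is a single space-separated list, so a plain substring search can false-positive on names that share a prefix. Here is a minimal sketch of a correct check, assuming a current context; the helper name is my own:

```c
#include <string.h>

/* Returns 1 if `name` appears as a complete, space-delimited token in
 * `ext_list` (the string returned by glGetString(GL_EXTENSIONS)). */
static int has_gl_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;
    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        int ends_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_ok && ends_ok)
            return 1;
        p++; /* keep scanning past a partial match */
    }
    return 0;
}

/* Usage in a real program (needs a current context):
 *   const char *exts = (const char *)glGetString(GL_EXTENSIONS);
 *   if (has_gl_extension(exts, "GL_OES_depth24")) { ... use DEPTH_COMPONENT24 ... }
 */
```

With a check like this in place, you would fall back to DEPTH_COMPONENT16 and RGB565 whenever the corresponding extension is missing.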
Anytime you have problems with FBO rendering, it's always a good idea to call glCheckFramebufferStatus() to validate that the framebuffer status is valid.
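To act on that advice, it helps to turn the status code into something readable. A small sketch; the hex values are the ones assigned in the ES 2.0 headers (guarded so the sketch also compiles where the real headers are included), and the function name is my own:

```c
#include <string.h>

/* Status values as defined in the OpenGL ES 2.0 headers. */
#ifndef GL_FRAMEBUFFER_COMPLETE
#define GL_FRAMEBUFFER_COMPLETE                      0x8CD5
#define GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT         0x8CD6
#define GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT 0x8CD7
#define GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS         0x8CD9
#define GL_FRAMEBUFFER_UNSUPPORTED                   0x8CDD
#endif

static const char *fbo_status_string(unsigned status)
{
    switch (status) {
    case GL_FRAMEBUFFER_COMPLETE:                      return "complete";
    case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:         return "incomplete attachment";
    case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT: return "missing attachment";
    case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:         return "attachments have different dimensions";
    case GL_FRAMEBUFFER_UNSUPPORTED:                   return "format combination unsupported";
    default:                                           return "unknown status";
    }
}

/* Usage after attaching everything (needs a current context):
 *   GLenum s = glCheckFramebufferStatus(GL_FRAMEBUFFER);
 *   if (s != GL_FRAMEBUFFER_COMPLETE)
 *       printf("FBO not complete: %s\n", fbo_status_string(s));
 */
```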

Related

Fastest way to draw or blit rgb pixel array to window in SDL2 & OpenGL2 in C++

QUESTION:
How do I draw an rgb pixel image (array of rgb structs) to the display using SDL2 and OpenGL2 as fast & as efficiently as possible? I tried mapping to a 2d texture and blitting an SDL_Surface... I think the 2d texture is slightly faster but it uses more of my cpu...
CONTEXT:
I am making my own raytracer in C++ and I have some framework set up so that I can watch the image being raytraced in realtime. I am currently using SDL2 for the window and I am displaying my rendered image by mapping the image to a 2d texture via OpenGL. I should mention that I am using OpenGL2 for rendering because:
I am on WSL
I am using a GUI library which requires OpenGL (DearImGUI)
I am currently getting around 55fps but it is using a lot of cpu for drawing the window, which I did not expect it to. I was wondering if there is a way to display an rgb pixel array faster and reduce the amount of computation/stress on my cpu. I have a 2-core (lol) i7-5500U cpu (with integrated graphics) and I am rendering using my laptop. I am guessing that this is probably the limit of my laptop because it doesn't have a discrete gpu to help out, but still it is better to ask.
I am also a complete beginner at OpenGL so there is also a chance that there can be improvement in my code as well, so I also appreciate any feedback on my implementation.
METHOD:
So I want to detail the way I am showing the realtime rendered image, in terms of pseudocode and C++ code:
// I use this function to setup my window and opengl context - and I setup my textures here
void setup_window__and__image_texture(...args...) {
//SDL Window and OpenGL Context Creation
...
//Create an SDL_Surface from my rgb pixel array/image
SDL_Surface *gImage = SDL_CreateRGBSurfaceFrom((void*)rgb_arr, width, height, depth, pitch,
0x000000ff, 0x0000ff00, 0x00ff0000, 0);
//Generate and Bind 2D Texture from surface
GLuint tex_img;
glGenTextures(1, &tex_img);
glBindTexture(GL_TEXTURE_2D, tex_img);
glTexImage2D(GL_TEXTURE_2D, 0, 3, gImage->w, gImage->h, 0, GL_RGB, GL_UNSIGNED_BYTE, gImage->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return status;
}
// This function is used in my main loop where I update the image after tracing some more rays in the scene
void render_loop__display_image() {
glTexImage2D(GL_TEXTURE_2D, 0, 3, gImage->w, gImage->h, 0, GL_RGB, GL_UNSIGNED_BYTE, gImage->pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glColor4f( 1.0, 1.0, 1.0, 1.0 ); //Don't use special coloring
glTexCoord2f(-1.0f,-1.0f); glVertex2d(display_width*2, display_height*2);
glTexCoord2f( 1.0, -1.0f); glVertex2d(-display_width*2, display_height*2);
glTexCoord2f( 1.0, 1.0); glVertex2d(-display_width*2, -display_height*2);
glTexCoord2f(-1.0f, 1.0); glVertex2d(display_width*2, -display_height*2);
glEnd();
glDisable(GL_TEXTURE_2D);
}
I know I can blit the screen via SDL and I've tried that but that doesn't work nicely with OpenGL. I end up getting some gnarly screen tearing and flickering.
So is this the best that I can do in terms of speed/efficiency?
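For a rough sense of why the per-frame glTexImage2D call is expensive: it re-specifies the texture storage and copies the whole buffer every frame. Here is a quick sketch of the upload rate (the 1280x720 resolution is an assumption for illustration; glTexSubImage2D, shown in the comment, updates existing storage instead of reallocating it):

```c
/* Bytes pushed through glTexImage2D per second when the full RGB image
 * is re-specified every frame: width * height * 3 bytes, `fps` times/s. */
static unsigned long long upload_bytes_per_second(unsigned w, unsigned h,
                                                  unsigned fps)
{
    return (unsigned long long)w * h * 3ULL * fps;
}

/* e.g. 1280x720 at 55 fps is about 152 MB/s of uploads. In a real
 * program, allocating the texture once and then updating it in place
 * is the usual fix:
 *   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
 *                   GL_RGB, GL_UNSIGNED_BYTE, pixels);
 */
```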

draw in FrameBuffer but get only black

windows
using glew
I'm trying to render offscreen and save the image OpenGL rendered to a PNG file.
I followed a highly rated answer on Stack Overflow:
How to render offscreen on OpenGL?
But the png file I get is only a black screen.
Here's my code relating to it:
glutCreateWindow(argv[0]);
if(GLEW_OK!=glewInit())
{
return -1;
}
initScene();
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER,render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, viewport.w, viewport.h);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glClear(GL_COLOR_BUFFER_BIT); // clear the color buffer
glMatrixMode(GL_MODELVIEW); // indicate we are specifying camera transformations
glLoadIdentity(); // make sure transformation is "zero'd"
//draw...
//glBegin(GL_POINTS) glColor3f, glVertex2f
//glFlush();
glFinish();
/*glutDisplayFunc(myDisplay);
glutPostRedisplay();*/
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
savePNG(outputPNGName,0,0,viewport.w,viewport.h);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
How to solve the problem?
Thank you
savePNG (related code):
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)image);
There are at least two problems in this code:
GL_RGB8 is not a valid format for a renderbuffer. From the glRenderbufferStorage() man page:
internalformat specifies the internal format to be used for the renderbuffer object's storage and must be a color-renderable, depth-renderable, or stencil-renderable format.
Table 8.13 in the latest spec document (4.5, downloadable from https://www.opengl.org/registry) lists all formats, with a column showing which of them are color-renderable. RGB8 does not have a checkmark in that column. You can use GL_RGBA8 instead, which is color-renderable.
You may also want to check out the glCheckFramebufferStatus() function, which allows you to check if your framebuffer setup is valid.
savePNG() uses glReadPixels(), which reads data from the current read framebuffer, while your setup code only sets the draw framebuffer. Before calling savePNG(), add this call to set the read framebuffer to your FBO:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);

Draw the contents of the renderbuffer object

I don't quite understand how renderbuffer objects work. For example, if I want to display what is in the renderbuffer, do I necessarily have to render to a texture?
GLuint fbo,color_rb,depth_rb;
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT, color_rb);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depth_rb);
if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)!=GL_FRAMEBUFFER_COMPLETE_EXT)return 1;
glBindFramebuffer(GL_FRAMEBUFFER,0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
any idea?
You are not going to see anything when you draw to an FBO instead of the default framebuffer, that is part of the point of FBOs.
Your options are:
Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer)
Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / quad) if you want to see the results.
Since 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolutions of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need Multisample Anti-Aliasing, because otherwise you would have to resolve the samples in a shader, and multisample texturing was added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color textures (which require DX10 hardware) or multisample depth textures (which require DX10.1 hardware).

Is it possible to attach textures as render target to the default framebuffer?

Is it possible to attach textures as render target to the default framebuffer?
I.e.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
GLenum bufs[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTexture, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, postProcessingStuffTexture, 0);
// Draw something
Also, why does rendering to texture happen without anti-aliasing? I was pretty happy with my cheap 5xRCSAA or whatever it was.
Is it possible to attach textures as render target to the default framebuffer?
No.
Also why does rendering to texture happen without anti-aliasing?
Because antialiasing requires a multisample render target. Regular textures are not multisampled, but there are multisample textures that exist for exactly that purpose. You can create a multisample texture object using glTexStorage2DMultisample or glTexImage2DMultisample.

Rendering to framebuffer and screen

I am trying to render to an fbo and then use that texture as an input to my second render pass for post processing, but it seems that glClear and glClearColor affect the texture that has been rendered to. How can I make them only affect the display buffer?
My code looks something like this:
UseRenderingShaderProgram();
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderWorld();
// render to screen
UsePostProcessingShaderProgram();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0, 0.0, 0.0, 1.0); // <== texture appears to get cleared in this two lines.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderWorld();
glfwSwapBuffers();
If I had to make an educated guess, you've defined your texture to use mipmap minification filtering. After rendering to a texture through an FBO, only one mipmap level is defined. But unless all the selected mipmap levels have been created, the texture is incomplete and will not deliver data. The easiest solution is to disable mipmapping for this texture by setting its minification filter parameter:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Also, you must make sure that you're correctly unbinding and binding the texture attached to the FBO:
Unbind the texture before binding the FBO (it can safely stay attached to the FBO the whole time).
Unbind the FBO before binding the texture as the image source for rendering.
Adding those two changes (binding order and mipmap levels) should make your texture appear.
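The completeness rule behind this answer is mechanical: with a mipmap minification filter, a w x h texture is only complete once it has floor(log2(max(w, h))) + 1 levels. A small sketch of that count (the helper name is my own):

```c
/* Number of mipmap levels a complete chain needs for a w x h texture:
 * each level halves both dimensions (rounding down) until 1x1. */
static unsigned mip_level_count(unsigned w, unsigned h)
{
    unsigned levels = 1;
    unsigned m = (w > h) ? w : h;
    while (m > 1) {
        m >>= 1;
        levels++;
    }
    return levels;
}

/* e.g. a 256x256 render target would need 9 levels (256, 128, ..., 1),
 * which is why a single glTexImage2D level plus a mipmap filter leaves
 * the texture incomplete unless glGenerateMipmap is called after
 * rendering into it. */
```

So for a 256x256 render target you would need 9 defined levels; calling glGenerateMipmap after rendering to the FBO is the alternative to switching the filter to GL_LINEAR.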