So I was implementing an OpenCV program combined with OpenGL, and I ran into two references that disagree; I don't know which one is telling me the truth.
Wiki (the OpenGL Wiki)
API (the OpenGL ES 2.0 reference pages)
The former tells me that I can use GL_SRGB8_ALPHA8 as the internal format in this function:
void glRenderbufferStorage( GLenum target,
GLenum internalformat,
GLsizei width,
GLsizei height);
whereas the latter explains that I cannot use GL_SRGB8_ALPHA8, but only
GL_RGBA4, GL_RGB565, GL_RGB5_A1, GL_DEPTH_COMPONENT16, or GL_STENCIL_INDEX8.
Which one is showing me the right way to implement the function?
They're both right.
The OpenGL Wiki documents desktop OpenGL, not OpenGL ES 2.0, whereas OpenGL ES 2.0's documentation covers OpenGL ES 2.0, not desktop OpenGL.
Each set of documentation is correct for the API it documents.
Related
I'm using an OpenGL FBO to do offscreen rendering. The main code fragment that creates the FBO is listed below:
glGenFramebuffers(1, &fbo);
// Create new renderbuffers, clamped to the maximum supported size.
int maxSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxSize);
glGenRenderbuffers(1, &clrRbo);
glBindRenderbuffer(GL_RENDERBUFFER, clrRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, (std::min)(maxSize, m_localvp.pix_width), (std::min)(maxSize, m_localvp.pix_height));
glGenRenderbuffers(1, &stencilRbo);
glBindRenderbuffer(GL_RENDERBUFFER, stencilRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, (std::min)(maxSize, m_localvp.pix_width), (std::min)(maxSize, m_localvp.pix_height));
// Attach the renderbuffers to the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, clrRbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRbo);
// Check FBO status.
checkFramebufferStatus();
The code above works on some graphics hardware but fails on others. The error reported by glCheckFramebufferStatus() is GL_FRAMEBUFFER_UNSUPPORTED, which means the internal format of the color or stencil attachment is non-renderable.
How can I find the color- or stencil-renderable internal formats supported by the graphics hardware the code runs on? Or, how can I make my code portable across different OpenGL versions and hardware implementations?
My research:
glGetInternalformativ() is only supported in OpenGL 4.1 or higher. Can I get the same functionality in older OpenGL versions?
This says GL_STENCIL_INDEX8 is the only stencil-renderable format; is that correct? It fails to work in my code.
This doc shows all the supported format enums. What's the difference between internal formats and base formats?
glGetInternalformativ() was new functionality in OpenGL 4.2. There is no way to get the same information in earlier versions.
I'm not convinced that it would give you everything you need anyway. While it provides a ton of information about each format, which looks partly helpful for this purpose, it still does not tell you which combinations of formats are valid as FBO attachments. GL_FRAMEBUFFER_UNSUPPORTED is not based on each format individually being renderable, but on the combination of them. From the spec:
The combination of internal formats of the attached images does not violate an implementation-dependent set of restrictions.
If you used an attachment with a non-renderable format, you would instead get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
To find a combination of formats that is supported by a given implementation, your best bet is to try a few combinations that can support the functionality you need. I would try them in order of preference, until you find a combination where glCheckFramebufferStatus() succeeds.
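To make that concrete, here is a minimal sketch of such a probing loop (my own illustration, not from the original answer; the candidate list and function name are assumptions to adapt to your needs):
struct Candidate { GLenum color; GLenum stencil; GLenum attachPoint; };
/* Candidates in order of preference; many implementations reject a
   standalone GL_STENCIL_INDEX8 buffer but accept packed depth-stencil. */
const Candidate candidates[] = {
    { GL_RGBA8, GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL_ATTACHMENT },
    { GL_RGBA8, GL_STENCIL_INDEX8,   GL_STENCIL_ATTACHMENT },
    { GL_RGBA4, GL_STENCIL_INDEX8,   GL_STENCIL_ATTACHMENT },
};
bool findWorkingFormats(GLsizei w, GLsizei h, Candidate* out)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    bool found = false;
    for (const Candidate& c : candidates) {
        GLuint rbo[2];
        glGenRenderbuffers(2, rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo[0]);
        glRenderbufferStorage(GL_RENDERBUFFER, c.color, w, h);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo[0]);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo[1]);
        glRenderbufferStorage(GL_RENDERBUFFER, c.stencil, w, h);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, c.attachPoint,
                                  GL_RENDERBUFFER, rbo[1]);
        found = (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glDeleteRenderbuffers(2, rbo); /* deleting also detaches them */
        if (found) { *out = c; break; }
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    return found;
}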
I am new to OpenGL and I am still experimenting with basic shapes. I keep finding functions like glEnd, among many others, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions? Or do I have to write them manually?
Is there a tutorial online that uses OpenGL 3+?
As for " gluPerspective" I have read that it isn't used in Opengl 3+. Isn't it supposed to be a separate function in GLUT? what does it has to do with OpenGL 3+? Last, what does Transform( Width, Height ); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL).
here is the code:
GLvoid Transform(GLfloat Width, GLfloat Height)
{
glViewport(0, 0, Width, Height); /* Set the viewport */
glMatrixMode(GL_PROJECTION); /* Select the projection matrix */
glLoadIdentity(); /* Reset the projection matrix */
gluPerspective(20.0, Width/Height, 0.1, 100.0); /* Set a perspective projection using the window's aspect ratio */
glMatrixMode(GL_MODELVIEW); /* Switch back to the modelview matrix */
}
/* A general OpenGL initialization function. Sets all of the initial parameters. */
GLvoid InitGL(GLfloat Width, GLfloat Height)
{
glClearColor(0.0, 0.0, 0.0, 0.0); /* This Will Clear The Background Color To Black */
glLineWidth(2.0); /* Set the line width */
Transform( Width, Height ); /* Perform the transformation */
}
/* The function called when our window is resized */
GLvoid ReSizeGLScene(GLint Width, GLint Height)
{
if (Height==0) Height=1; /* Sanity checks */
if (Width==0) Width=1;
Transform( Width, Height ); /* Perform the transformation */
}
I sometimes find many functions like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions?
They have been completely removed, since the way they work doesn't reflect how modern graphics systems operate on either the hardware or the software side. glBegin(…) and glEnd() bracket the so-called immediate mode: every call causes an operation. This reflects how early graphics systems were built, some 20 years ago.
Today one prepares batches of data, transfers them to GPU memory, and triggers the drawing of a whole batch with a single call. OpenGL does this through vertex arrays and vertex buffer objects (VBOs). Vertex arrays have been around since OpenGL 1.1 (1996), and the VBO API is founded on vertex arrays, so for any reasonably structured program VBO support is easy to add.
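For illustration, a minimal sketch of the batched path (my own example, not from the original answer; in a core profile you would additionally need a vertex array object bound):
/* One triangle's worth of positions, uploaded once into a VBO. */
GLfloat vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
/* With a shader program in use, point attribute 0 at the buffer and
   draw the whole batch with one call, instead of one glVertex* call
   per vertex between glBegin()/glEnd(). */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);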
Or do I have to write them manually? Is there a tutorial online that uses OpenGL 3+?
It depends on the function in question. For example, the whole texture environment and the combiners have been removed, just like the matrix manipulation functions and the whole fixed-function lighting interface.
What they did and configured is now done through shaders and uniforms. Since you're expected to supply shaders, one might say you're expected to implement this yourself. On the other hand you'll quickly find that writing a shader is often easier and more concise than fiddling with large numbers of OpenGL parameter-setting calls. Also, once you've progressed far enough, you'll hardly miss the matrix manipulation functions: every serious application dealing with 3D graphics maintains its transformation matrices itself, be it for enhanced flexibility or simply because those matrices are required in other places too, e.g. in a physics simulation.
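As a small example of what replaces the fixed-function transform and lighting, here is a sketch (the names are my own, not from the answer) of a trivial shader pair driven by a single matrix uniform:
/* Vertex shader: the uMvp uniform replaces the old matrix stack. */
const char* vertexSrc =
    "#version 330 core\n"
    "uniform mat4 uMvp;\n"
    "layout(location = 0) in vec3 aPos;\n"
    "void main() { gl_Position = uMvp * vec4(aPos, 1.0); }\n";
/* Fragment shader: plain white, i.e. no fixed-function lighting. */
const char* fragmentSrc =
    "#version 330 core\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(1.0); }\n";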
As for " gluPerspective" I have read that it isn't used in Opengl 3+. Isn't it supposed to be a separate function in GLUT? what does it has to do with OpenGL 3+? Last, what does Transform( Width, Height ); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL).
gluPerspective is part of GLU. GLU is a companion library of OpenGL utility functions that used to ship with OpenGL 1.1. However, it is not part of the OpenGL specification and is completely optional.
GLUT is something else again. It's a simplistic framework for quick-and-dirty setup of an OpenGL window and context, offering a minimalistic input API. It's also no longer actively maintained. Personally, I recommend not using it. If you must use a GLUT API, use FreeGLUT. Or better yet, don't use GLUT at all; use a toolkit like Qt or GTK, or a framework like GLFW or SDL.
Were they replaced by other functions?
No.
Or do I have to write them manually?
For old-style immediate-mode geometry submission you'll have to make your own work-alike. The matrix stack has a replacement.
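One common replacement (my suggestion as illustration; nothing in OpenGL mandates it) is the GLM library, which reproduces the gluPerspective/glTranslatef-style math on the CPU:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

float width = 800.0f, height = 600.0f; /* your window size */
/* Build the matrices yourself... */
glm::mat4 projection = glm::perspective(glm::radians(20.0f),
                                        width / height, 0.1f, 100.0f);
glm::mat4 modelview  = glm::translate(glm::mat4(1.0f),
                                      glm::vec3(0.0f, 0.0f, -5.0f));
glm::mat4 mvp = projection * modelview;
/* ...and upload them as a uniform instead of calling
   glMatrixMode()/gluPerspective():
   glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]); */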
Is there a tutorial online that uses OpenGL 3+?
At least one.
I'm using cairo (http://cairographics.org) in combination with an OpenGL based 3D graphics library.
I'm currently using the 3D library on Windows, but I'm hoping to receive an answer that is platform independent.
This is all done in c++.
I've got the straightforward approach working, which is to use cairo_image_surface_create in combination with glTexImage2D to get an OpenGL texture.
However, from what I've been able to gather from the documentation, cairo_image_surface_create uses a CPU-based renderer and writes its output to main memory.
I've come to understand cairo has a new OpenGL based renderer which renders its output directly on the GPU, but I'm unable to find concrete details on how to use it.
(I've found some details on the glitz-renderer, but it seems to be deprecated and removed).
I've checked the surface list at: http://www.cairographics.org/manual/cairo-surfaces.html, but I feel I'm missing the obvious.
My question is: How do I create a cairo surface that renders directly to an OpenGL texture?
Do note: I'll need to be able to use the texture directly (without copying) to display the cairo output on screen.
As of 2015, Cairo GL SDL2 is probably the best way to use Cairo GL
https://github.com/cubicool/cairo-gl-sdl2
If you are on an OS like Ubuntu, where cairo is not compiled with GL, you will need to compile your own copy and let cairo-gl-sdl2 know where it is.
The glitz renderer has been replaced by the experimental cairo-gl backend.
You'll find a mention of it in:
http://cairographics.org/OpenGL/
Can't say if it's stable enough to be used, though.
Once you have a GL backend working, you can attach a texture to a framebuffer object and render directly into it.
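For reference, a minimal render-to-texture sketch (my own illustration; width and height are assumed to be your surface size): attach a texture as the FBO's color attachment, so whatever gets drawn lands in the texture without a copy through main memory.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL); /* allocate only, no data */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
/* Rendering now goes into 'tex'; bind it later to draw it on screen. */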
I did it using GL_BGRA.
int tex_w = cairo_image_surface_get_width(surface);
int tex_h = cairo_image_surface_get_height(surface);
unsigned char* data = cairo_image_surface_get_data(surface);
then do
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
when you create the texture.
Use glTexSubImage2D(...) to update the texture when the image content changes. For speed, set the texture's filters to GL_NEAREST.
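A sketch of that update path (my own; 'tex' is assumed to be your existing texture id):
cairo_surface_flush(surface); /* make sure cairo's pixel writes are done */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex_w, tex_h,
                GL_BGRA, GL_UNSIGNED_BYTE,
                cairo_image_surface_get_data(surface));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);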
I'm trying something like
glEnable(GL_TEXTURE_2D)
glBindTexture
glCopyTexImage2D
glDisable(GL_TEXTURE_2D);
I think glCopyTexImage2D won't work with a non-power of two image, so that's one problem; I've also tried glReadPixels, but it's too slow for my purposes.
If glReadPixels is too slow for you, then glCopyTexImage2D and glCopyTexSubImage2D aren’t going to be a whole lot faster. On platforms with support for framebuffer objects, like iOS, the recommended (i.e. faster) way to get GPU-rendered image data into a texture is to use that texture as the color attachment for a framebuffer object and render directly into it. That said, if you still want to pursue this method, here’s what you need to do to fix it:
First, you’re passing bad arguments to glCopyTexImage2D. The third argument, internalformat, should probably be GL_RGBA instead of 0. If you had called glGetError after calling glCopyTexImage2D, you would probably have gotten GL_INVALID_OPERATION. See the OpenGL ES 1.1 man pages for glCopyTexImage2D and glCopyTexSubImage2D.
Second, as you’ve already observed, glCopyTexImage2D requires its width and height arguments to be power-of-two as well. The correct way to deal with this is to allocate a texture image using glTexImage2D (you can pass NULL for pixels here), then use glCopyTexSubImage2D to copy your framebuffer contents into a rectangle. Note that glCopyTexSubImage2D doesn’t take an internalformat argument—because it’s updating a subrectangle of a texture, it uses the texture’s existing format.
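A minimal sketch of that allocate-then-copy approach (my own illustration; the size names are assumptions):
glBindTexture(GL_TEXTURE_2D, tex);
/* Allocate a power-of-two texture image; NULL means storage only. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, potWidth, potHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* Copy the framebuffer (from its lower-left corner) into a
   sub-rectangle; note there is no internalformat argument, the
   texture's existing format is reused. */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, fbWidth, fbHeight);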
For the record, glGetTexImage doesn’t exist in OpenGL ES 1.1 or 2.0, which is why you’re getting an implicit declaration.
You can check whether the video card supports non-power-of-two textures by checking whether it supports the ARB_texture_non_power_of_two extension. See here for info.
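A sketch of that check using the classic extension-string query (works in GL 1.x/2.x contexts; core GL 3+ contexts report extensions differently, via glGetStringi):
#include <string.h>

const char* ext = (const char*)glGetString(GL_EXTENSIONS);
int npotSupported =
    ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;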
glCopyTexImage2D does work with NPOT images.
NPOT (non-power-of-two) images have only limited support in OpenGL ES 2 / OpenGL 1 / WebGL; in OpenGL ES 3 / OpenGL 2 or later they are fully supported.
If you want to copy the color attachment of an FBO to newTexture:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, newTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
An NPOT texture will sample as black in the fragment shader if its mipmap, magnification filter, or repeat mode settings are wrong.
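A sketch of NPOT-safe sampler state following that note (no mipmaps, no repeat):
glBindTexture(GL_TEXTURE_2D, newTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); /* no repeat */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);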
To help figure out if the "non-power of two" thing is a problem, use glGetError() like this:
printf("error: %#06x\n", glGetError());
Put that in different places in your code to pin down which line is causing the problem, then check the error code here: https://www.khronos.org/opengl/wiki/OpenGL_Error
To copy a texture I did:
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0);
glGenerateMipmap(GL_TEXTURE_2D);
after binding the texture. Check the docs on those two functions for more info.
I'm trying to find the most efficient way to alpha blend in SDL. I don't feel like going back and rewriting rendering code to use OpenGL instead (which I've read is much more efficient with alpha blending), so I'm trying to figure out how I can get the most juice out of SDL's alpha blending.
I've read that I could benefit from using hardware surfaces, but this means I'd have to run the game in fullscreen. Can anyone comment on this? Or anything else regarding alpha transparency in SDL?
I once played with SDL hardware surfaces, and it wasn't the most pleasant experience. SDL is really easy to use, but when it comes to efficiency you should stick with something that was specifically designed for the task. OpenGL is a good choice here. You can always mix SDL (window and event management) with OpenGL (graphics) and reuse some of the code you've already written.
You can find some info on hardware surfaces here and here
I decided to just not use alpha blending for that part. Per-pixel blending is too much for software surfaces, and OpenGL is needed when you want the power of your hardware.
Use OpenGL with SDL. It's good to get to know the GL library (I hardly see a use for non-accelerated graphics these days; even GUIs use it now). SDL_image has a way to check for an alpha channel. My function that creates textures from a path to an image file (using SDL_image's IMG_Load() function) has this:
// if we successfully open a file
// do some gl stuff then
SDL_PixelFormat *format = surface->format;
int width, height;
width = pow2(surface->w);
height = pow2(surface->h);
SDL_LockSurface(surface); // Call this whenever reading pixels from a surface
/* Check for alpha channel */
if (format->Amask)
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
else
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
SDL_UnlockSurface(surface);
pow2() just rounds the number up to the next power of 2. A lot of video cards nowadays can handle non-power-of-two texture sizes, but as far as I can tell they are definitely NOT optimised for them (I tested framerates). Other video cards will just refuse to render, your app may crash, and so on.
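For reference, pow2() isn't shown in the answer; a minimal version (my own sketch) could look like this:
/* Round n up to the next power of two (returns n if it already is one). */
int pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}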
Code is here: http://www.tatsh.net/2010-06-19/using-sdlimage-and-sdlttf-opengl