I am porting a program that runs on Windows and Android to iOS.
The following code works on both platforms, but on iOS rendering stops after it executes. I suspect the framebuffer bind never gets unbound. What is the proper way of doing this?
The objective of the code is to read back the texture's pixels.
This is the code:
void Texture::Bind()
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTextureID);
}
GLubyte* Texture::GetPixels()
{
    Bind();
    int data_size = mWidth * mHeight * 4;
    GLubyte* pixels = new GLubyte[data_size];
#ifdef _WIN32
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
#else
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mTextureID, 0);
    glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
#endif
    return pixels;
}
On iOS there is no default framebuffer with id 0, so binding 0 does not restore anything. You need to rebind the framebuffer your app actually renders into. Where that id lives depends on the tooling you used, but if you are working directly with a UIView you should find code similar to the following:
- (instancetype)initWithView:(UIView *)view {
    if((self = [super init])) {
        {
            GLuint bufferID = 0;
            glGenFramebuffers(1, &bufferID);
            glBindFramebuffer(GL_FRAMEBUFFER, bufferID);
            self.frameBufferID = bufferID;
        }
        {
            GLuint bufferID = 0;
            glGenRenderbuffers(1, &bufferID);
            glBindRenderbuffer(GL_RENDERBUFFER, bufferID);
            view.layer.contentsScale = UIScreen.mainScreen.scale;
            [[EAGLContext currentContext] renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)view.layer];
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, bufferID);
            self.colorBufferID = bufferID;

            GLint width, height;
            glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
            glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
            self.bufferWidth = width;
            self.bufferHeight = height;
        }
    }
    return self;
}
You are looking for a call to renderbufferStorage:fromDrawable:. Near it a framebuffer should be created and associated with this renderbuffer; the id of that framebuffer is the one you need to rebind.
So in the snippet above you would use self.frameBufferID.
The snippet I posted is part of a project that generates the framebuffer and renderbuffer for a given UIView. It first generates the framebuffer and binds it, then creates and binds the renderbuffer. The renderbuffer storage is allocated through native iOS code from the view's layer, the renderbuffer is attached to the framebuffer, and finally the width and height are read back.
Before drawing to this object the following bind method is called:
- (void)bind {
    glBindFramebuffer(GL_FRAMEBUFFER, self.frameBufferID);
    glBindRenderbuffer(GL_RENDERBUFFER, self.colorBufferID);
}
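If reaching that framebuffer id from inside Texture::GetPixels() is awkward, another option is to query whatever framebuffer is currently bound before creating the temporary one, and restore that instead of binding 0. A minimal sketch of the non-Windows branch, keeping the member names from the question (GL_FRAMEBUFFER_BINDING is a standard OpenGL ES query):
GLint previousFBO = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFBO);   // the view's framebuffer; never 0 on iOS

GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mTextureID, 0);
glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// rebind the previously bound framebuffer rather than 0 before deleting the helper FBO
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)previousFBO);
glDeleteFramebuffers(1, &fbo);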
Related
If we want to access the previous frame's texture every frame, we can use two FBOs and do something like this:
GLuint fbos[2];
GLuint textures[2];
// attach...
for(int i = 0; i < 2; i++)
{
    // ...
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures[i], 0);
    // ...
}
// every frame
int i = frame % 2;
glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
glBindTexture(GL_TEXTURE_2D, textures[1-i]);
draw();
But we can also use only one FBO:
// every frame
GLuint framebuffer; glGenFramebuffers(1, &framebuffer);
// init...
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures[i], 0);
glBindTexture(GL_TEXTURE_2D, textures[1-i]);
draw();
glDeleteFramebuffers(1, &framebuffer);
The second method is simpler because when we create a new post-effect we don't have to create a new FBO. But is it advisable performance-wise?
TL;DR: Is it advisable to call glFramebufferTexture2D() every frame and to create an FBO every frame?
If you want the previous frame's data, then using pixel buffer objects is a very fast solution.
Create two pixel buffer objects and fetch the data like this:
glReadBuffer(GL_COLOR_ATTACHMENT0);
w_writeIndex = (w_writeIndex + 1) % 2;
w_readIndex = (w_readIndex + 1) % 2;
glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_writeIndex]);
// copy from framebuffer to PBO asynchronously. it will be ready in the NEXT frame
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// now read other PBO which should be already in CPU memory
glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_readIndex]);
unsigned char* previousFrameData = (unsigned char*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
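For completeness, a sketch of how the two PBOs might be created up front, and of unmapping after use. The names w_pbo, SCR_WIDTH and SCR_HEIGHT are taken from the snippet above; GL_STREAM_READ is just a reasonable usage hint for this read-back pattern:
// one-time setup
GLuint w_pbo[2];
glGenBuffers(2, w_pbo);
for (int i = 0; i < 2; ++i)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[i]);
    // allocate space for one RGBA frame, no initial data
    glBufferData(GL_PIXEL_PACK_BUFFER, SCR_WIDTH * SCR_HEIGHT * 4, nullptr, GL_STREAM_READ);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// ...each frame, after you are done with previousFrameData:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);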
I am trying to initialize a texture with all zeros by clearing it through a DRAW framebuffer, as suggested by this post. However, I'm quite puzzled that my DRAW framebuffer is only cleared when I attach the texture to GL_COLOR_ATTACHMENT0:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
Modifying the snippet to use GL_COLOR_ATTACHMENT1, retaining everything else, will NOT clear the framebuffer:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
I tried using glDrawBuffers instead as suggested here, and I also tried using glClearColor and glClear, but they all behave the same way. What am I missing here?
It turns out that it has to do with what I had previously bound to GL_COLOR_ATTACHMENT0.
In the second case, GL_COLOR_ATTACHMENT0 already had a smaller texture bound to it. The Framebuffer Completeness Rules note that although there is no restriction on the sizes of attached textures, the effective size of the FBO is the intersection of the sizes of all bound images. So in my second case, because the texture bound to GL_COLOR_ATTACHMENT1 is bigger than the one bound to GL_COLOR_ATTACHMENT0, it only gets cleared partially, no matter which clear operation I use (glClear or glClearBuffer*).
The first case works for me because only one texture is bound to the FBO, at GL_COLOR_ATTACHMENT0.
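One way around this, if the smaller texture is no longer needed on the FBO, is to detach it before clearing, so the effective framebuffer size is governed only by the texture being cleared. A sketch under that assumption (glClearBufferfv is used here because the GL_RGBA32F storage is floating-point):
// detach whatever was left on attachment 0 (binding texture id 0 detaches)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);

// attach the texture to be cleared and clear it
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLfloat zeros[4] = {0, 0, 0, 0};
glClearBufferfv(GL_COLOR, 0, zeros);  // draw buffer index 0 maps to GL_COLOR_ATTACHMENT1 here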
I'm trying to use EGL to do offscreen rendering to an image.
My code doesn't generate any errors: the EGL part seems to be correct and the FBO is complete. But when I read the pixels back with glReadPixels, I always get a black image (I cleared the entire scene with red, so the image should be red too).
I can't figure out what's wrong.
Also, I noticed that glRenderbufferStorage only accepts 16-bit color formats here; GL_RGBA8 is considered an invalid parameter for this function. Isn't 16 bits a bit low for a serious OpenGL application?
My environment is Ubuntu 14.10 with Mesa and Intel graphics.
#include <QCoreApplication>
#include <QDebug>
#include <QImage>
#include <GLES2/gl2.h>
#include <EGL/egl.h>
int main(int argc, char *argv[])
{
#define CONTEXT_ES20

#ifdef CONTEXT_ES20
    EGLint ai32ContextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2,
                                    EGL_NONE };
#endif

    // Step 1 - Get the default display.
    EGLDisplay eglDisplay = eglGetDisplay((EGLNativeDisplayType)0);

    // Step 2 - Initialize EGL.
    eglInitialize(eglDisplay, 0, 0);

#ifdef CONTEXT_ES20
    // Step 3 - Make OpenGL ES the current API.
    eglBindAPI(EGL_OPENGL_ES_API);

    // Step 4 - Specify the required configuration attributes.
    EGLint pi32ConfigAttribs[5];
    pi32ConfigAttribs[0] = EGL_SURFACE_TYPE;
    pi32ConfigAttribs[1] = EGL_WINDOW_BIT;
    pi32ConfigAttribs[2] = EGL_RENDERABLE_TYPE;
    pi32ConfigAttribs[3] = EGL_OPENGL_ES2_BIT;
    pi32ConfigAttribs[4] = EGL_NONE;
#else
    EGLint pi32ConfigAttribs[3];
    pi32ConfigAttribs[0] = EGL_SURFACE_TYPE;
    pi32ConfigAttribs[1] = EGL_WINDOW_BIT;
    pi32ConfigAttribs[2] = EGL_NONE;
#endif

    // Step 5 - Find a config that matches all requirements.
    int iConfigs;
    EGLConfig eglConfig;
    eglChooseConfig(eglDisplay, pi32ConfigAttribs, &eglConfig, 1,
                    &iConfigs);
    if (iConfigs != 1) {
        printf("Error: eglChooseConfig(): config not found.\n");
        exit(-1);
    }

    // Step 6 - Create a surface to draw to.
    EGLSurface eglSurface;
    eglSurface = eglCreateWindowSurface(eglDisplay, eglConfig,
                                        (EGLNativeWindowType)NULL, NULL);

    // Step 7 - Create a context.
    EGLContext eglContext;
#ifdef CONTEXT_ES20
    eglContext = eglCreateContext(eglDisplay, eglConfig, NULL,
                                  ai32ContextAttribs);
#else
    eglContext = eglCreateContext(eglDisplay, eglConfig, NULL, NULL);
#endif

    // Step 8 - Bind the context to the current thread
    eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

    GLuint fboId = 0;
    GLuint renderBufferWidth = 1280;
    GLuint renderBufferHeight = 720;

    // create a framebuffer object
    glGenFramebuffers(1, &fboId);
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);

    // create a texture object
    /* GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    //GL_LINEAR_MIPMAP_LINEAR
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_HINT, GL_TRUE); // automatic mipmap
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, renderBufferWidth, renderBufferHeight, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    // attach the texture to FBO color attachment point
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, textureId, 0);
    */

    qDebug() << glGetError();

    GLuint renderBuffer;
    glGenRenderbuffers(1, &renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    qDebug() << glGetError();
    glRenderbufferStorage(GL_RENDERBUFFER,
                          GL_RGB565,
                          renderBufferWidth,
                          renderBufferHeight);
    qDebug() << glGetError();
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER,
                              renderBuffer);
    qDebug() << glGetError();

    GLuint depthRenderbuffer;
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, renderBufferWidth, renderBufferHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

    // check FBO status
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status != GL_FRAMEBUFFER_COMPLETE) {
        printf("Problem with OpenGL framebuffer after specifying color render buffer: \n%x\n", status);
    } else {
        printf("FBO creation succedded\n");
    }

    glClearColor(1.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    qDebug() << eglSwapBuffers( eglDisplay, eglSurface);

    int size = 4 * renderBufferHeight * renderBufferWidth;
    printf("print size");
    printf("size %d", size);
    qDebug() << size;
    unsigned char *data2 = new unsigned char[size];
    glReadPixels(0,0,renderBufferWidth,renderBufferHeight,GL_RGB, GL_RGB565, data2);

    QImage image(data2, renderBufferWidth, renderBufferHeight,renderBufferWidth*2, QImage::Format_RGB16);
    image.save("result.png");
    qDebug() << "done";

    QCoreApplication a(argc, argv);
    return a.exec();
}
OpenGL ES 2.0 has a very limited number of formats/types that are supported for glReadPixels(). The ones you are trying to use are not guaranteed to be supported:
glReadPixels(0 ,0, renderBufferWidth, renderBufferHeight,
GL_RGB, GL_RGB565, data2);
Only two formats/types are supported:
GL_RGBA/GL_UNSIGNED_BYTE.
An implementation-dependent combination.
The format and type of the implementation-dependent combination can be queried with:
GLint format = 0, type = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
This can give you one of the following combinations:
GL_RGB/GL_UNSIGNED_BYTE.
GL_RGB/GL_UNSIGNED_SHORT_5_6_5.
GL_RGBA/GL_UNSIGNED_SHORT_4_4_4_4.
GL_RGBA/GL_UNSIGNED_SHORT_5_5_5_1.
GL_ALPHA/GL_UNSIGNED_BYTE.
So the combination you tried to use could be supported by an implementation, if it returns the corresponding values from the glGetIntegerv() calls above. However, even where it is supported, there is a subtle but important error in the arguments of your glReadPixels() call: GL_RGB565 is an internal format (the kind of value passed to glRenderbufferStorage()), while the 6th argument of glReadPixels() is a type. The call would have to be:
glReadPixels(0 ,0, renderBufferWidth, renderBufferHeight,
GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data2);
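Putting the two together, here is a small sketch that queries the implementation-dependent pair and falls back to the always-supported GL_RGBA/GL_UNSIGNED_BYTE combination; the buffer sizing of 2 bytes per pixel for the 565 case and 4 otherwise is an assumption for this example:
GLint readFormat = 0, readType = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);

if (readFormat != GL_RGB || readType != GL_UNSIGNED_SHORT_5_6_5) {
    // 565 reads are not supported here; use the combination every ES 2.0
    // implementation must accept
    readFormat = GL_RGBA;
    readType = GL_UNSIGNED_BYTE;
}

int bytesPerPixel = (readType == GL_UNSIGNED_SHORT_5_6_5) ? 2 : 4;
unsigned char *data2 = new unsigned char[renderBufferWidth * renderBufferHeight * bytesPerPixel];
glReadPixels(0, 0, renderBufferWidth, renderBufferHeight, readFormat, readType, data2);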
I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, so I need a way to convert.
See this document that describes OpenGL Texture Compression. The sequence of steps is roughly the following (this is hacky; using buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;

glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);

glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height);

glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, myrbo);

// If you need a Z Buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT

// render (i.e. create the data for the texture)

// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
                 0, 0, width, height, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);

// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char *)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);

glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);

// data now contains the compressed thing
If you used a PBO for the read-back, you'd be able to get away without the malloc().
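For instance, a sketch of that PBO variant; binding a GL_PIXEL_PACK_BUFFER with GL_STREAM_READ is an assumption about how you would want to stream the result, and the 0 passed to glGetCompressedTexImage() is then interpreted as an offset into the bound pack buffer:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);

// writes into the PBO instead of client memory
glGetCompressedTexImage(GL_TEXTURE_2D, 0, (void *)0);

// map it (or hand the PBO to another GL operation) when you need the bytes
void *compressed = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);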
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX-based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!
I have a very basic fragment shader from which I want to output gl_PrimitiveID into a framebuffer object (FBO) which I have defined. Below is my fragment shader:
#version 150
uniform vec4 colorConst;
out vec4 fragColor;
out uvec4 triID;
void main(void)
{
    fragColor = colorConst;
    triID.r = uint(gl_PrimitiveID);
}
I set up my FBO like this:
GLuint renderbufId0;
GLuint renderbufId1;
GLuint depthbufId;
GLuint framebufId;
// generate render and frame buffer objects
glGenRenderbuffers( 1, &renderbufId0 );
glGenRenderbuffers( 1, &renderbufId1 );
glGenRenderbuffers( 1, &depthbufId );
glGenFramebuffers ( 1, &framebufId );
// setup first renderbuffer (fragColor)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, gViewWidth, gViewHeight);
// setup second renderbuffer (triID)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId1);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB32UI, gViewWidth, gViewHeight);
// setup depth buffer
glBindRenderbuffer(GL_RENDERBUFFER, depthbufId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, gViewWidth, gViewHeight);
// setup framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, framebufId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufId0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, renderbufId1);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbufId );
// check if everything went well
GLenum stat = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(stat != GL_FRAMEBUFFER_COMPLETE) { exit(0); }
// setup color attachments
const GLenum att[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, att);
// render mesh
RenderMyMesh();
// copy second color attachment (triID) to local buffer
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
For some reason glReadPixels gives me a GL_INVALID_OPERATION error. However, if I change the internal format of renderbufId1 from GL_RGB32UI to GL_RGB and use GL_FLOAT in glReadPixels instead of GL_UNSIGNED_INT, then everything works fine. Does anyone know why I am getting the GL_INVALID_OPERATION error and how I can solve it?
Is there an alternative way of outputting 'gl_PrimitiveID'?
PS: The reason I want to output 'gl_PrimitiveID' like this is explained here: Picking triangles in OpenGL core profile when using glDrawElements
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
As stated on the OpenGL Wiki, you need to use GL_RED_INTEGER when transferring true integer data. Otherwise, OpenGL will try to use floating-point conversion on it.
BTW, make sure you're using glBindFragDataLocation to set up which buffers those fragment shader outputs go to. Alternatively, you can set it up explicitly in the shader if you're using GLSL 3.30 or above.
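A minimal sketch of both fixes, assuming the shader above and a program object named program (glBindFragDataLocation must be called before the program is linked):
// before glLinkProgram(program): route the fragment outputs to the color attachments
glBindFragDataLocation(program, 0, "fragColor"); // -> GL_COLOR_ATTACHMENT0
glBindFragDataLocation(program, 1, "triID");     // -> GL_COLOR_ATTACHMENT1

// ...

// read back the integer attachment with an *_INTEGER pixel format
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, data);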