Android NDK OpenGL ES 1.0 Simple Rendering - C++

I'm starting out with the Android NDK and OpenGL. I know I'm doing something (probably a few things) wrong here, and since I keep getting a black screen when I test, I know the rendering isn't reaching the screen.
In the Java code I have a GLSurfaceView.Renderer that calls these two native methods. They are being called correctly, but nothing is drawn to the device screen.
Could someone point me in the right direction with this?
Here are the native method implementations:
int init()
{
    sendMessage("init()");

    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    GLuint depthRenderbuffer;
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES)
        sendMessage("Failed to make complete framebuffer object");

    return 0;
}
void draw()
{
    sendMessage("draw()");

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
}
The log output is:
init()
draw()
draw()
draw()
etc..

I don't think that this is a real solution at all.
I'm having the same problem here, using framebuffer objects inside native code, and by doing
framebuffer = (GLuint) 0;
you're only using the default framebuffer, which always exists and is reserved as 0.
Technically, you could erase all of your framebuffer-related code and your app should still work properly, since framebuffer 0 always exists and is the one bound by default.
But you should also be able to generate multiple framebuffers and swap between them using the binding function (glBindFramebuffer) as you please. That doesn't seem to be working on my end, though, and I haven't found the real solution yet. There's not much documentation on the Android side, and I'm starting to wonder whether FBOs are really supported in native code. They do work properly from Java code though; I've tested that with success!
Oh, and I just noticed that your buffer dimensions are not powers of 2... that should usually be the case for all texture/buffer-like structures in OpenGL.
UPDATE:
Now I'm fairly sure you cannot use FBOs with Android (2.2 or lower) and the NDK (r5b or lower). It's a whole different game if you use the new 3.1 release, where you can write your whole app in native code (no JNI wrapper necessary), but I haven't tested that yet!
On the other hand, I've managed to make stencil buffers and textures work flawlessly!
So the workaround will be to use those for my rendering logic and just forget about FBO offscreen rendering.

I finally found the problem after MUCH tinkering.
It turns out that because I was calling the code from a GLSurfaceView.Renderer in Java, the framebuffer already existed, so by calling:
glGenFramebuffersOES(1, &framebuffer);
I was unintentionally allocating a NEW buffer that was not attached to the target display. By removing this line and replacing it with:
framebuffer = (GLuint) 0;
it now renders to the correct buffer and displays properly on the screen. Note that even though I don't really use the framebuffer variable in this snippet, changing it is what messed up the proper display.
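For completeness, here is a minimal sketch of what init() boils down to after that change. This is a sketch only: it assumes the same global framebuffer variable as in the question and relies on the surface that GLSurfaceView has already created.

int init()
{
    sendMessage("init()");

    // Reuse the window-system framebuffer that GLSurfaceView set up;
    // no renderbuffer attachments are needed when drawing straight to it.
    framebuffer = (GLuint) 0;
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    return 0;
}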

I had similar issues when moving from iOS to the Android NDK; here is my complete solution too.
OpenGLES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b

Use Qt3D offscreen-rendered texture in OpenGL

Goal
I'd like to implement an actual widget for Qt3D since QWidget::createWindowContainer just doesn't cut it for me.
Problem Description
My first approach of letting a new class subclass QWidget and QSurface was not successful since the Qt3D code either expects a QWindow or a QOffscreenSurface in multiple places and I don't want to recompile the whole Qt3D base.
My second idea was to render the Qt3D content to an offscreen surface and then draw the texture on a quad in a QOpenGLWidget. When I use a QRenderCapture framegraph node to save the image rendered to the offscreen texture, then load the image into a QOpenGLTexture and draw it in the QOpenGLWidget's paintGL function, I can see the rendered image - i.e. rendering works properly both in Qt3D and in the OpenGL widget. It is just extremely slow compared to rendering the content from Qt3D directly.
Now, when I use the GLuint returned by the QTexture2D to bind the texture during rendering of the QOpenGLWidget, everything stays black.
Of course this would make sense if the contexts of the QOpenGLWidget and Qt3D were completely separate. But by retrieving the AbstractRenderer from the QRenderAspectPrivate I was able to obtain the context that Qt3D uses. In my main.cpp I set
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
The context of the QOpenGLWidget and the context of Qt3D both reference the same shared context - I verified this by printing both using qDebug; they are the same object.
Shouldn't this allow me to use the texture from Qt3D?
Or any other suggestions on how to implement such a widget? I just thought this to be the easiest way.
Implementation Details / What I've tried so far
This is what the paintGL function in my QOpenGLWidget looks like:
glClearColor(1.0, 1.0, 1.0, 1.0);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

d->m_shaderProgram->bind();
{
    QMatrix4x4 m;
    m.ortho(0, 1, 1, 0, 1.0f, 3.0f);
    m.translate(0.0f, 0.0f, -2.0f);

    QOpenGLVertexArrayObject::Binder vaoBinder(&d->m_vao);

    d->m_shaderProgram->setUniformValue("matrix", m);
    glBindTexture(GL_TEXTURE_2D, d->m_colorTexture->handle().toUInt());
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
d->m_shaderProgram->release();
m_colorTexture is the QTexture2D that is attached to the QRenderTargetOutput/QRenderTarget that Qt3D renders the scene offscreen to.
I have a QFrameAction in place to trigger draw updates on the QOpenGLWidget:
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered, this, &Qt3DWidget::paintGL);
I have verified that this indeed calls the paintGL function. So every time I draw the QOpenGLWidget, the frame should be ready and present in the texture.
I've also tried replacing m_colorTexture with a QSharedGLTexture. Then I created this texture with the QOpenGLWidget's context, like this:
m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
m_texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
// w and h are width and height of the widget
m_texture->setSize(w, h);
// m_colorTexture is the QSharedGLTexture
m_colorTexture->setTextureId(m_texture->textureId());
In the resizeEvent function of the QOpenGLWidget I set the appropriate size on this texture and also on all of Qt3D's offscreen resources. This also shows just a black screen. Placing qDebug() << glGetError(); directly after binding the texture simply prints 0 every time, so I assume there aren't any errors.
The code can be found here in my GitHub project.
Update (10th May 2021, since I stumbled upon my answer again):
My Qt3DWidget implementation works perfectly now; the issue was that I had to call update() when the frame action was triggered instead of calling paintGL directly (duh, silly me, I actually know that).
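A sketch of what the corrected connection could look like (the member names are taken from the snippet above, and the lambda parameter matches the float argument of QFrameAction::triggered):

connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered,
        this, [this](float) { update(); });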
Although I didn't find an exact solution to my question I'll post an answer here since I succeeded in creating a Qt3D widget.
The code can be found here. It's not the cleanest solution because I think it should be possible to use the shared texture somehow. Instead, now I'm setting the QOpenGLWidget's context on Qt3D for which I have to use Qt3D's private classes. This means that Qt3D draws directly onto the frame buffer bound by the OpenGL widget. Unfortunately, now the widget has to be the render driver and performs manual updates on the QAspectEngine by calling processFrame. Ideally, I would have liked to leave all processing loops to Qt3D but at least the widget works now as it is.
Edit:
I found an example for QSharedGLTexture in the manual tests here. It works the other way round, i.e. OpenGL renders to the texture and Qt3D uses it, so I assume it should be possible to reverse the direction. Unfortunately, QSharedGLTexture seems to be a bit unstable, as resizing the OpenGL window sometimes crashes the app. That's why I'll stick with my solution for now. But if anyone has news regarding this issue, feel free to post an answer!

Reading OpenGL's default framebuffer in a windowless GLX program

I would like to perform some 3D rendering on my Debian machine and bring the result into client-side memory.
I've created a C++, GLX, and GLEW based application that does not require a window. I get a display with XOpenDisplay, use it to find a proper framebuffer configuration with glXChooseFBConfig (passing the DefaultScreen of the display), obtain visual info with glXGetVisualFromFBConfig, and pass the relevant information to glXCreateContext. I make that context current and initialize GLEW.
As a starting test, I'm simply clearing the default framebuffer with a variety of colors; I would like to now query the result pixel-by-pixel, presumably with glReadPixels.
But this is where I seem to be fundamentally misunderstanding something: What are the dimensions of the default framebuffer? I never define an initial height or a width for it, and I'm not seeing a way to do so.
Answers such as this one imply the window "defines" the dimensions. In my application, is the DefaultScreen defining the dimensions? If that's the case, what can I do to make the default framebuffer larger than a particularly small screen?
and pass the relevant information to glXCreateContext. I then initialize GLEW.
Just because you have a context does not mean that you can immediately use it. Consider what happens if you have more than one context. Before you can make OpenGL calls, you have to make a context active on the current thread, using glXMakeCurrent or glXMakeContextCurrent. If you look at those functions' signatures, you'll see that they take a Drawable as a parameter in addition to the OpenGL context. So you need one of those, too.
For windowless operation GLX offers PBuffers, which provide windowless drawables. Or you could use a window that you never map to the screen. PBuffers allow you to do offscreen rendering without the use of framebuffer objects, but the use of their main framebuffer is a bit finicky. My recommendation is to use a 0×0 sized PBuffer and a framebuffer object.
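For illustration, a minimal sketch of that setup, assuming dpy, fbConfig, and ctx are the Display, GLXFBConfig, and GLXContext already obtained as described in the question; the 1×1 size is an arbitrary placeholder since the actual rendering would go into a framebuffer object:

int pbufferAttribs[] = {
    GLX_PBUFFER_WIDTH,  1,
    GLX_PBUFFER_HEIGHT, 1,
    None
};
GLXPbuffer pbuffer = glXCreatePbuffer(dpy, fbConfig, pbufferAttribs);

// The PBuffer serves only as the drawable required by glXMakeContextCurrent;
// offscreen rendering then happens in an FBO bound in this context.
glXMakeContextCurrent(dpy, pbuffer, pbuffer, ctx);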
You need to use framebuffer objects. These use textures whose dimensions you specify yourself. For example:
// Generate framebuffer ID
GLuint fb;
glGenFramebuffers(1, &fb);
// Make framebuffer active
glBindFramebuffer(GL_FRAMEBUFFER, fb);
// Attach color and depth textures (must have same dimensions)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_tex, 0);
// Check framebuffer is valid
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
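For completeness, a sketch of how color_tex and depth_tex above might be created and how the result could be read back into client memory; the 1024×1024 size and the exact formats are placeholders, not part of the original answer:

GLuint color_tex, depth_tex;

glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

// After rendering into the FBO, pull the pixels into client memory:
unsigned char *pixels = new unsigned char[1024 * 1024 * 4];
glReadPixels(0, 0, 1024, 1024, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... use pixels, then delete[] pixels;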

OpenGL - Is the glDrawBuffers modification stored in an FBO? No?

I try to create a framebuffer with 2 textures attached to it (multiple render targets). Then at every time step, both textures are cleared and painted to, as in the following code. (Some parts are replaced with pseudocode to keep it short.)
Version 1
//beginning of the 1st time step
initialize(framebufferID12)
//^ I'm quite sure this is done correctly.
//^ Note: there is no glDrawBuffers() call here.

loop, do once every time step {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID12);
    //(#1#) a line will be added here in version 2 (see below) <------------
    glClearColor(0.5f, 0.0f, 0.5f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // paint a lot of objects here, using GLSL shaders (.frag, .vert)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
All objects are painted correctly to both textures, but only the first texture (ATTACHMENT0) is cleared every frame, which is wrong.
Version 2
I try to insert a line of code ...
glDrawBuffers({ATTACHMENT0,ATTACHMENT1}) ;
at (#1#), and it works as expected, i.e. both textures are cleared.
(image http://s13.postimg.org/66k9lr5av/gl_Draw_Buffer.jpg)
Version 3
From version 2, I move the glDrawBuffers() statement into the framebuffer initialization, like this:
initialize(int framebufferID12) {
    int nameFBO = glGenFramebuffersEXT();
    int nameTexture0 = glGenTextures();
    int nameTexture1 = glGenTextures();

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, nameFBO);

    glBindTexture(nameTexture0);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT0, nameTexture0);

    glBindTexture(nameTexture1);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT1, nameTexture1);

    glDrawBuffers({ATTACHMENT0, ATTACHMENT1}); //<--- moved here ---

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return nameFBO;
}
It no longer works (the symptom is like version 1). Why?
The OpenGL manual says that "changes to context state will be stored in this object", so the state modification from glDrawBuffers() should be stored in "framebufferID12", right? Then why do I have to call it every time step (or every time I change the FBO)?
I may be misunderstanding some OpenGL concept; someone please enlighten me.
Edit 1: Thanks, j-p. I agree that it makes sense, but shouldn't the state be recorded in the FBO already?
Edit 2 (accepted answer): Reto Koradi's answer is correct! I am using a not-so-standard library called LWJGL.
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)".
The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0. From the same spec, page 214:
For framebuffer objects, in the initial state the draw buffer for fragment color zero is COLOR_ATTACHMENT0. For both the default framebuffer and framebuffer objects, the initial state of draw buffers for fragment colors other than zero is NONE.
So it's expected that if you have more than one draw buffer, you need the explicit glDrawBuffers() call.
Now, why it doesn't seem to work for you if you make the glDrawBuffers() call as part of the FBO setup, that's somewhat mysterious. One thing I notice in your code is that you're using the EXT form of the FBO calls. I suspect that this might have something to do with your problem.
FBOs have been part of standard OpenGL since version 3.0. If there's any way for you to use OpenGL 3.0 or later, I would strongly recommend that you use the standard entry points. While the extensions normally still work even after the functionality has become standard, I would always be skeptical how they interact with other features. Particularly, there were multiple extensions for FBO functionality before 3.0, with different behavior. I wouldn't be surprised if some of them interact differently with other OpenGL calls compared to the standard FBO functionality.
So, try using the standard entry points (the ones without the EXT in their name). That will hopefully solve your problem.
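To illustrate the recommendation, here is a sketch of the "Version 3" initialization rewritten with the core (non-EXT) entry points; width and height are placeholders and the texture formats are assumptions, not part of the original question:

GLuint fbo, tex0, tex1;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &tex0);
glBindTexture(GL_TEXTURE_2D, tex0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex0, 0);

glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex1, 0);

// The draw buffers setting is per-FBO state, so setting it once here is enough.
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

glBindFramebuffer(GL_FRAMEBUFFER, 0);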

low resolution in OpenGL to mimic older games

I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL for an FPS game. I imagine the best way to do it is to render into a texture, put that onto a quad, and display it at the screen resolution.
Take a look at http://www.youtube.com/watch?v=_ELRv06sa-c, for example (great game!)
Any advice, help or sample code will be welcome.
I think the best way to do it would be, like you said, to render everything into a low-res texture (best done using FBOs) and then display that texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer to copy directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit, it seems you can just copy from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture-mapped quad, as it completely bypasses the vertex-fragment pipeline (although this would have been a simple one). Another advantage is that you also get the low-res depth and stencil buffers if needed, and can this way render high-res content on top of the low-res background, which might be an interesting effect. So it would go roughly like this:
generate FBO with low-res renderbuffers for color and depth (and stencil)
...
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
render_scene();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 1024, 768,
                  GL_COLOR_BUFFER_BIT [| GL_DEPTH_BUFFER_BIT], GL_NEAREST);
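A sketch of that first setup step ("generate FBO with low-res renderbuffers for color and depth"), assuming the 640×480 size used as the blit source above; the exact renderbuffer formats are placeholders:

GLuint lowFBO, colorRB, depthRB;
glGenFramebuffers(1, &lowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);

glGenRenderbuffers(1, &colorRB);
glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRB);

glGenRenderbuffers(1, &depthRB);
glBindRenderbuffer(GL_RENDERBUFFER, depthRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRB);

glBindFramebuffer(GL_FRAMEBUFFER, 0);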

What are the usual troubleshooting steps for OpenGL textures not showing?

After making a few changes in my application, my textures are no longer showing. So far I've checked the following:
The camera direction hasn't changed.
I can see the vectors (when colored instead of textured).
Any usual suspects?
You may want to check the following:
that glEnable(GL_TEXTURE_2D); is present,
that glBindTexture(GL_TEXTURE_2D, texture[i]); is called before drawing, and
that glBindTexture(GL_TEXTURE_2D, 0); is called when you don't need the texture anymore.
One common problem I run into from time to time is setting
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
but forgetting to supply mipmaps. Quick fix:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
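Alternatively, keep the mipmapped filter and actually supply the mipmaps. A sketch, assuming a context where glGenerateMipmap is available (core since OpenGL 3.0, otherwise via the framebuffer object extension):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
// Upload the base level with glTexImage2D first, then:
glGenerateMipmap(GL_TEXTURE_2D);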
A few more things to check:
glColorMaterial(...); To make sure colors aren't overwriting the texture
glEnable/glDisable(GL_LIGHTING); Sometimes lighting can wash out the texture
glDisable(GL_BLEND); Make sure that you're not blending the texture out
Make sure the texture coordinates are set properly.
I assume you have the must-have operations in place, like glEnable(GL_TEXTURE_2D) and the texture binding, since your textures worked fine before and then suddenly stopped showing.
If you are writing object-oriented code, you might want the texture generation to happen once the thread that actually does the drawing has been instantiated; in other words, avoid doing it in constructors or in a call coming from a constructor, as this might instantiate your texture object before the window or the app that is going to use it exists.
What I usually do is create a manual Init function for the texture creation that is called from the Init function of the App. That way I guarantee that the App exists when the binding occurs.
More info here: http://www.opengl.org/wiki/Common_Mistakes#The_Object_Oriented_Language_Problem
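A minimal sketch of that pattern (class and function names here are purely illustrative):

class TextureHolder {
public:
    // No GL calls in the constructor; the GL context may not exist yet.
    TextureHolder() : m_texture(0) {}

    // Called explicitly from the application's own Init, once the window/context is up.
    void init() {
        glGenTextures(1, &m_texture);
        glBindTexture(GL_TEXTURE_2D, m_texture);
        // ... glTexImage2D / glTexParameteri calls go here ...
    }

private:
    GLuint m_texture;
};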
Does a glColor3ub(255,255,255) before rendering your textured object help? I think the default OpenGL state multiplies the current glColor by the incoming texel; a stray glColor3ub(0,0,0) will make all your textures look black.
It took me a while to figure this out...
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_GEN_Q);
Also make sure to unbind your stuff:
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
If you use a third-party engine that is optimized, it probably has a "direct state access" layer for OpenGL (so it doesn't use the slow OpenGL query functions). If so, don't call OpenGL directly; use the engine's wrappers instead. Otherwise your code won't play nice with the rest of the engine code.