I'm trying to update a VBO which is also referenced by a VAO.
The first time I write to the VBO the data seems to be accepted and renders correctly.
(This is an abstract example of how my VBO is initialized.)
GeometryBuilder.BeginSetup(allocatedIndices, BufferUsageHint.StaticDraw, 4);
GeometryBuilder.BeginBuffer(allocatedData, BufferUsageHint.StreamDraw);
GeometryBuilder.BindBufferToAttribute(0, 3, VertexAttribPointerType.Float, false, 48, 0);
GeometryBuilder.BindBufferToAttribute(1, 2, VertexAttribPointerType.Float, false, 48, 12);
GeometryBuilder.BindBufferToAttribute(2, 4, VertexAttribPointerType.Float, false, 48, 20);
GeometryBuilder.BindBufferToAttribute(3, 3, VertexAttribPointerType.Float, false, 48, 36);
GeometryBuilder.EndBuffer();
geometry = GLEXGeometryBuilder.EndSetup();
In the GeometryBuilder.BeginBuffer method the data is allocated using BufferData and then written using BufferSubData.
As I said, it renders the data from "allocatedData" correctly.
But this is a dynamic mesh so I want to update the VBO using this:
GL.BindBuffer(BufferTarget.ArrayBuffer, ID);
// Uncommenting this will make my mesh flicker
//GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(length * sizeof(float)), IntPtr.Zero, BufferUsageHint.DynamicDraw);
GL.BufferSubData(BufferTarget.ArrayBuffer, (IntPtr)offset, (IntPtr)(length * sizeof(float)), data);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
But this doesn't even remove the original data from "allocatedData"; it still renders as if nothing happened. (Except if I uncomment GL.BufferData: then it renders the initial "allocatedData" for one frame, but on the next frame it is gone, and so on...)
The rendering code is just binding the VAO and calling DrawArrays. It works for all other meshes too.
I haven't found any solution on the internet.
Could it be a driver bug? (I have Intel HD Graphics on this PC.)
Or am I missing something else?
Edit
I solved the problem.
It was caused by uploading too few vertices to actually overwrite the current mesh.
\o_o/
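For anyone hitting the same symptom: BufferSubData only replaces the byte range you pass, so anything beyond that range keeps its old contents and is still drawn. A minimal sketch of a full-range update in plain C-style GL (the OpenTK GL.* calls above map one-to-one; the names and sizes here are hypothetical):
// The VBO was allocated once with vertexCount vertices of 48 bytes (12 floats) each.
// Overwrite every vertex that the subsequent DrawArrays call will read.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER,
                0,                                  // offset: start of the buffer
                vertexCount * 12 * sizeof(float),   // size: the whole vertex range
                newVertexData);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// If fewer vertices are valid this frame, shrink the DrawArrays count instead of
// expecting the upload to "remove" the leftover vertices.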
Related
I found myself in trouble working with OpenGL ES, in particular feeding the VBO with my data.
Here is the code causing the problem:
void Renderer::DrawCurrentData()
{
    glBufferData(GL_ARRAY_BUFFER, (currentVertexIndex) * sizeof(Vertex), vertices, GL_STREAM_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, (currentIndexIndex) * sizeof(GLushort), indices, GL_STREAM_DRAW);
    glDrawElements(GL_TRIANGLE_STRIP, currentIndexIndex, GL_UNSIGNED_SHORT, 0);

    currentVertexIndex = 0;
    currentIndexIndex = 0;
    bufVertex = &vertices[currentVertexIndex];
}
It works fine as long as I have only one draw call per frame, so glBufferData is called only once for each buffer before calling glDrawElements. But if I want to make several draw calls per frame, like
1. frame begin
2. glBufferData
3. glDrawElements
4. ... repeat steps 2 and 3
5. presentRenderbuffer
then I get a crash in glBufferData. This probably happens because the buffer is still locked by the previous draw call when I try to feed it with new data. I know I should use some synchronization here, but I'm not sure how to do it the right way.
I tried to orphan the buffer by calling glBufferData with a NULL data pointer before providing the new data, but it didn't help; in that case I got the crash when presenting the framebuffer. The question is: how do I manage feeding the buffer with new data while the old data is still in use?
I tried both GL_STREAM_DRAW and GL_DYNAMIC_DRAW; both showed the same results.
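For what it's worth, the usual streaming patterns are either to orphan the buffer (glBufferData with the same size and a NULL pointer immediately before each upload) or to rotate between two or more buffer objects, so a pending draw never shares storage with the next upload. A rough sketch of the rotating variant, reusing the variable names from the snippet above but with hypothetical buffer handles (an illustration of the idea, not a tested fix for this particular crash):
// Double-buffered streaming: each draw call fills a different VBO/IBO pair,
// so the data uploaded for draw N+1 never touches the storage draw N may still be using.
// vbos[] and ibos[] are created once with glGenBuffers during initialization.
static const int kBuffers = 2;
GLuint vbos[kBuffers], ibos[kBuffers];
int current = 0;

void Renderer::DrawCurrentData()
{
    current = (current + 1) % kBuffers;

    glBindBuffer(GL_ARRAY_BUFFER, vbos[current]);
    glBufferData(GL_ARRAY_BUFFER, currentVertexIndex * sizeof(Vertex),
                 vertices, GL_STREAM_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibos[current]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, currentIndexIndex * sizeof(GLushort),
                 indices, GL_STREAM_DRAW);

    // Re-point the vertex attributes at the newly bound VBO here (not shown),
    // since pointer state refers to whichever buffer was bound when it was set.
    glDrawElements(GL_TRIANGLE_STRIP, currentIndexIndex, GL_UNSIGNED_SHORT, 0);

    currentVertexIndex = 0;
    currentIndexIndex = 0;
    bufVertex = &vertices[currentVertexIndex];
}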
I'm interested in using a vertex shader to process a buffer without producing any rendered output. Here's the relevant snippet:
glUseProgram(program);
GLuint tfOutputBuffer;
glGenBuffers(1, &tfOutputBuffer);
glBindBuffer(GL_ARRAY_BUFFER, tfOutputBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(double)*4*3, NULL, GL_STATIC_READ);
glEnable(GL_RASTERIZER_DISCARD_EXT);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfOutputBuffer);
glBeginTransformFeedbackEXT(GL_TRIANGLES);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 4, GL_FLOAT, GL_FALSE, sizeof(double)*4, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_INT, 0);
This works fine up until the glDrawElements() call, which results in GL_INVALID_FRAMEBUFFER_OPERATION, and glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) returns GL_FRAMEBUFFER_UNDEFINED.
I presume this is because my GL context does not have a default framebuffer, and I have not bound another FBO. But, since I don't care about the rendered output and I've enabled GL_RASTERIZER_DISCARD_EXT, I thought a framebuffer shouldn't be necessary.
So, is there a way to use transform feedback without a framebuffer, or do I need to generate and bind a framebuffer even though I don't care about its contents?
This is actually perfectly valid behavior, as per the specification.
OpenGL 4.4 Core Specification - 9.4.4 Effects of Framebuffer Completeness on Framebuffer Operations
A GL_INVALID_FRAMEBUFFER_OPERATION error is generated by attempts to render to or read from a framebuffer which is not framebuffer complete. This error is generated regardless of whether fragments are actually read from or written to the framebuffer. For example, it is generated when a rendering command is called and the framebuffer is incomplete, even if GL_RASTERIZER_DISCARD is enabled.
What you need to do to work around this is create an FBO with a 1x1 pixel color attachment and bind that. You must have a complete FBO bound or you get GL_INVALID_FRAMEBUFFER_OPERATION, and one of the rules for completeness is that at least one complete image is attached.
OpenGL 4.3 actually allows you to skirt around this issue by defining an FBO with no attachments of any sort (see: GL_ARB_framebuffer_no_attachments). However, because you are using the EXT form of FBOs and Transform Feedback, I doubt you have a 4.3 implementation.
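If you do go the dummy-FBO route, a minimal sketch might look like the following (core-profile entry points; swap in the EXT variants if that is what your context exposes, and the variable names are just placeholders):
// A 1x1 color attachment is enough to make the framebuffer complete.
// With rasterizer discard enabled nothing is ever written to it.
GLuint dummyFbo, dummyColor;
glGenFramebuffers(1, &dummyFbo);
glGenRenderbuffers(1, &dummyColor);

glBindRenderbuffer(GL_RENDERBUFFER, dummyColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 1, 1);

glBindFramebuffer(GL_FRAMEBUFFER, dummyFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, dummyColor);

// Expect GL_FRAMEBUFFER_COMPLETE here before issuing the transform feedback draw.
GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
On a 4.3 context the no-attachments alternative amounts to creating an empty FBO and giving it a nominal size via glFramebufferParameteri (GL_FRAMEBUFFER_DEFAULT_WIDTH and GL_FRAMEBUFFER_DEFAULT_HEIGHT) instead of attaching anything.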
I am trying to render points from a VBO and an Element Buffer Object with glDrawRangeElements.
The VBO and EBO are instantiated like this:
glGenBuffers(1, &vertex_buffer);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);
As you can see, they do not have any "static" data.
I use glMapBuffer to populate the buffers and then I render them with glDrawRangeElements.
Problem:
Concretely, what I want to do is make a terrain with continuous LOD.
The code I use and posted mostly comes from Ranger Mk2 by Andras Balogh.
My problem is this: when I render the triangle strip, one of the three points of some triangles seems to end up somewhere it should not be.
For example,
this is what I get in wireframe mode -> http://i.stack.imgur.com/lCPqR.jpg
and this is what I get in point mode (Note the column that stretches up which is the points that are not well placed) -> http://i.stack.imgur.com/CF04p.jpg
Before you ask me to go to the post named "Rendering with glDrawRangeElements() not working properly", I wanted to let you know that I already went there.
Code:
So here is the render process:
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
and just before I do this (pre_render function):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
vertex_array = (v4f*)(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
index_array = (u32*)(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
//[...] Populate the buffers
glUnmapBuffer(GL_ARRAY_BUFFER);
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
PS: When I render the terrain like this:
glBegin(GL_TRIANGLE_STRIP);
printf("%u %u\n", va_index, ia_index);
for(u32 i = 0; i < va_index; ++i){
    //if(i <= va_index)
    glVertex4fv(&vertex_array[i].x);
}
glEnd();
strangely, it works (some of the triangles are not rendered, but that is another problem).
So my question is: how can I make glDrawRangeElements work properly?
If you need any more information please ask, I will be glad to answer.
Edit: I use Qt Creator as my IDE, with MinGW 4.8 on Windows 7. My graphics card (Nvidia) supports OpenGL 4.4.
Not sure if this is causing your problem, but I notice that you have a mixture of API calls for built-in vertex attributes and generic vertex attributes.
Calls like glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray are used for generic vertex attributes.
Calls like glVertexPointer, glEnableClientState and glDisableClientState are used for built-in vertex attributes.
You need to decide which approach you want to use, and then use a consistent set of API calls for that approach. If you use the fixed rendering pipeline, you need to use the built-in attributes. If you write your own shaders with the compatibility profile, you can use either. If you use the core profile, you need to use generic vertex attributes.
This call also looks suspicious, since it specifies a size of 3, where the rest of your code suggests that you're using positions with 4 coordinates:
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
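For illustration, a consistent fixed-function version of the buffer setup and the draw might look roughly like this (a sketch reusing the question's names; a shader-based, generic-attribute version would instead keep glVertexAttribPointer, with a size of 4, and drop the client-state calls):
// Setup: allocate the buffers once; no generic-attribute calls mixed in.
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);

// Render: built-in vertex array only, 4 floats per position, tightly packed.
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, 0);
glDisableClientState(GL_VERTEX_ARRAY);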
I'm facing an issue which I believe to be VAO-dependent, but I'm not sure.
I am not sure about the correct usage of a VAO. What I used to do during GL initialization was a simple
glGenVertexArrays(1,&vao)
followed by a
glBindVertexArray(vao)
and later, in my drawing pipeline, I just called glBindBuffer(), glVertexAttribPointer(), glEnableVertexAttribArray() and so on, without caring about the initially bound VAO.
Is this correct practice?
VAOs act similarly to VBOs and textures with regard to how they are bound. Having a single VAO bound for the entire length of your program yields no performance benefit, because you might as well be rendering without VAOs at all. In fact it may be slower, depending on how the implementation intercepts vertex attribute settings as they're being drawn.
The point of a VAO is to run all the methods necessary to draw an object once during initialization and cut out all the extra method call overhead during the main loop. The point is to have multiple VAOs and switch between them when drawing.
In terms of best practice, here's how you should organize your code:
initialization:
for each batch
generate, store, and bind a VAO
bind all the buffers needed for a draw call
unbind the VAO
main loop/whenever you render:
for each batch
bind VAO
glDrawArrays(...); or glDrawElements(...); etc.
unbind VAO
This avoids the mess of binding/unbinding buffers and passing all the settings for each vertex attribute and replaces it with just a single method call, binding a VAO.
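A bare-bones sketch of that structure in plain GL calls (the triangle data and the single attribute here are only placeholders):
// initialization: record the buffer binding and attribute layout into a VAO, once per batch
const GLfloat vertices[] = { 0,0,0,  0,1,0,  1,1,0 };
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);

glBindVertexArray(0);   // unbind so later setup code cannot disturb this VAO

// main loop: one bind and one draw per batch
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);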
No, that's not how you use a VAO.
You should use a VAO the same way you use VBOs, textures, or shaders: set it up first, and during rendering only bind it, without modifying it.
So with VAO you do following:
void Setup() {
    glGenVertexArrays(..);
    glBindVertexArray(..);

    // now set up all your VertexAttribPointers that will be bound to this VAO
    glBindBuffer(..);
    glVertexAttribPointer(..);
    glEnableVertexAttribArray(..);
}

void Render() {
    glBindVertexArray(vao);
    // that's it, now call one of glDraw... functions
    // no need to set up vertex attrib pointers and buffers!
    glDrawXYZ(..)
}
See also these links:
http://www.swiftless.com/tutorials/opengl4/4-opengl-4-vao.html
http://www.lastrayofhope.com/2011/07/30/using-vertex-array-objects/
is this a correct practice?
Yes, this is perfectly legal and valid. Is it good? Well...
There has been some informal performance testing on this sort of thing, and it seems, at least on the NVIDIA hardware where it was tested, that the "proper" use of VAOs (i.e., what everyone else advocated) is actually slower in many cases. This is especially true if changing VAOs does not change which buffers are bound.
No similar performance testing has taken place on AMD hardware, to my knowledge. In general, unless something changes with them, this is an acceptable use of VAOs.
Robert's answer above worked for me when I tried it. For what it's worth, here is the code, in Go, for using multiple Vertex Array Objects:
// VAO 1
vao1 := gl.GenVertexArray()
vao1.Bind()
vbo1 := gl.GenBuffer()
vbo1.Bind(gl.ARRAY_BUFFER)
verticies1 := []float32{0, 0, 0, 0, 1, 0, 1, 1, 0}
gl.BufferData(gl.ARRAY_BUFFER, len(verticies1)*4, verticies1, gl.STATIC_DRAW)
pa1 := program.GetAttribLocation("position")
pa1.AttribPointer(3, gl.FLOAT, false, 0, nil)
pa1.EnableArray()
defer pa1.DisableArray()
vao1.Unbind()
// VAO 2
vao2 := gl.GenVertexArray()
vao2.Bind()
vbo2 := gl.GenBuffer()
vbo2.Bind(gl.ARRAY_BUFFER)
verticies2 := []float32{-1, -1, 0, -1, 0, 0, 0, 0, 0}
gl.BufferData(gl.ARRAY_BUFFER, len(verticies2)*4, verticies2, gl.STATIC_DRAW)
pa2 := program.GetAttribLocation("position")
pa2.AttribPointer(3, gl.FLOAT, false, 0, nil)
pa2.EnableArray()
defer pa2.DisableArray()
vao2.Unbind()
Then in your main loop you can use them as such:
for !window.ShouldClose() {
    gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

    vao1.Bind()
    gl.DrawArrays(gl.TRIANGLES, 0, 3)
    vao1.Unbind()

    vao2.Bind()
    gl.DrawArrays(gl.TRIANGLES, 0, 3)
    vao2.Unbind()

    window.SwapBuffers()
    glfw.PollEvents()

    if window.GetKey(glfw.KeyEscape) == glfw.Press {
        window.SetShouldClose(true)
    }
}
If you want to see the full source, it is available as a Gist and derived from the examples in go-gl:
https://gist.github.com/mdmarek/0f73890ae2547cdba3a7
Thanks everyone for the original answers; I had the same question as ECrownofFire.
I'm starting out with the Android NDK and OpenGL. I know I'm doing something wrong here (probably a few things), and since I keep getting a black screen when I test, I know the rendering isn't being sent to the screen.
In the Java code I have a GLSurfaceView.Renderer that calls these two native methods. They are being called correctly, but nothing is drawn to the device screen.
Could someone point me in the right direction with this?
Here are the native method implementations:
int init()
{
    sendMessage("init()");

    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    GLuint depthRenderbuffer;
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, 854, 480);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if(status != GL_FRAMEBUFFER_COMPLETE_OES)
        sendMessage("Failed to make complete framebuffer object");

    return 0;
}

void draw()
{
    sendMessage("draw()");

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
}
The log output is:
init()
draw()
draw()
draw()
etc..
I don't think that this is a real solution at all.
I'm having the same problem here, using framebuffer objects inside native code, and by doing
framebuffer = (GLuint) 0;
you're only using the default framebuffer, which always exists and is reserved as 0.
Technically, you could erase all your code related to framebuffers and your app should work properly, since framebuffer 0 is always generated and is the one bound by default.
But you should be able to generate multiple framebuffers and swap between them with the binding function (glBindFramebuffer) as you please. That doesn't seem to be working on my end, though, and I haven't found the real solution yet. There's not much documentation on the Android side, and I'm starting to wonder whether FBOs are really supported in native code. They do work properly from Java code; I've tested that with success!
Oh! And I just noticed that your buffer dimensions are not a power of two... that usually should be the case for all texture/buffer-like structures in OpenGL.
UPDATE:
Now I'm fairly sure you cannot use FBOs with Android (2.2 or lower) and the NDK (r5b or lower). It is a whole different game if you use the new 3.1 release, though, where you can write all of your app in native code (no more JNI wrapper necessary), but I haven't tested that yet!
On the other hand, I've managed to make stencil buffers and textures work flawlessly!
So the workaround will be to use those for my rendering logic and just forget about FBO offscreen rendering.
I finally found the problem after MUCH tinkering.
It turns out that, because I was calling the code from a GLSurfaceView.Renderer in Java, the framebuffer already existed, so by calling:
glGenFramebuffersOES(1, &framebuffer);
I was unintentionally allocating a NEW buffer that was not attached to the target display. By removing this line and replacing it with:
framebuffer = (GLuint) 0;
It now renders to the correct buffer and displays properly on the screen. Note that even though I don't really use the buffer in this snippet, changing it is what messed up the proper display.
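For reference, with that change the draw path simply clears and draws into the framebuffer that GLSurfaceView already manages; a rough sketch of how draw() might end up, with the clear moved before the draw calls (the buffer swap is still handled on the Java side):
void draw()
{
    sendMessage("draw()");

    // Framebuffer 0 is the surface GLSurfaceView set up for us; just use it.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}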
I had similar issues when moving from iOS to the Android NDK; here is my complete solution too.
OpenGLES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b