"Empty" rendering with openGL - c++

Specs: Radeon HD 3870 w/ OpenGL 3.3 & GLSL 1.50
I am rendering data through a computational shader. Because of dependencies I had to put all my data into uniform textures, so nothing is left for attributes. The only value that changes per primitive is an index, and I can get that from gl_VertexID. But I'm having a problem setting up a proper render call: it looks like the render doesn't even run if no attribute pointers are set, and setting a pointer but no storage results in an error (of course). Setting up storage wouldn't be "empty" rendering ;). Is there any way to render with this setup?
Edit: I forgot some important things.
I render with:
glDrawArrays(GL_POINTS, 0, elements);
and the reason I think it doesn't run the shader is that a query on processed primitives returns 0. Setting some dummy attribute pointer with data results in the right number of primitives...
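For reference, the kind of query described here looks something like this (a minimal sketch; the names are illustrative):
GLuint query;
glGenQueries(1, &query);
glBeginQuery(GL_PRIMITIVES_GENERATED, query);
glDrawArrays(GL_POINTS, 0, elements);
glEndQuery(GL_PRIMITIVES_GENERATED);
GLuint processed = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &processed); // reads back 0 with no attribute arrays enabled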

I had the same issue with ATI. To bypass it I'm using any accessible buffer as a dummy 1-byte-per-vertex input array (not used by the shader).
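A minimal sketch of that workaround (the buffer name is illustrative; elements is the vertex count from the question, and the buffer's contents never matter):
GLuint dummyVBO;
glGenBuffers(1, &dummyVBO);
glBindBuffer(GL_ARRAY_BUFFER, dummyVBO);
glBufferData(GL_ARRAY_BUFFER, elements, NULL, GL_STATIC_DRAW); // 1 byte per vertex, left uninitialized
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 1, GL_UNSIGNED_BYTE, GL_FALSE, 0, (const void*)0);
glDrawArrays(GL_POINTS, 0, elements); // primitive count now comes out right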

Related

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless of whether I have 1, 2, or 3 textures enabled. This raises a number of questions:
1. Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
2. Is the "best practice" to create a separate shader program that only expects a single texture for single-texture rendering?
3. If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() + 5-6 glGetUniform... calls, but maybe #4 addresses that issue.
4. When changing the active shader program with glUseProgram(), do I need to call the glGetUniform... functions every time to re-establish the location of each uniform in the program, or is the location of each expected to stay consistent until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures; it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform, or possibly subroutines (if you have dozens of variations of the same shader).
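For instance, the uniform approach might look like this (a sketch; the GLSL is embedded as a C++ raw string, and names like numTextures are illustrative, not from the original code):
const char* fragSrc = R"glsl(
#version 150
uniform sampler2D tex0, tex1, tex2;
uniform int numTextures; // set from the application per draw call
in vec2 uv;
out vec4 finalColor;
void main() {
    vec4 c = texture(tex0, uv);
    if (numTextures > 1) c *= texture(tex1, uv);
    if (numTextures > 2) c *= texture(tex2, uv);
    finalColor = c;
}
)glsl";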
As for the time taken to disable a vertex array state, that's probably going to be slower than changing a uniform value. Setting a uniform value doesn't really affect render pipeline state; it's just a small change to memory. Likewise, constantly swapping the current GLSL program does things like invalidate the shader cache, so that's also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+, or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...).
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
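As a sketch, the locations can be cached once after linking and reused; with separate shader objects you can even update them without binding the program first (the struct and names are illustrative):
struct CachedUniforms {
    GLint numTextures, tex0, tex1, tex2;
};

CachedUniforms cacheLocations(GLuint prog)
{
    CachedUniforms u;
    u.numTextures = glGetUniformLocation(prog, "numTextures");
    u.tex0 = glGetUniformLocation(prog, "tex0");
    u.tex1 = glGetUniformLocation(prog, "tex1");
    u.tex2 = glGetUniformLocation(prog, "tex2");
    return u; // valid until the program is re-linked
}

void setTextureCount(GLuint prog, const CachedUniforms& u, GLint count)
{
    // GL 4.1 / ARB_separate_shader_objects: no glUseProgram(prog) needed first
    glProgramUniform1i(prog, u.numTextures, count);
}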

OpenGL/OpenGL ES update texture

I need to change parts of a texture, but taking the current texture data into account instead of just replacing it.
I tried to use glTexSubImage2D, but it replaces the current data without giving me the possibility to specify any kind of operation between the current data and the new data.
One solution would be to cache the texture data in memory, do the blend operation on the CPU, and call glTexSubImage2D with the result, but that would just waste memory...
Is there any function common to both desktop OpenGL and OpenGL ES 2.0 that will allow me to do this?
Of course glTexSubImage2D overwrites any previous data, and doing the operation on the CPU isn't an option at all (it won't just waste memory; even more importantly, it wastes time).
What you can do, though, is use a framebuffer object (FBO). You attach the destination texture as the color render target of the FBO and then render the new data on top of it as a textured quad. The sub-region can be adjusted by either the viewport setting or the quad's size and position.
For the actual operation you can then either use the existing OpenGL blending functionality, if that's sufficient, or use a custom fragment shader. In the latter case, though, you can't just render the new data on top of the old: you have to bind both the new and the old data as textures and render into a completely new texture, since otherwise you don't have access to the old data inside the shader.
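A rough sketch of the blending variant (the function name and parameters are illustrative; all calls used here exist in both desktop GL and GLES 2.0):
void updateSubRegion(GLuint destTex, GLint x, GLint y, GLsizei w, GLsizei h)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, destTex, 0);
    glViewport(x, y, w, h);                            // restrict drawing to the sub-region
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // or whatever operation you need
    // ... bind the new data as a texture and draw a quad covering the region ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}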

Nothing displays onscreen when attempting to render a Vertex Buffer Object via OpenGL

I am trying to switch my OpenGL application from the old fixed function system to using Vertex Buffer Objects. However, with my current setup nothing is displaying on the screen. I'm sure I'm making some simple error, but I can't see it.
gltest.h
gltest.cpp
model and index hold the IDs for my VBO and IBO respectively. The buffer objects are set up in the GLTest::makeModel method. The struct I'm using to store vertex data has 3 floats for the position, followed by 4 unsigned chars for the color.
It creates three vertices arranged in a triangle, and the index buffer simply contains the numbers 0, 1, and 2. I call the method with a QRgb object containing the color blue, so with this setup I would expect to see a blue triangle on screen. Instead, nothing is displayed.
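Presumably the layout is something like this (an illustrative reconstruction, not the asker's actual code):
struct Vertex {
    GLfloat position[3]; // x, y, z
    GLubyte color[4];    // r, g, b, a
};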
A full Qt project which shows the error is available here. You will need GLEW installed.
I never programmed with the fixed-function pipeline versions of OpenGL, but I've been doing lots of work in v3.0+, so take my advice carefully!
You seem to be mixing old and new together; for example, you don't have a vertex or fragment shader loaded. Your glEnableClientState(), glMatrixMode(), glLoadIdentity(), glVertexPointer(), and glColorPointer() calls are deprecated in modern OpenGL, having been replaced by shader functionality.
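For comparison, the generic-attribute equivalent of those pointer calls looks roughly like this, assuming the illustrative Vertex struct above and attribute locations 0 and 1 in the shader (needs <cstddef> for offsetof):
glBindBuffer(GL_ARRAY_BUFFER, model);
glEnableVertexAttribArray(0); // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)0);
glEnableVertexAttribArray(1); // color, normalized bytes -> [0,1]
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex),
                      (const void*)offsetof(Vertex, color));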
Also, whenever I get stuck with this sort of thing, I litter my GL calls with glGetError(); you only have one.
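One way to do that is a small helper macro (illustrative, not from the original answer):
#include <cstdio>
#define CHECK_GL() \
    do { \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            std::fprintf(stderr, "GL error 0x%04x at %s:%d\n", err, __FILE__, __LINE__); \
    } while (0)
Sprinkle CHECK_GL(); between suspicious calls to localize which one fails.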

OpenGL, DrawArrays without binding a VBO

I am rendering an array of points with a custom vertex shader.
The shader looks like:
void mainVP(in varying int in_vertex_id : VERTEXID)
{
    foo(in_vertex_id);
}
So the only thing I need is the vertex ID.
But I need a lot of vertices, and I don't want to store a fake VBO for them (it would take around 16 MB of memory).
I tried to run my code without binding any VBO. It works.
So my rendering looks like:
size_t num_vertices = ...
glDrawArrays(GL_POINTS, 0, num_vertices);
But can I be sure that rendering without binding VBO is safe?
But can I be sure that rendering without binding VBO is safe?
You can't.
The OpenGL specification's core profile (3.2 and above) clearly states that this is allowed: you can render with all attributes disabled. The compatibility profile, and any version before 3.2, just as clearly states that you cannot do this.
Of course, that doesn't matter anyway. NVIDIA drivers allow you to do this on any OpenGL version and profile. ATI's drivers don't allow you to do it on any OpenGL version or profile. They're both driver bugs, just in different ways.
You'll just have to accept that you need a dummy vertex attribute. However:
But I need a lot of vertices and I don't want to store fake VBO for them (it takes around 16mb of memory).
A dummy attribute would take up 4 bytes per vertex (a single float, or a 4-vector of normalized bytes; remember, you don't care about the data). So you could fit 4 million of them in 16 MB.
Alternatively, you could use instanced rendering via glDrawArraysInstanced. There, you render just one vertex, but with num_vertices instances. Your shader will have to use the instance ID instead, of course.
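A minimal sketch of that alternative (num_vertices as in the question):
glDrawArraysInstanced(GL_POINTS, 0, 1, (GLsizei)num_vertices); // one vertex, num_vertices instances
// in the shader, read gl_InstanceID where gl_VertexID was used before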

*BaseVertex and gl_VertexID

I've skimmed through the specs and the OpenGL forum but couldn't really make sense of this:
Are the *BaseVertex versions of the drawing commands supposed to add to the GLSL variable gl_VertexID? As it works now, gl_VertexID contains the index taken from the bound ELEMENT_ARRAY_BUFFER before basevertex is added to it.
So, my question is: Is this the correct behavior? I would assume that gl_VertexID should contain the index used to fetch the vertex.
Yes, this is the correct behaviour. The usage scenario for BaseVertex is that you only have to change this one value, instead of adjusting the buffer offsets into the vertex arrays with the gl*Pointer functions.
The idea is that you can load the data of multiple meshes (model files) into a single VBO without having to adjust the indices.
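For instance, two meshes packed into a shared VBO/IBO could each keep indices starting at 0 and be drawn like this (the MeshRange struct and its fields are illustrative):
struct MeshRange {
    GLsizei indexCount;       // number of indices for this mesh
    size_t  indexOffsetBytes; // byte offset into the shared index buffer
    GLint   firstVertex;      // where this mesh's vertices start in the VBO
};

void drawMesh(const MeshRange& m)
{
    // basevertex shifts the unmodified 0-based indices into this
    // mesh's slice of the shared vertex buffer
    glDrawElementsBaseVertex(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT,
                             (const void*)m.indexOffsetBytes, m.firstVertex);
}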
Your assumption is right: gl_VertexID should include the BaseVertex offset.
The OpenGL wiki on GLSL built-in variables says:
Note: gl_VertexID will have the base vertex applied to it.
And about glDrawElementsBaseVertex:
The gl_VertexID passed to the Vertex Shader will be the index after being offset by basevertex, not the index fetched from the buffer.
Unfortunately, some old drivers didn't implement it correctly.
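So per the spec, a call like the following (numbers illustrative) should give the vertex shader gl_VertexID values of 1000, 1001, and 1002 for an index buffer containing 0, 1, 2:
glDrawElementsBaseVertex(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (const void*)0, 1000);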