Do I need multiple vertex buffers for similar objects in OpenGL? - c++

For example, given two cubes with similar vertices, e.g.,
float pVerts[] =
{
    0.0, 0.0, 0.0,
    1.0, 0.0, 0.0,
    ...
};
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(...);
glBufferData(...);
Can I just cache this set of vertices out for later usage? Or, in other words, if I wanted a second cube (with the exact same vertex data), do I need to generate another vertex buffer?
And with shaders, does the same apply? Can I use the same program for drawing these cubes?

You can use the same vertex buffer to draw as many objects as you want (shaders or not). If you want to draw a second object, just change the model matrix and draw it again.
Same for shaders, you can use the same shader to draw as many objects as you want. Just bind the shader and then fire off as many draw calls as you need.
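A rough sketch of what that can look like, assuming GLM for the matrices and a shader with a "uModel" uniform (mShaderProgram, mVertexArray and the two model matrices are placeholder names, not from the question):
glUseProgram(mShaderProgram);               // one shader for both cubes
glBindVertexArray(mVertexArray);            // VAO referencing the single cube VBO
GLint modelLoc = glGetUniformLocation(mShaderProgram, "uModel");

glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(modelMatrixCube1));
glDrawArrays(GL_TRIANGLES, 0, 36);          // first cube

glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(modelMatrixCube2));
glDrawArrays(GL_TRIANGLES, 0, 36);          // second cube, same vertex data
The cube's vertex data is uploaded once; only the per-object uniform changes between the two draw calls.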

Related

Determine distance from each vertex in a GLSL fragment shader

Say I have a simple OpenGL triangle like this:
glBegin(GL_TRIANGLES);
// vertex 1
glColor3f(1, 0, 0);
glVertex3f(0.5, 0, 0);
// vertex 2
glColor3f(0, 1, 0);
glVertex3f(0, 1, 0);
// vertex 3
glColor3f(0, 0, 1);
glVertex3f(1, 1, 0);
glEnd();
In a glsl fragment shader I can use the interpolated fragment color to determine my distance from each vertex. In this example the red component of the color determines distance from the first vertex, green determines the distance from the second, and blue from the third.
Is there a way I can determine these distances in the shader without passing vertex data such as texture coordinates or colors?
Not in standard OpenGL. There are two vendor-specific extensions:
AMD_shader_explicit_vertex_parameter
NV_fragment_shader_barycentric
which will give you access to the barycentric coordinates within the primitive. But without such extensions, there are only very clumsy ways to get this data to the FS, and each will have significant drawbacks. Here are some ideas:
You could use per-vertex attributes as you already suggested, but in real meshes, it will require a lot of additional vertex splitting to get the values right.
You could use geometry shaders to generate those attribute values on the fly (a rough sketch follows below), but that will come with a huge performance hit as geometry shaders really don't perform well.
You could make your vertex data available to the FS (for example via an SSBO) and basically calculate the barycentric coordinates based on gl_FragCoord and the relevant endpoints. But this requires you to get information on which vertices were used to the FS, which might require extra data structures (i.e. some triangle- and/or vertex-indices lookup table based on gl_PrimitiveID).
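For the geometry shader route, a minimal GLSL sketch might look like this (illustrative only, given the performance caveat above; vBary is a made-up varying name):
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 vBary; // barycentric coordinate, interpolated for the fragment shader

void main()
{
    const vec3 bary[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                 vec3(0.0, 1.0, 0.0),
                                 vec3(0.0, 0.0, 1.0));
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        vBary = bary[i];
        EmitVertex();
    }
    EndPrimitive();
}
Each component of vBary then approaches 1.0 as the fragment approaches the corresponding vertex, which is the same information the per-vertex colors carried in the original example.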

OpenGL Camera Movement - Shader vs. Primitive Rendering

In my OpenGL application, I am using gluLookAt() for transforming my camera. I then have two different render functions; one uses primitive rendering (glBegin()/glEnd()) to render a triangle.
glBegin(GL_TRIANGLES);
glVertex3f(0.25, -0.25, 0.5);
glVertex3f(-0.25, -0.25, 0.5);
glVertex3f(0.25, 0.25, 0.5);
glEnd();
The second rendering function uses a shader to display the triangle with the same coordinates, drawing it with glDrawArrays(GL_TRIANGLES, 0, 3). shader.vert is shown below:
#version 430 core
void main()
{
    const vec4 verts[3] = vec4[3](vec4( 0.25, -0.25, 0.5, 1),
                                  vec4(-0.25, -0.25, 0.5, 1),
                                  vec4( 0.25,  0.25, 0.5, 1));
    gl_Position = verts[gl_VertexID];
}
Now here is my problem: if I move the camera around using the primitive rendering for the triangle, I see the triangle from different angles like one would expect. When I use the shader rendering function, the triangle remains stationary. Clearly I am missing something about world coordinates and how they relate to objects rendered with shaders. Could someone point me in the right direction?
If you do not have an active shader program, you're using what is called the "fixed pipeline". The fixed pipeline performs rendering based on numerous attributes you set with OpenGL API calls. For example, you specify what transformations you want to apply. You specify material and light attributes that control the lighting of your geometry. Applying these attributes is then handled by OpenGL.
Once you use your own shader program, you're not using the fixed pipeline anymore. This means that most of what the fixed pipeline previously handled for you has to be implemented in your shader code. Applying transformations is part of this. To apply your transformation matrix, you have to pass it into the shader, and apply it in your shader code.
The matrix is typically declared as a uniform variable in your vertex shader:
uniform mat4 ModelViewProj;
and then applied to your vertices:
gl_Position = ModelViewProj * verts[gl_VertexID];
In your code, you will then use calls like glGetUniformLocation(), glUniformMatrix4fv(), etc., to set up the matrix. Explaining this in full detail is somewhat beyond this answer, but you should be able to find it in many OpenGL tutorials online.
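A minimal sketch of the C++ side, assuming GLM for the matrix math, a linked program handle named program, and placeholder camera values (eye, center, up, aspect); none of these names come from the question:
glm::mat4 projection = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 100.0f);
glm::mat4 view       = glm::lookAt(eye, center, up);   // takes over the role of gluLookAt()
glm::mat4 mvp        = projection * view;              // add a model matrix here if needed

glUseProgram(program);
GLint loc = glGetUniformLocation(program, "ModelViewProj");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
glDrawArrays(GL_TRIANGLES, 0, 3);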
As long as you're still using legacy functionality with the Compatibility Profile, there's actually a simpler way. You should be aware that this is deprecated, and not available in the OpenGL Core Profile. The Compatibility Profile makes certain fixed function attributes available to your shader code, including the transformation matrices. So you do not have to declare anything, and can simply write:
gl_Position = gl_ModelViewProjectionMatrix * verts[gl_VertexID];

How do I draw multiple objects in OpenGL?

I want to draw two separate objects so that I can perform a query while drawing the second object. The drawing code will look something like this:
glDrawElements(GL_TRIANGLES,...); // draw first object
glBeginQuery(GL_SAMPLES_PASSED, queries[0]);
glDrawElements(GL_TRIANGLES,...); // draw second object
glEndQuery(GL_SAMPLES_PASSED);
glGetQueryObjectiv(queries[0], GL_QUERY_RESULT, &result);
return result;
Most OpenGL tutorials don't go beyond a single glDraw*() command. As I understand it from this site I need two Vertex Array Objects, but the site doesn't explain how to set the Buffer Data for the separate objects. For the sake of simplicity, let's just say I want the objects to be a single triangle each:
Triangle1:
vertex1: -0.5, 0.0, 0.0
vertex2: -0.5, 0.5, 0.0
vertex3: 0.0, 0.0, 0.0
Triangle2:
vertex1: 0.0, 0.0, 0.0
vertex2: 0.5, 0.5, 0.0
vertex3: 0.5, 0.0, 0.0
Can someone show me how to setup the Vertex Array Objects, Vertex Buffer Objects, and Element Array Buffers to perform this query in C++ and OpenGL 3.2?
Your code for drawing geometry is missing two essential steps:
creation of the GL_ARRAY_BUFFER (glGenBuffers, glBindBuffer, glBufferData)
association of the drawing state machine with the array buffer (calls to gl…Pointer functions)
It is these steps that allow drawing multiple meshes.
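A rough sketch of that setup for the two triangles above, assuming an OpenGL 3.2 core context, a shader whose position attribute is at location 0, and the queries array from the question (shader creation and error handling omitted; vao, vbo and ebo are placeholder names):
GLfloat tri1[] = { -0.5f, 0.0f, 0.0f,   -0.5f, 0.5f, 0.0f,   0.0f, 0.0f, 0.0f };
GLfloat tri2[] = {  0.0f, 0.0f, 0.0f,    0.5f, 0.5f, 0.0f,   0.5f, 0.0f, 0.0f };
GLuint indices[] = { 0, 1, 2 };                       // same index order for both

GLuint vao[2], vbo[2], ebo[2];
glGenVertexArrays(2, vao);
glGenBuffers(2, vbo);
glGenBuffers(2, ebo);

for (int i = 0; i < 2; ++i)
{
    glBindVertexArray(vao[i]);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[i]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri1), i == 0 ? tri1 : tri2, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo[i]);    // the VAO records this binding
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);                     // position attribute at location 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
}

// At draw time, switching objects is just switching VAOs.
glBindVertexArray(vao[0]);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, nullptr);   // first object

glBeginQuery(GL_SAMPLES_PASSED, queries[0]);
glBindVertexArray(vao[1]);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, nullptr);   // second object
glEndQuery(GL_SAMPLES_PASSED);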
A couple of suggestions:
You can draw one collection of triangles that aren't connected to each other and appear to be two objects visually.
You can also create two separate OpenGL contexts, one for each of the objects you want to draw. When drawing each object, make the associated context the 'current' context and make your draw calls.

Global Ambient Lighting?

Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
I call
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
where global_ambient is {0.0, 0.0, 0.0, 1.0} and I have no material parameters defined, that is, glMaterial is never called. Would the global ambient lighting still work, in the sense that I will not be able to see the polygon? Or would I need to define material parameters?
Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
If that's true, then the lighting state is completely irrelevant. Fixed-function OpenGL lighting is per-vertex. You're not sending vertices; you're sending pixel data.

How do I get the current color of a fragment?

I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment?
You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
The vertex shader receives gl_Color and gl_SecondaryColor as vertex attributes. It also has four varying output variables: gl_FrontColor, gl_FrontSecondaryColor, gl_BackColor, and gl_BackSecondaryColor, that it can write values to. If you want to pass the original colors straight through, you'd do something like:
gl_FrontColor = gl_Color;
gl_FrontSecondaryColor = gl_SecondaryColor;
gl_BackColor = gl_Color;
gl_BackSecondaryColor = gl_SecondaryColor;
Fixed functionality in the pipeline following the vertex shader will then clamp these to the range [0..1], and figure out whether the vertex is front-facing or back-facing. It will then interpolate the chosen (front or back) color like usual. The fragment shader will then receive the chosen, clamped, interpolated colors as gl_Color and gl_SecondaryColor.
For example, if you drew the standard "death triangle" like:
glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, 0.0f, -1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3d(0.0, -1.0, -1.0);
glEnd();
Then a vertex shader like this:
void main(void) {
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
with a fragment shader like this:
void main() {
    gl_FragColor = gl_Color;
}
will transmit the colors through, just like if you were using the fixed-functionality pipeline.
If you want to do multi-pass rendering, i.e. if you have rendered to the framebuffer and want to do a second render pass where you use the previous rendering, then the answer is:
Render the first pass to a texture
Bind this texture for the second pass
Access the previously rendered pixel in the shader
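A rough C++ sketch of the first two steps, render-to-texture with a framebuffer object (fbo, colorTex, width, height and program are placeholder names; framebuffer completeness checks omitted):
// Texture that will receive the first render pass.
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Framebuffer object with that texture as its color attachment.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Pass 1: render the scene into the texture.
// ... draw scene ...

// Pass 2: back to the default framebuffer, with the first pass bound for sampling.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glUniform1i(glGetUniformLocation(program, "mytex"), 0);  // matches the sampler below
// ... draw the second pass ...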
Shader code for OpenGL 3.2:
uniform sampler2D mytex; // texture with the previous render pass
layout(pixel_center_integer) in vec4 gl_FragCoord;
// will give the screen position of the current fragment

void main()
{
    // convert fragment position to integers
    ivec2 screenpos = ivec2(gl_FragCoord.xy);
    // look up result from previous render pass in the texture
    vec4 color = texelFetch(mytex, screenpos, 0);
    // now use the value from the previous render pass ...
}
Another method of processing a rendered image would be OpenCL with OpenGL -> OpenCL interop. This allows more CPU-like computation.
If what you're calling "current value of the fragment" is the pixel color value that was in the render target before your fragment shader runs, then no, it is not available.
The main reason for that is that potentially, at the time your fragment shader runs, it is not known yet. Fragment shaders run in parallel, potentially (depending on which hardware) affecting the same pixel, and a separate block, reading from some sort of FIFO, is usually responsible to merge those together later on. That merging is called "Blending", and is not part of the programmable pipeline yet. It's fixed function, but it does have a number of different ways to combine what your fragment shader generated with the previous color value of the pixel.
You need to sample the texture at the current pixel coordinates, something like this:
vec4 pixel_color = texture2D(tex, gl_TexCoord[0].xy);
Note: as far as I've seen, texture2D is deprecated in the GLSL 4.00 specification - just look for the similar texture... fetch functions.
Also, sometimes it is better to supply your own pixel coordinates instead of gl_TexCoord[0].xy - in that case, write a vertex shader something like:
varying vec2 texCoord;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    texCoord = 0.5 * gl_Position.xy + vec2(0.5);
}
And in fragment shader use that texCoord variable instead of gl_TexCoord[0].xy.
Good luck.
The entire point of your fragment shader is to decide what the output color is. How you do that depends on what you are trying to do.
You might choose to set things up so that you get an interpolated color based on the output of the vertex shader, but a more common approach would be to perform a texture lookup in the fragment shader using texture coordinates passed in from the vertex shader interpolants. You would then modify the result of your texture lookup according to your chosen lighting calculations and whatever else your shader is meant to do, and then write it into gl_FragColor.
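A minimal sketch of that pattern in legacy GLSL (diffuseTex, vTexCoord and vLighting are made-up names for illustration):
uniform sampler2D diffuseTex;
varying vec2 vTexCoord;   // texture coordinates interpolated from the vertex shader
varying vec3 vLighting;   // lighting term computed in the vertex shader

void main()
{
    vec4 base = texture2D(diffuseTex, vTexCoord);       // texture lookup
    gl_FragColor = vec4(base.rgb * vLighting, base.a);  // modulate by the lighting result
}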
The GPU pipeline has access to the underlying pixel info immediately after the shaders run. If your material is transparent, the blending stage of the pipeline will combine all fragments.
Generally objects are blended in the order that they are added to a scene, unless they have been ordered by a z-buffering algorithm. You should add your opaque objects first, then carefully add your transparent objects in the order to be blended.
For example, if you want a HUD overlay on your scene, you should just create a screen quad object with an appropriate transparent texture, and add this to your scene last.
Setting the SRC and DST blending functions for transparent objects gives you access to the previous blend in many different ways.
You can use the alpha property of your output color here to do really fancy blending. This is the most efficient way to access framebuffer outputs (pixels), since it works in a single pass (Fig. 1) of the GPU pipeline.
Fig. 1 - Single Pass
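For example, the usual 'over' blend for transparent objects is configured like this (one common choice of SRC/DST functions, not the only one):
glEnable(GL_BLEND);
// result = src.rgb * src.a + dst.rgb * (1 - src.a)
// i.e. the fragment shader output is combined with the pixel already in the framebuffer
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);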
If you really need multi pass (Fig. 2), then you must target the framebuffer outputs to an extra texture unit rather than the screen, and copy this target texture to the next pass, and so on, targeting the screen in the final pass. Each pass requires at least two context switches.
The extra copying and context switching will degrade rendering performance severely. Note that multi-threaded GPU pipelines are not much help here, since multi pass is inherently serialized.
Fig. 2 - Multi Pass
I have resorted to a verbal description with pipeline diagrams to avoid deprecation, since shader language (Slang/GLSL) is subject to change.
Some say it cannot be done, but I say this works for me:
// Toggle blending in one sense, while always disabling it in the other.
void enableColorPassing(bool enable) {
    // This will toggle blending - and what gl_FragColor is set to upon shader execution
    enable ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
    // Tells GL: "when blending, change nothing"
    glBlendFunc(GL_ONE, GL_ZERO);
}
After that call, gl_FragColor will equal the color buffer's clear color the first time the shader runs on each pixel, and the output of each run will be the new input for each successive run.
Well, at least it works for me.