Fixed pipeline to GLSL gl_Normal / vnormal / v_normal - C++

The more I read, the more confused I become. I am trying to learn how to get from the old OpenGL 1 fixed pipeline to modern GL. I have learned a lot already, but there is one thing I am still unsure about. In old tutorials it's just used as gl_Normal; in newer ones it's often referred to as vnormal or v_normal.
In older versions I didn't have to take care of that; in the fixed pipeline it seems to be provided automatically. So where do I get this, or rather, how do I calculate it? Must it be done in C++, or can it be done in the vertex or fragment shader as well from the vertex position (referred to as gl_Vertex in old tutorials)?
A sample or pseudo code would be nice.

Normals never came automatically. Even with the fixed pipeline, you had to provide them yourself.
gl_Normal was a pre-defined vertex shader attribute that was fed by glNormalPointer. In newer GL versions (those calls were deprecated in 3.0 and removed from the core profile in 3.1) all attributes have to come from glVertexAttribPointer - there are no predefined attributes, and the programmer has to bind every array to an attribute location himself.
So normal, or whatever it is called, is just a named attribute. You have to get its location (with glGetAttribLocation) and assign an array containing the vertex normals (the normals to the surface at the position of each vertex) to that location.
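For example, a minimal sketch, assuming the vertex shader declares in vec3 v_normal; and that normalVbo already holds one normal per vertex (all names here are just illustrative):

    #include <GL/glew.h>   // or any other loader that exposes GL 2.0+ entry points

    // program: a linked shader program; normalVbo: a VBO with one vec3 per vertex.
    void bindNormalAttribute(GLuint program, GLuint normalVbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
        GLint loc = glGetAttribLocation(program, "v_normal"); // name must match the shader
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE,     // 3 floats per normal,
                              0, (const void*)0);             // tightly packed, offset 0
    }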
As for calculating normals: it is trivial for flat surfaces (just the cross product of two triangle edges), but for smooth shading the normals have to be averaged across the polygons that share each vertex. This is usually done in a 3D mesh editor and simply exported to a file.
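If you do need to compute flat normals yourself, the cross-product step might look roughly like this (a sketch with a hand-rolled vector type; any math library would do):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                                 a.z*b.x - a.x*b.z,
                                                 a.x*b.y - a.y*b.x }; }
    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x/len, v.y/len, v.z/len };
    }

    // Flat normal of triangle (a, b, c), assuming counter-clockwise winding.
    Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c) {
        return normalize(cross(sub(b, a), sub(c, a)));
    }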

Related

How does gl_ClipVertex work relative to gl_ClipDistance?

I was planning on using gl_ClipDistance in my vertex shader until I realized my project is OpenGL 2.1/GLSL 1.2 which means gl_ClipDistance is not available.
gl_ClipVertex is the predecessor to gl_ClipDistance but I can find very little information about how it works and, especially relative to gl_ClipDistance, how it is used.
My goal is to clip the intersection of clipping planes without the need for multiple rendering passes. In the above referenced question, it was suggested that I use gl_ClipDistance. Examples like this one are clear to me, but I don't see how to apply it to gl_ClipVertex.
How can I use gl_ClipVertex for the same purpose?
When in doubt, you should always examine the formal GLSL specification. In particular, since gl_ClipDistance was introduced in GLSL 1.3, you know (or should assume) that there will be a discussion of the deprecation of the old technique and the implementation of the new one.
In fact, there is if you look here:
The OpenGL® Shading Language 1.3 - 7.1 Vertex Shader Special Variables - pp. 60-61
The variable gl_ClipVertex is deprecated. It is available only in the vertex language and provides a place for vertex shaders to write the coordinate to be used with the user clipping planes. The user must ensure the clip vertex and user clipping planes are defined in the same coordinate space. User clip planes work properly only under linear transform. It is undefined what happens under non-linear transform.
Further investigation of the actual types used for both should also give a major hint as to the difference between the two:
out float gl_ClipDistance[]; // may be written to
out vec4 gl_ClipVertex; // may be written to, deprecated
You will notice that gl_ClipVertex is a full-blown positional (4-component) vector, whereas gl_ClipDistance[] is simply an array of floating-point numbers. What you may not notice is that gl_ClipDistance[] is an input/output for geometry shaders and an input to fragment shaders, whereas gl_ClipVertex only exists in vertex shaders.
The clip vertex is the position used for clipping, whereas the clip distance is the distance from each clipping plane (which you are allowed to calculate yourself). The ability to specify the distance to each clipping plane arbitrarily allows for the non-linear transformations discussed above; prior to this, all you could do was set the location used to compute the distance from each clipping plane.
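For illustration, a minimal GLSL 1.3-style vertex shader that fills in gl_ClipDistance itself might look like this (the uniform names are placeholders, not built-ins):

    #version 130

    uniform mat4 modelViewMatrix;      // placeholder names, supplied by the application
    uniform mat4 projectionMatrix;
    uniform vec4 clipPlane;            // plane equation in eye space: (a, b, c, d)

    in vec4 position;

    void main()
    {
        vec4 eyePos = modelViewMatrix * position;
        gl_Position = projectionMatrix * eyePos;

        // Signed distance of the eye-space position to the plane;
        // anything that ends up with a negative interpolated distance is clipped.
        gl_ClipDistance[0] = dot(eyePos, clipPlane);
    }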
To put this all in perspective:
The calculation of clipping from the clip vertex used to occur as part of the fixed-function pipeline between vertex transformation and fragment shading. When GLSL 1.3 was introduced, Shader Model 4.0 had already been formally defined by DX10 for a number of years, which exposed programmable primitive assembly and logically more flexible computation of clipping. We did not get geometry shaders until GLSL 1.5, but many other parts of Shader Model 4.0 were gradually introduced between 1.3 and 1.5.

Use triangle normals in OpenGL to get vertex normals

I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices so the fixed-function stuff is unusable
I can't necessarily speak as to the latest full-fat OpenGL specifications but certainly in ES you're going to have to do the work yourself.
Although the normal was modal under the old fixed pipeline, like just about everything else, it was still attached to each vertex. If you opted for the flat shading model, GL would use the colour at the first vertex on the face across the entire thing rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are, at best, per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the whole geometry and could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
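So in practice you accumulate the face normals into per-vertex normals on the CPU before filling your VAO; a rough sketch (types and names are illustrative):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    // indices: 3 entries per triangle; faceNormals: 1 entry per triangle.
    std::vector<Vec3> averageNormals(const std::vector<unsigned>& indices,
                                     const std::vector<Vec3>& faceNormals,
                                     std::size_t vertexCount)
    {
        std::vector<Vec3> vertexNormals(vertexCount);  // zero-initialised accumulators

        for (std::size_t tri = 0; tri < indices.size() / 3; ++tri) {
            const Vec3& n = faceNormals[tri];
            for (int corner = 0; corner < 3; ++corner) {
                Vec3& v = vertexNormals[indices[3 * tri + corner]];
                v.x += n.x; v.y += n.y; v.z += n.z;    // accumulate adjacent face normals
            }
        }

        for (Vec3& v : vertexNormals) {                // renormalise the averages
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
        }
        return vertexNormals;
    }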

Fixed-Function Vs. Shaders - help understand the conceptual differences

My background: I first started experimenting with OpenGL some months ago, for no particular purpose, just fun. I started reading the OpenGL Red Book and got as far as making a planetary system with a lot of different lighting. That lasted for a month, and my interest in OpenGL went away. It awoke again a week or so ago, and as I gathered from some SO posts, the Red Book is outdated and the OpenGL SuperBible is a better source for learning. So I started reading it. I like the concept of shaders, but there's a real mess going on in my brain because of the transition from my old memories of the fixed pipeline to the new concept of shaders.
Question: I would like to write some statements which I think are true and I am asking OpenGL experts to verify them (i.e. whether I am understanding correctly, not quite correctly or absolutely incorrectly). So...
1) If we don't use any shader program, nothing changes. We have current color, current normal, current transformation matrix, current everything, and as soon as we call glVertex**(...) these current values are taken and the vertex is fed to ... I don't know what. The fact is that it's transformed with the current matrix, the current color and normal are applied to it etc.
2) As soon as we use a shader program, all the above stops working. That is, glColor, glRotate etc. make no sense (do they?). I mean, glColor still sets the current color and glRotate still multiplies the current matrix by a rotation matrix, but these aren't used at all. Instead, we feed vertex attributes via glVertexAttrib. Which attribute means what is totally dependent on our vertex shader and the in-variable binding. We also find and set the values of the uniforms and then call glVertex, and the shader is executed (I don't know whether immediately or after glEnd() is called). The actual vertex and fragment processing is done entirely manually in the shader program.
3) Shaders don't add anything to depth testing. That is, I don't need to take care of it in a shader. I just call glEnable(GL_DEPTH_TEST). Neither is face culling affected.
4) Alpha blending and antialiasing need not be taken care of in shaders. glEnable calls will suffice.
5) Is it a good idea to use gluPerspective, glRotate, glPushMatrix and the other matrix functions, and then retrieve the current matrix and feed it as a uniform to a shader? That way there would be no need to use a 3rd-party matrix library.
It depends on what version of OpenGL you're talking about. Up through OpenGL 3.0, all the fixed functionality is still present, so yes, if you decide to just use fixed functionality it continues to work like it always did. Starting from 3.0, quite a bit of the fixed pipeline was deprecated, and as of 3.1 it disappears completely. With those versions, you no longer really have the option to just use the fixed pipeline.
Again, it depends. For example, up through OpenGL 3.0, glColor is still supported even when you use a shader. The difference is that instead of automatically being applied to what gets drawn, it's supplied to your shader, which can use it unchanged, modify it as it sees fit, or ignore it completely. So your vertex shader receives the current color as gl_Color and writes gl_FrontColor/gl_BackColor, your fragment shader reads the interpolated gl_Color, and it writes the actual fragment color to gl_FragColor. If you're using OpenGL 3.1 or newer, however, glColor (for example) just no longer exists -- a color will be just another value you supply to your shader like you could/would anything else.
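For instance, under the compatibility profile a minimal pass-through of that color might look like this (just a sketch using the old built-ins):

    // vertex shader (compatibility profile)
    void main()
    {
        gl_FrontColor = gl_Color;                               // gl_Color comes from glColor*()
        gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // fragment shader (compatibility profile)
    void main()
    {
        gl_FragColor = gl_Color;   // interpolated, face-selected color
    }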
That's correct, at least up to OpenGL 3.1. As of 4.3, there's a new compute shader stage that (I believe) can get involved in things like depth testing (but I haven't used it, so I'm a bit uncertain about that).
Yes, you can still use the built-in alpha blending. Depending on your hardware, you may also want to consider using the GL_ARB_draw_buffers_blend extension (which is core in OpenGL 4.0, if I recall correctly).
Yet again, it depends on the version of OpenGL you're talking about. Current core-profile OpenGL completely eliminates all support for matrices, so you have no choice but to use some other matrix library. Older versions supplied things like gl_ModelViewMatrix and gl_NormalMatrix to your shader as built-in uniforms, so you could go that route if you chose.
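As an example of the matrix-library route, here is a rough sketch using GLM (just one possible library; the u_mvp uniform name and the aspect/angle parameters are assumptions):

    #include <GL/glew.h>                      // or any other loader
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Assumes the shader declares "uniform mat4 u_mvp;" and is currently bound.
    void uploadMvp(GLuint program, float aspect, float angle)
    {
        // Build the matrices that gluPerspective/glRotate used to build for you...
        glm::mat4 projection = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 100.0f);
        glm::mat4 view       = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
        glm::mat4 model      = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 mvp        = projection * view * model;

        // ...then hand the result to the shader as an ordinary uniform.
        GLint loc = glGetUniformLocation(program, "u_mvp");
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    }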
2) In modern OpenGL there is no glColor, glBegin, glVertex, glRotate, etc., so they don't make sense.
5) In modern OpenGL there are no built-in matrices, so you have to use a 3rd party library or write your own. So to answer your question, no, it's not a good idea.

Updating information from the vertex shader

In the vertex shader program of a WebGL application, I am doing the following:
Calculate gl_Position P using a function f(t) that varies in time.
My question is:
Is it possible to store the updated P(t) computed in the vertex shader so I can use it in the next time step? This will be useful for performing some boundary tests.
I have read some information on how textures can be used to store and update vertex positions, but is this feasible in WebGL, given that texture access from a vertex shader is not guaranteed to be supported in GLSL ES 1.0?
For a more concrete example, let us say that we are trying to move a point according to the equation R(t) = (k*t, 0, 0). These positions are updated in the vertex shader, which makes the point move. Now suppose I want to make the point bounce off a wall located at R = (C, 0, 0). To do that, we need the position of the point at t - dt (the previous time step).
Any ideas appreciated.
Regards
In addition to the previous answers, you can circumvent vertex texture fetch with PBOs, but I do not know if they are supported in WebGL or GLES, as I only have desktop GL experience. You write the vertex positions into the framebuffer. But then, instead of using them as a vertex texture, you copy them into a vertex buffer (which works best via PBOs) and use them as an ordinary vertex attribute. That's the old way of doing transform feedback, which I suppose is not supported.
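On desktop GL, that copy might look roughly like this (buffer names and sizes are placeholders; this path doesn't apply where PBOs are unavailable):

    #include <GL/glew.h>

    // positionBuffer: a buffer object sized width*height*4*sizeof(float);
    // positionLocation: the attribute location used in the drawing pass.
    void copyPositionsViaPbo(GLuint positionBuffer, GLint positionLocation,
                             int width, int height)
    {
        // 1) Read the rendered positions straight into the buffer object.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, positionBuffer);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0); // offset 0 into the PBO
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        // 2) Reuse the very same buffer as a vertex attribute source next frame.
        glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
        glEnableVertexAttribArray(positionLocation);
        glVertexAttribPointer(positionLocation, 4, GL_FLOAT, GL_FALSE, 0, (const void*)0);
    }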
There's no way to store anything in the vertex shader. You can only pass values from it to the fragment shader and write those to the framebuffer pixels. And as you said, vertex texture fetch isn't universally supported (for instance, ANGLE started supporting it only a few days ago), so even that is a bit unworkable.
You can do two things: either do all the position math in JS and pass p1 and p0 in as uniforms, or keep track of the previous time value and do the position math twice in the shader, for both t1 and t0 (this shouldn't have much of a performance impact unless you're vertex-shader bound).
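The second option might look something like this in the vertex shader (the uniform/attribute names and the motion function are made up for illustration):

    // GLSL ES 1.00-style vertex shader sketch
    uniform float u_t;        // current time
    uniform float u_dt;       // time step
    attribute vec3 a_start;   // per-vertex starting position

    vec3 f(vec3 start, float t) {
        // your motion function, e.g. R(t) = start + vec3(k * t, 0.0, 0.0)
        return start + vec3(0.5 * t, 0.0, 0.0);
    }

    void main() {
        vec3 p1 = f(a_start, u_t);         // position now
        vec3 p0 = f(a_start, u_t - u_dt);  // position at the previous step,
                                           // available for boundary/bounce tests
        gl_Position = vec4(p1, 1.0);
    }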
Is your dt a constant? If so, you could retrieve the previous position for your point by evaluating R(t - dt). If it is not a constant, then you could use a uniform to pass it along on every rendering cycle.

GLSL dynamically indexed arrays

I've been using DirectX (with XNA) for a while now, and have recently switched to OpenGL. I'm really loving it, but one thing has got me annoyed.
I've been trying to implement something that requires dynamic indexing in the vertex shader, but I've been told that this requires the equivalent of SM 4.0. However, I know that this works in DX even with SM 2.0, possibly even 1.0. XNA's instancing sample uses this to do instancing on SM 2.0-only cards: http://create.msdn.com/en-US/education/catalog/sample/mesh_instancing.
The compiler can't have been "unrolling" it into a giant list of if statements, since this would surely exceed the instruction limit on SM2 for our 250 instances.
So is DX doing some trickery that I can't do with OpenGL, can I manipulate OpenGL to do the same, or is it a hardware feature that OpenGL doesn't expose?
You can upload an array for your light directions with something like glUniform3fv, then (assuming I understand what you're trying to do correctly) you just need your vertex format to include an index into this array (so there will be lots of duplication of these indices if the index only changes once per mesh or something). If you don't already know, you can use glGetAttribLocation + glVertexAttribPointer to send arbitrary vertex attributes like this to the shader (as opposed to using the deprecated built-in attributes like gl_Vertex, gl_Normal, etc.).
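A rough sketch of that setup, assuming the shader declares uniform vec3 lightDirs[4]; and attribute float lightIndex; (all names are illustrative):

    #include <GL/glew.h>

    // program: a linked shader program that is currently bound with glUseProgram;
    // lightIndexVbo: a VBO holding one float index per vertex.
    void setupLights(GLuint program, GLuint lightIndexVbo)
    {
        // Upload the whole array of light directions in one call.
        GLfloat lightDirs[12] = { 0 };   // x, y, z for each of the 4 lights
        glUniform3fv(glGetUniformLocation(program, "lightDirs"), 4, lightDirs);

        // Give every vertex an index into that array as an ordinary attribute.
        GLint idxLoc = glGetAttribLocation(program, "lightIndex");
        glBindBuffer(GL_ARRAY_BUFFER, lightIndexVbo);
        glEnableVertexAttribArray(idxLoc);
        glVertexAttribPointer(idxLoc, 1, GL_FLOAT, GL_FALSE, 0, (const void*)0);
    }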
From your link:
Note that there is no single perfect instancing technique. This must be implemented in a different way on Windows compared to Xbox 360, and on Windows the ideal technique requires shader 3.0, but there is also a fallback approach that will work with shader 2.0. This sample implements several different instancing techniques, so it can work on both platforms and shader versions.
Note the emboldened part. So on that basis you should be able to do similar instancing on shader model 3. Shader model 2's instancing is usually performed using a matrix palette. It simply means you can render multiple meshes in one call by uploading a load of transformation matrices in one go. This reduces draw calls and improves speed.
Anyway, for OpenGL there were a lot of troubles finalising this extension, and hence you need shader 4. You CAN, however, still stick a per-vertex matrix palette index in your vertex structure and do matrix palette rendering using a shader...
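To sketch what that per-vertex palette index looks like on the GLSL side (array size and names are arbitrary; this is only an illustration of the idea):

    // GLSL 1.20-style vertex shader sketch for matrix palette rendering.
    uniform mat4 palette[30];        // one transform per instance (size is arbitrary)
    uniform mat4 viewProjection;

    attribute vec4 position;
    attribute float paletteIndex;    // stored per vertex, constant across one instance

    void main()
    {
        mat4 world = palette[int(paletteIndex)];   // dynamic index into the uniform array
        gl_Position = viewProjection * world * position;
    }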