OpenGL normal vector from vertex location only

I am working through an OpenGL lighting tutorial and am now on the topic of diffuse lighting. I'm wondering: is it possible to calculate the normal vector for a "surface" of vertices purely from their locations (i.e. for each triangle, take p1 - p2 and p1 - p3, then cross product them)?
Am I correct in thinking that this is ruled out by OpenGL's parallel processing model (vertices don't know which other vertices they belong with, and there is no way to process them in a sorted order)? And that it is handled more efficiently by the model providing the normals in the vertex data itself (i.e. each vertex containing x, y, z, nx, ny, nz), since the data is mostly static, so the calculation is done at model-generation time rather than at runtime, saving shader resources? Does this change when the model, and thus the vertex data, is generated dynamically (and could it then be handled by the geometry shader)?

Vertex normals are as unchanging as vertex positions (that is, if your positions don't change, the normals won't either). As such, unless the vertex positions change (and I don't mean by applying matrices to them), it's best to calculate normals once. On the CPU. Then put them in your mesh data.
Pretty much every art package can export vertex normals with a mesh.
To your specific question, no. Neither vertex shaders nor geometry shaders can generate normals. Geometry shaders do have adjacency primitives that might make it possible to generate them, but the restrictions on doing so are very tight. Also, it would be slow.
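For reference, the per-triangle cross product described in the question is exactly the kind of thing you would run once on the CPU and then bake into the mesh data. Below is a minimal sketch using GLM; the flat position/index layout is an assumption for illustration, not something from the question.

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstdint>

    // Hypothetical mesh layout: a flat position array plus a triangle index list.
    std::vector<glm::vec3> computeFaceNormals(const std::vector<glm::vec3>& positions,
                                              const std::vector<uint32_t>& indices)
    {
        std::vector<glm::vec3> normals;
        normals.reserve(indices.size() / 3);
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            const glm::vec3& p1 = positions[indices[i]];
            const glm::vec3& p2 = positions[indices[i + 1]];
            const glm::vec3& p3 = positions[indices[i + 2]];
            // The cross product of two edge vectors is perpendicular to the triangle.
            normals.push_back(glm::normalize(glm::cross(p2 - p1, p3 - p1)));
        }
        return normals;
    }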

Related

How to multiply vertices with model matrix outside the vertex shader

I am using OpenGL ES 2 to render a fairly large number of mostly 2D items, and so far I have gotten away with sending a premultiplied model/view/projection matrix to the vertex shader as a uniform and then multiplying my vertices with the resulting MVP in there.
All items are batched using texture atlases and I use one MVP per batch. So all my vertices are relative to the translation of that MVP.
Now I want to have rotation and scaling for each of the separate items, which means I need a different model matrix for each of them. So I modified my vertex format to include the model matrix (16 floats!) and added a mat4 attribute in my shader, and it all works well. But I'm kind of disappointed with this solution, since it dramatically increased the vertex size.
So as I was staring at my screen trying to think of a different solution, I thought about transforming my vertices to world space before I send them over to the shader, or even to screen space if it's possible. The vertices I use are unnormalized coordinates in pixels.
So the question is: is such a thing possible? And if yes, how do you do it? I can't think of a reason why it shouldn't be, since it's just maths, but after a fairly long search on Google it doesn't look like a lot of people are actually doing this...
Which is strange, because if it is indeed possible, it would be quite a major optimization in cases like this one.
If the number of matrices per batch is limited, then you can pass all those matrices as uniforms (preferably in a UBO) and expand the vertex data with an index that specifies which matrix to use.
This is similar to GPU skinning used for skeletal animation.
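As a rough illustration of that idea (not code from the question), here is what the shader side of such a batch could look like. GLSL ES 3.00 is assumed, since ES 2.0 has no uniform blocks and would need a plain uniform mat4 array instead; the names and the batch size of 64 are made up.

    // Sketch only: a possible vertex shader for this scheme, embedded as a C++ string.
    static const char* kBatchedVertexShader = R"GLSL(#version 300 es
    layout(std140) uniform Matrices {
        mat4 uModel[64];          // one model matrix per item in the batch
    };
    uniform mat4 uViewProjection; // shared by the whole batch

    in vec2  aPosition;           // per-vertex position of a 2D item
    in float aMatrixIndex;        // which model matrix this vertex uses

    void main() {
        mat4 model = uModel[int(aMatrixIndex)];
        gl_Position = uViewProjection * model * vec4(aPosition, 0.0, 1.0);
    }
    )GLSL";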

How is normal data usually stored?

This is quite a basic question. I'm learning about 3D graphics and lighting at the moment and I was wondering about how normal data is (normally) stored.
Is it standard to keep this data as part of any external 'model' file you load into your program, or to compute normal data anew from vertex data each time a new model is loaded?
Or are there advantages to both methods and they are sometimes used for different reasons?
This, as always in graphics, depends.
If you have a simple model, it shouldn't be a problem to recompute the normals. Storing them is basically caching, then.
If you use per-pixel lighting, however, normals are stored per-pixel in normal (bump etc) maps. In this case they typically can't be generated procedurally (they are generated from models with higher poly count).
OpenGL allows several attributes for each vertex when sending data to the GPU for processing; the most commonly used for this are position, normals and texture coordinates.
The most basic of all model file formats (the OBJ file) leaves it optional whether to include normal data at all. But for most more complicated models the normals aren't something standard you can derive, and they need to be stored in the model files.
In the general case, this is not just a performance consideration. The normals are part of the model, and cannot be thrown away and re-generated from the vertex positions.
Picture a typical case where you build a shape in a modelling program. The internal data describing the shape in the software most likely consists of analytical surfaces, e.g. spline surfaces. When you export the shape to a vertex based format, the analytical surface is approximated by a triangle mesh. The vertices of the mesh will be written out as vertex positions, along with connectivity information that defines how the vertices form a mesh. The normals of the analytical surface at the vertex position will be calculated for each vertex, and written out along with the vertex positions.
If you have a mesh of vertices without normals (e.g. because you never exported normals, or you discarded them), you can still calculate surface normals. This is typically done by averaging the face normals of all faces adjacent to each vertex, possibly as a weighted average that takes the area or angle of the face into account. But no matter how this calculation is done, the result is an approximation of the normals of the original analytical surface.
The difference between using exact normals from the original analytical surface and using approximated normals reconstructed from the triangle mesh can be very visible when rendering. It will depend heavily on how fine the tessellation is, and on what lighting model is used. But in most cases, reconstructed normals are not as good as the normals from the original model, and the rendered surface will look much smoother/cleaner if the original normals are used.
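For completeness, here is a sketch of the reconstruction described above, assuming a plain position/index mesh layout. Summing the unnormalized cross products yields an area-weighted average, since the length of each cross product is proportional to its triangle's area.

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstdint>

    // Rebuild approximate per-vertex normals from positions and triangle indices.
    std::vector<glm::vec3> reconstructVertexNormals(const std::vector<glm::vec3>& positions,
                                                    const std::vector<uint32_t>& indices)
    {
        std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            const uint32_t a = indices[i], b = indices[i + 1], c = indices[i + 2];
            // The unnormalized cross product's length is twice the triangle's area,
            // so summing it weights larger faces more heavily.
            const glm::vec3 faceN = glm::cross(positions[b] - positions[a],
                                               positions[c] - positions[a]);
            normals[a] += faceN;
            normals[b] += faceN;
            normals[c] += faceN;
        }
        for (glm::vec3& n : normals)
            n = glm::normalize(n); // degenerate (zero-length) sums are not handled here
        return normals;
    }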

Moving/rotating shapes in the vertex shader

I'm writing a program that draws a number of moving/rotating polygons using OpenGL. Each polygon has a location in world coordinates while its vertices are expressed in local coordinates (relative to polygon location). Each polygon also has a rotation.
The only way I can think of doing this is to calculate the vertex positions by translation/rotation each frame and push them to the GPU to be drawn, but I was wondering if this could be performed in the vertex shader.
I thought I might express vertex locations in local coordinates and then add location and rotation attributes to each vertex, but then it occurred to me that this won't be any better than pushing new vertex positions on each frame.
Should I do this kind of calculation on the CPU, or is there a way to do it efficiently in the vertex shader?
The vertex shader is indeed responsible for transforming your geometry. However, the vertex shader is run for every single vertex in your scene. If you do these transformations inside the vertex shader, you'll do the same calculation over and over again, yielding the same result every time (as opposed to simply multiplying the vertex coordinate by a precomputed model-view-projection matrix). So in terms of efficiency you're best off doing that on the CPU side.
If the models are small, as in your case, I don't expect there to be too much of a difference, because you still have to set the coordinates where the polygons are supposed to be drawn somehow. Doing the calculation once on the CPU side is still best, since it happens once regardless of the vertex count of your polygons, and it will probably result in clearer code because it's easier to see what you're doing.
These calculations are usually done on the CPU, as doing them there is efficient in general. Otherwise, your best shot is to send the rotation matrices in as uniforms and do the multiplication on the GPU. Sending uniforms is not a very expensive operation in general, so you shouldn't be worrying about that.
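As a hedged example of that uniform approach (all names here are illustrative, and a VAO plus a loaded GL context are assumed), the CPU side could build each polygon's model matrix with GLM and upload it just before that polygon's draw call:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>
    // A GL loader header (GLAD, GLEW, ...) is assumed to be included already.

    void drawPolygon(GLuint program, GLuint vao, GLsizei vertexCount,
                     const glm::vec2& worldPosition, float rotationRadians)
    {
        // Build translation * rotation for this polygon on the CPU, once per frame;
        // the local-space vertex data itself never changes on the GPU.
        glm::mat4 model(1.0f);
        model = glm::translate(model, glm::vec3(worldPosition, 0.0f));
        model = glm::rotate(model, rotationRadians, glm::vec3(0.0f, 0.0f, 1.0f));

        glUseProgram(program);
        const GLint loc = glGetUniformLocation(program, "uModel"); // "uModel" is illustrative
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(model));

        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount); // assumes a convex polygon
    }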

Use triangle normals in OpenGL to get vertex normals

I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices so the fixed-function stuff is unusable
I can't necessarily speak as to the latest full-fat OpenGL specifications but certainly in ES you're going to have to do the work yourself.
Although the normal was modal state under the old fixed pipeline, like just about everything else it was attached to each vertex. If you opted for the flat shading model, GL would use the colour at the first vertex of the face across the entire thing rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are, at best, per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the geometry and could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
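In practice that means un-indexing the mesh: give each triangle its own three vertices, all carrying that triangle's normal, so the interpolated varying stays constant across the face. A sketch under assumed layouts (the structs and function are not from the question):

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstdint>

    struct FlatVertex {
        glm::vec3 position;
        glm::vec3 normal; // the same value for all three vertices of a triangle
    };

    // Expand (positions, triangle indices, per-triangle normals) into a
    // non-indexed vertex array suitable for flat shading in ES 2.0.
    std::vector<FlatVertex> buildFlatShadedVertices(const std::vector<glm::vec3>& positions,
                                                    const std::vector<uint32_t>& indices,
                                                    const std::vector<glm::vec3>& faceNormals)
    {
        std::vector<FlatVertex> out;
        out.reserve(indices.size());
        for (size_t tri = 0; tri * 3 + 2 < indices.size(); ++tri) {
            const glm::vec3& n = faceNormals[tri];
            for (int corner = 0; corner < 3; ++corner)
                out.push_back({ positions[indices[tri * 3 + corner]], n });
        }
        return out;
    }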

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph, and everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to put this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.
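Here is a sketch of the duplication approach from the first paragraph of that answer, assuming the graph is given as node positions plus (from, to, weight) edges; the struct and function names are illustrative:

    #include <glm/glm.hpp>
    #include <vector>
    #include <cstdint>

    struct Edge { uint32_t from, to; float weight; };

    struct LineVertex {
        glm::vec3 position;
        float weight; // identical at both ends, so it is constant along the line
    };

    // Expand the graph into a vertex stream for glDrawArrays(GL_LINES, ...).
    std::vector<LineVertex> buildLineVertices(const std::vector<glm::vec3>& nodePositions,
                                              const std::vector<Edge>& edges)
    {
        std::vector<LineVertex> out;
        out.reserve(edges.size() * 2);
        for (const Edge& e : edges) {
            out.push_back({ nodePositions[e.from], e.weight });
            out.push_back({ nodePositions[e.to],   e.weight });
        }
        return out;
    }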