When reading examples of simple VBO programs I've noticed there seems to be an association of normal data with vertex data. But from the definition of a normal, I would have thought that normal data should be associated with face data.
From the code segment below I can see that the normal data for each MyVertex is the same, so a normal for the "triangle face" would make sense. But I am unsure of how one would store normal data for larger objects, where several faces may share the same vertices as stored in the GL_ELEMENT_ARRAY_BUFFER.
Questions:
How does OpenGL conceptually handle normal data? Or have I made a wrong assumption somewhere about how normals should work?
(code below from http://www.opengl.org/wiki/VBO_-_just_examples)
struct MyVertex
{
float x, y, z; //Vertex
float nx, ny, nz; //Normal
float s0, t0; //Texcoord0
};
MyVertex pvertex[3];
//VERTEX 0
pvertex[0].x = 0.0;
pvertex[0].y = 0.0;
pvertex[0].z = 0.0;
pvertex[0].nx = 0.0;
pvertex[0].ny = 0.0;
pvertex[0].nz = 1.0;
pvertex[0].s0 = 0.0;
pvertex[0].t0 = 0.0;
//VERTEX 1
Thanks in advance
In OpenGL, normals are vector attributes, just like position or texture coordinates.
Having per-face normals may seem reasonable, but it wouldn't work in practice.
Reason: a single triangle is physically flat, but it is often an approximation of a curved surface. Having different normal vectors at the vertices of a triangle allows you to interpolate between them to get an approximate normal vector at any point on the surface.
Think of a vertex normal as a sample of the surface normal at one particular point of a smooth surface.
(Of course, when rendering surfaces with hard edges, like a cube, the above doesn't really help, and such surfaces may require you to have duplicate vertices differing only by the normal.)
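To make the hard-edge case concrete, here is a small sketch using the MyVertex struct from the question: the same cube corner appears twice, once for each face that meets there, with identical position but a different normal (the texcoords here are arbitrary).
// Same cube corner (1, 1, 1) duplicated for two adjacent faces: identical
// position, different normal, so each face keeps its own flat orientation.
MyVertex corner_on_front = { 1.0f, 1.0f, 1.0f,   0.0f, 0.0f, 1.0f,   1.0f, 1.0f }; // +Z face
MyVertex corner_on_right = { 1.0f, 1.0f, 1.0f,   1.0f, 0.0f, 0.0f,   0.0f, 1.0f }; // +X face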
I have a vertex format which is all floats, and looks like this:
POSITION POSITION POSITION NORMAL NORMAL NORMAL TEXCOORD TEXCOORD
I was thinking I need to draw lines from the first three floats to the next three floats, then I need to skip the next two floats and continue on. Is there any way of doing this without creating another buffer for each object that's in the correct layout?
I know I can draw just one line per draw call and loop over all the vertices, but that means many draw calls. What is the usual way normals are drawn for things like debugging?
I've also thought about indexing, but indexing only helps select specific vertices; in this case I want to draw a line between two attributes within my vertex layout.
This cannot be done just by setting up an appropriate glVertexAttribPointer, since you would have to skip the texcoords. Additionally, you don't want to draw a line from position to normal, but from position to position + normal, since a normal just describes a direction, not a point in space.
What you can do is use a geometry shader. Basically, you set up two attributes, one for the position and one for the normal (as you would for rendering the model), and issue a draw command with the GL_POINTS primitive type. In the geometry shader you then generate a line from position to position + normal.
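A sketch of such a geometry shader, given here as GLSL embedded in a C++ string literal (the in/out names and uniforms are illustrative, not prescribed by anything above):
// Geometry shader sketch: one input point in, a two-vertex line out, running
// from the vertex position to position + normal * u_length. Draw the mesh
// with glDrawArrays(GL_POINTS, ...) while this program is bound, alongside a
// pass-through vertex shader that forwards position and normal.
const char* normal_lines_gs = R"(
#version 150
layout(points) in;
layout(line_strip, max_vertices = 2) out;

in vec3 v_normal[];     // forwarded unmodified by the vertex shader
uniform mat4 u_mvp;     // model-view-projection matrix
uniform float u_length; // visualised length of each normal

void main() {
    vec4 p = gl_in[0].gl_Position; // object-space position from the vertex shader
    gl_Position = u_mvp * p;
    EmitVertex();
    gl_Position = u_mvp * (p + vec4(v_normal[0] * u_length, 0.0));
    EmitVertex();
    EndPrimitive();
}
)";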
Normally, to draw surface normals you would set up a separate buffer or a geometry shader to do the work. Setting up a separate buffer for a mesh to draw just the normals is trivial and doesn't require a draw call for every normal: all of your surface normals can be drawn in a single draw call.
Since you'll be doing it for debugging purposes, there's no need to worry too much about performance; just stick with the quicker method that gets things on screen.
The way I'd personally do it depends on whether the mesh has vertex or face normals. For vertex normals, we could fill a buffer with a line for each vertex in the mesh, whose offset from the vertex itself represents the normal to debug, as in the following pseudocode:
var normal_buffer = [];
//tweak to your liking
var normal_length = 10.0;
//this assumes your mesh has 2 arrays of the same length
//containing structs of vertices and normals
for(var i = 0; i < mesh.vertices.length; i++) {
//retrieving the normal associated with this vertex
var nx = mesh.normals[i].x;
var ny = mesh.normals[i].y;
var nz = mesh.normals[i].z;
//retrieving the vertex itself, it'll be the first point of our line
var v1x = mesh.vertices[i].x;
var v1y = mesh.vertices[i].y;
var v1z = mesh.vertices[i].z;
//second point of our line representing the normal direction
var v2x = v1x + nx * normal_length;
var v2y = v1y + ny * normal_length;
var v2z = v1z + nz * normal_length;
normal_buffer.push(v1x, v1y, v1z, v2x, v2y, v2z);
}
You can then proceed as normal: attach the buffer to a vertex buffer object and use whatever program you like to issue one single draw call that draws all of your mesh normals:
vertbuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertbuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normal_buffer), gl.STATIC_DRAW);
/* later on in your program */
gl.drawArrays(gl.LINES, 0, normal_buffer.length / 3);
A cool feature of normal debugging is that you can use the normal itself in a fragment shader as the output color, to quickly check whether it points in the expected direction.
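For instance, a fragment shader along these lines remaps each normal component from [-1, 1] into the displayable [0, 1] range (desktop GLSL shown as a C++ string; WebGL's GLSL ES equivalent is analogous, and v_normal is whatever varying carries your normal):
// Normal-as-color debug shader: a correct unit normal pointing along +Z
// shows up as the color (0.5, 0.5, 1.0).
const char* normal_color_fs = R"(
#version 150
in vec3 v_normal;
out vec4 fragColor;

void main() {
    fragColor = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);
}
)";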
I just started learning C++ and OpenGL. I'm trying to calculate vertex normals in OpenGL.
I know there is a function glNormal3f. However, I am not allowed to use that function; rather, I have to calculate the vertex normals in code from an obj file. So what I am trying to do is first calculate the surface normals and then calculate the vertex normals from them.
I have declared operators such as +, -, and *, and other functions like InnerProduct and CrossProduct.
void calcul_color(){
    VECTOR kd; // diffuse reflectance
    VECTOR ks; // specular reflectance
    kd = VECTOR(0.8, 0.8, 0.8);
    ks = VECTOR(1.0, 0.0, 0.0);
    double inner = kd.InnerProduct(ks);
    int i, j;
    // Per-vertex shading: diffuse plus specular term for every vertex.
    for (i = 0; i < cube.vertex.size(); i++)
    {
        VECTOR n = cube.vertex_normal[i];
        VECTOR l = VECTOR(100, 100, 0) - cube.vertex[i]; // towards the light
        VECTOR v = VECTOR(0, 0, 1) - cube.vertex[i];     // towards the viewer
        float xl = n.InnerProduct(l) / n.Magnitude();    // length of l projected onto n
        VECTOR x = (n * (1.0 / n.Magnitude())) * xl;     // projection of l onto n
        VECTOR r = x - (l - x);                          // reflection of l about n
        VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)), 10);
        cube.vertex_color[i] = color;
    }
    // Emit each triangle face with its per-vertex colors.
    for (i = 0; i < cube.face.size(); i++)
    {
        FACE cur_face = cube.face[i];
        glColor3f(cube.vertex_color[cur_face.id1].x, cube.vertex_color[cur_face.id1].y, cube.vertex_color[cur_face.id1].z);
        glVertex3f(cube.vertex[cur_face.id1].x, cube.vertex[cur_face.id1].y, cube.vertex[cur_face.id1].z);
        glColor3f(cube.vertex_color[cur_face.id2].x, cube.vertex_color[cur_face.id2].y, cube.vertex_color[cur_face.id2].z);
        glVertex3f(cube.vertex[cur_face.id2].x, cube.vertex[cur_face.id2].y, cube.vertex[cur_face.id2].z);
        glColor3f(cube.vertex_color[cur_face.id3].x, cube.vertex_color[cur_face.id3].y, cube.vertex_color[cur_face.id3].z);
        glVertex3f(cube.vertex[cur_face.id3].x, cube.vertex[cur_face.id3].y, cube.vertex[cur_face.id3].z);
    }
}
The way to compute vertex normals is this:
Initialize every vertex normal to (0,0,0)
For every face compute face normal fn, normalize it
For every vertex of the face add fn to the vertex normal
After that loop normalize every vertex normal
This loop is a nice O(n). One thing to pay attention to here is that if vertices are shared, the normals will smooth out, like on a sphere. If vertices are not shared, you get hard faces, like you want on a cube. Any duplication of such vertices should be done beforehand.
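Expressed as code, a minimal C++ sketch of those steps might look like this (it reuses the VECTOR, FACE and cube names from the question, and assumes CrossProduct and Magnitude exist alongside the declared operators):
#include <vector>

// Pass 1: start all vertex normals at zero.
std::vector<VECTOR> vertex_normal(cube.vertex.size(), VECTOR(0, 0, 0));

// Pass 2: add each face's normalized normal to its three vertex normals.
for (size_t i = 0; i < cube.face.size(); i++)
{
    const FACE& f = cube.face[i];
    VECTOR e1 = cube.vertex[f.id2] - cube.vertex[f.id1]; // first triangle edge
    VECTOR e2 = cube.vertex[f.id3] - cube.vertex[f.id1]; // second triangle edge
    VECTOR fn = e1.CrossProduct(e2);                     // face normal
    fn = fn * (1.0 / fn.Magnitude());                    // normalize it

    vertex_normal[f.id1] = vertex_normal[f.id1] + fn;
    vertex_normal[f.id2] = vertex_normal[f.id2] + fn;
    vertex_normal[f.id3] = vertex_normal[f.id3] + fn;
}

// Pass 3: normalize every accumulated vertex normal.
for (size_t i = 0; i < vertex_normal.size(); i++)
    vertex_normal[i] = vertex_normal[i] * (1.0 / vertex_normal[i].Magnitude());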
If your question was how to go from normal to color, that depends on your lighting equation! The easiest one is: color = dot(normal, globallightdir) * globallightcolor
Another way would be color = texturecubemap(normal). But there are endless possibilities!
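One detail the one-liner glosses over: the dot product goes negative for normals facing away from the light, so it is usually clamped at zero first. A hedged sketch, again borrowing the question's VECTOR type:
#include <algorithm> // std::max

// Diffuse (Lambert) term with back-facing contributions clamped to zero;
// 'normal' and 'light_dir' are assumed to be unit length.
VECTOR lambert(const VECTOR& normal, const VECTOR& light_dir, const VECTOR& light_color)
{
    double d = std::max(0.0, normal.InnerProduct(light_dir));
    return light_color * d;
}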
I am rendering a point-based terrain from loaded heightmap data, but the points change their texturing depending on where the camera position is. To demonstrate the bug (and the fact that this isn't occurring from a z-buffering problem), I have taken screenshots with the points rendered at a fixed 5-pixel size from very slightly different camera positions (same angle), shown below:
[Screenshots: State 1 and State 2, taken from nearly identical camera positions, showing the points textured differently]
The code to generate the points is relatively simple, so I'm posting it merely to rule out that option. mapArray is a single-dimensional float array that gets copied to a VBO:
for(j = 0; j < mHeight; j++)
{
for(i = 0; i < mWidth; i++)
{
height = bitmapImage[k];
mapArray[k++] = 5 * i;
mapArray[k++] = height;
mapArray[k++] = 5 * j;
}
}
I find it more likely that I need to adjust my fragment shader, because I'm not great with shaders, although I'm unsure where I could have gone wrong with such simple code; I guess it's probably just not fit for purpose (with point-based rendering). Below is my fragment shader:
varying vec2 TexCoordA;
uniform sampler2D myTextureSampler;
void main(){
gl_FragColor = texture2D(myTextureSampler, TexCoordA.st) * gl_Color;
}
Edit (requested info):
OpenGL version 4.4; no texture flags used.
TexCoordA is passed into the shader directly from my vertex shader, with no alterations at all. The UVs are self-calculated using this:
float* UVs = new float[mNumberPoints * 2];
k = 0;
for(j = 0; j < mHeight; j++)
{
for(i = 0; i < mWidth; i++)
{
UVs[k++] = (1.0f/(float)mWidth) * i;
UVs[k++] = (1.0f/(float)mHeight) * j;
}
}
This looks just like a subpixel-accurate texture mapping side effect. The problem for the texture mapping implementation is that it needs to interpolate the texture coordinates on the actual rasterized pixels (fragments). When your camera is moving, the roundoff error from the real position to the integer pixel position affects texture mapping, and is normally required for jitter-free animation (otherwise all the textures would jump by seemingly random subpixel amounts as the camera moves). There was a great tutorial on this topic by Paul Nettle.
You can try to fix this by not sampling texel corners but sampling texel centers instead (add half the size of a texel to your point texture coordinates).
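In code, sampling texel centers is just a half-texel offset (a sketch; texWidth and texHeight stand for whatever your texture's dimensions are):
// Half-texel offset: map an integer texel index (i, j) to the UV of that
// texel's center rather than its corner.
inline void texelCenterUV(int i, int j, int texWidth, int texHeight,
                          float& u, float& v)
{
    u = (i + 0.5f) / texWidth;
    v = (j + 0.5f) / texHeight;
}
Applied to the UV loop in the question, that would mean UVs[k++] = (i + 0.5f) / mWidth; and the analogous change for j.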
Another thing you can try is to compensate for the subpixel accurate rendering by calculating the difference between the rasterized integer coordinate (which you need to calculate yourself in a shader) and the real position. That could be enough to make the sampled texels more stable.
Finally, size matters. If your texture is large, errors in the interpolation of the finite-precision texture coordinates can introduce these kinds of artifacts. Why not use GL_TEXTURE_2D_ARRAY with a separate layer for each color tile? You could also clamp the S and T texcoords to the edge of the texture to handle this more elegantly.
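The clamping suggestion amounts to two texture parameters on the bound texture (a sketch; set them once when the texture is created):
// Clamp S and T so filtering near the texture border cannot wrap around to
// the opposite edge.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);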
Just a guess: How are your point rendering parameters set? Perhaps the distance attenuation (GL_POINT_DISTANCE_ATTENUATION), along with GL_POINT_SIZE_MIN and GL_POINT_SIZE_MAX, is causing different fragment sizes depending on camera position. On the other hand, I think I remember that when using a vertex shader this functionality is disabled and the vertex shader must decide the size. I did it once by using:
//point size calculation based on z-value as done by distance attenuation
float psFactor = sqrt( 1.0 / (pointParam[0] + pointParam[1] * camDist + pointParam[2] * camDist * camDist) );
gl_PointSize = pointParam[3] * psFactor;
where pointParam holds the three coefficients and the min point size:
uniform vec4 pointParam; // parameters for point size calculation [a b c min]
You may play around by setting your point size in the vertex shader directly with gl_PointSize = [value].
Code is here as requested:
void MakeTeapotRed()
{
D3DXCreateTeapot(Device, &Teapot, 0);
}
So how do I change the vertex color of the teapot? If you're thinking materials, I already know that; I just need to know about vertex color, which is supposed to be a much simpler thing than a material. I can do this with geometry manually laid out with vertex buffers and index buffers, but how do you apply this to a mesh that already has its VB and IB info filled out?
class ColorVertex
{
public:
ColorVertex(){}
ColorVertex(float x, float y, float z, D3DCOLOR color)
{
m_x = x;
m_y = y;
m_z = z;
m_color = color;
}
float m_x, m_y, m_z; // 3d coordinates
D3DCOLOR m_color;
static const DWORD FVF;
};
const DWORD ColorVertex::FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;
The code I just posted is the class for the vertex information, called ColorVertex. As you can see, the code is set up for vertex color, color that doesn't require (or rather must NOT have) a light to work properly, as shown in FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE.
Again, people seem to have a hard time understanding the problem: I need to update the vertices to include color, for objects like the teapot, sphere, or other meshes that can be created through D3DXCreate[object], e.g. D3DXCreateTeapot(...).
Please lay out the code line by line; I'm a noob in DirectX, not in C++.
Look at the section on accessing the vertex buffer. You have to get the vertex declaration and examine it to find out how the data for each vertex is laid out.
Once you have identified how the colour is stored, you loop through each vertex and change the value. When you finish and unlock the vertex buffer of the mesh, you will be done.
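Putting that together for the teapot case, here is a hedged sketch (one way of several): since D3DXCreateTeapot produces a mesh whose FVF contains only position and normal, the mesh is first cloned into an FVF that includes a diffuse color, then its vertex buffer is locked and recolored. TeapotVertex and coloredTeapot are illustrative names, and error checking is omitted:
// Vertex layout matching D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE.
struct TeapotVertex
{
    float    x, y, z;    // position
    float    nx, ny, nz; // normal (D3DXCreateTeapot generates these)
    D3DCOLOR color;      // per-vertex diffuse color
};

void MakeTeapotRed()
{
    D3DXCreateTeapot(Device, &Teapot, 0);

    // The generated mesh has no room for a color, so clone it into an FVF that does.
    ID3DXMesh* coloredTeapot = 0;
    Teapot->CloneMeshFVF(D3DXMESH_MANAGED,
                         D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE,
                         Device, &coloredTeapot);

    // Lock the cloned vertex buffer and paint every vertex red.
    TeapotVertex* v = 0;
    coloredTeapot->LockVertexBuffer(0, (void**)&v);
    for (DWORD i = 0; i < coloredTeapot->GetNumVertices(); ++i)
        v[i].color = D3DCOLOR_XRGB(255, 0, 0);
    coloredTeapot->UnlockVertexBuffer();

    // Keep the colored clone, release the original.
    Teapot->Release();
    Teapot = coloredTeapot;
}
Since the vertex color should be used as-is, remember to disable lighting (Device->SetRenderState(D3DRS_LIGHTING, FALSE)) when drawing it.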
I just need to know the color vertex which is supposed to be a much simpler thing than material
I would have to disagree; a material looks like it would be a lot easier.
I need to get the color at a particular coordinate from a texture. There are two ways I can do this: by getting and looking at the raw PNG data, or by sampling my generated OpenGL texture. Is it possible to sample an OpenGL texture to get the color (RGBA) at a given UV or XY coordinate? If so, how?
Off the top of my head, your options are
Fetch the entire texture using glGetTexImage() and check the texel you're interested in.
Draw the texel you're interested in (eg. by rendering a GL_POINTS primitive), then grab the pixel where you rendered it from the framebuffer by using glReadPixels.
Keep a copy of the texture image handy and leave OpenGL out of it.
Options 1 and 2 are horribly inefficient (although you could speed 2 up somewhat by using pixel buffer objects and doing the copy asynchronously). So my favourite by FAR is option 3.
Edit: If you have the GL_APPLE_client_storage extension (i.e. you're on a Mac or iPhone), then that's option 4, which is the winner by a long way.
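For reference, option 1 is the least code, even though it is the slow path; a minimal sketch (readTexel and its parameters are illustrative, assuming an RGBA8 GL_TEXTURE_2D and a current context):
#include <vector>

// Read back the entire level-0 image and return the RGBA bytes of one texel.
// y is counted in OpenGL's bottom-up row order.
void readTexel(GLuint tex, int x, int y, int width, int height,
               unsigned char rgba[4])
{
    std::vector<unsigned char> pixels(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    const size_t offset = (static_cast<size_t>(y) * width + x) * 4;
    for (int c = 0; c < 4; ++c)
        rgba[c] = pixels[offset + c];
}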
The most efficient way I've found to do it is to access the texture data (you should have your PNG decoded to make it into a texture anyway) and interpolate between the texels yourself. Assuming your texcoords are in [0, 1], multiply texwidth by u and texheight by v, then use that to find the position in the texture. If they're whole numbers, just use the pixel directly; otherwise use the integer parts to find the bordering pixels and interpolate between them based on the fractional parts.
Here's some HLSL-like pseudocode for it. It should be fairly clear:
float3 sample(float2 coord, texture tex) {
float x = tex.w * coord.x; // Get X coord in texture
int ix = (int) x; // Get X coord as whole number
float y = tex.h * coord.y;
int iy = (int) y;
float3 x1 = getTexel(ix, iy); // Get top-left pixel
float3 x2 = getTexel(ix+1, iy); // Get top-right pixel
float3 y1 = getTexel(ix, iy+1); // Get bottom-left pixel
float3 y2 = getTexel(ix+1, iy+1); // Get bottom-right pixel
float3 top = interpolate(x1, x2, frac(x)); // Interpolate between top two pixels based on the fractional part of the X coord
float3 bottom = interpolate(y1, y2, frac(x)); // Interpolate between bottom two pixels
return interpolate(top, bottom, frac(y)); // Interpolate between top and bottom based on fractional Y coord
}
As others have suggested, reading back a texture from VRAM is horribly inefficient and should be avoided like the plague if you're even remotely interested in performance.
Two workable solutions as far as I know:
Keep a copy of the pixel data handy (wastes memory though)
Do it using a shader