I have a vertex format that is all floats and looks like this:
POSITION POSITION POSITION NORMAL NORMAL NORMAL TEXCOORD TEXCOORD
I was thinking I need to draw lines from the first three floats to the next three floats, then I need to skip the next two floats and continue on. Is there any way of doing this without creating another buffer for each object that's in the correct layout?
I know I could draw just one line per draw call and loop over all the vertices, but that's a lot of draw calls. What is the usual way normals are drawn for things like debugging?
I've also thought about indexing, but indexing only helps select specific vertices; in this case I want to draw between two attributes of my normal vertex layout.
This cannot be done just by setting up an appropriate glVertexAttribPointer, since you would have to skip the texcoords. Additionally, you don't want to draw a line from position to normal, but from position to position + normal, since a normal just describes a direction, not a point in space.
What you can do is use a geometry shader. Basically, you set up two attributes, one for the position and one for the normal (just as you would for rendering the model), and issue a draw command with the GL_POINTS primitive type. In the geometry shader you then generate a line from position to position + normal.
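A minimal sketch of what that geometry shader could look like (the names vNormal, projection, and normalLength are placeholders; it assumes the vertex shader forwards the view-space position through gl_Position and the view-space normal through vNormal):
#version 330 core
layout(points) in;
layout(line_strip, max_vertices = 2) out;

in vec3 vNormal[];           // view-space normal forwarded by the vertex shader
uniform mat4 projection;     // projection is applied here, after the offset
uniform float normalLength;  // length of the debug lines

void main() {
    vec4 p = gl_in[0].gl_Position;                 // view-space position of the point
    gl_Position = projection * p;                  // line start: the vertex itself
    EmitVertex();
    gl_Position = projection * (p + vec4(normalize(vNormal[0]) * normalLength, 0.0));
    EmitVertex();                                  // line end: position + scaled normal
    EndPrimitive();
}
You would then draw the unmodified interleaved buffer with glDrawArrays(GL_POINTS, ...) and this program, without building any extra buffer.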
Normally, to draw surface normals you would either set up a separate buffer or use a geometry shader to do the work. Setting up a separate buffer for a mesh to draw just the normals is trivial and doesn't require a draw call for every normal; all of your surface normals can be drawn in a single draw call.
Since you'll be doing it for debugging purposes, there's no need to worry too much about performance; just stick with the quicker method that gets things on screen.
The way I'd personally do it depends on whether the mesh has vertex or face normals. For vertex normals, we could fill a buffer with a line for each vertex in the mesh, whose offset from the vertex itself represents the normal you want to debug, using the following pseudocode:
var normal_buffer = [];
//tweak to your liking
var normal_length = 10.0;
//this assumes your mesh has 2 arrays of the same length
//containing structs of vertices and normals
for(var i = 0; i < mesh.vertices.length; i++) {
//retrieving the normal associated with this vertex
var nx = mesh.normals[i].x;
var ny = mesh.normals[i].y;
var nz = mesh.normals[i].z;
//retrieving the vertex itself, it'll be the first point of our line
var v1x = mesh.vertices[i].x;
var v1y = mesh.vertices[i].y;
var v1z = mesh.vertices[i].z;
//second point of our line representing the normal direction
var v2x = v1x + nx * normal_length;
var v2y = v1y + ny * normal_length;
var v2z = v1z + nz * normal_length;
normal_buffer.push(v1x, v1y, v1z, v2x, v2y, v2z);
}
You can later proceed as usual: attach the buffer to a vertex buffer object and use whatever program you like to issue a single draw call that draws all of your mesh normals.
vertbuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertbuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normal_buffer), gl.STATIC_DRAW);
/* later on in your program */
gl.drawArrays(gl.LINES, 0, normal_buffer.length / 3);
A cool feature of normal debugging is that you can use the normal itself as an output color in a fragment shader, to quickly check whether it points in the expected direction.
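For example, a quick sketch of such a debug fragment shader (vNormal is assumed to be the interpolated normal handed over by the vertex shader):
precision mediump float;
varying vec3 vNormal;   // interpolated vertex normal from the vertex shader

void main() {
    // remap the normal from [-1, 1] to [0, 1] so every direction maps to a visible color
    gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}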
I'm using instancing to draw the same quad multiple times, for the floor in a game engine. Each floor has different texture co-ordinates depending on its size, and my problem is that all instances are using the texture co-ordinates of the first instance.
This is how I am buffering the data.
public static void UploadTextureCooridnates()
{
// Construct texture co-ordinate array
List<Vector2> textureCoords = new List<Vector2>();
foreach (Floor floor in Floor.collection)
{
float xCoordLeft = 0;
float yCoordBottom = 0;
float yCoordTop = floor.scale.X;
float xCoordRight = floor.scale.Z;
textureCoords.Add(new Vector2(xCoordLeft, yCoordBottom));
textureCoords.Add(new Vector2(xCoordRight, yCoordBottom));
textureCoords.Add(new Vector2(xCoordRight, yCoordTop));
textureCoords.Add(new Vector2(xCoordLeft, yCoordBottom));
textureCoords.Add(new Vector2(xCoordRight, yCoordTop));
textureCoords.Add(new Vector2(xCoordLeft, yCoordTop));
}
Vector2[] texCoords = textureCoords.ToArray();
// Buffer data
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOtexcoordsInstanced);
GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(texCoords.Length * Vector2.SizeInBytes), texCoords, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(1);
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, 0, 0);
GL.VertexAttribDivisor(1, 0);
}
If you really are using instancing to draw multiple quads, then clearly you must have:
A buffer object containing the positions of a single quad.
Some mechanism of getting per-instance data to offset those positions.
Your problem is your expectation. You're instancing with quads. That means your render call only uses 4-6 vertices. This will be just as true for your texture coordinates as your positions.
Your problem is that you're treating per-instance data as though it were per-vertex data. The texture coordinates change with each instance; therefore, they are per-instance data. But you don't give them a per-instance divisor, so they're treated as per-vertex data, and only the texture coordinates for the first 6 vertices will be used.
Of course, you can't really make them per-instance data either. Each vertex+instance pair has a separate texture coordinate value. And instancing doesn't provide a way to do that directly. Instead, you would have to use gl_VertexID and gl_InstanceID to fetch data directly from a buffer object; either an SSBO or a buffer texture.
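A rough sketch of that kind of vertex pulling (this assumes GL 4.3+ for SSBOs in the vertex shader, a buffer filled with 6 texture coordinates per instance, and made-up names):
#version 430 core
layout(std430, binding = 0) buffer TexCoordBuffer {
    vec2 texCoords[];                 // 6 entries per instance, filled on the CPU
};
layout(location = 0) in vec4 position;
out vec2 vTexCoord;

void main() {
    // pick the coordinate belonging to this vertex of this instance
    vTexCoord = texCoords[gl_InstanceID * 6 + gl_VertexID];
    gl_Position = position;           // plus whatever per-instance transform you already apply
}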
And therefore, your problem really is this:
I'm using instancing to draw the same quad multiple times
Stop doing that. It's not worthwhile. Just put the positions in the CPU buffer data. Use proper buffer streaming techniques, and your performance will be reasonable.
I just started learning C++ and OpenGL. I'm trying to calculate vertex normals in OpenGL.
I know there is a function glNormal3f. However, I am not allowed to use that function; rather, I have to calculate the vertex normals in code from an obj file. So what I am trying to do is first calculate the surface normals and then calculate the vertex normals.
I declared operators such as +, -, and *, and other functions like InnerProduct and CrossProduct.
void calcul_color(){
VECTOR kd;
VECTOR ks;
kd=VECTOR(0.8, 0.8, 0.8);
ks=VECTOR(1.0, 0.0, 0.0);
double inner = kd.InnerProduct(ks);
int i, j;
for(i=0;i<cube.vertex.size();i++)
{
VECTOR n = cube.vertex_normal[i];
VECTOR l = VECTOR(100,100,0) - cube.vertex[i];
VECTOR v = VECTOR(0,0,1) - cube.vertex[i];
float xl = n.InnerProduct(l)/n.Magnitude();
VECTOR x = (n * (1.0/ n.Magnitude())) * xl;
VECTOR r = x - (l-x);
VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)),10);
cube.vertex_color[i] = color;
}
for(i=0;i<cube.face.size();i++)
{
FACE cur_face = cube.face[i];
glColor3f(cube.vertex_color[cur_face.id1].x,cube.vertex_color[cur_face.id1].y,cube.vertex_color[cur_face.id1].z);
glVertex3f(cube.vertex[cur_face.id1].x,cube.vertex[cur_face.id1].y,cube.vertex[cur_face.id1].z);
glColor3f(cube.vertex_color[cur_face.id2].x,cube.vertex_color[cur_face.id2].y,cube.vertex_color[cur_face.id2].z);
glVertex3f(cube.vertex[cur_face.id2].x,cube.vertex[cur_face.id2].y,cube.vertex[cur_face.id2].z);
glColor3f(cube.vertex_color[cur_face.id3].x,cube.vertex_color[cur_face.id3].y,cube.vertex_color[cur_face.id3].z);
glVertex3f(cube.vertex[cur_face.id3].x,cube.vertex[cur_face.id3].y,cube.vertex[cur_face.id3].z);
}
The way to compute vertex normals is this:
Initialize every vertex normal to (0,0,0)
For every face compute face normal fn, normalize it
For every vertex of the face add fn to the vertex normal
After that loop normalize every vertex normal
This loop is a nice O(n). One thing to pay attention to here: if vertices are shared, the normals will smooth out, as you'd want on a sphere. If vertices are not shared, you get hard faces, as you'd want on a cube. Any such duplication of vertices should be done beforehand.
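A sketch of those steps in terms of the question's VECTOR and FACE types (assuming the +, -, * operators and a CrossProduct method exist as described, and that cube.vertex_normal already has one entry per vertex):
// 1) initialize every vertex normal to (0,0,0)
for (int i = 0; i < cube.vertex.size(); i++)
    cube.vertex_normal[i] = VECTOR(0, 0, 0);
// 2) for every face: compute its normal, normalize it, add it to each of its vertices
for (int i = 0; i < cube.face.size(); i++)
{
    FACE f = cube.face[i];
    VECTOR e1 = cube.vertex[f.id2] - cube.vertex[f.id1];
    VECTOR e2 = cube.vertex[f.id3] - cube.vertex[f.id1];
    VECTOR fn = e1.CrossProduct(e2);
    fn = fn * (1.0 / fn.Magnitude());      // normalize the face normal
    cube.vertex_normal[f.id1] = cube.vertex_normal[f.id1] + fn;
    cube.vertex_normal[f.id2] = cube.vertex_normal[f.id2] + fn;
    cube.vertex_normal[f.id3] = cube.vertex_normal[f.id3] + fn;
}
// 3) normalize every vertex normal
for (int i = 0; i < cube.vertex_normal.size(); i++)
    cube.vertex_normal[i] = cube.vertex_normal[i] * (1.0 / cube.vertex_normal[i].Magnitude());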
If your question was on how to go from normal to color, that is dependent on your lighting equation! The easiest one is to do: color = dot(normal,globallightdir)*globallightcolor
Another way would be color = texturecubemap(normal). But there are endless possibilities!
I am drawing a 3D spherical grid in opengl using a VBO of vertex points and GL_LINES. What I want to achieve is to have one line - the zenith - to be brighter than the rest.
I obviously store x, y, z coords and normals, and figured I might be able to use the texture coordinates to "tag" locations where, at creation, the y coordinate is 0. Like so:
if (round(y) == 0.0f){
_varray[nr].tex[0] = -1.0; // setting the s variable (s,t texcoord),
// passed in with vbo
}
Now in the fragment shader I receive this value and do:
if(vs_st[0] == -1){
diffuse = gridColor*2.f;
}else{
diffuse = gridColor;
}
And the result looks kind of awful:
Print Screen
I realize that this is probably due to the fragment shader having to interpolate between two points. Can you think of a good way to identify the zenith line and make it brighter? I'd rather avoid using geometry shaders...
The solution was this:
if (round(y) == 0.0f) _varray[nr].tex[0] = -2; // set arb. number.
And then do not set that variable anywhere else! Then in the fragment shader:
if( floor(vs_st[0]) == -2){
diffuse = gridColor*2.f;
}else{
diffuse = gridColor;
}
Don't know how neat that is, but it works.
Here's my situation: I need to draw a rectangle on the screen for my game's Gui. I don't really care how big this rectangle is or might be, I want to be able to handle any situation. How I'm doing it right now is I store a single VAO that contains only a very basic quad, then I re-draw this quad using uniforms to modify the size and position of it on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture co-ordinates*. I pass in the rect for the gui element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture co-ordinates such that 1 pixel on the screen is equal to 1 texel in the gui texture. Texture co-ordinates going beyond [0, 1] is fine at this stage, and is in fact exactly what I'm looking for.
I plan to implement texture atlasses for my gui textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader - as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] co-ordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects, i.e. text, actually do need those ZW components. I might change this in the future.
TL/DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way. And you have already formulated a rule which allows you to do so: a window-space pixel should map to a texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure what exactly you want to achieve. There is something missing: you actually did not specify where the origin of the texture should lie. Should it always be at the bottom left corner of the quad? Or should it just be globally at the bottom left corner of the viewport? I'll assume the latter here, but it should be easy to adjust this for the first case.
What we now need is a mapping between the [-1,1]^2 NDC in x and y to s and t. Let's first map it to [0,1]^2. If we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
You of course already calculate ((gl_Position.xy * 0.5) + 0.5) * screenSize implicitly, so this could be changed to:
vec2 texcoords = vertex.xy / texSize;
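Putting the pieces together, the vertex shader could look roughly like this (a sketch; texSize and vTexCoord are names I'm introducing, the rest matches the question's setup):
#version 330 core
layout(location = 0) in vec4 vertex;   // xy = unit quad corner, zw = base texcoords
uniform vec4 guiRect;                  // xy = position in pixels, zw = size in pixels
uniform vec2 screenSize;               // viewport size in pixels
uniform vec2 texSize;                  // texture size in texels
out vec2 vTexCoord;

void main() {
    vec2 pixelPos = vertex.xy * guiRect.zw + guiRect.xy;             // pixel-space position
    gl_Position = vec4((pixelPos / screenSize) * 2.0 - 1.0, 0.0, 1.0);
    vTexCoord = pixelPos / texSize;                                   // 1 screen pixel == 1 texel
}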
I am drawing some orthographic representations in bulk, around one million of them, in my model drawing.
(I will draw these things with some flag.)
A camera is also implemented; rotation etc. are possible.
All these orthographic representations change their positions when I rotate the model, so that they appear to stay in the same place on the model.
Now I would like to draw these orthographic things through the graphics card, because when they are huge in number, model rotation becomes very, very slow.
I feel like there would not be any advantage, because every time I would have to recompute the positions based on the projection matrix.
1) Am I correct?
2) And also please let me know, how to improve performance when i am drawing bulk orthographic representations using opengl.
3) I also feel instancing will not work here, because each orthographic rep is drawn between 2 or 3 positions. Am I correct?
Usually, OpenGL does the projection calculation for you while drawing: the positions handed over to GL are world or model coordinates, and GL uses the model-view-projection matrix while rendering to calculate the screen coordinates for the current projection etc. If the camera moves, the only thing that changes is the MVP matrix handed to GL.
This shouldn't really depend on the kind of projection you are using. So I don't think you need to / should update the positions in your array.
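In shader terms that simply means the buffer contents stay untouched and only a uniform matrix changes per frame, e.g. (a sketch with placeholder names):
uniform mat4 mvp;       // model-view-projection, re-uploaded whenever the camera moves
in vec3 position;       // static vertex data, never rewritten on the CPU

void main() {
    gl_Position = mvp * vec4(position, 1.0);
}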
Here is my approach:
You create a vertex buffer that contains each vertex position 6 times, together with 6 texture coordinates (which you need anyway if you want to draw your representation with textures), from which you make a quad in the vertex shader. In the shader you emulate the OpenGL projection and then offset the vertex by its texture coordinate to create a quad of constant size.
When constructing the model:
vector<vec3>* positionList = new vector<vec3>();
vector<vec2>* texCoordList = new vector<vec2>();
for (vector<vec3>::iterator it = originalPositions->begin(); it != originalPositions->end(); ++it) {
for (int i = 0; i < 6; i++) //each quad consists of 2 triangles, therefore 6 vertices
positionList->push_back(vec3(*it));
texCoordList->push_back(vec2(0, 0)); //corresponding texture coordinates
texCoordList->push_back(vec2(1, 0));
texCoordList->push_back(vec2(0, 1));
texCoordList->push_back(vec2(1, 0));
texCoordList->push_back(vec2(1, 1));
texCoordList->push_back(vec2(0, 1));
}
vertexCount = positionList->size();
glGenBuffers(1, &VBO_Positions); //Generate the buffer for the vertex positions
glBindBuffer(GL_ARRAY_BUFFER, VBO_Positions);
glBufferData(GL_ARRAY_BUFFER, positionList->size() * sizeof(vec3), positionList->data(), GL_STATIC_DRAW);
glGenBuffers(1, &VBO_texCoord); //Generate the buffer for texture coordinates, which we are also going to use as offset values
glBindBuffer(GL_ARRAY_BUFFER, VBO_texCoord);
glBufferData(GL_ARRAY_BUFFER, texCoordList->size() * sizeof(vec2), texCoordList->data(), GL_STATIC_DRAW);
Vertex Shader:
void main() {
fs_texCoord = vs_texCoord;
vec4 transformed = (transform * vec4(vs_position, 1));
transformed.xyz /= transformed.w; //This is how the openGL pipeline does projection
vec2 offset = (vs_texCoord * 2 - 1) * offsetScale;
//Map the texture coordinates from [0, 1] to [-offsetScale, offsetScale]
offset.x *= invAspectRatio;
gl_Position = vec4(transformed.xy + offset, 0, 1);
//We pass the new position to the pipeline with w = 1 so openGL keeps the position we calculated
}
Note that you need to adapt to the aspect ratio yourself, since there is no actual orthographic projection matrix involved that would do this for you; that is what this line is for:
offset.x *= invAspectRatio;