I am currently working on a small 2D game with LWJGL. I use line strips to draw randomly generated grass. Each of the blades consists of 6 points and has a random green color.
The problem I face:
To fill the ground with a gapless layer of grass, I need approx. 400 line strips...
In addition, every line strip has to be shifted when the player moves around, and should (optionally) wave in the wind. Therefore I need to update the data of 400 VBOs every frame.
Is there any way to accelerate these operations?
My Code:
//upload the vertex data
void uploadGrass(int offset){
    FloatBuffer grassBuffer = BufferUtils.createFloatBuffer(5 * 5);
    for(int i = 0; i < Storage.grasslist.size(); i++){
        if(grassvbo[i] == 0){
            grassvbo[i] = GL15.glGenBuffers();
        }
        grassBuffer.clear();
        for(int j = 1; j < 6; j++){
            grassBuffer.put(Utils.GL_x((int) Storage.grasslist.get(i)[j][0] - offset));
            grassBuffer.put(Utils.GL_y((int) Storage.grasslist.get(i)[j][1]));
            //index 0 of every blade contains RGB values for the color.
            grassBuffer.put((float) Storage.grasslist.get(i)[0][0]);
            grassBuffer.put((float) Storage.grasslist.get(i)[0][1]);
            grassBuffer.put((float) Storage.grasslist.get(i)[0][2]);
        }
        grassBuffer.flip();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, grassvbo[i]);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, grassBuffer, GL15.GL_STATIC_DRAW);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    }
}
//draw line strips
void drawGrass(){
    GL20.glUseProgram(pId2); //color shader
    for(int i = 0; i < grassvbo.length; i++){ //go through all the vbos
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, grassvbo[i]);
        GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 5 * 4, 0);     //2 position floats
        GL20.glVertexAttribPointer(1, 3, GL11.GL_FLOAT, false, 5 * 4, 2 * 4); //3 color floats (r,g,b)
        GL20.glEnableVertexAttribArray(0);
        GL20.glEnableVertexAttribArray(1);
        GL11.glDrawArrays(GL11.GL_LINE_STRIP, 0, 5);
    }
    GL20.glUseProgram(0);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
}
So far it looks like this ;) (it still needs antialiasing and alpha blending):
http://i.imgur.com/x3qXlQ5.png
Chapter 12 of the OpenGL SuperBible has a section on "Drawing a lot of Geometry Efficiently", in which they have a demo of millions of blades of grass being animated. This is done by using a single vertex description, the glDrawElementsInstanced method, and a shader to modify each 'instance' stamped out in whatever manner you like (e.g. perturb vertices, scale & rotate, change texture lookup, etc.)
This is very similar to your 'go through all the vbos' loop, except that you would only upload vertices for a single blade of grass, and OpenGL will take care of passing a unique gl_InstanceID to your shader. You can then encode the changes each frame either procedurally, or in a 'texture' that you upload as often as needed. The book has sample code (and it may be available from the web site as well).
Edit: Confirmed that the sample code is in the downloads from the site - look at the src\grass\grass.cpp to see a sample using textures to control grass length, orientation, color, and bend.
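To make the instancing idea concrete, here is a hedged CPU-side sketch (the names `bladeVerts`, `packPerBladeData` and `windSway`, and all the constants, are made up for illustration, not taken from the book or the question). You upload the five vertices of one generic blade once, pack one (xOffset, r, g, b) record per blade into a second buffer bound with attribute divisor 1, and let the vertex shader add the offset plus a procedural sway; the GL calls are shown as comments since they need a live context:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One generic blade: 5 vertices (x, y), root on the ground at x = 0.
// Hypothetical data, uploaded once to a single VBO.
static const float bladeVerts[5 * 2] = {
    -0.02f, 0.0f,  -0.01f, 0.3f,  0.0f, 0.6f,  0.01f, 0.3f,  0.02f, 0.0f
};

// Pack one (xOffset, r, g, b) record per blade into a contiguous array.
// This buffer would be uploaded once and bound with
// glVertexAttribDivisor(instanceAttr, 1), so it advances once per instance.
std::vector<float> packPerBladeData(const std::vector<float>& xOffsets,
                                    const std::vector<float>& rgb /* 3 floats per blade */) {
    std::vector<float> data;
    data.reserve(xOffsets.size() * 4);
    for (std::size_t i = 0; i < xOffsets.size(); ++i) {
        data.push_back(xOffsets[i]);
        data.push_back(rgb[3 * i + 0]);
        data.push_back(rgb[3 * i + 1]);
        data.push_back(rgb[3 * i + 2]);
    }
    return data;
}

// The wind sway the vertex shader could compute procedurally from
// gl_InstanceID and a time uniform: a small horizontal offset that grows
// with vertex height, so the blade roots stay planted.
float windSway(float time, int instanceId, float vertexHeight) {
    const float amplitude = 0.05f;
    const float phase = 0.7f * instanceId; // de-synchronize the blades
    return amplitude * vertexHeight * std::sin(time + phase);
}

// Per frame (needs a context), one call replaces the 400-VBO loop:
// glDrawArraysInstanced(GL_LINE_STRIP, 0, 5, numBlades);
```

Note the camera shift and the wind no longer require touching any buffer at all: both become uniforms, and the per-frame CPU work drops to two uniform updates and one draw call.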
Related
Before diving into details: I have added the opengl tag because JoGL is a Java OpenGL binding, and the question should be answerable by experts of either.
Basically, what I am trying to do is render a grid over a texture in JoGL using GLSL. My idea so far is to render the texture first and then draw the grid on top. So what I am doing is:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
// skipped the code where I setup the model-view matrix and where I do fill the buffers
gl2.glVertexAttribPointer(positionAttrId, 3, GL2.GL_FLOAT, false, 0, vertexBuffer.rewind());
gl2.glVertexAttribPointer(textureAttrId, 2, GL2.GL_FLOAT, false, 0, textureBuffer.rewind());
gl2.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
And after that I draw the grid, using:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl2.glDrawElements(GL2.GL_LINE_STRIP, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
Without enabling the depth test, the result looks pretty awesome.
But when I start updating the coordinates of the vertices (namely the axis which corresponds to height), the rendering goes wrong: some things which should be in front appear behind, which makes sense with the depth test disabled. So I have enabled the depth test:
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);
And the result of the rendering is the following:
You can clearly see that the lines of the grid are blurred, some of them are displayed thinner than others, etc. What I have tried in order to fix the problem is some line smoothing:
gl2.glHint(GL2.GL_LINE_SMOOTH_HINT, GL2.GL_NICEST);
gl2.glEnable(GL2.GL_LINE_SMOOTH);
The result is better, but I am still not satisfied.
QUESTION: So basically, the question is how to improve the solution further, so that I get solid lines which are displayed nicely when I start updating the vertex coordinates.
If required, I can provide the code of the shaders (which is really simple: the vertex shader only calculates the position from the projection matrix, the model-view matrix and the vertex coords, and the fragment shader reads the color from a texture sampler).
I'm trying to render textured meshes with OpenGL. Currently, my main class holds a state consisting of :
std::vector<vec3d> vertices
std::vector<face> mesh
std::vector<vec3d> colors
vec3d is an implementation of 3D vectors - nothing special - and face is a class holding 3 integers pointing to the indices of vertices in vertices.
So far, I rendered my meshes without a texture with the following code (working fine):
glShadeModel(params.smooth ? GL_SMOOTH : GL_FLAT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
/* This is my attempt to add the texture:
 *
 * if (_colors.size() != 0) {
 *     cout << "Hello" << endl;
 *     glClientActiveTexture(GL_TEXTURE0);
 *     glEnableClientState(GL_TEXTURE_COORD_ARRAY);
 *     glTexCoordPointer(3, GL_FLOAT, 0, &_colors[0].x);
 * }
 */
glNormalPointer(GL_FLOAT, 0, &normals[0].x);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0].x);
glDrawElements(GL_TRIANGLES, mesh.size() * 3, GL_UNSIGNED_INT, &mesh[0].v1);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
My texture is stored in colors as a list of triples of floats between 0 and 1. However, the colors are not applied. I have read many examples of texture mapping and tried to do the same, with no luck. Any idea what I'm doing wrong?
As seen from your comments, you are using the wrong OpenGL feature to achieve what you want. Texturing means sticking a 2D image onto a mesh by using e.g. uv-coordinates.
What you are doing is specifying a color at each vertex, so you will need to enable GL_COLOR_ARRAY instead of GL_TEXTURE_COORD_ARRAY and use the respective functions for that (glColorPointer instead of glTexCoordPointer).
One additional hint: if you are learning OpenGL from scratch, you should consider using only modern OpenGL (3.2+).
To answer the last comment:
Well, I read those colors from a texture file, that's what I meant. Is there a way to use such an array to display my mesh in color ?
Yes and no: you will most probably not get the result you expect when doing this. In general there will be multiple pixels in the texture that map to a single face. With vertex colors you can only apply one color value per vertex, which gets interpolated across the triangle. Have a look at how to apply textures to a mesh; you should be able to find a lot of resources on the internet.
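To see why one color per vertex cannot reproduce a texture, it helps to look at what the rasterizer actually does with vertex colors. This small sketch (plain math, no GL; the function name is made up) computes the barycentric blend applied at each fragment:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Barycentric interpolation of three per-vertex RGB colors, as the
// rasterizer does for a fragment at weights (w0, w1, w2), w0+w1+w2 = 1.
std::array<float, 3> interpolate(const std::array<float, 3>& c0,
                                 const std::array<float, 3>& c1,
                                 const std::array<float, 3>& c2,
                                 float w0, float w1, float w2) {
    std::array<float, 3> out{};
    for (int i = 0; i < 3; ++i)
        out[i] = w0 * c0[i] + w1 * c1[i] + w2 * c2[i];
    return out;
}
```

At the triangle's center (weights 1/3 each) a red, a green and a blue vertex blend to medium gray; every fragment is a smooth mix of just three values, so there is no way to place a sharp texel edge inside the triangle.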
I'm migrating our graphics engine from the old fixed-function pipeline to the programmable pipeline. Our simplest model is just a collection of points in space, where each point can be represented by different shapes, one of these being a cube.
I'm basing my code off the cube example from the OpenGL superbible.
In this example the cubes are placed at somewhat random places, whereas I will have a fixed list of points in space. I'm wondering if there is a way to pass that list to my shader, so that a cube is drawn at each point, versus looping through the list and calling glDrawElements each time. Is that even worth the trouble (performance-wise)?
PS we are limited to OpenGL 3.3 functionality.
Is that even worth the trouble (performance wise)?
Probably yes, but try to profile nonetheless.
What you are looking for is instanced rendering, take a look at glDrawElementsInstanced and glVertexAttribDivisor.
What you want to do is store the 8 vertices of a generic cube (centered on the origin) in one buffer, and also store the coordinates of the center of each cube in another vertex attribute buffer.
Then you can use glDrawElementsInstanced to draw N cubes taking the vertices from the first buffer, and translating them in the shader using the specific position stored in the second buffer.
Something like this:
glVertexAttribPointer( vertexPositionIndex, /** Blah .. */ );
glVertexAttribPointer( cubePositionIndex, /** Blah .. */ );
glVertexAttribDivisor( cubePositionIndex, 1 ); // Advance one vertex attribute per instance
glDrawElementsInstanced( GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices, NumberOfCubes );
In your vertex shader you need two attributes:
vec3 vertexPosition; // The coordinates of a vertex of the generic cube
vec3 cubePosition; // The coordinates of the center the specific cube being rendered
// ....
vec3 vertex = vertexPosition + cubePosition;
Obviously you can also have a buffer to store the size of each cube, or another one for the orientation; the idea remains the same.
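As a concrete, hedged sketch of the data described above (the function names and corner numbering are my own, not from the question): the generic cube is 8 vertices plus 36 GL_UNSIGNED_BYTE indices uploaded once, and the per-instance buffer just holds one center per cube:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The 8 corners of a unit cube centered on the origin, uploaded once.
// Corner index = (x>0)<<2 | (y>0)<<1 | (z>0).
std::vector<float> cubeVertices() {
    std::vector<float> v;
    for (int x = -1; x <= 1; x += 2)
        for (int y = -1; y <= 1; y += 2)
            for (int z = -1; z <= 1; z += 2) {
                v.push_back(0.5f * x);
                v.push_back(0.5f * y);
                v.push_back(0.5f * z);
            }
    return v; // 8 vertices * 3 floats
}

// 12 triangles (36 indices) over those 8 corners, two per face.
std::vector<std::uint8_t> cubeIndices() {
    return {
        0,1,3, 0,3,2,  4,6,7, 4,7,5,  // -x / +x faces
        0,4,5, 0,5,1,  2,3,7, 2,7,6,  // -y / +y faces
        0,2,6, 0,6,4,  1,5,7, 1,7,3   // -z / +z faces
    };
}

// The second buffer is simply 3 floats (the cube center) per instance,
// bound with glVertexAttribDivisor(cubePositionIndex, 1), and then:
// glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, 0, numCubes);
```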
In your example, every cube uses its own model matrix per frame.
If you want to keep that, you need multiple glDrawElements calls.
If some cubes don't move (i.e. don't need a per-frame model matrix), you should combine these cubes into one VBO.
I have a beamforming program running on CUDA, and I have to display the output of the beam in OpenGL. I have to draw a rectangle which is composed of an array of 24x12 small squares, and color each of these squares based on an output from the CUDA program doing the beamforming. I have been able to draw the rectangle using a VBO, to which I pass an array containing the vertices of the squares and the color of each vertex using the structure below. In summary, the problem I am facing is that I am not able to assign the colors to each of the squares correctly. Some excerpts from the code:
struct attributes {
    GLfloat coords[2]; // coordinates of the vertex
    GLfloat color[3];  // color of the vertex
};

glGenBuffers(1, &vbo_romanis); // vbo_romanis is the VBO for drawing the frame
glBindBuffer(GL_ARRAY_BUFFER, vbo_romanis);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STREAM_DRAW);
glShadeModel(GL_SMOOTH);

glUseProgram(program);
glEnableVertexAttribArray(attribute_coord);
glEnableVertexAttribArray(attribute_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_romanis);
glVertexAttribPointer(
    attribute_coord,           // attribute
    2,                         // number of elements per vertex, here (x,y)
    GL_FLOAT,                  // the type of each element
    GL_FALSE,                  // take our values as-is
    sizeof(struct attributes), // stride: next position appears every 5 floats
    0                          // offset of first element
);
glVertexAttribPointer(
    attribute_color,           // attribute
    3,                         // number of elements per vertex, here (r,g,b)
    GL_FLOAT,                  // the type of each element
    GL_FALSE,                  // take our values as-is
    sizeof(struct attributes), // stride
    (GLvoid*) offsetof(struct attributes, color) // offset
);

/* Push each element in buffer_vertices to the vertex shader */
glDrawArrays(GL_QUADS, 0, 4 * NUM_SQRS);
So I am facing 2 issues when I draw the array:
1. The colors do not appear as I want them to. From what I have read about OpenGL, the color of a vertex, once assigned, cannot be changed. But since all the squares share vertices, the colors are probably getting messed up. If I give the same color to all the vertices it works fine, but not when I want to draw the squares in different colors. So if someone can point out how I can assign a different color to each of the squares, that would be really helpful.
2. How do I update the colors of the vertices for each frame? Do I need to redraw the entire frame, or is there a way to update just the colors of the vertices?
I am completely new to OpenGL programming and any help would be much appreciated.
It is not clear what your vertex data actually is, but this:
But since all the squares share vertices among them, the colors are
probably messed up.
implies to me that you are trying to use the following data for two adjacent squares (A-F being the vertices):
A---B---C
| | |
| | |
D---E---F
However, in OpenGL, a vertex is the set of all its attributes, not just the position. What you get here is that the colors are smoothly interpolated between the squares. So technically, you need to duplicate the vertices B and E into B1/B2 and E1/E2, with B1/E1 having the color of the left square and B2/E2 that of the right square, but the same coordinates.
However, for your problem there might be a shortcut, in the form of flat shading, by declaring your vertex shader outputs as flat. Vertex shader outputs (varyings) are by default interpolated across the whole primitive; defining them as flat prevents that interpolation. Instead, the value from just one vertex is used for the whole primitive. OpenGL uses the concept of the provoking vertex to define which vertex of a primitive provides the values for such flat outputs.
The command glProvokingVertex() can be used to specify the general rule for which vertex is selected; you can choose between the first and the last. If you construct your vertex data cleverly, you can get a single shared vertex to be the provoking vertex of both triangles of one square, so you can define the color of each "grid cell" with just the color of one corner vertex of the cell, and you have no need to duplicate vertices.
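To make that "clever construction" concrete, here is a hedged sketch (the index layout is my own, not from the question) for a grid of w x h cells over a (w+1) x (h+1) vertex lattice. Both triangles of each cell end on the same lattice vertex, so with the default provoking-vertex rule (last vertex) and a flat color output, the color stored at that one corner paints the whole cell:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Index buffer for w*h grid cells over a (w+1)*(h+1) vertex lattice.
std::vector<std::uint32_t> gridIndices(int w, int h) {
    auto at = [w](int x, int y) { return static_cast<std::uint32_t>(y * (w + 1) + x); };
    std::vector<std::uint32_t> idx;
    idx.reserve(static_cast<std::size_t>(w) * h * 6);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // Both triangles end on corner (x+1, y+1). With a flat color
            // output and the default last-vertex convention, the color stored
            // at that single corner fills the whole cell. Each cell maps to a
            // distinct corner, so no vertex duplication is needed.
            idx.push_back(at(x, y));     idx.push_back(at(x + 1, y)); idx.push_back(at(x + 1, y + 1));
            idx.push_back(at(x, y + 1)); idx.push_back(at(x, y));     idx.push_back(at(x + 1, y + 1));
        }
    return idx;
}
```

With this layout the color VBO holds one meaningful RGB per lattice vertex, and the color of cell (x, y) lives at lattice vertex (x+1, y+1); the colors stored on the left and bottom border vertices are simply never used as provoking values.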
As a side note: you have the command glShadeModel(GL_SMOOTH); in your code. This is deprecated, and also totally useless when you use the programmable pipeline, as your comments imply. Conceptually, however, it is the exact opposite of the flat shading approach I'm suggesting here.
How do I update the colors of the vertices for each frame, Do i need
to redraw the entire frame or is there a way to just update the colors
of the vertices only.
OpenGL is not a scene graph library. It does not remember which objects you have drawn in the past and does not allow changing their attributes. OpenGL is a rendering API, so if you want something different to appear on the screen, you have to tell it to draw again. If you plan on updating the colors without changing the positions of the squares themselves, you might be even better off using two non-interleaved VBOs to split color and position data. That way, you can keep the positions static in one buffer and stream only the color updates through the other.
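A hedged CPU-side sketch of that two-buffer split (the 24x12 grid of quads comes from the question; the function names and the [0,1]^2 layout are made up). Positions are built once for a static VBO; the colors live in their own array that you rewrite each frame and re-upload, with the GL call shown as a comment:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// 4 corner positions (x, y) per cell of a cols x rows grid of quads in
// [0,1]^2, built once and uploaded to a GL_STATIC_DRAW position VBO.
std::vector<float> buildQuadPositions(int cols, int rows) {
    std::vector<float> pos;
    pos.reserve(static_cast<std::size_t>(cols) * rows * 4 * 2);
    const float dx = 1.0f / cols, dy = 1.0f / rows;
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            const float x0 = x * dx, y0 = y * dy;
            const float quad[8] = { x0, y0,  x0 + dx, y0,  x0 + dx, y0 + dy,  x0, y0 + dy };
            pos.insert(pos.end(), quad, quad + 8);
        }
    return pos;
}

// Per-frame color update: one RGB per cell (e.g. from the CUDA output),
// replicated to the cell's 4 vertices. The result would go to a separate
// GL_STREAM_DRAW color VBO via
// glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, colors.data());
std::vector<float> expandCellColors(const std::vector<float>& cellRgb) { // 3 floats per cell
    std::vector<float> vertexRgb(cellRgb.size() / 3 * 12);               // 12 floats per cell
    for (std::size_t c = 0; c < cellRgb.size() / 3; ++c)
        for (int v = 0; v < 4; ++v)
            for (int k = 0; k < 3; ++k)
                vertexRgb[c * 12 + v * 3 + k] = cellRgb[c * 3 + k];
    return vertexRgb;
}
```

Because every cell owns its own 4 vertices here, no vertices are shared between neighboring squares, which also sidesteps the color-bleeding problem from the question.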
I wrote a simple program using OpenGL 4.3 which displays a triangle, quadrilateral, and pentagon. I defined my vertices in the following array:
vec2 vertices[NumPoints] = {
vec2(-1, -0.75), vec2(-0.75, -0.25), vec2(-0.5, -0.75), //Triangle
vec2(0, -0.25), vec2(0.5, -0.25), vec2(0.5, -0.75), vec2(0, -0.75), //Quad
vec2(0.25, 0.25), vec2(0.5, 0.5), vec2(0.75, 0.25), vec2(0.65, 0), vec2(0.35, 0) // pentagon
};
For the sake of brevity I'll omit most of the boilerplate code. In my display function I have the following code:
glDrawArrays(GL_TRIANGLES, 0, 3); // draw the points
glDrawArrays(GL_TRIANGLE_FAN, 3, 4); //quad
glDrawArrays(GL_TRIANGLE_FAN, 7, 5); //polygon
Everything works fine and there aren't any problems. However, it seems rather tedious, and almost impossible, to create complex scenes if you need to know exactly how many vertices you need upfront. Am I missing something here? Also, if I needed to create a circle, how would I do that using just GL_TRIANGLES?
In a real-world application you will have scene management, with multiple objects in the scene and multiple sub-objects for each object. Objects are responsible for generating their vertex data and the corresponding draw calls, and for scheduling them appropriately.
For example, you can have a cube object that has a single property - edge length. From that single property you generate the full set of vertices required to render a cube.
You can also choose to convert the cube primitive into another compatible object, for example a box primitive with 3 properties - height, width and depth - or even an arbitrary polygonal mesh that is made of faces, which are made of edges, which are made of vertices.
It is a good idea to sort the different scene objects in such an order to allow minimizing the number of draw calls, which is the typical bottleneck 3D graphics struggles with. Combined with instancing and adaptive LOD you can get significant performance improvements.
For the circle: in primitive mode, the most efficient way to draw it is using a triangle fan. But if you convert the circle primitive to a polygonal mesh, you can render regular triangles; the number of vertices needed to draw the circle will just grow. With a triangle fan you need 3 vertices for the first triangle and then only 1 additional vertex for every additional segment; with regular triangles you need the full 3 vertices for every segment of the circle.
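The two tessellations above can be sketched as plain vertex generation (hedged: function names and layout are my own). The fan needs segments + 2 vertices, while the plain GL_TRIANGLES list needs 3 per segment:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// GL_TRIANGLE_FAN layout: center, then one rim vertex per segment,
// plus a repeat of the first rim vertex to close the circle.
std::vector<float> circleFan(float cx, float cy, float r, int segments) {
    std::vector<float> v = { cx, cy };
    for (int i = 0; i <= segments; ++i) {
        const float a = 2.0f * 3.14159265358979f * i / segments;
        v.push_back(cx + r * std::cos(a));
        v.push_back(cy + r * std::sin(a));
    }
    return v; // (segments + 2) vertices of 2 floats
}

// GL_TRIANGLES layout: a full (center, rim_i, rim_{i+1}) triple per segment.
std::vector<float> circleTriangles(float cx, float cy, float r, int segments) {
    std::vector<float> v;
    for (int i = 0; i < segments; ++i) {
        const float a0 = 2.0f * 3.14159265358979f * i / segments;
        const float a1 = 2.0f * 3.14159265358979f * (i + 1) / segments;
        v.push_back(cx); v.push_back(cy);
        v.push_back(cx + r * std::cos(a0)); v.push_back(cy + r * std::sin(a0));
        v.push_back(cx + r * std::cos(a1)); v.push_back(cy + r * std::sin(a1));
    }
    return v; // 3 * segments vertices of 2 floats
}
```

For 64 segments that is 66 vertices for the fan versus 192 for the triangle list, which illustrates the cost difference stated above.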
Am I missing something here?
Yes. You can allocate memory dynamically and read data from files; that's how any real-world program deals with this kind of thing. You'll have some scene-management structure which allows you to load a scene and objects from files. The file itself will contain metadata, such as the number of faces, vertices, etc., which can be used to prepare the data structures at runtime.