Drawing the grid over the texture - opengl

Before diving into details: I have added the opengl tag because JoGL is a Java OpenGL binding, and the question should be accessible to experts in either.
Basically, what I am trying to do is render a grid over a texture in JoGL using GLSL. My idea was to render the texture first and then draw the grid on top. So what I am doing is:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
// skipped the code where I set up the model-view matrix and fill the buffers
gl2.glVertexAttribPointer(positionAttrId, 3, GL2.GL_FLOAT, false, 0, vertexBuffer.rewind());
gl2.glVertexAttribPointer(textureAttrId, 2, GL2.GL_FLOAT, false, 0, textureBuffer.rewind());
gl2.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
And after that I draw the grid, using:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl2.glDrawElements(GL2.GL_LINE_STRIP, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
Without enabling the depth test, the result looks pretty good.
But when I start updating the vertex coordinates (namely the axis that corresponds to height), the rendering goes wrong: some things that should be in front appear behind, which makes sense with the depth test disabled. So I enabled the depth test:
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);
And the result of the rendering is the following:
You can clearly see that the grid lines are blurred, and some of them are displayed thinner than others. To fix the problem I tried some line smoothing:
gl2.glHint(GL2.GL_LINE_SMOOTH_HINT, GL2.GL_NICEST);
gl2.glEnable(GL2.GL_LINE_SMOOTH);
The result is better, but I am still not satisfied.
QUESTION: So basically, how can I improve the solution further, so that the lines stay solid and are displayed nicely when I start updating the vertex coordinates?
If required I can provide the shader code (it is really simple: the vertex shader only calculates the position from the projection matrix, the model-view matrix and the vertex coordinates, and the fragment shader reads the color from a texture sampler).
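For reference, shaders matching that description would be roughly of the following shape. This is only a minimal sketch: the uniform and attribute names (uProjection, uModelView, aPosition, aTexCoord, uTexture) are assumptions, not the actual identifiers used in the program.
// Minimal GLSL sources matching the description above, held as Java string constants (JoGL style).
// All names here are assumptions.
static final String VERTEX_SHADER =
      "uniform mat4 uProjection;\n"
    + "uniform mat4 uModelView;\n"
    + "attribute vec3 aPosition;\n"
    + "attribute vec2 aTexCoord;\n"
    + "varying vec2 vTexCoord;\n"
    + "void main() {\n"
    + "    vTexCoord = aTexCoord;\n"
    + "    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);\n"
    + "}\n";
static final String FRAGMENT_SHADER =
      "uniform sampler2D uTexture;\n"
    + "varying vec2 vTexCoord;\n"
    + "void main() {\n"
    + "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
    + "}\n";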

Related

OpenGL – Get distance to farthest face from camera instead of closest face

I'm using OpenGL (with Python bindings) to render depth maps of models saved as .obj files. The output is a numpy array with the depth to the object at each pixel.
The seemingly relevant parts of the code I'm using look like this:
glDepthFunc(GL_LESS) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
I can use this to successfully render depth images of the object.
However, instead of obtaining the depth of the face closest to the camera along each ray, I want to obtain the depth of the face furthest from the camera. I also want faces to be rendered regardless of which direction they are facing; i.e. I don't want to backface-cull.
I tried to achieve this by modifying the above code like so:
glDisable(GL_CULL_FACE) # Turn off backface culling
glDepthFunc(GL_GREATER) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
However, when I do this I don't get any depth rendered at all. I suspect that this is because I am using GL_DEPTH_COMPONENT to read out the depth, but I am not sure what to change to fix this.
How can I get the depth of the faces furthest from the camera?
Switching the depth test to GL_GREATER will in principle do the trick; you just overlooked a tiny detail: you need to initialize the depth buffer differently. By default it is initialized to 1.0 when clearing the depth buffer, so that GL_LESS comparisons can update it to values smaller than 1.0.
Now you want it to work the other way around, so you must initialize it to 0.0. To do so, just add a glClearDepth(0.0) before the glClear(... | GL_DEPTH_BUFFER_BIT).
Furthermore, you yourself mention that you don't want backface culling for this. But instead of disabling culling you can also switch it around using glCullFace: you probably want GL_FRONT here, as GL_BACK is the default. Disabling it will work too, of course.
Make sure you use glDepthMask(GL_TRUE); to enable depth writing to the buffer.
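Putting the whole answer together, the setup before drawing would look roughly like this (shown in Java/JOGL syntax to match the rest of this page; the calls map one-to-one to the PyOpenGL names used in the question, and gl is assumed to be the current GL2):
// Keep the farthest fragment instead of the nearest one.
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);                  // make sure depth writes are enabled
gl.glDepthFunc(GL2.GL_GREATER);        // pass fragments that are farther away
gl.glClearDepth(0.0);                  // clear to 0.0 so GL_GREATER comparisons can ever pass
gl.glDisable(GL2.GL_CULL_FACE);        // or: glEnable(GL_CULL_FACE) + glCullFace(GL_FRONT)
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
// ... draw the model, then read back GL_DEPTH_COMPONENT exactly as before ...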

Get pixel behind the current pixel

I'm coding a program in C++ with glut, rendering a 3D model in a window.
I'm using glReadPixels to get the image of the scene displayed in the window.
And I would like to know how I can get, for a specific pixel (x, y), not directly its color but the color of the next object behind.
If I render a blue triangle, and a red triangle in front of it, glReadPixels gives me red colors from the red triangle.
I would like to know how I can get the colors from the blue triangle, the one I would get from glReadPixels if the red triangle wasn't here.
The default framebuffer only retains the topmost color. To get what you're suggesting would require a specific rendering pipeline.
For instance you could:
1. Create an offscreen framebuffer of the same dimensions as your target viewport
2. Render a depth-only pass to the offscreen framebuffer, storing the depth values in an attached texture
3. Re-render the scene with a special shader that only draws pixels whose post-transformation Z value is GREATER than the value recorded in the depth texture from step 2 (a sketch of such a shader follows below)
The final result of the last render should be the original scene with the top layer stripped off.
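A sketch of the fragment shader for step 3, written here as legacy GLSL kept in a Java string for consistency with the other snippets on this page. The uniform names (uFirstDepth, uViewport) and the small epsilon are assumptions:
// Second pass: discard anything at (or in front of) the depth recorded by the
// first, depth-only pass, so only geometry behind the top layer survives.
static final String PEEL_FRAGMENT_SHADER =
      "uniform sampler2D uFirstDepth;   // depth texture from the first pass\n"
    + "uniform vec2 uViewport;          // viewport size in pixels\n"
    + "void main() {\n"
    + "    float first = texture2D(uFirstDepth, gl_FragCoord.xy / uViewport).r;\n"
    + "    if (gl_FragCoord.z <= first + 0.0001) discard;   // strip the top layer\n"
    + "    gl_FragColor = vec4(1.0);    // normal shading of the remaining surface goes here\n"
    + "}\n";
With the standard GL_LESS depth test still enabled, the nearest of the surviving fragments wins, which is exactly the second layer.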
Edit:
It would require only a small amount of new code to create the offscreen framebuffer and render a depth-only version of the scene to it, and you could use your existing rendering pipeline in combination with that to execute steps 1 and 2.
However, I can't think of any way you could then re-render the scene to get the information you want in step 3 without a shader, because it needs both the standard depth test and a test against the previously recorded depth texture. That doesn't mean there isn't one, just that I'm not well versed enough in GL tricks to think of it.
I can think of other ways of trying to accomplish the same task for specific points on the screen by fiddling with the rendering system, but they're all far more convoluted than just writing a shader.

OpenGL: Managing and editing large amounts of line strips

I am currently working on a small 2D game with LWJGL. I use line strips to draw randomly generated grass. Each of the blades consists of 6 points and has a random green color.
The problem I face:
To fill the ground with a gapless layer of grass, I need approx. 400 line strips...
In addition, every line strip has to be shifted when the player moves around, and should (optionally) wave in the wind. Therefore I need to change the data of 400 VBOs every frame.
Is there any way to accelerate these operations?
My Code:
//upload the vertex data
void uploadGrass(int offset) {
    FloatBuffer grassBuffer = BufferUtils.createFloatBuffer(5 * 5);
    for (int i = 0; i < Storage.grasslist.size(); i++) {
        if (grassvbo[i] == 0) {
            grassvbo[i] = GL15.glGenBuffers();
        }
        grassBuffer.clear();
        for (int j = 1; j < 6; j++) {
            grassBuffer.put(Utils.GL_x((int) Storage.grasslist.get(i)[j][0] - offset));
            grassBuffer.put(Utils.GL_y((int) Storage.grasslist.get(i)[j][1]));
            // index 0 of every blade contains the RGB values for the color
            grassBuffer.put((float) Storage.grasslist.get(i)[0][0]);
            grassBuffer.put((float) Storage.grasslist.get(i)[0][1]);
            grassBuffer.put((float) Storage.grasslist.get(i)[0][2]);
        }
        grassBuffer.flip();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, grassvbo[i]);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, grassBuffer, GL15.GL_STATIC_DRAW);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    }
}
//draw line strips
void drawGrass() {
    GL20.glUseProgram(pId2); // color shader
    for (int i = 0; i < grassvbo.length; i++) { // go through all the vbos
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, grassvbo[i]);
        GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 5 * 4, 0);
        GL20.glVertexAttribPointer(1, 2, GL11.GL_FLOAT, false, 5 * 4, 2 * 4);
        GL20.glEnableVertexAttribArray(0);
        GL20.glEnableVertexAttribArray(1);
        GL11.glDrawArrays(GL11.GL_LINE_STRIP, 0, 5);
    }
    GL20.glUseProgram(0);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    GL20.glDisableVertexAttribArray(0);
}
Until now it looks like this ;) (it still needs antialiasing and alpha blending):
http://i.imgur.com/x3qXlQ5.png
Chapter 12 of the OpenGL SuperBible has a section on "Drawing a lot of Geometry Efficiently", in which they have a demo of millions of blades of grass being animated. This is done by using a single vertex description, the glDrawElementsInstanced method, and a shader to modify each 'instance' stamped out in whatever manner you like (e.g. perturb vertices, scale & rotate, change texture lookup, etc.)
This is very similar to your 'go through all the vbos' loop, except that you would only upload vertices for a single blade of grass, and OpenGL will take care of passing a unique gl_InstanceID to your shader. You can then encode the changes each frame either procedurally, or in a 'texture' that you upload as often as needed. The book has sample code (and it may be available from the web site as well).
Edit: Confirmed that the sample code is in the downloads from the site - look at the src\grass\grass.cpp to see a sample using textures to control grass length, orientation, color, and bend.
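In LWJGL terms the instanced version could look roughly like the following. This is only a sketch: bladeVbo, instanceVbo, instanceData, bladeCount and the shader that consumes the per-instance attributes are assumed, and it uses glDrawArraysInstanced since the original code draws arrays rather than elements.
// One VBO holds the 5 vertices of a single blade; a second VBO holds per-blade data
// (x/y offset plus RGB color, 5 floats per blade) and is the only buffer touched each frame.
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bladeVbo);            // static blade shape
GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 0, 0);
GL20.glEnableVertexAttribArray(0);

GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, instanceVbo);         // per-blade offset + color
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, instanceData, GL15.GL_STREAM_DRAW); // re-uploaded per frame
GL20.glVertexAttribPointer(1, 2, GL11.GL_FLOAT, false, 5 * 4, 0);      // offset
GL20.glVertexAttribPointer(2, 3, GL11.GL_FLOAT, false, 5 * 4, 2 * 4);  // color
GL20.glEnableVertexAttribArray(1);
GL20.glEnableVertexAttribArray(2);
GL33.glVertexAttribDivisor(1, 1);   // advance the offset once per blade, not per vertex
GL33.glVertexAttribDivisor(2, 1);   // advance the color once per blade

// one draw call for all blades instead of one draw call per VBO
GL31.glDrawArraysInstanced(GL11.GL_LINE_STRIP, 0, 5, bladeCount);
The vertex shader can then combine the blade-local vertex position with the per-instance offset, or use gl_InstanceID to look the per-blade data up in a texture, as the book's sample does.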

WebGL render buffers receiving skewed pixel values from shader

I'm rendering a scene of polygons to multiple render targets so that I can perform postprocessing effects. However, the values I write in the fragment shader don't seem to be accurately reflected in the resulting pixels.
Right now the pipeline looks like this:
Render basic polygons (using simple shader, below) to an intermediate buffer
Render the buffer as a screen-sized quad to the screen.
I'm using WebGL Inspector (http://benvanik.github.com/WebGL-Inspector/) to view the intermediate buffers (created using gl.createFrameBuffer()).
I have a very simple fragment shader when drawing the polygons, something like this:
gl_FragColor = vec4(1, 0, 0, 0.5);
And this before my draw call:
gl.disable(gl.BLEND);
I would expect this to create a pixel in the buffer with a value of exactly (255,0,0,128), but in fact, it creates a pixel with the value of (255,0,0,64) -- half as much alpha as expected.
The program is fairly large and tangly, so I'll update the post with specific details if the answer isn't immediately apparent.
Thanks!
Do you have premultipliedAlpha set to true on the context? Fiddling with that is the first thing that came to mind re: weird alpha values.

Displaying multiple cubes in OpenGL with shaders

I'm new to OpenGL and shaders. I have a project that involves using shaders to display cubes.
So basically I'm supposed to display eight cubes using a perspective projection at (+-10,+-10,+-10) from the origin each in a different color. In other words, there would be a cube centered at (10, 10, 10), another centered at (10, 10, -10) and so on. There are 8 combinations in (+-10, +-10, +-10). And then I'm supposed to provide a key command 'c' that changes the color of all the cubes each time the key is pressed.
So far I was able to make one cube at the origin. I know I should use this cube and translate it to create the eight cubes, but I'm not sure how I would do that. Does anyone know how I would go about this?
That question is, as mentioned, too broad. But you said that you managed to draw one cube, so I can assume that you can set up the camera and your window. That leaves us with how to render 8 cubes. There are many ways to do this, but I'll mention two very different ones.
Classic:
You make a function that takes two parameters - the center of the cube and its size. With these two you can build up the cube the same way you're doing it now, but instead of fixed values you use those variables. For example, the front face would be:
glBegin(GL_TRIANGLE_STRIP);
    glVertex3f(center.x - size/2, center.y - size/2, center.z + size/2);
    glVertex3f(center.x + size/2, center.y - size/2, center.z + size/2);
    glVertex3f(center.x - size/2, center.y + size/2, center.z + size/2);
    glVertex3f(center.x + size/2, center.y + size/2, center.z + size/2);
glEnd();
This is just to showcase how to build it from variables; you can do it the same way you're doing it now.
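For the eight positions from the question, that function can then simply be called in a loop over the sign combinations. A sketch, assuming a drawCube(cx, cy, cz, size) helper built exactly as described above and the same immediate-mode style (static GL imports) as the snippet above:
// Draw the eight cubes centered at (+-10, +-10, +-10); drawCube is the helper described above.
float size = 2.0f;                         // whatever edge length the assignment requires
for (int sx = -1; sx <= 1; sx += 2)
    for (int sy = -1; sy <= 1; sy += 2)
        for (int sz = -1; sz <= 1; sz += 2) {
            glColor3f(1.0f, 0.0f, 0.0f);   // set this cube's color here (components in 0..1)
            drawCube(10.0f * sx, 10.0f * sy, 10.0f * sz, size);
        }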
Now, you mentioned you want to use shaders. The shader topic is very broad, just like OpenGL itself, but I can give you the idea. In OpenGL 3.2, special shaders called geometry shaders were added. Their purpose is to work with geometry as a whole - in contrast to vertex shaders, which work with just one vertex at a time, or fragment shaders, which work with just one fragment at a time, geometry shaders work with one piece of geometry at a time. If you're rendering triangles, you get all the info about the single triangle that is currently passing through the pipeline. This wouldn't be anything special, except that these shaders don't just modify the incoming geometry - they can create new geometry! That's what I do in one of my shader programs, where I render points, but when they pass through the geometry shader those points are converted into circles. Similarly, you can render just points, but inside the geometry shader you can emit whole cubes. The point position would act as the center for each cube, and you would pass the cube size in a uniform. If the size of the cubes may vary, you also need a vertex shader that passes the size from an attribute into a variable that can be read in the geometry shader.
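The application side of that approach stays very small: upload the eight centers as GL_POINTS, set the size uniform, and issue one draw call. A sketch in the same bare style as the snippets above (assuming static GL imports; cubeProgram, centersVbo and the "uSize" uniform are assumed names, and the geometry shader itself is not shown):
// Draw one point per cube; the geometry shader expands each point into a cube of size uSize.
glUseProgram(cubeProgram);
glUniform1f(glGetUniformLocation(cubeProgram, "uSize"), 2.0f);  // cube edge length
glBindBuffer(GL_ARRAY_BUFFER, centersVbo);      // 8 x vec3: the (+-10, +-10, +-10) centers
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_POINTS, 0, 8);                  // eight points -> eight cubes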
As for the color problem: if you don't implement a fragment shader, the only thing you need to do is call glColor3f before rendering the cubes. It takes three parameters - the red, green and blue values. Note that these values don't range from 0 to 255 but from 0 to 1. You can get confused into thinking your cubes aren't rendered if you use a white background: set the color to (200, 10, 10) and you might expect red cubes, but you won't see anything, because the components are clamped to 1 and you are in fact rendering white cubes. To avoid such errors, I recommend setting the background to something like grey with glClearColor.