Using glDepthFunc(GL_GREATER) does not draw anything - C++

I'm running the following code to draw rectangles with the GL_GREATER depth function,
but instead of getting the color of the rectangle farthest from the camera, I get a white screen.
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);
glOrtho(-1, 1, -1, 1, -1, 1);
glColor3f(1, 0, 0);
glPushMatrix();
glTranslatef(0, 0, -0.5);
glRectf(-1, -1, 1, 1);
glColor3f(0, 1, 0);
glTranslatef(0, 0, 1);
glRectf(-1, -1, 1, 1);
glColor3f(0, 0, 1);
glPopMatrix();
glRectf(-1, -1, 1, 1);
So I'm expecting to see the color of the farthest rectangle on the screen, which is green (which is also weird, because zNear is -1 and using GL_LESS draws green instead of red - I don't understand why either).
However, using GL_GREATER I get a white screen instead of green.
What am I missing here?

By default the values in the depth buffer are in the range [0, 1]. See glDepthRange.
When the depth buffer is cleared, the depth values are set to 1 by default. See glClearDepth.
If every value in the depth buffer is 1 and the depth test is GL_GREATER, then the depth test will fail in every case, because no depth can be greater than 1.
The value used to clear the depth buffer can be changed with glClearDepth.
Set the clear value for the depth buffer to 0, instead of 1, before the buffer is cleared:
glClearColor(1, 1, 1, 1);
glClearDepth(0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

If you are flipping the comparison, you also have to flip the depth buffer clear value with glClearDepth. Set it to 0.
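As a rule of thumb, the clear value should sit at the end of the depth range that always loses the comparison. A minimal sketch of the two pairings:
glDepthFunc(GL_LESS);    glClearDepth(1.0); // default pair: smaller depth wins, clear to the far end
glDepthFunc(GL_GREATER); glClearDepth(0.0); // flipped pair: larger depth wins, clear to the near end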

Related

How does data get laid out in an RGBA WebGL texture?

I'm trying to pass a list of integers to the fragment shader and need random access to any of its positions. I can't use uniforms since index must be a constant, so I'm using the usual technique of passing the data through a texture.
Things seem to work, but calling texture2D to obtain specific pixels is not behaving as I'd expect.
My data looks like this:
this.textureData = new Uint8Array([
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
I then copy that over through a texture:
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texImage2D(
    this.gl.TEXTURE_2D,
    0,                      // level
    this.gl.RGBA,           // internal format
    4,                      // width: using 4 since it's 4 bytes per pixel
    2,                      // height
    0,                      // border
    this.gl.RGBA,           // format
    this.gl.UNSIGNED_BYTE,  // type
    this.textureData);
So this texture is 4x2 pixels.
When I call texture2D(uTexture, vec2(0,0)); I get a vec4 pixel with the correct values (0,0,0,10).
However, when I call it with locations such as (1,0), (2,0), (3,0), (4,0), etc., they all return a pixel with (0,0,0,30).
Same for the second row. If I call with (0,1) I get the first pixel of the second row.
Any number greater than 1 for the X coordinate returns the last pixel of the second row.
I'd expect the coordinates to be:
this.textureData = new Uint8Array([
// (0,0) (1,0) (2,0) (3,0)
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
// (0,1) (1,1) (2,1) (3,1)
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
What am I missing? How can I correctly access the pixels?
Thanks!
Texture coordinates are not integers; they are in the range [0.0, 1.0]. They map the vertices of the geometry to points in the texture image: they specify which part of the texture is placed on a specific part of the geometry, and together with the texture parameters (see gl.texParameteri) they specify how the geometry is wrapped by the texture. In general, the lower left corner of the texture is addressed by the texture coordinate (0.0, 0.0) and the upper right corner by (1.0, 1.0).
Texture coordinates work the same in OpenGL, OpenGL Es and WebGL. See How do opengl texture coordinates work?
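So to read a specific texel with NEAREST filtering, you have to convert the integer indices to normalized coordinates yourself, addressing the center of the texel - in GLSL that would be texture2D(uTexture, (vec2(x, y) + 0.5) / vec2(4.0, 2.0)) for the 4x2 texture in the question. A minimal sketch of the conversion (C-style; texelU and texelV are hypothetical helpers):
// +0.5 addresses the center of the texel rather than its edge
float texelU(int x, int w) { return (x + 0.5f) / (float)w; }
float texelV(int y, int h) { return (y + 0.5f) / (float)h; }
// e.g. texel (2, 0) of the 4x2 texture maps to (0.625, 0.25)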

OpenGL Color & Cube is not working properly

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<GL/glut.h>
double cameraAngle;
void grid_and_axes() {
// draw the three major AXES
glBegin(GL_LINES);
//X axis
glColor3f(0, 1, 0); //100% Green
glVertex3f(-150, 0, 0);
glVertex3f(150, 0, 0);
//Y axis
glColor3f(0, 0, 1); //100% Blue
glVertex3f(0, -150, 0); // intentionally extended to -150 to 150, no big deal
glVertex3f(0, 150, 0);
//Z axis
glColor3f(1, 1, 1); //100% White
glVertex3f(0, 0, -150);
glVertex3f(0, 0, 150);
glEnd();
//some gridlines along the field
int i;
glColor3f(0.5, 0.5, 0.5); //grey
glBegin(GL_LINES);
for (i = -10; i <= 10; i++) {
if (i == 0)
continue; //SKIP the MAIN axes
//lines parallel to Y-axis
glVertex3f(i * 10, -100, 0);
glVertex3f(i * 10, 100, 0);
//lines parallel to X-axis
glVertex3f(-100, i * 10, 0);
glVertex3f(100, i * 10, 0);
}
glEnd();
}
void display() {
//codes for Models, Camera
//clear the display
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0, 0, 0, 0); //color black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //clear buffers to preset values
/***************************
/ set-up camera (view) here
****************************/
//load the correct matrix -- MODEL-VIEW matrix
glMatrixMode(GL_MODELVIEW); //specify which matrix is the current matrix
//initialize the matrix
glLoadIdentity(); //replace the current matrix with the identity matrix [Diagonals have 1, others have 0]
//now give three info
//1. where is the camera (viewer)?
//2. where is the camera looking?
//3. Which direction is the camera's UP direction?
//gluLookAt(0,-150,20, 0,0,0, 0,0,1);
gluLookAt(150 * sin(cameraAngle), -150 * cos(cameraAngle), 50, 0, 0, 0, 0, 0, 1);
/*************************
/ Grid and axes Lines
**************************/
grid_and_axes();
/****************************
/ Add your objects from here
****************************/
/*glColor3f(1, 0, 0);
glutSolidCone(20, 20, 20, 20);
glColor3f(0, 0, 1);
GLUquadricObj *cyl = gluNewQuadric();
gluCylinder(cyl, 10, 10, 50, 20, 20);
glTranslatef(0, 0, 50);
glColor3f(1, 0, 0);
glutSolidCone(10, 20, 20, 20);
*/
glColor3f(1, 0, 0);
glutSolidCube(1);
I am not getting any cube here.
However, if I use any transformation such as scaling or rotation, then I get the desired cube, like:
glColor3f(1, 0, 0);
glScalef(50,5,60);
glutSolidCube(1);
What is the problem?
Another problem I am facing is that color doesn't work if I don't use a transformation like the ones mentioned above. If I write:
glColor3f(1, 0, 0);
glutSolidCone(20, 20, 20, 20);
With the above code the color doesn't work; I get the default-colored cone.
However, if I change these two lines to the following three lines, then the color works perfectly:
glColor3f(1,0,0);
glTranslatef(0, 0, 50);
glutSolidCone(10,20,20,20);
then the color works. What is the problem? Please help.
//ADD this line in the end --- if you use double buffer (i.e. GL_DOUBLE)
glutSwapBuffers();
}
void animate() {
//codes for any changes in Models, Camera
cameraAngle += 0.001; // camera will rotate at 0.001 radians per frame.
//codes for any changes in Models
//MISSING SOMETHING? -- YES: add the following
glutPostRedisplay(); //this will call the display AGAIN
}
void init() {
//codes for initialization
cameraAngle = 0; //angle in radian
//clear the screen
glClearColor(0, 0, 0, 0);
/************************
/ set-up projection here
************************/
//load the PROJECTION matrix
glMatrixMode(GL_PROJECTION);
//initialize the matrix
glLoadIdentity();
/*
gluPerspective() — set up a perspective projection matrix
fovy - Specifies the field of view angle, in degrees, in the y direction.
aspect ratio - Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
zNear - Specifies the distance from the viewer to the near clipping plane (always positive).
zFar - Specifies the distance from the viewer to the far clipping plane (always positive).
*/
gluPerspective(70, 1, 0.1, 10000.0);
}
int main(int argc, char **argv) {
glutInit(&argc, argv); //initialize the GLUT library
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
/*
glutInitDisplayMode - inits display mode
GLUT_DOUBLE - allows for display on the double buffer window
GLUT_RGBA - shows color (Red, green, blue) and an alpha
GLUT_DEPTH - allows for depth buffer
*/
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGB);
glutCreateWindow("Some Title");
init(); //codes for initialization
glEnable(GL_DEPTH_TEST); //enable Depth Testing
glutDisplayFunc(display); //display callback function
glutIdleFunc(animate); //what you want to do in the idle time (when no drawing is occurring)
glutMainLoop(); //The main loop of OpenGL
return 0;
}
I am not getting any cube here.
You do get a cube. It is just that tiny speck where the axes intersect. What else would you expect to see when you draw something 1 unit big, ~160 units away, with a 70 degree field of view?
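A quick back-of-the-envelope sketch of why:
// camera distance: sqrt(150^2 + 50^2) ~= 158 units
// visible height at that distance: 2 * 158 * tan(70deg / 2) ~= 221 units
// on the 500-pixel-high window, a 1-unit cube covers roughly 500 / 221 ~= 2 pixels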
Another problem I am facing that color doesn't work if i don't use transformation property like above mentioned.
[...] I get the default colored cone.
I've no idea what you even mean by that. The "default color" would be the initial value of GL's builtin color attribute - which is (1, 1, 1, 1) - white. With the code you have set up, you will get the color which you set before. So the only guess I can make here is that you confused yourself by not properly taking GL's state machine into account.
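For reference, the state-machine rule is simply this (sketch):
glColor3f(1, 0, 0);            // the current color is now red
glutSolidCone(20, 20, 20, 20); // drawn red
glutSolidCube(1);              // also drawn red - the current color sticks until the next glColor* call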
But besides all that, you should not use that code at all - it is using the fixed-function pipeline and immediate-mode drawing, features which have been deprecated for a decade now and are not supported at all by modern core profiles of OpenGL. Trying to learn that stuff in 2017 is a waste of time. And by the way:
glutMainLoop(); //The main loop of OpenGL
Nope. Just NO! OpenGL does not have a "main loop". GLUT is not OpenGL. Honestly, this is all just horrible.

How to texture the different planes in JOGL

I've created 6 planes as a room in JOGL, and now I want to texture each of them with a different image. How can I do this for each plane? Also, is there any recommended texture image resource that I can use to decorate the room?
Thank you.
public void render(GL2 gl) {
gl.glClear(GL2.GL_COLOR_BUFFER_BIT|GL2.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
camera.view(glu); // Orientate the camera
doLight(gl); // Place the light
doLight2(gl);
if (axes.getSwitchedOn())
axes.display(gl, glut);
if (objectsOn) { // Render the objects
gl.glPushMatrix();
//Making the room.
double planeParam = animationScene.getParam(animationScene.PLANE_PARAM);
//red x, blue z, green y.
gl.glTranslated(planeParam,0,0);
//Base
plane.renderDisplayList(gl);
//Back wall
gl.glTranslated(0,25,-25);
gl.glRotated(90, 1, 0, 0);
plane.renderDisplayList(gl);
//Right wall
gl.glTranslated(25,25,0);
gl.glRotated(90, 0, 0, 1);
plane.renderDisplayList(gl);
//Front wall
gl.glTranslated(25,25,0);
gl.glRotated(90, 0, 0, 1);
plane.renderDisplayList(gl);
//Roof
gl.glTranslated(0,25,-25);
gl.glRotated(90, 1, 0, 0);
plane.renderDisplayList(gl);
//Left wall
gl.glTranslated(25,25,0);
gl.glRotated(90, 0, 0, 1);
plane.renderDisplayList(gl);
gl.glPopMatrix();
}
}
You can use TextureIO (recommended) or AWTTextureIO to create a texture from an image file.
You can use a single image containing all your textures and manage them as a single texture or you can use one image per texture.
You have to bind a texture before using it (see Texture.bind() or glBindTexture).
You have to assign some texture coordinates to your vertices. Use glTexCoord if you use the immediate mode.
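A minimal sketch of those steps, written as C-style GL calls (JOGL exposes the same names on the gl object, e.g. gl.glBindTexture; wallTexture is a hypothetical texture id taken from a TextureIO-loaded Texture):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, wallTexture); // bind a different texture before each plane
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-25, -25, 0);
glTexCoord2f(1, 0); glVertex3f( 25, -25, 0);
glTexCoord2f(1, 1); glVertex3f( 25,  25, 0);
glTexCoord2f(0, 1); glVertex3f(-25,  25, 0);
glEnd();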

Strange blending when rendering self-transparent texture to the framebuffer

I'm trying to render self-transparent textures to the framebuffer, but I'm not getting what I expected: everything previously rendered into the framebuffer gets ignored, and the texture blends with the colour I cleared my main canvas with.
This is what I would like to get, here achieved without using framebuffers:
package test;
import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;
public class GdxTest extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
@Override
public void create () {
batch = new SpriteBatch();
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(1, 1, 1, 1);
pixmap.fillRectangle(0, 0, 1, 1);
// Generating a simple 1x1 white texture
img = new Texture(pixmap);
pixmap.dispose();
}
@Override
public void render () {
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.setColor(1, 1, 1, 1);
batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setColor(0, 0, 0, 0.5f);
batch.draw(img, 0, 0, 300, 300);
batch.end();
}
}
And it works exactly as it should:
http://i.stack.imgur.com/wpFNg.png
And this is what I get when using the framebuffer (I can't understand why the second rendered texture doesn't blend with the first one, as it does without the framebuffer):
package test;
import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;
import com.badlogic.gdx.graphics.glutils.*;
public class GdxTest extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
FrameBuffer buffer;
TextureRegion region;
@Override
public void create () {
batch = new SpriteBatch();
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(1, 1, 1, 1);
pixmap.fillRectangle(0, 0, 1, 1);
// Generating a simple 1x1 white texture
img = new Texture(pixmap);
pixmap.dispose();
// Generating a framebuffer
buffer = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
region = new TextureRegion(buffer.getColorBufferTexture());
region.flip(false, true);
}
@Override
public void render () {
// Filling with red shows the problem
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
buffer.begin();
batch.begin();
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setColor(1, 1, 1, 1);
batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setColor(0, 0, 0, 0.5f);
batch.draw(img, 0, 0, 300, 300);
batch.end();
buffer.end();
batch.begin();
batch.setColor(1, 1, 1, 1);
batch.draw(region, 0, 0);
batch.end();
}
}
And an unpredictable result:
http://i.stack.imgur.com/UdDKD.png
So how could I make the framebuffer version work the way the first version does? ;)
The easy answer is to disable blending when rendering to the screen.
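In raw GL terms that is just the following (sketch; with libGDX you can call batch.disableBlending() and batch.enableBlending() around the final draw instead):
glDisable(GL_BLEND); // copy the FBO texture to the screen as-is
// ... draw the fullscreen quad with the FBO color texture ...
glEnable(GL_BLEND);  // restore blending for normal rendering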
But I think it is good to understand why this is happening if you want to use FBO. So let's walk through what's actually going on.
First make sure to understand what the color of the texture and the color of the batch (the vertex color) does: they are multiplied. So when setting the batch color to 0,0,0,0.5 and the texture pixel (texel) is 1,1,1,1 this will result in a value of 1*0,1*0,1*0,1*0.5 = 0,0,0,0.5.
Next make sure to understand how blending works. Blending is enabled by default and will use the SRC_ALPHA and ONE_MINUS_SRC_ALPHA functions. This means that the source value (the texel) is multiplied by the source alpha and that the destination value (the screen pixel) is multiplied by one minus the source alpha. So if your screen pixel has the value 1,1,1,1 and your texel has the value 0,0,0,0.5, then the screen pixel will be set to: (0.5*0, 0.5*0, 0.5*0, 0.5*0.5) + ((1-0.5)*1, (1-0.5)*1, (1-0.5)*1, (1-0.5)*1), which is (0,0,0,0.25) + (0.5, 0.5, 0.5, 0.5) = (0.5, 0.5, 0.5, 0.75).
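Written out as code, that blend function is just this little helper (sketch; applied per channel, alpha included):
float blend(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}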
So let's see how that works for you in your first code:
You clear the screen with 1, 0, 0, 1, in other words: every pixel of the screen contains the value 1, 0, 0, 1.
Then you render a full rectangle with each texel value 1,1,1,1, every pixel of the screen now contains the value 1, 1, 1, 1.
Then you render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the screen now contains the value 0.5, 0.5, 0.5, 0.75.
Got a feeling about the issue already? Let's see what happens in your second code:
You clear the screen with 1, 0, 0, 1: every pixel of the screen contains the value 1, 0, 0, 1.
You bind the FBO and clear it with 1, 1, 1, 1: every pixel of the FBO contains the value 1, 1, 1, 1.
You render a full rectangle with each texel value 1,1,1,1 to the FBO: every pixel of the FBO now contains the value 1,1,1,1.
You render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the FBO now contains the value 0.5, 0.5, 0.5, 0.75.
Then you bind the screen again as the render target of which each pixel still contains the value 1, 0, 0, 1.
Finally you render the FBO texture as a full rectangle to the screen, causing these texels to be blended with the screen pixels. For the smaller rectangle this means blending 0.5, 0.5, 0.5, 0.75 multiplied by 0.75 with 1, 0, 0, 1 multiplied by 1-0.75=0.25, which results in 0.375, 0.375, 0.375, 0.5625 and 0.25, 0, 0, 0.25. So the final color is 0.625, 0.375, 0.375, 0.8125.
Make sure to understand this process, otherwise it can cause some frustratingly weird issues. If you find it hard to follow, you can take pen and paper and manually calculate the value for each step.
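Plugging the last step into the blend() helper sketched above confirms those numbers:
blend(0.5f,  1.0f, 0.75f); // == 0.625f  - red channel
blend(0.5f,  0.0f, 0.75f); // == 0.375f  - green and blue channels
blend(0.75f, 1.0f, 0.75f); // == 0.8125f - alpha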

glreadpixels stencil buffer always throws GL_INVALID_OPERATION

I'm trying to figure out stencils. Right now I am just drawing some boxes with stencil values, then reading the value. Every time I call glReadPixels with GL_STENCIL_INDEX, I get GL_INVALID_OPERATION. Here is the code in question:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
GLfloat tempStencilVal = 3;
glGetError();
glReadPixels(10, g_window1Height-10, 1, 1, GL_STENCIL_INDEX, GL_FLOAT, &tempStencilVal);
if (glGetError() == GL_INVALID_OPERATION) {std::cout << "GL Invalid Operation\n";}
else {std::cout << "X: " << 10 << " Y: " << 10 << " S: " << tempStencilVal << "\n";}
I've tried 5 different data formats, 3 different glPixelStore modes, and gone over the list of glReadPixels errors 7 times. (Yes, OGL 2.1.) If I change STENCIL_INDEX to DEPTH_COMPONENT it works fine. The only thing I can't confirm is whether I even have a stencil buffer. Is there some initialization I'm missing, or some glGet to check that?
Potentially relevant info: Win7 x64 SP1 | ASUS GTX650Ti | VS2012 Ultimate
Here is the code for the function to draw the boxes, in case that's causing it:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, g_window1Width, -g_window1Height, 0, 0.0, 50.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScaled(1.0, -1.0, -1.0);
glTranslated(0.0, 0.0, 0.5);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClearStencil(0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glStencilOp( GL_REPLACE, GL_REPLACE, GL_REPLACE );
glColor3ub(0, 100, 250);
glStencilFunc(GL_ALWAYS, 1, 1);
glBegin(GL_QUADS);
glVertex3d(0, 0, 0);
glVertex3d(0, 50, 0);
glVertex3d(50, 50, 0);
glVertex3d(50, 0, 0);
glEnd();
glStencilFunc(GL_ALWAYS, 1, 1);
glBegin(GL_QUADS);
glVertex3d(g_window1Width-50, 0, 0);
glVertex3d(g_window1Width, 0, 0);
glVertex3d(g_window1Width, 50, 0);
glVertex3d(g_window1Width-50, 50, 0);
glEnd();
This isn't the first time OGL has done the wrong thing for no apparent reason, but this breaks my plan for coding the interface.
To check if you do have a stencil buffer, you could try doing something with the values such as drawing another quad with glStencilFunc(GL_NOTEQUAL, 1, 1); with and without the stencil test enabled.
To find the actual format used, as you say with a glGet..., it looks like glGetFramebufferAttachmentParameter will give you the answer (with the default framebuffer bound).
The stencil buffer is 8 bits (I don't think it can be anything else) so maybe change the format to GL_UNSIGNED_BYTE.
It's also possible to mix depth and stencil buffers with GL_DEPTH_STENCIL, for which you might use GL_UNSIGNED_INT_24_8. I don't know what format the default framebuffer has, or whether this will work in your case. If you're using a library such as GLUT, SDL or GLFW, that library is responsible for setting this up, and that's where you should look to configure the default framebuffer.
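As a concrete check (sketch, assuming a 2.1/compatibility context), you can query how many stencil bits the default framebuffer actually has, and make sure the window was created with a stencil buffer in the first place - with GLUT that means passing GLUT_STENCIL to glutInitDisplayMode:
GLint stencilBits = 0;
glGetIntegerv(GL_STENCIL_BITS, &stencilBits); // deprecated in core profiles, fine in 2.1
printf("stencil bits: %d\n", stencilBits);    // 0 would explain the GL_INVALID_OPERATION
// when creating the window:
// glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_STENCIL);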