Why isn't the triangle rendered on top of textured quad? - opengl

With the same code as in my previous question, Rendering quad with tiling image?, I don't understand why the triangle is not being rendered on top of the textured quad.
Can someone point out what I am missing?

You have the depth test enabled, and the depth function defaults to GL_LESS (only fragments that are closer than what is already in the depth buffer get drawn).
If you want a background, disable depth writing (and the depth test) while drawing it in the first pass, then re-enable them for the geometry:
void GLViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // First pass: draw the textured background quad without touching the depth buffer
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    m_backgroundShader.bind();
    glBindVertexArray(m_backgroundVAO);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_textureID);
    glUniform1i(glGetUniformLocation(m_backgroundShader.programId(), "tex"), 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Second pass: restore depth writing/testing and draw the triangle on top
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    m_triangleShader.bind();
    glBindVertexArray(m_VAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    update();
}
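If you prefer to keep depth writes on during the background pass, an equivalent alternative is to clear the depth buffer again after the background has been drawn, so the triangle pass starts from a fresh depth buffer. A minimal sketch, reusing the same member names as above:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Background pass (depth state left as-is)
m_backgroundShader.bind();
glBindVertexArray(m_backgroundVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glUniform1i(glGetUniformLocation(m_backgroundShader.programId(), "tex"), 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Discard the background's depth values so the triangle always passes the depth test
glClear(GL_DEPTH_BUFFER_BIT);

m_triangleShader.bind();
glBindVertexArray(m_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);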

Related

OpenGL rendering white texture bug

I use OpenGL and SDL2 to render Spine animations. At a specific z-order these animations are displayed as white blocks; all textures turn white. I guess the error is in the OpenGL draw code.
glPushMatrix();
float texw = 0, texh = 0;
if (texture) {
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    if (SDL_GL_BindTexture(texture, &texw, &texh) != 0)
        printf("WTF\n");
}
glEnableClientState(GL_VERTEX_ARRAY);
glColor4f(color[0], color[1], color[2], color[3]);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, uvs);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// if (num_vertices > 0) {
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
glDisableClientState(GL_VERTEX_ARRAY);
// glDisableClientState(GL_COLOR_ARRAY);
if (texture) {
    SDL_GL_UnbindTexture(texture);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
glColor4f(1.0, 1.0, 1.0, 1.0);
glPopMatrix();
This is my code. Does anyone see something wrong with it? Why am I getting white textures?
Two-dimensional texturing has to be enabled with glEnable(GL_TEXTURE_2D) and can be disabled with glDisable(GL_TEXTURE_2D).
If texturing is enabled, then the texture which is currently bound when the geometry is drawn is wrapped onto that geometry.
If texturing is enabled, then by default the color of each texel is multiplied by the current color, because the default texture environment mode (GL_TEXTURE_ENV_MODE) is GL_MODULATE. See glTexEnv.
This means the texel colors of the texture are "mixed" with the last color you set via glColor4f.
glEnable(GL_TEXTURE_2D);
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
glDisable(GL_TEXTURE_2D);
Note that all of this only applies to the fixed-function pipeline, i.e. when you are not using a shader program.
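If the tint from the current color is not wanted either, one option (a sketch, not part of the original answer) is to reset the color to opaque white before the textured draw, or to switch the texture environment to GL_REPLACE so the texel color is used unmodified:
glEnable(GL_TEXTURE_2D);

// Option 1 (sketch): make GL_MODULATE a no-op by using an opaque white current color
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);

// Option 2 (sketch): ignore the current color entirely for this draw
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
glDisable(GL_TEXTURE_2D);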

Using glDepthFunc and transparency in OpenGL

This image is rendered using three passes.
In the first pass, I render the three axes.
In the second pass a transparent cylinder is rendered (glEnable(GL_BLEND)) with alpha = 0.5f.
Finally, the golden and grey spheres are rendered in the third pass (glEnable(GL_BLEND)).
The alpha value of the golden spheres is 1.0f and that of the grey sphere is 0.2f.
The problem:
As you can see,
the cylinder overlaps the spheres even though blending is enabled;
the axes overlap the cylinder and the spheres!
Here is my code:
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClearDepthf(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Here the data is prepared and sent to the shaders (first pass):
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
PS: a geometry shader is used to render lines from the given points.
Then we prepare and pass the cylinder data:
glEnable(GL_BLEND);
glCullFace(GL_FRONT);
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
glCullFace(GL_BACK);
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
glDisable( GL_BLEND);
PS: a geometry shader is also used to render the mesh of the cylinder from the given points.
Finally, I render the golden spheres and the grey sphere in one pass:
glEnable(GL_BLEND);
glDrawElements(GL_LINE_STRIP, goldenSphereNumber, GL_UNSIGNED_INT, (void*)0);
glDrawElements(GL_LINE_STRIP, sphereIndexCount, GL_UNSIGNED_INT, (void*)0);
glDisable( GL_BLEND);
PS: here a geometry shader is also used to render the mesh of the spheres from the given lines.
Do you see anything wrong? Could you help, please?
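For reference, a common approach for this kind of scene is to draw all opaque geometry first with depth writes enabled, and only afterwards draw the blended geometry, sorted back to front, with the depth test still enabled but depth writes disabled via glDepthMask(GL_FALSE). A rough sketch of that state handling (drawAxes/drawCylinder/drawSpheres are placeholders for the glDrawElements calls above, and the back-to-front order shown is only illustrative):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

// 1) Opaque geometry: depth writes on, no blending
drawAxes();

// 2) Blended geometry: depth test still on, depth writes off,
//    drawn after all opaque objects and ideally sorted back to front
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawCylinder();   // transparent cylinder (alpha 0.5)
drawSpheres();    // golden spheres (alpha 1.0) and grey sphere (alpha 0.2)
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);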

Can I use the texture mapping result as the new texture data?

I am working with OpenGL and GLFW to map a texture image onto a 2D polygon, which is a set of vertices generated from OpenCV.
My question is: can I use the result of the texture mapping as a new texture (already distorted by the first mapping) to map onto another polygon?
I think my explanation is bad, so please look at the example:
The left image is my texture, and the right is the texture after mapping it to the polygon (the texture is divided into 8 blocks for 8 sets of vertices). What I want to do is use the mapping result on the right side as the new texture.
Is it possible to do this with OpenGL or OpenCV?
Render your scene to an FBO with a texture attachment, then use that texture to render more geometry.
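A minimal sketch of creating such an FBO with a color texture attachment (fboWidth and fboHeight are placeholder dimensions, not from the original answer):
// Create a texture and attach it as the color buffer of an FBO.
GLuint colorTex, fbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO is not complete\n");

// Render the first mapping while this FBO is bound, then bind colorTex
// as the source texture when drawing the second polygon to the screen.
glBindFramebuffer(GL_FRAMEBUFFER, 0);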
You need 2 FBOs and 2 textures.
You render the scene into the first FBO (fbo1), then you send its texture (texture1) to the shader and render into the second FBO (fbo2). After that you send the second texture (texture2) to the shader and render into the main FBO (to display the scene) or back into the first FBO (to make another pass).
Example:
1) render the scene into FBO1
2) send texture 1 to the shader
3) render the modified texture 1 into FBO2
4) send texture 2 to the shader
5) render the modified texture 2 into FBO1
6) send texture 1 to the shader
etc., etc...
Here is a little piece of code from one of my projects to show what I am trying to explain (a blur):
//render the scene to the fbo1
glBindFramebuffer(GL_FRAMEBUFFER, f1);
glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programRTT];
//apply horizontal blur the result go in the fbo2
glBindFramebuffer(GL_FRAMEBUFFER, f2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programBH];
//apply vertical blur the result go in the fbo1
glBindFramebuffer(GL_FRAMEBUFFER, f1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t2);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programBV];
//return to the main fbo and display result on screen
[view bindDrawable];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_program];

vbo with 2D texture - real-time transparency issue

I am working on an n-body code which models the dynamics of a stellar disk. In the rendering there are two types of particles: "classic" particles (white in the image below) and "dark matter" particles (in blue).
Here is this image at the start of the simulation:
Everything seems to be OK with the transparency, but if I zoom during the run, I notice that some particles actually keep the same intermediate color, i.e. they stay purple.
Here is an example in this image (which is the stellar disk seen from the side):
My main problem is that I don't understand why the color doesn't change as a function of the other particles behind them. For example, I would like a white/blue particle to be partially shaded by the other blue/white particles, and in real time.
Here is my drawPoints() function where I use transparency:
void drawPoints()
{
    glEnable(GL_POINT_SPRITE);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE_NV);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    //glEnable( GL_DEPTH_TEST );

    glUseProgram(m_program);
    glUniform1f(glGetUniformLocation(m_program, "pointRadius"), m_particleRadius);
    glUniform1f(glGetUniformLocation(m_program, "pointScale"), m_pointScale);

    // "classic" (white) disk particles
    GLuint vbo_disk;
    glBindBuffer(GL_ARRAY_BUFFER, vbo_disk);
    glVertexPointer(4, GL_DOUBLE, 4*sizeof(double), pos);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glDrawArrays(GL_POINTS, 0, numBodies_disk);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableClientState(GL_VERTEX_ARRAY);

    // "dark matter" (blue, half-transparent) halo particles
    GLuint vbo_halo;
    glBindBuffer(GL_ARRAY_BUFFER, vbo_halo);
    glVertexPointer(4, GL_DOUBLE, 4*sizeof(double), &pos[numBodies_disk]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(0.0f, 0.0f, 1.0f, 0.5f);
    glDrawArrays(GL_POINTS, 0, numBodies_halo);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableClientState(GL_VERTEX_ARRAY);

    glDisable(GL_BLEND);
    glDisable(GL_POINT_SPRITE);
}
I tried to use glEnable(GL_DEPTH_TEST), but then the 2D textures are drawn as squares with black backgrounds.
Could you give me some clues on how to get this cumulative, partial transparency in real time?
Make sure you disable depth testing:
glDisable( GL_DEPTH_TEST );
Then, you may want to try different blending modes such as additive blending:
glBlendFunc( GL_SRC_ALPHA, GL_ONE );
While additive blending is really pretty, it may produce too much white which could defeat the purpose of this visualization. You may try lowering alpha values in glColor4f. Another solution would be to use blue and red particles to accentuate the difference.
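Put together, the suggested state would look roughly like this (a sketch only; the alpha values and draw calls are illustrative and reuse the identifiers from the question):
// No depth test, additive blending, lower alpha so the accumulation
// does not saturate to pure white.
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);           // additive blending

glColor4f(1.0f, 1.0f, 1.0f, 0.2f);           // white disk particles, reduced alpha
glDrawArrays(GL_POINTS, 0, numBodies_disk);

glColor4f(0.0f, 0.0f, 1.0f, 0.2f);           // blue halo particles, reduced alpha
glDrawArrays(GL_POINTS, 0, numBodies_halo);

glDisable(GL_BLEND);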

OpenGL using GL_STENCIL with a sphere

I'm working with OpenGL and I am trying to create a sphere that has a reflective surface. I have it reflecting, but the reflection isn't correct. The object in the reflection should be bent and deformed according to the curve of the surface; instead I'm getting only a straight reflection. I haven't used GL_STENCIL much, so help would be very much appreciated. I have provided pieces of code such as the creation of the sphere and the draw method. If anyone needs more, let me know.
Creation:
sphere = gluNewQuadric();
gluQuadricDrawStyle(sphere, GLU_FILL);
gluQuadricNormals(sphere, GLU_SMOOTH);
gluSphere(sphere, 1, 100, 100);
gluDeleteQuadric(sphere);
Drawing:
glClearColor (0.0,0.0,0.0,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0, 0, -10);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); //disable the color mask
glDepthMask(GL_FALSE); //disable the depth mask
glEnable(GL_STENCIL_TEST); //enable the stencil testing
glStencilFunc(GL_ALWAYS, 1, 0xFFFFFFFF);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE); //set the stencil buffer to replace our data
sphereDraw(); //the mirror surface
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); //enable the color mask
glDepthMask(GL_TRUE); //enable the depth mask
glStencilFunc(GL_EQUAL, 1, 0xFFFFFFFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); //set the stencil buffer to keep our next lot of data
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f); //flip the reflection vertically
glTranslatef(0,2,-20); //translate the reflection onto the drawing plane
glRotatef(angle,0,1,0); //rotate the reflection
//draw object as our reflection
glPopMatrix();
glDisable(GL_STENCIL_TEST); //disable the stencil testing
glEnable(GL_BLEND); //enable alpha blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //set the blending function
sphereDraw(); //draw our bench
glDisable(GL_BLEND); //disable alpha blending
//draw object
Since I'm new to using GL_STENCIL I wasn't sure if it's just something small or if much more needs to be done to detect that angle of reflection.
Have you considered using reflection/environment mapping?
There are two main forms. Spherical environment mapping usually works with a pre-calculated environment map. It can, however, be done dynamically. Its main drawback is that it is view dependent.
The other system is cubic environment mapping. Cube maps are very easy to set up and involve simply rendering your scene 6 times in 6 different directions (i.e. onto each face of the cube). Cubic environment mapping is view independent.
There is another system that sits between spherical and cubic, called dual paraboloid environment mapping. Its drawback is that generating the dual paraboloids is quite complex (like spherical), but (like cubic) it is view independent.
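A minimal sketch of the dynamic cube-map approach described above: render the scene once per cube face into a cube-map texture, then sample it with the reflected view vector when drawing the sphere (renderSceneFromCenter and faceSize are placeholders, not from the original answer):
// Build a dynamic cube map by rendering the scene 6 times,
// once along each axis direction from the sphere's center.
GLuint cubeTex, cubeFBO;
glGenTextures(1, &cubeTex);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8,
                 faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &cubeFBO);
glBindFramebuffer(GL_FRAMEBUFFER, cubeFBO);
// (a depth attachment would also be needed in a real setup)
for (int i = 0; i < 6; ++i) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cubeTex, 0);
    renderSceneFromCenter(i);   // draw everything except the sphere, looking along face i
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
When drawing the sphere, the cube map is then sampled with the reflection vector, either in a fragment shader (texture(envMap, reflect(viewDir, normal))) or, in fixed-function code like yours, via glTexGen with GL_REFLECTION_MAP; that per-pixel reflection vector is what produces the curved, deformed reflection you are after.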