OpenGL Color Matrix - C++

How do I get the OpenGL color matrix transforms working?
I've modified a sample program that just draws a triangle, adding some color matrix code to see if I can change the triangle's colors, but it doesn't seem to work.
static float theta = 0.0f;
glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClearDepth(1.0);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glRotatef( theta, 0.0f, 0.0f, 1.0f );
glMatrixMode(GL_COLOR);
GLfloat rgbconversion[16] =
{
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f
};
glLoadMatrixf(rgbconversion);
glMatrixMode(GL_MODELVIEW);
glBegin( GL_TRIANGLES );
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( 0.0f, 1.0f , 0.5f);
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( 0.87f, -0.5f, 0.5f );
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( -0.87f, -0.5f, 0.5f );
glEnd();
glPopMatrix();
As far as I can tell, the color matrix I'm loading should change the triangle to black, but it doesn't seem to work. Is there something I'm missing?

The color matrix only applies to pixel transfer operations such as glDrawPixels, which aren't hardware accelerated on current hardware. However, implementing a color matrix with a fragment shader is easy: pass your matrix in as a uniform mat4 and multiply it with the fragment color before writing gl_FragColor.

It looks like you're doing it correctly, but your current color matrix also sets the triangle's alpha to 0, so the triangle is drawn yet never shows up on screen.

"Additionally, if the ARB_imaging extension is supported, GL_COLOR is also accepted."
From the glMatrixMode documentation. Is the extension supported on your machine?

I have found the likely problem.
The color matrix is part of the "Image Processing Subset". On most hardware it is only supported in the driver's software fallback path, not in hardware.
Solution:
Add this line after glEnd():
glCopyPixels(0, 0, getWidth(), getHeight(), GL_COLOR);
It is very slow, though, since it forces a software pixel-transfer pass over the whole framebuffer.

Related

Why does OpenGL cut off polygons (even though this setting is disabled)?

I read similar suggested questions and their solutions, but could not find an answer.
I'm trying to draw a scene with an isometric view in OpenGL.
Draw func:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glRotatef(atan(0.5f) * 180.0f / PI, 1.0f, 0.0f, 0.0f);
glRotatef(-45.0f, 0.0f, 1.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 1.0f);
glEnd();
glPopMatrix();
In the end, I get this result. The camera does have an isometric projection, but for some reason polygons are clipped.
If I add glTranslatef(-0.8f, 0, -0.8f) before drawing the quad, the result is as follows:
I don't apply any special settings to the OpenGL renderer myself, so why are the polygons being cut off?
The polygons are clipped by the near or far plane of the viewing volume.
When you do not set a projection matrix, view space, clip space, and normalized device space are identical. Normalized device space is the unit cube spanning from (-1, -1, -1) at the left/bottom/near corner to (1, 1, 1) at the right/top/far corner. All geometry outside this cube is clipped.
You draw a quad with a side length of 1. One vertex of the quad is at the view-space origin (0, 0, 0), and the quad is rotated around that origin by glRotatef. Since the quad's diagonal has length sqrt(2), the vertex opposite the origin ends up outside the viewing volume after rotation and is clipped by either the near or the far plane.
If you instead construct and rotate a quad whose center is at (0, 0, 0), it is not clipped, because the distance from the center to each vertex is sqrt(2)/2, which is less than 1 (the distance from the center of the viewing volume to the near and far planes):
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3f(-0.5f, 0.0f, -0.5f);
glVertex3f( 0.5f, 0.0f, -0.5f);
glVertex3f( 0.5f, 0.0f, 0.5f);
glVertex3f(-0.5f, 0.0f, 0.5f);
glEnd();
or, equivalently, translate the original quad so that it is centered at the origin:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(atan(0.5f) * 180.0f / PI, 1.0f, 0.0f, 0.0f);
glRotatef(-45.0f, 0.0f, 1.0f, 0.0f);
glTranslatef(-0.5f, 0.0f, -0.5f);
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 1.0f);
glEnd();
Alternatively, you can enlarge the viewing volume by setting an orthographic projection with glOrtho:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -2.0, 2.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(atan(0.5f) * 180.0f / PI, 1.0f, 0.0f, 0.0f);
glRotatef(-45.0f, 0.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 1.0f);
glVertex3f(0.0f, 0.0f, 1.0f);
glEnd();

OpenGL - flickering of fragments even with depth test disabled

I'm trying to render a quad with a colored border. I use the texture coordinates to detect whether a fragment is part of the border: border fragments are rendered green, everything else black.
Here are my vertices / normals / tex coordinates.
float vertices[] = {
// positions // normals // texture coords
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
};
Here's my fragment shader
#version 330 core
in vec3 frag_pos;
in vec3 frag_nor;
in vec2 frag_tex;
out vec4 frag_color;
void main()
{
vec2 origin = vec2(0.01, 0.01);
float width = 1.0 - origin.x * 2.0;
float height = 1.0 - origin.y * 2.0;
if( (frag_tex.x >= origin.x && frag_tex.x < origin.x + width) &&
(frag_tex.y >= origin.y && frag_tex.y < origin.y + height) )
{
frag_color = vec4(0.0);
}
else
{
frag_color = vec4(0.0, 1.0, 0.0, 0.0);
}
}
And this is how I'm rendering
glDisable(GL_DEPTH_TEST);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
On the right, I'm drawing the same quad with another pass-through fragment shader in wireframe mode.
As you can see, the left quad flickers while the camera moves. Any ideas how to fix this?
The problem exists even when applying a plain 2D texture. To fix it, I used mipmaps with these filters:
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

How to draw emf/wmf image in GDI+ with alpha blending?

I use the GDI+ Graphics.DrawImage() method to draw metafile images (EMF/WMF) in my application. This method lets you supply a color matrix through ImageAttributes, which I use to draw the image semi-transparently (alpha blending), like this:
const auto alphaPercent = 0.5f;
ColorMatrix colorMatrix = {
1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, alphaPercent, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f, 1.0f
};
ImageAttributes imageAttributes;
imageAttributes.SetColorMatrix(&colorMatrix, ColorMatrixFlagsDefault, ColorAdjustTypeDefault);
pGraphics->DrawImage(pImageToDraw,imagePosition, 0.f, 0.f, sourceWidth, sourceHeight, UnitPixel, &imageAttributes);
This works very well for bitmaps, but not for EMF metafiles.
Changing ColorAdjustTypeDefault to ColorAdjustTypePen or ColorAdjustTypeBrush doesn't help.
Example: GDI+ result:
But should be (alpha = 50%):
How can I draw a metafile image in GDI+ with alpha blending?

How to handle OpenGL additive blending and depth test with particles and deeper objects

I have a little sprite-based particle system, written with OpenGL and GLUT, that uses textures to draw a basic flame. The flame is mirrored to illustrate how it behaves in a small box/scene. As the pictures below demonstrate, there are two problems:
1. To produce a reasonably good-looking flame effect I want to use additive blending for my particles, but the blending also picks up the color of the cyan panel behind them and produces a white flame.
2.1. To get correct additive blending I have to disable the depth test while drawing the particles, but doing so draws particles even when they should be hidden.
2.2. If I enable the depth test while drawing the particles, here is what it looks like.
If it is of any help, here is the texture I am applying to the particles.
Here is the relevant code that displays the scene and the particles.
void drawParticles()
{
glPushAttrib(GL_ALL_ATTRIB_BITS);
glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE);
glBindTexture(GL_TEXTURE_2D,explosionTexture[0]);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
for (int i = 0; i < particlesNumber; ++i)
{
glPointSize(50.0f);
glBegin(GL_POINTS);
glColor4f(particlesArray[i].color[0],particlesArray[i].color[1],particlesArray[i].color[2],0.5f);
glVertex3f(particlesArray[i].position[0],particlesArray[i].position[1],particlesArray[i].position[2]);
glEnd();
}
glBindTexture(GL_TEXTURE_2D, 0);
glDisable( GL_BLEND );
glEnable( GL_DEPTH_TEST );
glPopAttrib();
}
void drawScene()
{
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 60.0f, (GLdouble) g_width / (GLdouble) g_height, 0.1f, 300.0f );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt( dist*sin(phi)*sin(theta), dist*cos(phi), dist*sin(phi)*cos(theta), 0, 0, 0, 0, 1, 0 );
glEnable( GL_DEPTH_TEST );
glDisable( GL_BLEND );
glBegin( GL_LINES );
glColor3f( 1.0f, 0.0f, 0.0f );
glVertex3f( 0.0f, 0.0f, 0.0f );
glVertex3f( 0.5f, 0.0f, 0.0f );
glColor3f( 0.0f, 1.0f, 0.0f );
glVertex3f( 0.0f, 0.0f, 0.0f );
glVertex3f( 0.0f, 0.5f, 0.0f );
glColor3f( 0.0f, 0.0f, 1.0f );
glVertex3f( 0.0f, 0.0f, 0.0f );
glVertex3f( 0.0f, 0.0f, 0.5f );
glEnd();
glBegin( GL_QUADS );
glColor4f( 1.0f, 1.0f, 1.0f , 0.0f);
glVertex3f( -1.0f, -0.2f, 1.0f );
glVertex3f( 1.0f, -0.2f, 1.0f );
glVertex3f( 1.0f, -0.2f, -1.0f );
glVertex3f( -1.0f, -0.2f, -1.0f );
glColor4f( 1.0f, 1.0f, 0.0f , 0.0f );
glVertex3f( 1.0f, -2.0f, 1.0f );
glVertex3f( 1.0f, -2.0f, -1.0f );
glVertex3f( 1.0f, -0.2f, -1.0f );
glVertex3f( 1.0f, -0.2f, 1.0f );
glColor4f( 1.0f, 0.0f, 1.0f , 0.0f );
glVertex3f( -1.0f, -2.0f, 1.0f );
glVertex3f( -1.0f, -2.0f, -1.0f );
glVertex3f( -1.0f, -0.2f, -1.0f );
glVertex3f( -1.0f, -0.2f, 1.0f );
glColor4f( 0.0f, 1.0f, 1.0f , 1.0f );
glVertex3f( 1.0f, -2.0f, -1.0f );
glVertex3f( -1.0f, -2.0f, -1.0f );
glVertex3f( -1.0f, -0.2f, -1.0f );
glVertex3f( 1.0f, -0.2f, -1.0f );
glEnd();
glPushMatrix();
drawParticles();
glScalef(1.0f, -1.0f, 1.0f);
drawParticles();
glPopMatrix();
glutSwapBuffers();
}
I am open to any kind of suggestion, even involving shaders (though I would be interested to know whether it is possible with just plain OpenGL).
UPDATE:
Maybe I was unclear: I'm not necessarily interested in a strictly fixed-pipeline solution. I want to know how to manage additive blending in a scene, even if it means adding shader code to my project.
Now, as Columbo pointed out, enabling depth testing while disabling depth writes solved my second problem. Concerning the additive blending issue, I still have no clue how to manage additive blending in a scene. Even if such saturated colors rarely appear in a real scene, the problem remains: the flame will still come out white. I'm open to suggestions about what to do in the pixel shader.
For the additive blending issue, it may not be a problem in practice: you'll never have a block of pure cyan in a real scene.
However, if you really need a solution, you could try a premultiplied alpha blend (glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);), which gives you a bit more control. In your pixel shader you multiply the output RGB by the source alpha manually, and then you can choose the output alpha freely. An output alpha of zero produces the additive blend you have at the moment; outputting the full alpha value (vertex alpha * texture alpha) gives you a standard modulating alpha blend. You may be able to find a value in between those two extremes which darkens the background enough to make your flame look yellow even against a cyan background, without making it look rubbish.
If you're not using pixel shaders, the same effect should be possible with the fixed-function pipeline by manipulating your texture during loading, but it's all rather fiddly, and arguably not worth doing, because you won't have such primary colours in a finished, lit scene.
The more correct solution is to use HDR and tone mapping, but that gets into quite advanced rendering techniques.
Fixing the depth problem is simple. You need to enable depth testing for your flame, but disable depth writing. glEnable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE) are the relevant commands.

Why Transform Matrix doesn't work in OpenGL

Say I want to draw a Ball in the scene and here are two different ways to do them.
float SUN_TRANSLATION_MATRIX[] = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, -15.0f,
0.0f, 0.0f, 0.0f, 1.0f
};
void displaySolarSystem1(){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -15.f);
glColor3f(1.0f, 0.8f, 0.5f);
glutSolidSphere(2.0, 50, 40);
glutSwapBuffers();
}
void displaySolarSystem(){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(SUN_TRANSLATION_MATRIX);
glColor3f(1.0f, 0.8f, 0.5f);
glutSolidSphere(2.0, 50, 40);
glutSwapBuffers();
}
displaySolarSystem1 uses glTranslatef, while displaySolarSystem uses an explicit matrix. The problem is that displaySolarSystem1 works as expected, but the matrix version fails.
What went wrong with displaySolarSystem()?
http://www.opengl.org/sdk/docs/man/xhtml/glMultMatrix.xml
Calling glMultMatrix with an argument of m = [...] replaces the current transformation with C × M, so vertices are transformed as C × M × v.
This means transformations are applied by multiplying the matrix by the vector, not vice versa, so a translation matrix looks like this:
1 0 0 X
0 1 0 Y
0 0 1 Z
0 0 0 1
But this matrix is written in row-major order, and OpenGL expects column-major matrices, so you need to transpose it.
So you can either use glMultTransposeMatrixf (which, if I remember correctly, is slightly slower) or transpose your matrix yourself so that it looks like this:
float SUN_TRANSLATION_MATRIX[] = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, -15.0f, 1.0f
};
Thanks Unwind. The problem was solved by changing glMultMatrixf(SUN_TRANSLATION_MATRIX); to glMultTransposeMatrixf(SUN_TRANSLATION_MATRIX);. Thanks for the hint that this is a transposed matrix.