OpenGL: fade from one color to another

I have made a small particle system. What I would now like to do is have the particles fade from one color to another during their lifetime, for example from black to white, or from yellow to red.
I use the glColor() functions to set the color of the particle.
How do I do this?

You have to blend the colors yourself:
calculate a blend factor between 0 and 1 and mix the two colors with it:
float blend = lifeTime / maxLifeTime; // 0.0 at birth, 1.0 at the end of life
float red   = (destRed   * blend) + (srcRed   * (1.0 - blend));
float green = (destGreen * blend) + (srcGreen * (1.0 - blend));
float blue  = (destBlue  * blend) + (srcBlue  * (1.0 - blend));
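For illustration, here is a minimal sketch of how this could be applied per particle (the Particle struct and the function name are my assumptions, not code from the question):

struct Particle {
    float lifeTime;     // time the particle has been alive
    float maxLifeTime;  // total life span
};

// Interpolate from the source color at birth to the destination color at death.
void setParticleColor(const Particle& p,
                      float srcRed, float srcGreen, float srcBlue,
                      float destRed, float destGreen, float destBlue)
{
    float blend = p.lifeTime / p.maxLifeTime;
    glColor3f((destRed   * blend) + (srcRed   * (1.0f - blend)),
              (destGreen * blend) + (srcGreen * (1.0f - blend)),
              (destBlue  * blend) + (srcBlue  * (1.0f - blend)));
}

For a black-to-white fade, src would be (0, 0, 0) and dest (1, 1, 1); for yellow to red, src (1, 1, 0) and dest (1, 0, 0).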

Related

How to make OpenGL camera and Ray-tracer camera show the same image?

I am writing a simple path tracer and I want to render a preview with OpenGL.
But since the OpenGL pipeline uses a projection matrix, the images rendered with OpenGL and with path tracing differ slightly.
For OpenGL rendering I use the basic pipeline, where the final transformation matrix is:
m_transformation = PersProjTrans * CameraRotateTrans * CameraTranslationTrans * TranslationTrans * RotateTrans * ScaleTrans
Parameters for perspective projection are: 45.0f, WINDOW_WIDTH, WINDOW_HEIGHT, 0.01, 10000.0f.
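(As a point of reference, that projection could be reproduced like this with GLM; the use of GLM here is my assumption, the parameter values are the ones listed above:)

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// 45 degree vertical FOV, aspect ratio from the window size, near/far planes.
glm::mat4 persProj = glm::perspective(glm::radians(45.0f),
                                      (float)WINDOW_WIDTH / (float)WINDOW_HEIGHT,
                                      0.01f, 10000.0f);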
And in my path tracer I generate rays from camera like this:
m_width_recp = 1.0 / m_width;
m_height_recp = 1.0 / m_height;
m_ratio = (double)m_width / m_height;
m_x_spacing = (2.0 * m_ratio) / (double)m_width;
m_y_spacing = (double)2.0 / (double)m_height;
m_x_spacing_half = m_x_spacing * 0.5;
m_y_spacing_half = m_y_spacing * 0.5;
x_jitter = (Xi1 * m_x_spacing) - m_x_spacing_half;
y_jitter = (Xi2 * m_y_spacing) - m_y_spacing_half;
Vec pixel = m_position + m_direction * 2.0;
pixel = pixel - m_x_direction * m_ratio + m_x_direction * (x * 2.0 * m_ratio * m_width_recp) + x_jitter;
pixel = pixel + m_y_direction - m_y_direction * (y * 2.0 * m_height_recp + y_jitter);
return Ray(m_position, (pixel-m_position).norm());
Here are images rendered by OpenGL (1) and by the path tracer (2).
The problem is the apparent distance between the scene and the camera: it's not consistent. In some cases the path tracer image looks "closer" to the objects, and in some cases vice versa.
I haven't found anything about this on Google.
Invert the view-projection matrix as used for OpenGL rendering and use that to transform screen pixel positions into rays for your ray tracer.
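A sketch of that approach, assuming GLM and the matrices from the question (persProj, cameraRotate, and cameraTranslate stand for PersProjTrans, CameraRotateTrans, and CameraTranslationTrans; Ray is the path tracer's type):

#include <glm/glm.hpp>

// Only the camera part of the matrix chain; the per-object transforms
// do not belong in ray generation.
glm::mat4 invViewProj = glm::inverse(persProj * cameraRotate * cameraTranslate);

Ray rayForPixel(int x, int y, int width, int height)
{
    // Pixel center in normalized device coordinates, [-1, 1] on both axes.
    float ndcX = (2.0f * (x + 0.5f)) / width - 1.0f;
    float ndcY = 1.0f - (2.0f * (y + 0.5f)) / height; // flip: pixel y grows downward

    // Unproject one point on the near plane and one on the far plane.
    glm::vec4 nearPt = invViewProj * glm::vec4(ndcX, ndcY, -1.0f, 1.0f);
    glm::vec4 farPt  = invViewProj * glm::vec4(ndcX, ndcY,  1.0f, 1.0f);
    nearPt /= nearPt.w;
    farPt  /= farPt.w;

    glm::vec3 origin    = glm::vec3(nearPt);
    glm::vec3 direction = glm::normalize(glm::vec3(farPt) - glm::vec3(nearPt));
    return Ray(origin, direction);
}

Because the rays are derived from the exact matrix OpenGL rasterizes with, both renderers agree by construction.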

How to implement my own blend function?

I want to implement the following blend function in my program, which doesn't use OpenGL.
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
In an OpenGL real-time test application I was able to blend colors on a white background with this function. The blended result should look like http://postimg.org/image/lwr9ossen/.
I have a white background and want to blend red points over it; a high density of red points should become opaque/black.
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
for(many many many times)
glColor4f(0.75, 0.0, 0.1, 0.85f);
DrawPoint(..)
I tried something, but without success.
Does anyone have the equation for this blend function?
The blend function should translate directly to the operations you need if you want to implement the whole thing in your own code.
The first argument specifies a scaling factor for your source color, i.e. the color you're drawing the pixel with.
The second argument specifies a scaling factor for your destination color, i.e. the current color value at the pixel position in the output image.
These two terms are then added, and the result written to the output image.
GL_DST_COLOR corresponds to the color in the destination, which is the output image.
GL_ONE_MINUS_SRC_ALPHA is 1.0 minus the alpha component of the pixel you are rendering.
Putting this all together, with (colR, colG, colB, colA) the color of the pixel you are rendering, and (imgR, imgG, imgB) the current color in the output image at the pixel position:
GL_DST_COLOR = (imgR, imgG, imgB)
GL_ONE_MINUS_SRC_ALPHA = (1.0 - colA, 1.0 - colA, 1.0 - colA)
GL_DST_COLOR * (colR, colG, colB) + GL_ONE_MINUS_SRC_ALPHA * (imgR, imgG, imgB)
= (imgR, imgG, imgB) * (colR, colG, colB) + (1.0 - colA, 1.0 - colA, 1.0 - colA) * (imgR, imgG, imgB)
= (imgR * colR + (1.0 - colA) * imgR,
   imgG * colG + (1.0 - colA) * imgG,
   imgB * colB + (1.0 - colA) * imgB)
This is the color you write to your image as the result of rendering the pixel.
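A minimal software sketch of the whole thing in plain C++ (the Color struct and the function name are my assumptions; the alpha row simply applies the same factors):

struct Color { float r, g, b, a; };  // components in [0.0, 1.0]

// Software equivalent of glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA):
// out = dst * src + (1 - src.a) * dst, per component.
Color blend(const Color& src, const Color& dst)
{
    float f = 1.0f - src.a;
    return { dst.r * src.r + f * dst.r,
             dst.g * src.g + f * dst.g,
             dst.b * src.b + f * dst.b,
             dst.a * src.a + f * dst.a };
}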

OpenGL ReadPixels (Screenshot) Alpha

I'm using a tile rendering setup (using glReadPixels) to take screenshots of a scene. The color output looks correct.
But looking at the alpha channel, pixels are still transparent even where a transparent texture is rendered on top of an opaque one.
In particular the "inside" of the car should be completely opaque where we see the ceiling, but partially transparent when going through the back windows.
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
I think you got some useful direction from the comments and the other answer, but not the solution in full detail. Instead of just giving the result, let me walk through it, so that you know how to figure it out yourself next time.
I'm assuming that you draw your translucent objects back to front, using a GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. You don't directly mention that, but it's fairly standard, and consistent with what you're seeing. The correct solution will also need an alpha component in the framebuffer, but you have that already, otherwise you would get nothing when reading back the alpha.
To illustrate the whole process, I'm going to use two examples on the way. I'll only list the (R, A) components for the colors, G and B would behave just like R.
Case 1: Draw a layer with color (R1, 1.0), then a layer with (R2, 0.4) on top of it.
Case 2: Draw a layer with color (R1, 0.5), then a layer with (R2, 0.4) on top of it.
The background color is (Rb, 0.0); you always want to clear with an alpha value of 0.0 for this kind of blending.
First, let's calculate the result we want to achieve for the colors:
For case 1, drawing the first layer completely covers the background, since it has alpha = 1.0. Then we blend the second layer on top of it. Since it has alpha = 0.4, we keep 60% of the first layer, and add 40% of the second layer. So the color we want is
0.6 * R1 + 0.4 * R2
For case 2, drawing the first layer keeps 50% of the background, since it has alpha = 0.5. So the color so far is
0.5 * Rb + 0.5 * R1
Then we blend the second layer on top of it. Again we keep 60% of the previous color, and add 40% of the second layer. So the color we want is
0.6 * (0.5 * Rb + 0.5 * R1) + 0.4 * R2
= 0.3 * Rb + 0.3 * R1 + 0.4 * R2
Now, let's figure out what we want the result for alpha to be:
For case 1, our first layer was completely opaque. One way of looking at opacity is as a measure of what fraction of light is absorbed by the object. Once we have a layer that absorbs all the light, anything else we render will not change that. Our total alpha should be
1.0
For case 2, we have one layer that absorbs 50% of the light, and one that absorbs 40% of the remaining light. Since 40% of 50% is 20%, a total of 70% is absorbed. Our total alpha should be
0.7
Using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for blending gives you the desired result for the color. But as you noticed, not for alpha. Doing the calculation on the example:
Case 1: Drawing layer 1, SRC_ALPHA is 1.0, the source value is S = (R1, 1.0) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 1.0 * (R1, 1.0) + 0.0 * (Rb, 0.0)
= (R1, 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (R1, 1.0)
= (0.4 * R2 + 0.6 * R1, 0.76)
Case 2: Drawing layer 1, SRC_ALPHA is 0.5, the source value is S = (R1, 0.5) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.5 * (R1, 0.5) + 0.5 * (Rb, 0.0)
= (0.5 * R1 + 0.5 * Rb, 0.25).
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (0.5 * R1 + 0.5 * Rb, 0.25)
= (0.4 * R2 + 0.3 * R1 + 0.3 * Rb, 0.31).
So we confirmed what you already knew: We get the desired colors, but the wrong alphas. How do we fix this? We need a different blend function for alpha. Fortunately OpenGL has glBlendFuncSeparate(), which allows us to do exactly that. All we need to figure out is what blend function to use for alpha. Here is the thought process:
Let's say we already rendered some translucent objects, with a total alpha of A1, which is stored in the framebuffer. What we rendered so far absorbs a fraction A1 of the total light, and lets a fraction 1.0 - A1 pass through. We render another layer with alpha A2 on top of it. This layer absorbs a fraction A2 of the light that passed through before, so it absorbs an additional (1.0 - A1) * A2 of all the light. We need to add this to the amount of light that was already absorbed, so that a total of (1.0 - A1) * A2 + A1 is now absorbed.
All that's left to do is translate that into an OpenGL blend equation. A2 is the source value S, and A1 the destination value D. So our desired alpha result becomes
(1.0 - A1) * A2 + A1
= (1.0 - A1) * S + 1.0 * D
What I called A1 is the alpha value in the framebuffer, which is referred to as DST_ALPHA in the blend function specification. So we use ONE_MINUS_DST_ALPHA to match our source multiplier of 1.0 - A1. We use GL_ONE to match the destination multiplier 1.0.
So the blend function parameters for alpha are (GL_ONE_MINUS_DST_ALPHA, GL_ONE), and the complete blend function call is:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
We can double check the math for alpha with the examples one more time:
Case 1: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 1.0) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 1.0) + 1.0 * (.., 0.0)
= (.., 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 1.0. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.0 * (.., 0.4) + 1.0 * (.., 1.0)
= (.., 1.0)
We got the desired alpha value of 1.0!
Case 2: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 0.5) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 0.5) + 1.0 * (.., 0.0)
= (.., 0.5)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 0.5. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.5 * (.., 0.4) + 1.0 * (.., 0.5)
= (.., 0.7)
We got the desired alpha value of 0.7!
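Putting it all together, a minimal setup sketch (clearing to alpha 0.0 and drawing back to front, as assumed throughout this answer):

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // alpha must be cleared to 0.0
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  // color
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);       // alpha
// ... draw translucent geometry back to front ...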
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
No. OpenGL stores one RGBA value for each pixel; there's no way to get the previous values (as that would need a ton of RAM).
What alpha values get written to the framebuffer depend on your alpha blending equation, which you can set with glBlendFunc or glBlendFuncSeparate. See the blending page on the OpenGL wiki for more info, and this JavaScript app lets you see the effects of various blending modes.

Copy texture sub-rectangles using shaders and RTT

I need to write a function which shall take a sub-rectangle from a 2D texture (non power-of-2) and copy it to a destination sub-rectangle of an output 2D texture, using a shader (no glSubImage or similar).
Also the source and the destination may not have the same size, so I need to use linear filtering (or even mipmap).
void CopyToTex(GLuint dest_tex, GLuint src_tex,
               GLuint src_width, GLuint src_height,
               GLuint dest_width, GLuint dest_height,
               float srcRect[4],
               GLuint destRect[4]);
Here srcRect is in normalized 0-1 coordinates; that is, the rectangle [0,1]x[0,1] touches the center of every border pixel of the input texture.
To achieve a good result when the source and destination dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave in a coherent manner, i.e. calling it multiple times with many subrects should produce the same result as one invocation with the union of the subrects; that is, the linear sampler should sample the exact center of the input pixels.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
What I've got now is something like this:
//Setup RTT, filtering and program
float vertices[4] = {
float(destRect[0]) / dest_width * 2.0 - 1.0,
float(destRect[1]) / dest_height * 2.0 - 1.0,
//etc..
};
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
//etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (exact copy, copying a row or a column of one pixel).
My hardest test case is to perform an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).
Solution:
The 0.5 / dest_width terms in the definition of the texcoords were wrong.
An easy workaround is to remove them completely:
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a slightly smaller quad by offsetting the vertices:
float dx = 1.0 / (destRect[2] - destRect[0]) - epsilon;
float dy = 1.0 / (destRect[3] - destRect[1]) - epsilon;
// assume glTexCoord for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
In this way we draw a quad which passes through the exact center of every border pixel.
Since OpenGL may or may not rasterize the border pixels in this case, we need the epsilons.
I believe that my original solution (don't offset the vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.
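Putting the fix together, a sketch of the corrected draw call (the glTexCoord/glVertex pairing and the epsilon value are my assumptions, following the snippets above):

const float epsilon = 1e-6f;
float dx = 1.0f / (destRect[2] - destRect[0]) - epsilon;
float dy = 1.0f / (destRect[3] - destRect[1]) - epsilon;

glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]); glVertex2f(vertices[0] + dx, vertices[1] + dy);
glTexCoord2f(texcoords[2], texcoords[1]); glVertex2f(vertices[2] - dx, vertices[1] + dy);
glTexCoord2f(texcoords[2], texcoords[3]); glVertex2f(vertices[2] - dx, vertices[3] - dy);
glTexCoord2f(texcoords[0], texcoords[3]); glVertex2f(vertices[0] + dx, vertices[3] - dy);
glEnd();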

Rotating/aligning texture coordinates

I have a 1024x1 gradient texture that I want to map to a quad. This gradient should be aligned along the line (p1,p2) inside that quad. The texture has the GL_CLAMP_TO_EDGE property, so it will fill the entire quad.
I now need to figure out the texture coordinates for the four corners (A,B,C,D) of the quad, but I can't wrap my head around the required math.
I tried to calculate the angle of the line (p1,p2) and then rotate the corner points around the center of that line, but I couldn't get this to work right. It seems a bit excessive anyway - is there an easier solution?
Are you using shaders? If yes, then assign your quad the default UVs from 0 to 1. Then, based on the slope of the p1-p2 segment, calculate the rotation angle (don't forget to convert degrees to radians). Then in the vertex shader construct a 2x2 rotation matrix and rotate the UVs by the amount defined by the segment. At the end, pass the rotated UVs into the fragment shader and use them with your gradient texture sampler.
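A minimal sketch of that idea (the GLSL is embedded as a C++ string; using atan2 for the angle is my assumption, and it already yields radians, so no degree conversion is needed):

#include <cmath>

// Angle of the p1->p2 segment, in radians.
float angle = std::atan2(p2.y - p1.y, p2.x - p1.x);
// glUniform1f(angleLocation, angle);  // angleLocation: assumed uniform lookup

const char* vertexShaderSrc = R"(
    #version 120
    uniform float angle;
    void main() {
        mat2 rot = mat2(cos(angle), sin(angle),     // 2x2 rotation matrix
                       -sin(angle), cos(angle));
        vec2 uv = gl_MultiTexCoord0.xy - vec2(0.5); // rotate around the UV center
        gl_TexCoord[0] = vec4(rot * uv + vec2(0.5), 0.0, 1.0);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";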
I found another approach that actually works as I want it to, using only additions, multiplications, and one division, sparing the expensive sqrt.
I first calculate the slope of the (p1,p2) line and the one orthogonal to it. I then work out the intersection points of the slope line (starting at p1) with the orthogonal lines starting at each corner.
Vector2 slope = {p2.x - p1.x, p2.y - p1.y}; // direction of the gradient line
Vector2 ortho = {-slope.y, slope.x};        // perpendicular direction
float div = 1 / (slope.y * ortho.x - slope.x * ortho.y); // the single division, shared by all corners
Vector2 A = {
(ortho.x * -p1.y + ortho.y * p1.x) * div,
(slope.x * -p1.y + slope.y * p1.x) * div
};
Vector2 B = {
(ortho.x * -p1.y + ortho.y * (p1.x - 1)) * div,
(slope.x * -p1.y + slope.y * (p1.x - 1)) * div
};
Vector2 C = {
(ortho.x * (1 - p1.y) + ortho.y * p1.x) * div,
(slope.x * (1 - p1.y) + slope.y * p1.x) * div
};
Vector2 D = {
(ortho.x * (1 - p1.y) + ortho.y * (p1.x - 1)) * div,
(slope.x * (1 - p1.y) + slope.y * (p1.x - 1)) * div
};
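For completeness, a sketch of how the computed coordinates might be fed to the quad; the corner-to-UV pairing is my assumption, based on which corner offsets each formula uses:

// q[0..3]: the quad's corner positions, assumed to match A, B, D, C
// in counter-clockwise order.
glBegin(GL_QUADS);
glTexCoord2f(A.x, A.y); glVertex2f(q[0].x, q[0].y);
glTexCoord2f(B.x, B.y); glVertex2f(q[1].x, q[1].y);
glTexCoord2f(D.x, D.y); glVertex2f(q[2].x, q[2].y);
glTexCoord2f(C.x, C.y); glVertex2f(q[3].x, q[3].y);
glEnd();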