Can't use Blue Color Value in OpenGL 1.1 (LWJGL GL11)

If I try to use GL11.glColor4f(85, 255, 0, 1), it works perfectly fine, but as soon as I try to use the blue color value (GL11.glColor4f(85, 255, 255, 1)), it just doesn't color at all.

glColor4f expects values between 0 and 1, so that (0, 0, 0) is black and (1, 1, 1) is white. Values greater than 1 are clamped, so glColor4f(85, 255, 255, 1) produces plain white. You should instead use
GL11.glColor4f(85 / 255f, 1, 0, 1)
GL11.glColor4f(85 / 255f, 1, 1, 1)
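If the values naturally come in as 0-255 bytes, a small helper keeps the normalization in one place. A minimal sketch in C++ (the helper name is made up; fixed-function OpenGL also offers glColor4ub, which accepts 0-255 bytes directly):

#include <GL/gl.h>

// Hypothetical helper: converts 8-bit channel values (0-255) into the
// normalized floats that glColor4f expects.
static void setColorBytes(unsigned char r, unsigned char g,
                          unsigned char b, unsigned char a)
{
    glColor4f(r / 255.0f, g / 255.0f, b / 255.0f, a / 255.0f);
}

With this, setColorBytes(85, 255, 255, 255) produces the intended color instead of clamped white.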

Related

How does data get laid out in an RGBA WebGL texture?

I'm trying to pass a list of integers to the fragment shader and need random access to any of its positions. I can't use uniforms since index must be a constant, so I'm using the usual technique of passing the data through a texture.
Things seem to work, but calling texture2D to obtain specific pixels is not behaving as I'd expect.
My data looks like this:
this.textureData = new Uint8Array([
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
I then copy that over through a texture:
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texImage2D(
this.gl.TEXTURE_2D,
0,
this.gl.RGBA,
4, // width: 4 pixels per row (each RGBA pixel is 4 bytes)
2, // height
0,
this.gl.RGBA,
this.gl.UNSIGNED_BYTE,
this.textureData);
So this texture is 4x2 pixels.
When I call texture2D(uTexture, vec2(0,0)); I get a vec4 pixel with the correct values (0,0,0,10).
However, when I call it with locations such as (1,0), (2,0), (3,0), (4,0), etc., they all return a pixel with (0,0,0,30).
Same for the second row. If I call with (0,1) I get the first pixel of the second row.
Any number greater than 1 for the X coordinate returns the last pixel of the second row.
I'd expect the coordinates to be:
this.textureData = new Uint8Array([
// (0,0) (1,0) (2,0) (3,0)
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
// (0,1) (1,1) (2,1) (3,1)
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
What am I missing? How can I correctly access the pixels?
Thanks!
Texture coordinates are not integral texel indices; they are in the range [0.0, 1.0]. They map the vertices of the geometry to points in the texture image. The texture coordinates specify which part of the texture is placed on a specific part of the geometry, and together with the texture parameters (see gl.texParameteri) they specify how the geometry is wrapped by the texture. In general, the lower left point of the texture is addressed by the texture coordinate (0.0, 0.0) and the upper right point of the texture is addressed by (1.0, 1.0).
Texture coordinates work the same in OpenGL, OpenGL ES and WebGL. See How do opengl texture coordinates work?
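As a concrete illustration (a plain C++ sketch, but the same arithmetic applies in GLSL or JavaScript): the center of texel (x, y) in a W x H texture lies at ((x + 0.5) / W, (y + 0.5) / H), and with NEAREST filtering, sampling at those coordinates returns exactly that texel.

#include <cstdio>

// Print the normalized coordinates of each texel center for the 4x2
// texture from the question. Sampling texture2D at these coordinates
// with NEAREST filtering selects exactly one texel.
int main()
{
    const int W = 4, H = 2;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            std::printf("texel (%d,%d) -> uv (%.3f, %.3f)\n",
                        x, y, (x + 0.5) / W, (y + 0.5) / H);
    return 0;
}

For example, texel (1, 0) is at (0.375, 0.25); passing integer coordinates like (1, 0) or (2, 0) instead samples at or beyond the right edge of the texture, where the CLAMP_TO_EDGE wrap mode takes over.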

Texel data in case of GL_LUMINANCE when glTexImage2D is called

Usually a texel is an RGBA value. What data does a texel represent in the following code:
const int TEXELS_W = 2, TEXELS_H = 2;
GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(
GL_TEXTURE_2D,
0, // mipmap reduction level
GL_LUMINANCE,
TEXELS_W,
TEXELS_H,
0, // border (must be 0)
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
texels);
GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
OpenGL will only read 4 of these values. Because GL_UNPACK_ALIGNMENT defaults to 4, OpenGL expects each row of pixel data to be aligned to 4 bytes. So the two zeros in each row are just padding, because the person who wrote this code didn't know how to change the alignment.
OpenGL will therefore read 100, 200 as the first row, then skip to the next 4-byte boundary and read 200, 250 as the second row.
GL_LUMINANCE:
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).
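If the padding bytes are unwanted, the unpack alignment can be changed before the upload. A minimal sketch with tightly packed rows (reusing TEXELS_W and TEXELS_H from the question):

// Rows in client memory are now 1-byte aligned, so the 2x2 GL_LUMINANCE
// texture needs only 4 bytes instead of 8.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

GLubyte tight[] = {
    100, 200,
    200, 250
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, TEXELS_W, TEXELS_H, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, tight);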

Change RGB value of pixel based on threshold OpenCV C++

I have an image in which I'd like to turn pixels above a certain value red and pixels below a certain value blue.
So far, I can get a matrix of red pixels and a matrix of blue pixels by using thresholding and bitwise operators to set the pixel values:
cvtColor(displayImage, displayImage, COLOR_GRAY2BGR);
threshold(displayImage, highThresh, highThreshVal, 255, 0);
highThresh = highThresh & Scalar(0, 0, 255); // Turn it red
threshold(displayImage, lowThresh, lowThreshVal, 255, 1);
lowThresh = lowThresh & Scalar(255, 0, 0); // Turn it blue
displayImage = lowThresh + highThresh;
When I display displayImage, I see almost exactly what I want: an image where all the pixels below lowThreshVal are blue and all pixels above highThreshVal are red. However, the pixels in between these values are all set to 0, whereas I would like to show the original image overlaid with the blue and red images. I'm not sure how to do this, or if I'm taking the best approach.
I know I can't simply add the images, because I want every pixel above the threshold to be pure red, not a mix of red and the original image; that mix yields pinkish pixels instead of bright red ones, which defeats the purpose of what I'm trying to build. As of right now, I'm kind of stuck on what to do.
This worked. Each threshold call produces a mask, and setTo overwrites only the pixels where that mask is nonzero, so the pixels between the two thresholds keep their original values:
cvtColor(displayImage, displayImage, COLOR_GRAY2BGR);
origImage = displayImage.clone();
threshold(origImage, highThresh, highThreshVal, 255, 0); // binary thresholding
cvtColor(highThresh, highThresh, CV_BGR2GRAY); // 3-channel result -> single-channel mask
displayImage.setTo(Scalar(0, 0, 255), highThresh); // pure red wherever the mask is nonzero
threshold(origImage, lowThresh, lowThreshVal, 255, 1); // inverted binary thresholding
cvtColor(lowThresh, lowThresh, CV_BGR2GRAY); // 3-channel result -> single-channel mask
displayImage.setTo(Scalar(255, 0, 0), lowThresh); // pure blue wherever the mask is nonzero
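A slightly shorter variant is also possible. This is only a sketch, assuming the original single-channel grayscale image is still available (here called gray; the function name is made up): the masks are built before the color conversion, so no second cvtColor is needed.

#include <opencv2/opencv.hpp>
using namespace cv;

// Sketch: build the red/blue overlay directly from the grayscale image.
Mat overlayThresholds(const Mat& gray, double lowThreshVal, double highThreshVal)
{
    Mat highMask, lowMask, display;
    threshold(gray, highMask, highThreshVal, 255, THRESH_BINARY);     // bright pixels
    threshold(gray, lowMask,  lowThreshVal,  255, THRESH_BINARY_INV); // dark pixels
    cvtColor(gray, display, COLOR_GRAY2BGR);
    display.setTo(Scalar(0, 0, 255), highMask); // pure red above the high threshold
    display.setTo(Scalar(255, 0, 0), lowMask);  // pure blue below the low threshold
    return display;
}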

CAIRO_OPERATOR_CLEAR not working as expected

I want to remove parts of a previously filled shape with Cairo and C++.
Consider the following MWE:
void test(cairo_t *cr){
cairo_set_source_rgb(cr, 1, 1, 1);
cairo_paint(cr); //background
cairo_rectangle(cr, 50, 50, 150, 150);
cairo_set_source_rgb(cr, 0, 0, 1);
cairo_fill(cr); //first rect
cairo_set_operator(cr,CAIRO_OPERATOR_CLEAR);
cairo_arc(cr, 100, 100, 100, 0, M_PI * 2);
cairo_fill(cr); //circle that should remove a part of the rect
}
It results in the following picture:
According to the documentation I would have expected no black color at all, and all parts of the blue rectangle that are under the circle to be removed (and therefore white, like the background).
Did I misunderstand the operator? Did I make any mistake?
How would cairo know what you consider the background?
The documentation that you link to mentions that the alpha channel and all color channels are set to 0. This is fully transparent black.
The example in the documentation is an image with an alpha channel and thus the cleared parts become transparent.
You are using an image without an alpha channel and thus the cleared parts become black.
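Two straightforward fixes follow from that. If real transparency is needed, create the surface with an alpha channel (cairo_image_surface_create with CAIRO_FORMAT_ARGB32), so that CAIRO_OPERATOR_CLEAR leaves transparent pixels. If the goal is only the picture described above, draw the "hole" in the background color with the default OVER operator. A minimal sketch reusing the MWE (the function name is just a variant of test):

#include <math.h>
#include <cairo.h>

// Variant of the MWE: instead of CLEAR, paint the circle with the
// background color, which also works on a surface without alpha.
void test_overdraw(cairo_t *cr) {
    cairo_set_source_rgb(cr, 1, 1, 1); // white background
    cairo_paint(cr);
    cairo_rectangle(cr, 50, 50, 150, 150);
    cairo_set_source_rgb(cr, 0, 0, 1); // blue rectangle
    cairo_fill(cr);
    cairo_set_source_rgb(cr, 1, 1, 1); // "erase" by drawing the background color over it
    cairo_arc(cr, 100, 100, 100, 0, M_PI * 2);
    cairo_fill(cr);
}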

Image blending problem when rendering to texture

This is related to my last question. To get this image:
http://img252.imageshack.us/img252/623/picture8z.png
I draw a white background (color = (1, 1, 1, 1)).
I render-to-texture the two upper-left squares with color = (1, 0, 0, .8) and blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and then draw the texture with color = (1, 1, 1, 1) and blend function (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
I draw the lower-right square with color = (1, 0, 0, .8) and blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
By my calculation, the render-to-texture squares should have color
.8 * (1, 0, 0, .8) + (1 - .8) * (0, 0, 0, 0) = (.8, 0, 0, .64)
and so after drawing that texture on the white background, they should have color
(.8, 0, 0, .64) + (1 - .8) * (1, 1, 1, 1) = (1, .2, .2, .84)
and the lower-right square should have color
.8 * (1, 0, 0, .8) + (1 - .8) * (1, 1, 1, 1) = (1, .2, .2, .84)
which should look the same! Is my reasoning wrong? Is my computation wrong?
In any case, my goal is to cache some of my scene. How do I render-to-texture and then draw that texture so that it is equivalent to just drawing the scene inline?
If you want to render blended content to a texture and composite that texture to the screen, the simplest way is to use premultiplied alpha everywhere. It’s relatively simple to show that this works for your case: the color of your semi-transparent squares in premultiplied form is (0.8, 0, 0, 0.8), and blending this over (0, 0, 0, 0) with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) essentially passes your squares’ color through to the texture. Blending (0.8, 0, 0, 0.8) over opaque white with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) gives you (1.0, 0.2, 0.2, 1.0). Note that the color channels are the same as your third calculation, but the alpha channel is still 1.0, which is what you would expect for an opaque object covered by a blended object.
Tom Forsyth has a good article about premultiplied alpha. The whole thing is worth reading, but see the “Compositing translucent layers” section for an explanation of why the math works out in the general case.
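A minimal sketch of the state this implies for the two passes (fbo, texture, and the two draw helpers are placeholders, assuming a framebuffer-object setup like the one already used for the render-to-texture step):

// Pass 1: draw the translucent squares into the cached texture.
// The texture is cleared to transparent black, and the incoming square
// color is supplied premultiplied, e.g. (0.8, 0, 0, 0.8).
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); // premultiplied blending
drawSquaresPremultiplied();                  // placeholder

// Pass 2: composite the cached texture over the scene with the same
// premultiplied blend function; the result matches drawing the squares
// directly into the scene.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
drawTexturedQuad(texture);                   // placeholder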
Whoops, my computation is wrong! The second line should be
(.8, 0, 0, .64) + (1 - .64) * (1, 1, 1, 1) = (1, .36, .36, 1)
which indeed seems to match what I see (when I change the last square to color (1, .2, .2, .8), all three squares appear the same color).
Regarding your last question: Replacing parts of the scene by textures is not trivial. A good starting point is Stefan Jeschke's PhD thesis.