glDrawPixels outputs some pixels with a slightly different RGB value than expected - c++

I am currently working on a video player for Windows using OpenGL. It works great, but one of my main goals is accuracy. That is, the image displayed should be exactly the image that was saved in the video.
Taking away everything video/file/input related, I have something along these lines as my glutDisplayFunc:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLubyte frame[256*128*4]; // width*height*RGBA_size
for (int i = 0; i < 256*128; i++)
{
    frame[i*4]   = 0x00; // R
    frame[i*4+1] = 0xAA; // G
    frame[i*4+2] = 0x00; // B
    frame[i*4+3] = 0x00; // A
}
glRasterPos2i(0,0);
glDrawPixels(256,128,GL_RGBA,GL_UNSIGNED_BYTE,frame);
glutSwapBuffers();
Combined with the GLUT setup code:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize(256, 128);
glutCreateWindow("...");
glutDisplayFunc(Render);
glPixelZoom(1,-1); // use top-to-bottom display
glEnable(GL_DEPTH_TEST);
glutMainLoop();
This is pretty straightforward and a huge simplification of the actual code. A frame with an RGB value of #00AA00 is created and drawn with glDrawPixels.
As expected, the output is a green window. At first glance all seems good.
But when I use a tool such as http://instant-eyedropper.com/ to check the exact RGB value of a pixel, I realize that not all of the pixels are displayed as #00AA00.
Some pixels have a value of #00A900. This is the case for the first pixel in the top-left corner, as well as the 3rd, the 5th, and so on along the same line, and for every odd line.
Now it can't be a problem with Instant Eyedropper or with Windows since other programs output the right color for the same file.
Now my question:
Could it be that glDrawPixels somehow changes the pixel values, maybe as a way to go faster?
I would expect such a function to display exactly what we feed into it, so I'm not quite sure what to think.

OpenGL has color dithering enabled by default, so your GPU may actually be dithering the colors you submit. Try disabling it with glDisable(GL_DITHER);. Also be aware of colour management issues.
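In the GLUT setup from the question, this call can go right after glutCreateWindow(), once a GL context exists. A minimal sketch reusing the placeholder window title and Render callback from above:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize(256, 128);
glutCreateWindow("...");
glDisable(GL_DITHER); // dithering is on by default; turn it off so pixel values pass through unchanged
glutDisplayFunc(Render);
glutMainLoop();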

Related

GLEW and GLUT: how do you activate a pixel format?

So I've come to C++ with GLEW and GLUT from Java's LWJGL. I've got a spinning rectangle working with a simple glBegin(GL_QUADS) and that sort of thing. But how do I activate a pixel format like in LWJGL?
The best you can do is the on/off flags of glutInitDisplayMode():
GLUT_RGBA: Bit mask to select an RGBA mode window. This is the default if neither GLUT_RGBA nor GLUT_INDEX are specified.
GLUT_RGB: An alias for GLUT_RGBA.
GLUT_INDEX: Bit mask to select a color index mode window. This overrides GLUT_RGBA if it is also specified.
GLUT_SINGLE: Bit mask to select a single buffered window. This is the default if neither GLUT_DOUBLE or GLUT_SINGLE are specified.
GLUT_DOUBLE: Bit mask to select a double buffered window. This overrides GLUT_SINGLE if it is also specified.
GLUT_ACCUM: Bit mask to select a window with an accumulation buffer.
GLUT_ALPHA: Bit mask to select a window with an alpha component to the color buffer(s).
GLUT_DEPTH: Bit mask to select a window with a depth buffer.
GLUT_STENCIL: Bit mask to select a window with a stencil buffer.
GLUT_MULTISAMPLE: Bit mask to select a window with multisampling support. If multisampling is not available, a non-multisampling window will automatically be chosen. Note: both the OpenGL client-side and server-side implementations must support the GLX_SAMPLE_SGIS extension for multisampling to be available.
GLUT_STEREO: Bit mask to select a stereo window.
GLUT_LUMINANCE: Bit mask to select a window with a "luminance" color model. This model provides the functionality of OpenGL's RGBA color model, but the green and blue components are not maintained in the frame buffer. Instead each pixel's red component is converted to an index between zero and glutGet(GLUT_WINDOW_COLORMAP_SIZE)-1 and looked up in a per-window color map to determine the color of pixels within the window. The initial colormap of GLUT_LUMINANCE windows is initialized to be a linear gray ramp, but can be modified with GLUT's colormap routines.
You can't request a specific number of alpha/depth/stencil/etc. bits like you can with LWJGL's PixelFormat.
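For example, requesting a double-buffered RGBA window with alpha, depth, and stencil buffers is just a matter of OR-ing the flags together; since the exact bit counts are up to the implementation, you can query what you actually received. A rough sketch (the window title is a placeholder):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_STENCIL);
glutCreateWindow("pixel format demo");
GLint depthBits = 0, stencilBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);     // e.g. 24, but not guaranteed
glGetIntegerv(GL_STENCIL_BITS, &stencilBits); // e.g. 8, but not guaranteed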

Difference between single buffered (GLUT_SINGLE) and double buffered (GLUT_DOUBLE) drawing

I'm using the example here; it works under
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
but it becomes a transparent window when I set it to
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
but I need that example to work with some drawing under GLUT_DOUBLE mode.
So what's the difference between GLUT_DOUBLE and GLUT_SINGLE?
When using GLUT_SINGLE, you can picture your code drawing directly to the display.
When using GLUT_DOUBLE, you can picture having two buffers. One of them is always visible, the other one is not. You always render to the buffer that is not currently visible. When you're done rendering the frame, you swap the two buffers, making the one you just rendered visible. The one that was previously visible is now invisible, and you use it for rendering the next frame. So the role of the two buffers is reversed each frame.
In reality, the underlying implementation works somewhat differently on most modern systems. For example, some platforms use triple buffering to prevent blocking when a buffer swap is requested. But that doesn't normally concern you. The key is that it behaves as if you had two buffers.
The main difference, aside from specifying the different flag in the argument for glutInitDisplayMode(), is the call you make at the end of the display function. This is the function registered with glutDisplayFunc(), which is DrawCube() in the code you linked.
In single buffer mode, you call this at the end:
glFlush();
In double buffer mode, you call:
glutSwapBuffers();
So all you should need to do is replace the glFlush() at the end of DrawCube() with glutSwapBuffers() when using GLUT_DOUBLE.
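A minimal sketch of how the end of the display function differs between the two modes (DrawCube() standing in for whatever the linked example actually draws):
void DrawCube()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene ...

    // glFlush();      // single buffered: GLUT_SINGLE
    glutSwapBuffers(); // double buffered: GLUT_DOUBLE
}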
When drawing to a single buffered context (GLUT_SINGLE), there is only one framebuffer that is used to both draw and display the content. This means that you draw more or less directly to the screen. In addition, things drawn last in a frame are shown for a shorter time than objects drawn at the beginning.
In a double buffered scenario (GLUT_DOUBLE), there exist two framebuffers. One is used for drawing, the other one for display. At the end of each frame, these buffers are swapped. This way the view only changes all at once when a frame is finished, and all objects are visible for the same amount of time.
That being said: are you sure that the transparent window is caused by GLUT_DOUBLE and not by using GLUT_RGBA instead of GLUT_RGB?

How to make a step-by-step display animation in OpenGL?

How can I make a step-by-step display animation in OpenGL?
I'm doing a RepRap printer project to read a G-code file and interpret it into graphics.
Now I have difficulty making a step-by-step animation of drawing the whole object.
I need to draw many short lines to make up the whole object.
for example:
|-----|
|     |
|     |
|-----|
the square is made up of many short lines, and each line is generated by code like:
glPushMatrix();
.....
for (int i = 0; i < instruction.size(); i++)
{ ....
glBegin(GL_LINES);
glVertex3f(oldx, oldy, oldz);
glVertex3f(x, y, z);
glEnd();
}
glPopMatrix();
Now I want to make a step animation to display how this square is made. I tried to refresh the screen each time a new line is drawn, but it doesn't work; the whole square just comes out at once. Does anyone know how to do this?
Typical OpenGL implementations will queue up a large number of calls and batch them together into bursts of activity, to make optimal use of the available communication bandwidth and GPU time.
What you want to do is basically the opposite of double buffered rendering, i.e. rendering where each drawing step is immediately visible. One way to do this is by rendering to a single buffered window and call glFinish() after each step. Major drawback: It's likely to not work well on modern systems, which use compositing window managers and similar.
Another approach, which I recommend, is using a separate buffer for incremental drawing, and constantly refreshing the main framebuffer from this one. The key subjects are Frame Buffer Object and Render To Texture.
First you create an FBO (there are tons of tutorials out there and as answers on Stack Overflow). An FBO is basically an abstraction to which you can connect target buffers, like textures or renderbuffers, and which can be bound as the destination of drawing calls.
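As a rough sketch of that setup (assuming OpenGL 3.0+ or an equivalent framebuffer-object extension; animFBO, animFBOTexture, fboWidth and fboHeight are placeholder names used again below):
GLuint animFBO, animFBOTexture;
glGenTextures(1, &animFBOTexture);
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, so use a non-mipmap filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &animFBO);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, animFBOTexture, 0);
glClear(GL_COLOR_BUFFER_BIT); // clear once here, and never again (see below)
glBindFramebuffer(GL_FRAMEBUFFER, 0);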
So how do you solve your problem with them? First, you should not do the animation by delaying inside a drawing loop. There are several reasons for this, but the main issue is that you lose program interactivity. Instead you maintain a (global) counter for which step of your animation you are at. Let's call it step:
int step = 0;
Then in your drawing function you have two phases: 1) texture update, 2) screen refresh.
Phase one consists of binding your framebuffer object as the render target. For this to work, the target texture must be unbound:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glViewport(0, 0, fbo.width, fbo.height);
set_animFBO_projection();
The trick now is that you clear the animFBO only once, namely after creation, and then never again. Now you draw your lines according to the animation step:
draw_lines_for_step(step);
and increment the step counter (could do this as a compound statement, but this is more explicit)
step++;
After updating the animation FBO it's time to update the screen. First unbind the animFBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
We're now on the main, on-screen framebuffer
glViewport(0, 0, window.width, window.height);
set_window_projection(); //most likely a glMatrixMode(GL_PROJECTION); glOrtho(0, 1, 0, 1, -1, 1);
Now bind the FBO attached texture and draw it to a full viewport quad
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
draw_full_viewport_textured_quad();
Finally do the buffer swap to show the animation step iteration
SwapBuffers();
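Put together, the display function registered with glutDisplayFunc() would look roughly like this (a sketch reusing the placeholder names from above; the set_* and draw_* helpers are assumed, not real API):
int step = 0;

void display()
{
    // Phase 1: draw only the new lines for this step into the FBO (which is never cleared again)
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
    glViewport(0, 0, fbo.width, fbo.height);
    set_animFBO_projection();
    draw_lines_for_step(step);
    step++;

    // Phase 2: show the accumulated result on screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, window.width, window.height);
    set_window_projection();
    glBindTexture(GL_TEXTURE_2D, animFBOTexture);
    draw_full_viewport_textured_quad();

    glutSwapBuffers();
    glutPostRedisplay(); // request the next frame so the animation keeps advancing
}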
You should have the SwapBuffers method called after each draw call.
Be sure you don't mess up the matrix stack, and you'll probably need something to "pause" the rendering, like a breakpoint.
If you only want the lines to appear one after another, and you don't have to be nit-picking about efficiency or good programming style, try something like:
(in your drawing routine)
if (timer > 100)
{
    // draw the next line
    timer = 0;
}
else
    timer++;
// draw all the other lines (you have to remember which ones have already appeared),
// for example using a boolean array "lineDrawn[10]"
The timer is an integer that tells you how many times you have drawn the scene. If you make the threshold larger, things happen more slowly on screen when you run your program.
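Fleshed out a little, a minimal sketch of that idea (the counters, the lineDrawn array and the drawLine() helper are just illustrative placeholders):
const int NUM_LINES = 10;
bool lineDrawn[NUM_LINES] = { false };
int nextLine = 0;
int timer = 0;

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // every 100 redraws, reveal one more line
    if (timer > 100 && nextLine < NUM_LINES)
    {
        lineDrawn[nextLine++] = true;
        timer = 0;
    }
    else
        timer++;

    // draw every line that has appeared so far
    for (int i = 0; i < NUM_LINES; i++)
        if (lineDrawn[i])
            drawLine(i); // hypothetical helper that draws the i-th short line

    glutSwapBuffers();
    glutPostRedisplay(); // keep redrawing so the timer keeps counting
}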
Of course this only works if you have a draw routine. If not, I strongly suggest using one.
-> plenty of tutorials pretty much everywhere, e.g.
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_%28win32%29/13001/
Good luck to you!
PS: I think you have done nearly the same, but without a timer. That's why everything was drawn so fast that you thought it all appeared at the same time.

How does one use clip() to perform alpha testing?

This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.GreaterEqual,
    ReferenceStencil = 254,
    DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to only draw pixels that are within epsilon of full Alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full Alpha the second.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on floor(alpha - val) - 1 where val is a number in (0,1) that limits the alpha values drawn.
I have written a more detailed answer here:
Stencil testing in XNA 4

OpenGL - mask with multiple textures

I have implemented masking in OpenGL according to the following concept:
The mask is composed of black and white colors.
A foreground texture should only be visible in the white parts of the mask.
A background texture should only be visible in the black parts of the mask.
I can make the white part or the black part work as supposed by using glBlendFunc(), but not the two at the same time, because the foreground layer not only blends onto the mask, but also onto the background layer.
Is there anyone who knows how to accomplish this in the best way? I have been searching the net and read something about fragment shaders. Is this the way to go?
This should work:
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
This is fairly tricky, so tell me if anything is unclear.
Don't forget to request an alpha buffer when creating the GL context. Otherwise it's possible to get a context without an alpha buffer.
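With GLUT (as used elsewhere on this page), that means adding the GLUT_ALPHA flag; it's also worth verifying that you actually received destination-alpha bits, since the driver may ignore the request. A small sketch (the window title is a placeholder):
glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE);
glutCreateWindow("masking demo");
GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits); // must be non-zero (e.g. 8) for GL_DST_ALPHA blending to work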
Edit: Here, I made an illustration.
Edit: Since writing this answer, I've learned that there are better ways to do this:
If you're limited to OpenGL's fixed-function pipeline, use texture environments
If you can use shaders, use a fragment shader.
The way described in this answer works and is not particularly worse in performance than these 2 better options, but is less elegant and less flexible.
Stefan Monov's is a great answer! But for those who still have issues getting it to work:
You need to check GLES20.glGetIntegerv(GLES20.GL_ALPHA_BITS, ib) - you need a non-zero result.
If you get 0, go to your EGLConfig and ensure that you pass the alpha bits:
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8, // <- I didn't have this and spent a lot of time on it
EGL14.EGL_DEPTH_SIZE, 16,