My task: implement a weighted antialiasing algorithm with the accumulation buffer in OpenGL. In other words, I have a pixel array that I have to shift by one pixel in all directions, each shifted copy weighted by a multiplier.
My problem: I don't really understand what my code does.
while (!glfwWindowShouldClose(window)) {
    glReadBuffer(GL_FRONT);
    glDrawBuffer(GL_FRONT);
    glDrawPixels(800, 800, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glAccum(GL_LOAD, 1.0);
    glRasterPos2d(1.0, 0.0);
    glAccum(GL_ACCUM, 2.0);
    glRasterPos2d(0.0, 1.0);
    glAccum(GL_ACCUM, 2.0);
    glAccum(GL_RETURN, 1.0);
    glfwPollEvents();
    glfwSwapBuffers(window);
}
As I understand it, glRasterPos changes the "cursor" position, so by changing it my picture should shift one pixel to the right and to the bottom. But I don't see any antialiasing result, just blinking of my shape (the pixels array contains only white pixels). I understand that I should do GL_ACCUM for all nine directions, and that I should use glRasterPos for this.
What am I not understanding about glAccum and glRasterPos?
Calling glRasterPos doesn't move anything that has already been drawn; it just sets the current raster position - treat it as assigning values to internal posX and posY variables. You have to draw your image again (call glDrawPixels) after each glRasterPos call in order to see its effect.
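For illustration, here is a minimal sketch of that idea, assuming the 800x800 pixels array and the GLFW window from the question, and that an accumulation buffer was actually requested when the window was created (e.g. with glfwWindowHint(GLFW_ACCUM_RED_BITS, 16) and the green/blue/alpha equivalents). The weights are illustrative and just need to sum to 1; the offsets are biased by +1 pixel because an off-window raster position is marked invalid and glDrawPixels would then be skipped entirely:
// Make 1 unit == 1 pixel so glRasterPos offsets really are pixel offsets.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 0, 800, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

static const int   dx[9]     = { -1,  0,  1, -1, 0, 1, -1, 0, 1 };
static const int   dy[9]     = { -1, -1, -1,  0, 0, 0,  1, 1, 1 };
static const float weight[9] = { 1/16.f, 2/16.f, 1/16.f,
                                 2/16.f, 4/16.f, 2/16.f,
                                 1/16.f, 2/16.f, 1/16.f };

glClear(GL_ACCUM_BUFFER_BIT);
for (int i = 0; i < 9; ++i) {
    glClear(GL_COLOR_BUFFER_BIT);
    glRasterPos2i(1 + dx[i], 1 + dy[i]);                        // set the position, then redraw...
    glDrawPixels(800, 800, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // ...so this copy really is shifted
    glAccum(GL_ACCUM, weight[i]);                               // add current color buffer * weight
}
glAccum(GL_RETURN, 1.0f);   // write the weighted sum back to the color buffer
glfwSwapBuffers(window);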
I'm writing code that displays the hidden part of an object.
Here is an example: the plate is the larger polygonal object and the cylinder is the smaller one; the cylinder is partly hidden by the plate. (See the lower half of the window: the cylinder penetrates the plate, and some part of the cylinder is hidden by it.)
The image is made by the code below:
draw the plate (don't draw it to the RGB buffer; only capture its depth values)
draw the cylinder (where the 'less' depth test passes, i.e. the visible part of the cylinder with smaller depth is drawn)
The model is rotated around the y axis each frame, and this gives me a correct result for every frame.
Now I'd like to display the hidden part of the cylinder as transparent.
Before using blending, I want to display only the hidden part of the cylinder, i.e. the cylinder's region that has greater depth values than the plate's depth. So I just change
glDepthFunc(GL_LESS);
to
glDepthFunc(GL_GREATER);
However, if I change it to GL_GREATER, it does not give me a correct result.
I get a correct result for the first frame, but after that the model is gone. (That means the model is no longer displayed in the window, in either the upper or the lower viewport.)
I can't figure out the reason. Help me!
void MyDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(rot, 0, 1, 0);

    // lower half of the window: plate and cylinder rendered normally
    glViewport(0, 0, width, height/2);
    glEnable(GL_DEPTH_TEST);
    DrawPlate();
    glColor4f(0, 0, 0, 1);
    DrawCylinder();
    glDisable(GL_DEPTH_TEST);

    // upper half of the window: plate writes depth only, then the cylinder is drawn
    glViewport(0, height/2, width, height/2);
    glEnable(GL_DEPTH_TEST);
    glClearDepth(1.0);
    //glDrawBuffer(GL_NONE); // No color buffers are written
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    DrawPlate();
    glDepthFunc(GL_LESS);
    // glDepthFunc(GL_GREATER); // doesn't work !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glColor4f(0, 0, 0, 0.5f);
    DrawCylinder();

    delay(1);
    glFlush();
    glutPostRedisplay();
}
Hey guys, I found a solution.
The problem is that the depth function is not reset after the first frame: glDepthFunc(GL_GREATER) is still active when the next frame starts, and since the depth buffer is cleared to 1.0, no fragment can pass a 'greater' test against it, so nothing gets drawn.
The fix is to call glDepthFunc(GL_LESS) in the first line of the display function.
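For anyone hitting the same thing, a minimal sketch of that fix, using the names from the code above (only the depth-relevant lines are shown):
void MyDisplay()
{
    glDepthFunc(GL_LESS);     // reset first, so last frame's GL_GREATER doesn't leak into this frame
    glClearDepth(1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // depth buffer is now all 1.0
    glEnable(GL_DEPTH_TEST);

    // ... lower viewport: draw plate and cylinder normally with GL_LESS ...

    // upper viewport: plate writes depth only, then show the cylinder parts behind it
    glViewport(0, height/2, width, height/2);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    DrawPlate();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_GREATER);  // pass only cylinder fragments that lie behind the plate
    DrawCylinder();

    glFlush();
    glutPostRedisplay();
}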
So, I've been trying to rotate a single object in an OpenGL/GLUT scene.
After going through several questions on SO and the like, I've written what appears to be correct code, but no dice. Does anyone know how to make this work?
PS: I've tried changing the matrix mode to GL_PROJECTION, but that just shows a black screen. The same thing happens if I use glLoadIdentity().
Here's my rendering code:
void display()
{
    preProcessEvents();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(Camera::position.x, Camera::position.y, Camera::position.z,
              Camera::position.x + Math::sind(Camera::rotationAngles.x) * Math::cosd(Camera::rotationAngles.y),
              Camera::position.y + Math::cosd(Camera::rotationAngles.x),
              Camera::position.z + Math::sind(Camera::rotationAngles.x) * Math::sind(Camera::rotationAngles.y),
              0.0, 1.0, 0.0);

    glBegin(GL_TRIANGLES);
    glColor3f(1, 0, 0);
    glVertex3f(-1, 0, -3);
    glColor3f(0, 1, 0);
    glVertex3f(0.0f, 2.0f, -3);
    glColor3f(0, 0, 1);
    glVertex3f(1.0f, 0.0f, -3);
    glEnd();

    glBindTexture(GL_TEXTURE_2D, tex->textureID);
    glBegin(GL_QUADS);
    glColor3f(1, 1, 1);
    glTexCoord2f(100, 100);
    glVertex3f(100, 0, 100);
    glTexCoord2f(-100, 100);
    glVertex3f(-100, 0, 100);
    glTexCoord2f(-100, -100);
    glVertex3f(-100, 0, -100);
    glTexCoord2f(100, -100);
    glVertex3f(100, 0, -100);
    glEnd();
    glBindTexture(GL_TEXTURE_2D, 0);

    object1.draw();

    glTranslatef(-10.0, 10.0, 0.0);
    glBindTexture(GL_TEXTURE_2D, tex2->textureID);
    gluQuadricTexture(quad, 1);
    gluSphere(quad, 10, 20, 20);
    glBindTexture(GL_TEXTURE_2D, 0);

    //RELEVANT CODE STARTS HERE
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glPushMatrix();
    glRotatef(190, 0.0, 0.0, 1.0);
    glPopMatrix();

    glutSwapBuffers();
}
Are you aware of what glPushMatrix and glPopMatrix do? They save and restore the "current" matrix.
By enclosing your rotation in that pair and doing no actual drawing before restoring the matrix, the entire sequence of code beginning with //RELEVANT CODE STARTS HERE is completely pointless.
Even if you did not push/pop, your rotation would only be applied the next time you draw something. Logically you might think that would mean the next time you call display (...), but one of the first things you do in display (...) is replace the current matrix with an identity matrix (the glLoadIdentity() call right after glClear).
In all honesty, you should consider abandoning whatever resource you are currently using to learn OpenGL. You are using deprecated functionality and missing a few fundamentals. A good OpenGL 3.0 tutorial will usually touch on the basics of transformation matrices.
As for why changing the matrix mode to projection produces a black screen, that is because the next time you call display (...), gluLookAt operates on the projection matrix. In effect, you wind up applying the camera transformation twice. You should really add glMatrixMode (GL_MODELVIEW) to the beginning of display (...) to avoid this.
You do the rotation (and immediately undo it with glPopMatrix) after you draw; move the rotation code before the glBegin/glEnd calls of the object you want rotated.
Or just move to the shader-based pipeline and manage your own transformation matrices.
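To make both answers concrete, here is a minimal sketch (the camera values are placeholders, not the question's actual Camera members): the rotation is issued before the geometry it should affect and is bracketed by push/pop so later draws are unaffected:
glMatrixMode(GL_MODELVIEW);          // make sure we are on the modelview stack
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,   0.0, 0.0, 0.0,   0.0, 1.0, 0.0);   // placeholder camera

glPushMatrix();
glRotatef(190.0f, 0.0f, 0.0f, 1.0f); // rotation now applies to the triangle below
glBegin(GL_TRIANGLES);
glColor3f(1, 0, 0); glVertex3f(-1, 0, -3);
glColor3f(0, 1, 0); glVertex3f( 0, 2, -3);
glColor3f(0, 0, 1); glVertex3f( 1, 0, -3);
glEnd();
glPopMatrix();                       // geometry drawn after this is not rotated

glutSwapBuffers();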
I need to render a sphere to a texture (done using a Framebuffer Object (FBO)), and then alpha blend that texture with the back buffer. So far I'm not doing any processing with the texture except clearing it at the beginning of every frame.
I should say that my scene consists of nothing but a planet in empty space; the sphere should appear next to or around the planet (kind of like a moon for now). When I render the sphere directly to the back buffer, it displays correctly; but when I add the intermediate step of rendering it to a texture and then blending that texture with the back buffer, the sphere only shows up where it is in front of the planet - the part that isn't in front is simply "cut off":
I render the sphere using glutSolidSphere into an RGBA8 fullscreen texture bound to an FBO, making sure that every sphere pixel receives an alpha value of 1.0. I then pass the texture to a fragment shader program and use the following code to render a fullscreen quad - with the texture mapped onto it - to the back buffer with alpha blending:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2i(0, 1);
glVertex3i(-1, 1, -1); // TOP LEFT
glTexCoord2i(0, 0);
glVertex3i(-1, -1, -1); // BOTTOM LEFT
glTexCoord2i(1, 0);
glVertex3i( 1, -1, -1); // BOTTOM RIGHT
glTexCoord2i(1, 1);
glVertex3i( 1, 1, -1); // TOP RIGHT
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
This is the shader code (taken from an FX file written in Cg):
sampler2D BlitSamp = sampler_state
{
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = Clamp;
AddressV = Clamp;
};
float4 blendPS(float2 texcoords : TEXCOORD0) : COLOR
{
float4 outColor = tex2D(BlitSamp, texcoords);
return outColor;
}
I don't even know whether this is a problem with the depth buffer or with alpha blending; I've tried many combinations of enabling and disabling depth testing (with a depth buffer attached to the FBO) and alpha blending.
EDIT: I tried just rendering a blank fullscreen quad straight to the back buffer, and even that was cropped around the planet's edges. For some reason, enabling depth testing for the quad (that is, removing the lines glDisable(GL_DEPTH_TEST) and glEnable(GL_DEPTH_TEST) in the code above) got rid of the problem, but now everything except the planet and the sphere appears white:
I made sure (and could confirm) that the alpha channel of the texture is 0 at every pixel except the sphere's, so I don't understand where the whiteness could be introduced. (I would also still be interested in an explanation of why enabling depth testing has this effect.)
I see two possible sources of error here:
1. Rendering to the FBO
If the missing pixels are not even present in the FBO after rendering, there must be some mechanism which discarded the corresponding fragments. The OpenGL pipeline includes four different types of fragment tests which can lead to fragments being discarded:
Scissor Test: Unlikely to be the cause, as the scissor test only affects a rectangular portion of the screen.
Alpha Test: Equally unlikely, as your fragments should all have the same alpha value.
Stencil Test: Also unlikely, unless you use stencil operations when drawing the background planet and copy over the stencil buffer from the back buffer to the FBO.
Depth Test: Same as for stencil test.
So there's a good chance that rendering into the FBO is not the issue here. But just to be absolutely sure, you should read back your color attachment texture and dump it into a file for inspection. You can use the following function for that:
#include <fstream>
#include <vector>

void TextureToFile(GLuint texture, const char* filename)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);

    std::vector<GLubyte> pixels(3 * width * height);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    std::ofstream out(filename, std::ios::out | std::ios::binary);
    out << "P6\n"
        << width << '\n'
        << height << '\n'
        << 255 << '\n';
    out.write(reinterpret_cast<const char*>(&pixels[0]), pixels.size());
}
The resulting file is a portable pixmap (.ppm). Be sure to unbind the FBO before reading back the texture.
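Hypothetical usage, with fbo and colorTex standing in for whatever your handles are actually called:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);    // render the sphere into the FBO as usual
RenderSphereToTexture();                   // placeholder for your sphere pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);      // unbind the FBO before reading back
TextureToFile(colorTex, "sphere_fbo.ppm");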
2. Texture mapping
Assuming rendering into the FBO works as expected, the only other source of error is blending the texture over the previously rendered scene. There are two scenarios:
a) Fragments get discarded
The possible reasons for fragments to get discarded are the same as in 1.:
Scissor Test: Nope, affects rectangular areas only.
Alpha Test: Probably not; the texels covering the sphere should all have the same alpha value.
Stencil Test: Might be the cause if you use stencil operations/stencil testing when drawing the background planet and the old stencil state is still active.
Depth Test: Might be the cause, but as you already disable it, it really shouldn't have any effect.
So you should make sure that all of these tests are disabled, especially the stencil test.
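A minimal sketch of that state setup right before drawing the fullscreen quad (the blend function is the one from your code):
glDisable(GL_SCISSOR_TEST);
glDisable(GL_ALPHA_TEST);
glDisable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the textured fullscreen quad ...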
b) Wrong results from blending
Assuming all fragments reach the back buffer, blending is the only thing which could still cause a wrong result. With your blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the back buffer values are irrelevant wherever the texture's alpha is 1.0, and we are assuming the alpha values in the texture are correct. So I see no reason why blending should be the root cause here.
Conclusion
In conclusion, the only sensible cause for the observed result seems to be stencil testing. If it's not, I'm out of options :)
I solved it, or at least came up with a workaround.
First off, the whiteness stems from the fact that glClearColor had been set to glClearColor(1.0f, 1.0f, 1.0f, 1000.0f), so everything except the planet was never written to in the end. I now copy the contents of the back buffer (which holds the planet, the atmosphere, and the space around it) into the texture before rendering the sphere, and I render the atmosphere and space before that copy/blit operation so they are included in it. Previously, everything but the planet itself was rendered after my quad, which - with depth testing enabled - apparently placed it all behind the quad, making it invisible.
The reference implementation of the effect I'm trying to achieve has always used this kind of blit operation in its code, but I didn't think it was necessary for the effect. Now I feel like there might be no other way...
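For reference, a minimal sketch of such a copy step (not necessarily exactly how the original code does it); sceneTex is a placeholder for the FBO's color texture and width/height are the window size:
// Copy what is currently in the back buffer (planet, atmosphere, space)
// into the texture before the sphere is rendered into the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // read from the default framebuffer
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glBindTexture(GL_TEXTURE_2D, 0);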
I just started working with OpenGL, but I ran into a problem after implementing a font system.
My plan is simply to visualize several pathfinding algorithms.
Currently OpenGL gets set up like this (OnSize is called manually once on window creation):
void GLWindow::OnSize(GLsizei width, GLsizei height)
{
    // set size
    glViewport(0, 0, width, height);

    // orthographic projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);

    m_uiWidth = width;
    m_uiHeight = height;
}

void GLWindow::InitGL()
{
    // enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    // choose a smooth shading model
    glShadeModel(GL_SMOOTH);
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
}
In theory I don't need blending, because I will only use untextured quads to visualize obstacles, and lines etc. to draw paths... So everything will be untextured except the fonts.
The font class has a push and a pop function that look like this (if I remember right, my font system is based on a NeHe tutorial I followed quite a while ago):
inline void GLFont::pushScreenMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1.0, 1.0);
    glPopAttrib();
}

inline void GLFont::popProjectionMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}
So, the problem:
If I don't draw any text, I can see the quads I want to draw, but they are quite dark, so there must be something wrong with my general OpenGL matrix/state setup.
If I draw text (so the font-related push and pop functions get called), I can't see any quads at all.
The question:
How do I solve this problem? Some background information on why this happens would also be nice, because I am still a beginner/student who has just started.
If you draw untextured quads while texturing is still enabled, you will get results you probably didn't intend: the most recently bound texture stays active, and since you don't specify texture coordinates for the quads, they are sampled at whatever the current texture coordinate happens to be (typically (0,0)), and that texel modulates your quad colours - which is what makes them dark or invisible.
Really, you need to disable texturing with glDisable(GL_TEXTURE_2D) before drawing untextured quads, and re-enable it before drawing the font. Otherwise the quads keep using the previously bound texture and texture coordinates, which - without seeing your draw() loop - I'm assuming is not what you want.
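A minimal sketch of that, assuming a GLFont instance called font; DrawObstacleQuads and DrawPathText are illustrative placeholder names for your own drawing code:
glDisable(GL_TEXTURE_2D);        // untextured geometry: plain vertex colours only
glColor3f(0.8f, 0.8f, 0.8f);
DrawObstacleQuads();             // obstacles, grid, paths...

glEnable(GL_TEXTURE_2D);         // the font is textured, so texturing goes back on
font.pushScreenMatrix();
DrawPathText();                  // whatever renders text with the GLFont class
font.popProjectionMatrix();
glDisable(GL_TEXTURE_2D);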
I've got a screen-aligned quad, and I'd like to zoom into an arbitrary rectangle within that quad, but I'm not getting my math right.
I think I've got the translation worked out, just not the scaling. Basically, my code is the following:
//
// render once zoomed in
glPushMatrix();
glTranslatef(offX, offY, 0);
glScalef(?wtf?, ?wtf?, 1.0f);
RenderQuad();
glPopMatrix();
//
// render PIP display
glPushMatrix();
glTranslatef(0.7f, 0.7f, 0);
glScalef(0.175f, 0.175f, 1.0f);
RenderQuad();
glPopMatrix();
Anyone have any tips? The user selects a rect area, and then those values are passed to my rendering object as [x, y, w, h], where those values are percentages of the viewport's width and height.
Given that your values are passed as [x, y, w, h], I think you want to first translate so the rectangle's upper-left corner ends up at the origin, and then scale by 1/w and 1/h to stretch that rectangle to fill the screen. Because OpenGL post-multiplies the current matrix, the scale call has to come first in code so that the translation is actually applied to the quad first. Like this:
//
// render once zoomed in
glPushMatrix();
glScalef(1.0f/w, 1.0f/h, 1.0f);   // applied second: stretch the selected rectangle to full size
glTranslatef(-x, -y, 0.0f);       // applied first: move the rectangle's corner to the origin
RenderQuad();
glPopMatrix();
Does this work?
When I've needed to do this, I've always just changed the parameters I passed to glOrtho, glFrustum, gluPerspective, or whatever (whichever I was using).
From your comment it looks like you want to draw the same quad (RenderQuad()) both as the full image and in PIP mode.
Assuming you have widthPIP and heightPIP and a startXY position for the PIP window, use widthPIP/totalWidth and heightPIP/totalHeight to scale the original quad and re-render it at startXY.
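A minimal sketch under those assumptions (all names here are placeholders for your own variables):
glPushMatrix();
glTranslatef(startX, startY, 0.0f);                     // place the PIP window
glScalef(widthPIP / (float)totalWidth,
         heightPIP / (float)totalHeight, 1.0f);         // shrink the quad to PIP size
RenderQuad();
glPopMatrix();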
Is this what you are looking for?