I'm learning OpenGL using Visual Studio C++. I'm just wondering how I go about applying anti-aliasing techniques when I draw with glBegin(GL_TRIANGLES)... anti-aliasing doesn't seem to be a primitive type I can select, or am I wrong?
You haven't specified what level of anti-aliasing you are expecting, though.
If you are using GLUT, you can try the following code.
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
glEnable(GL_MULTISAMPLE);
This will give you multisample anti-aliasing (MSAA). From the code you have shown, you are not doing anything in programmable mode or modern OpenGL, so I think that should be sufficient for you, as you don't have to do anything extra.
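For context, this is roughly where those two calls sit in a minimal GLUT program (a sketch only; Render stands for whatever display callback does your glBegin(GL_TRIANGLES) drawing, and the multisample request has to be made before the window is created):

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE); // request a multisampled framebuffer
glutInitWindowSize(640, 480);
glutCreateWindow("MSAA example");
glEnable(GL_MULTISAMPLE); // enable multisampling once a context exists
glutDisplayFunc(Render);  // Render: your existing drawing function
glutMainLoop();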
As an OpenGL beginner I would like to know what these flags do and why they are required. For instance, in the call
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GL_COLOR_BUFFER_BIT and GL_DEPTH_BUFFER_BIT aren't functions, they're constants. You use them to tell glClear() which buffers you want it to clear - in your example, the depth buffer and the "buffers currently enabled for color writing". You can also pass GL_ACCUM_BUFFER_BIT to clear the accumulation buffer and/or GL_STENCIL_BUFFER_BIT to clear the stencil buffer.
The actual values of the constants shouldn't matter to you when using the library - the important implementation detail is that the binary representations for each constant don't overlap with each other. It's that characteristic that lets you pass the bitwise OR of multiple constants to a single call to glClear().
Check out the glClear() documentation for more details.
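To make the non-overlapping-bits idea concrete, here is a small illustration (the hex values shown are the ones typically found in gl.h, but as explained above you should not rely on the exact numbers):

// GL_DEPTH_BUFFER_BIT   = 0x00000100
// GL_STENCIL_BUFFER_BIT = 0x00000400
// GL_COLOR_BUFFER_BIT   = 0x00004000
// Each constant occupies a different bit, so OR-ing them produces a value
// from which glClear() can still recover every individual flag:
GLbitfield mask = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT; // 0x00004100
glClear(mask); // clears both buffers in a single call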
A call to glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) clears the OpenGL color and depth buffers (or any other buffer or combination of buffers). OpenGL being a state machine, it is good practice to start each frame with a clean slate.
I stumbled upon this question while reading about this topic and thought I would add some details for anyone confused. These two constants represent options as bit values.
The glClear() method needs to know which buffers to clear. However, there are several buffers, such as the color buffer, the depth buffer and others (https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glClear.xml).
By using bits to represent the options, it becomes easier to set multiple options by performing an OR operation.
For more details check out this "Bit Manipulation" tutorial, namely "When are bit flags most useful?" section at : https://www.learncpp.com/cpp-tutorial/bit-manipulation-with-bitwise-operators-and-bit-masks/
First, some context:
The 3D engine I wrote for my game allows me to switch between DirectX 9 and OpenGL, thanks to an intermediate API layer.
Both allow the user to enable multisampling (via GL_ARB_multisample for OpenGL, D3DMULTISAMPLE_x_SAMPLES for DirectX). Multisampling is enabled for the game window buffer.
The models for my characters use one big texture atlas, so I disabled mipmapping there in order to avoid texture bleeding.
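For reference, enabling multisampling in the two APIs looks roughly like this (a simplified sketch rather than my exact engine code; d3d stands for the IDirect3D9 interface, d3dpp for the D3DPRESENT_PARAMETERS used at device creation, and the 4x sample count is just an example):

// OpenGL path: choose a multisample-capable pixel format at window creation, then:
glEnable(GL_MULTISAMPLE); // GL_MULTISAMPLE_ARB when using the extension header

// DirectX 9 path: check support, then request multisampling on the back buffer:
DWORD quality = 0;
if (SUCCEEDED(d3d->CheckDeviceMultiSampleType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8, TRUE, D3DMULTISAMPLE_4_SAMPLES, &quality)))
{
    d3dpp.MultiSampleType    = D3DMULTISAMPLE_4_SAMPLES;
    d3dpp.MultiSampleQuality = 0;
}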
I observe the following results:
As expected, I get the same result when multisampling is disabled in either DirectX or OpenGL.
As expected, I correctly get edge smoothing on polygons when multisampling is enabled in both.
However, in OpenGL, multisampling also seems to have an effect akin to texture filtering, probably by sampling the texture at different spots for each pixel; the result is comparable to what mipmapping would achieve, but without texture bleeding - which is obviously great. DirectX, on the other hand, does not seem to provide this benefit: the textured result is not anti-aliased and looks the same as when multisampling is disabled.
I would very much like to know if there is anything I can do to get the same result in DirectX as in OpenGL. Maybe I am not aware of the right keywords, but I haven't been able to find documentation that relates to this specific aspect of multisampling.
I am currently drawing some geometry with "modern OpenGL" using QUAD and QUADSTRIP primitives.
I have run into strange artifacts: my quads are actually tessellated into visible triangles. Hopefully you can see the white lines across the quads.
Any ideas?
Modern OpenGL (the 3.1+ core profile) does not support GL_QUADS or GL_QUAD_STRIP. Check, for example, here for the allowed primitive types.
The culprit most likely is that you enabled polygon smooth antialiasing (still supported in the compatibility profile), i.e. did glEnable(GL_POLYGON_SMOOTH) plus some blending function. Artifacts like the one you observe are the reason nobody really bothered to use that method of antialiasing.
However, it may very well be that you enabled antialiasing in your graphics driver settings and the AA method used doesn't play along nicely with your program.
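For illustration, this is the kind of legacy setup that produces such seams, and what turning it off looks like (a sketch; the blend function shown is the one classically paired with polygon smoothing, and in a core profile you would instead request a multisampled framebuffer and rely on MSAA):

// Legacy polygon smoothing: each triangle edge is blended individually,
// which leaves visible seams where quads are split into triangles.
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);

// Turning it off removes the artifact; use multisampling for antialiasing instead.
glDisable(GL_POLYGON_SMOOTH);
glDisable(GL_BLEND);
glEnable(GL_MULTISAMPLE); // requires a multisample-capable framebuffer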
I have to implement simple texturing in OpenGL using the inverse (or backwards) mapping approach. In theory, I know what it is and how it works, but I can't figure out how to implement it in OpenGL. Is there a way to specify how OpenGL handles textures in this regard?
I know, more or less, how to texture polygons, but this little side-note on the homework assignment is what bugs me:
Note: The texture mapping is implemented using the inverse mapping
approach (do not use OpenGL texture mapping function)
Does anyone have any idea what is meant by this? I'm drawing the polygon to be textured with glBegin and glEnd and using glEnable(GL_TEXTURE_2D) and glTexCoord2f to texture it. Is there another way of doing it, or am I reading too much into the assignment?
I'm using Visual Studio 2012, which came with OpenGL installed, and have an AMD Radeon HD 6850 graphics card. This is for a simple homework assignment, so the simplest solution will suffice.
To save time, yes, this is for a homework, no, I don't want anyone to do it for me, just an indication as to how I would go about doing it myself, and no, an extensive google search gave me no insight whatsoever as to how this would be implemented in code.
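In case it helps to pin down the terminology, this is roughly what inverse (per-pixel) mapping means when OpenGL's texturing functions are not used - a sketch only, assuming an axis-aligned screen rectangle (rectX0..rectX1, rectY0..rectY1) and a texture already loaded into a plain RGB byte array named texture of size texW x texH; all of those names are hypothetical:

// For every pixel covered by the rectangle, map the screen position back
// into texture space (the "inverse" direction) and fetch the texel yourself.
glBegin(GL_POINTS);
for (int y = rectY0; y < rectY1; y++)
{
    for (int x = rectX0; x < rectX1; x++)
    {
        float u = (x - rectX0) / float(rectX1 - rectX0); // 0..1 across the rectangle
        float v = (y - rectY0) / float(rectY1 - rectY0);
        int tx = int(u * (texW - 1));                    // nearest-neighbour lookup
        int ty = int(v * (texH - 1));
        const GLubyte* texel = &texture[(ty * texW + tx) * 3];
        glColor3ub(texel[0], texel[1], texel[2]);
        glVertex2i(x, y);                                // plot the pixel as a point
    }
}
glEnd();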
I am currently working on a video player for Windows using OpenGL. It works great, but one of my main goals is accuracy. That is, the image displayed should be exactly the image that was saved as a video.
Taking away everything video/file/input related, I have something along these lines as my glutDisplayFunc:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLubyte frame[256*128*4]; // width*height*RGBA_size
for (int i=0;i<256*128;i++)
{
frame[i*4] = 0x00; // R
frame[i*4+1] = 0xAA; // G
frame[i*4+2] = 0x00; // B
frame[i*4+3] = 0x00; // A
}
glRasterPos2i(0,0);
glDrawPixels(256,128,GL_RGBA,GL_UNSIGNED_BYTE,frame);
glutSwapBuffers();
Combined with the GLUT setup code:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize(256, 128);
glutCreateWindow("...");
glutDisplayFunc(Render);
glPixelZoom(1,-1); // use top-to-bottom display
glEnable(GL_DEPTH_TEST);
glutMainLoop();
This is pretty straightforward and a huge simplification of the actual code. A frame with an RGB value of #00AA00 is created and drawn with glDrawPixels.
As expected, the output is a green window. On first glance all seems good.
But when I use a tool such as http://instant-eyedropper.com/ to check the exact RGB value of a pixel, I realize that not all of the pixels are displayed as #00AA00.
Some pixels have a value of #00A900. This is the case for the first pixel in the top-left corner, as well as the 3rd, the 5th and so on along the same line, and the same pattern repeats on every odd line.
It can't be a problem with Instant Eyedropper or with Windows, since other programs output the right color for the same file.
Now my question:
Could it be possible that glDrawPixels somehow changes the pixels values, maybe as a way to go faster?
I would expect such a function to display exactly what we input in it, so I'm not quite sure what to think of it.
OpenGL has color dithering (GL_DITHER) enabled by default, so your GPU may actually be dithering the colors on output. Try disabling it with glDisable(GL_DITHER);. Also be aware of colour management issues.
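With the setup code shown in the question, that would mean something along these lines (a sketch; the call simply has to happen once the GL context exists, i.e. after glutCreateWindow):

glutCreateWindow("...");
glDisable(GL_DITHER);    // keep the exact 8-bit values passed to glDrawPixels
glutDisplayFunc(Render);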