OpenGL programming in C++: anti-aliasing

I am quite new to OpenGL, and all my knowledge comes from tutorials on the Internet. So far I can program a rotating cube lit by a point light source. My new goal is to implement anti-aliasing, preferably without any magical techniques like a hypothetical glEnable(Smooth_Lines) function. The method I'm currently using does not work.
My first attempt was to try the glAccum function and implement jittered anti-aliasing; I don't know whether this is simply not possible on my Radeon graphics card or I just wrote something wrong.
My second attempt is to use glGenFramebuffers (and the rest of the family). But now (after ~10 hours of research) I just don't have any strength left to continue. Can you tell me where I should implement the anti-aliasing process, or which functions I should use? Here is a cut-down version of my whole code:
class Window;
static Window* ptr;
class Window{
GLuint vbo[4],program;
GLint att[5],mvp,mvp2;
void init_res(){
a0=glutGet(GLUT_ELAPSED_TIME);
GLfloat cube[]= ... something
GLfloat colors[]= ... still something
GLushort elements[]= ... oh you can just guess how it should look :)
//now I bind the buffers and send the cube data to the shaders
program=glCreateProgram();
//little more code here
}
void Draw(){
// here is the subfunction that draws the whole cube.
}
void onDisplay(){
//here is the main display function which is passed to glutDisplayFunc
Draw();
//should I add something here?
glutSwapBuffers();
}
Window(int s_w,int s_h){
glutInit(&ac, av);
glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(s_w, s_h);
glutCreateWindow("My Rotating Cube");
glewInit();
init_res();
ptr=this;
glutDisplayFunc(Display);
glutReshapeFunc(Reshape);
glutIdleFunc(Idle);
glutKeyboardFunc(Key);
glEnable(GL_LINE_SMOOTH); // GL_LINE_SMOOTH_HINT is a hint target for glHint, not a capability
glEnable(GL_POINT_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
glEnable(GL_BLEND);
glHint(GL_POLYGON_SMOOTH_HINT,GL_NICEST);
glEnable(GL_DEPTH_TEST);
glEnable(GL_MULTISAMPLE);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//here is a lot of garbage - you can call it 'voodoo programming'
glutMainLoop();
free_resources();
}
};
int main(int argc, char** argv){
ac=argc;
av=argv;
Window *okno;
okno = new Window(800,600);
}
For now, everything in this program works (well, before the cuts, of course :P ). I did not attach the shaders; they are as simple as possible (with lighting and cube borders added).
Any ideas on how to implement anti-aliasing here?
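One commonly suggested approach (a sketch, not from the original post; it assumes freeglut, and the driver may silently grant zero samples) is to request a multisampled default framebuffer when creating the window and then enable GL_MULTISAMPLE, which antialiases everything drawn without changing the draw code:
glutInit(&ac, av);
glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
glutInitWindowSize(s_w, s_h);
glutCreateWindow("My Rotating Cube");
glewInit();
glEnable(GL_MULTISAMPLE);              // antialias everything rendered into the multisampled framebuffer
GLint samples = 0;
glGetIntegerv(GL_SAMPLES, &samples);   // check how many samples were actually granted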

Related

Why does glutWireCube() not display my wire cube?

I am a novice to OpenGL, and I read that glutWireCube draws a wire cube. It is not appearing when I run my code, so I am wondering what it actually does. Does it draw a cube, or where have I gone wrong in my code?
#include<GL/glut.h>
GLdouble cubeSize= 10.0;
//FUNCTIONS DECLARATIONS - PROTOTYPES
void init(void);
void display(void);
int main(int argc, char ** argv){
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_SINGLE |GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(500,500);
glutCreateWindow("Wire Cube");
init();
glutDisplayFunc(display);
glutMainLoop();
}
//FUNCTIONS IMPLEMENTATION - DEFINITION
void init(void){
glClearColor(0.0,0.0,1.0,0.0);
glClear(GL_COLOR_BUFFER_BIT);
}
void display(void){
glBegin(GL_POLYGON);
glutWireCube(5.0);
glEnd();
glFlush();
}
Calling glutWireCube is not allowed inside a glBegin/glEnd block. Only glVertex calls to specify vertices, plus the functions that set the current values of vertex attributes (like glNormal, glColor, ...), can be used there.
How glutWireCube works internally is not specified. It might well use immediate mode, but in that case it will do its own glBegin/glEnd calls.
Conceptually, trying to put a cube into a GL_POLYGON is also not going to work. GL_POLYGON is for drawing a single, flat, convex polygon, and it is totally impossible to draw a whole cube as one polygon.
Furthermore, you do not set up any of the GL_MODELVIEW or GL_PROJECTION matrices. This means you directly draw in clip space, and glutWireCube with size 5 will draw a cube which completely lies outside of your viewing frustum, so you will see nothing.
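Putting those points together, here is a minimal corrected sketch of the two functions (the ortho volume and the rotation are arbitrary choices, just large enough to contain a size-5 cube):
void init(void){
    glClearColor(0.0, 0.0, 1.0, 0.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, -10.0, 10.0);  // view volume that actually contains the cube
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(30.0f, 1.0f, 1.0f, 0.0f);              // tilt it so more than one face is visible
}
void display(void){
    glClear(GL_COLOR_BUFFER_BIT);
    glutWireCube(5.0);                               // on its own, never inside glBegin/glEnd
    glFlush();
}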

simple graphic pipeline in OpenGL

I am reading Schaum's Outlines: Computer Graphics. The book says that a simple graphics pipeline looks something like this: geometric representation --> transformation --> scan conversion
(though the author teaches the scan-conversion chapter before the transformation chapter). I wish to learn this simple pipeline through an example in OpenGL. Suppose I wish to create a line with end coordinates (150,400) and (700,100) in a window of size (750,500). The code below works very well. All I am asking of the experts is to explain the steps in sequence: when does the transformation happen, and when does scan conversion? I know it may sound stupid, but I really need to get this straight. I am just an adult beginner learning graphics on my own as a hobby.
My guess is that scan conversion is not happening in this program; it is done automatically by OpenGL between the glBegin and glEnd calls. Am I right?
#include <GL/glut.h>
void init(void)
{
glClearColor (0.5, 0.2, 0.3, 0.0);
glClear (GL_COLOR_BUFFER_BIT);
glColor4f(0.5,0.7,0.3,0.0);
glLineWidth(3);
}
void display(void)
{
glBegin(GL_LINES);
glVertex2i(50, 400);
glVertex2i(700, 100);
glEnd();
glutSwapBuffers();
}
void reshape(int w, int h)
{
glViewport(0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, (GLdouble)w, 0.0, (GLdouble)h);
}
int main (int argc, char *argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_RGBA | GLUT_DEPTH);
glutInitWindowPosition(100,150);
glutInitWindowSize(750,500); // aspect ratio of 3/2
glutCreateWindow (argv[0]);
init();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop(); // this is when the frame buffer is displayed on the screen
return (0);
}
All stages are done within the OpenGL implementation (mostly in hardware). You specify state and data; then GL will, speaking in terms of old GL 1.0, assemble the data into vertices, pass every vertex through the transformation stage, rasterize the resulting primitives into fragments, perform per-fragment tests (which may discard some fragments), and update the resulting pixels on the render target.
There is no point in user code that corresponds to 'one stage' of the pipeline: the pipeline has no callbacks, and usually as many stages as possible are working at the same time.
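To tie that back to the posted code, here is the display function with the conceptual stage boundaries marked as comments (the annotations are illustrative; the driver overlaps these stages rather than running them one after another):
void display(void)
{
    glBegin(GL_LINES);        // start assembling a primitive
    glVertex2i(50, 400);      // geometric representation: a vertex enters the pipeline
    glVertex2i(700, 100);     // each vertex is transformed by the current modelview/projection matrices
    glEnd();                  // primitive complete: rasterization (scan conversion) of the line
                              // happens inside the GL implementation, not in user code
    glutSwapBuffers();        // the finished frame is presented to the window
}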

How to display an RGB24 video frame using OpenGL

My task is to render a set of 50 RGB frames using OpenGL's GLUT library.
What I have tried: for the 3D cube rotation, I have a set of vertices from which I render the cube to the window. However, what should be done in the case of rendering RGB frames? Below is the code I use to render my 3D cube:
#include <GL/glut.h>
GLfloat vertices[24]={-1.0,-1.0,-1.0,1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,1.0,1.0};
GLfloat colors[24]={-1.0,-1.0,-1.0,1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,1.0,1.0};
GLubyte cubeIndices[24]={0,3,2,1,2,3,7,6,0,4,7,3,1,2,6,5,4,5,6,7,0,1,5,4};
static GLfloat theta[3]={0,0,0};
static GLint axis=2;
void display()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(theta[0],1.0,0.0,0.0);
glRotatef(theta[1],0.0,1.0,0.0);
glRotatef(theta[2],0.0,0.0,1.0);
glDrawElements(GL_QUADS,24,GL_UNSIGNED_BYTE,cubeIndices);
glutSwapBuffers();
glFlush();
}
void spinCube()
{
theta[axis]+=2.0;
if(theta[axis]>360.0)
theta[axis]-=360.0;
display();
}
void init()
{
glMatrixMode(GL_PROJECTION);
glOrtho(-2.0,2.0,-2.0,2.0,-10.0,10.0);
glMatrixMode(GL_MODELVIEW);
}
void mouse(int btn,int state, int x,int y)
{
if(btn==GLUT_LEFT_BUTTON&& state==GLUT_DOWN) axis=0;
if(btn==GLUT_MIDDLE_BUTTON&& state==GLUT_DOWN) axis=1;
if(btn==GLUT_RIGHT_BUTTON&& state==GLUT_DOWN) axis=2;
}
int main(int argc, char **argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB|GLUT_DEPTH);
glutInitWindowSize(500,500);
glutCreateWindow("Simple YUV Player");
init();
glutDisplayFunc(display);
glutIdleFunc(spinCube);
glutMouseFunc(mouse);
glEnable(GL_DEPTH_TEST);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_FLOAT,0,vertices);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3,GL_FLOAT,0,colors); // colors need glColorPointer; a second glVertexPointer would overwrite the vertex positions
glutMainLoop();
}
Can anyone suggest an example or tutorial that would help me modify the above code to display RGB frames?
Once you have your RGB frame as raw data in memory, things are pretty straightforward. Create a texture using glGenTextures, bind it using glBindTexture, and upload the data via glTexImage2D or glTexSubImage2D. Then render a fullscreen quad (or whatever you like) with that texture. The benefit is that you could render multiple 'virtual' TVs in your scene just by rendering multiple quads with that same texture; imagine a TV store where the same video runs on dozens of TVs.
glDrawPixels might also work, but it is much less versatile.
I don't know if uploading via a texture is the way to go (hardware-accelerated movie-playback programs like VLC are most likely doing something far more advanced), but it should be a good start.
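A minimal sketch of that approach (frameData, frameWidth and frameHeight are placeholders for your own frame source; everything else is standard fixed-function GL):
// once, at startup:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// whenever a new frame arrives:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB24 rows are byte-packed, not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth, frameHeight,
             0, GL_RGB, GL_UNSIGNED_BYTE, frameData);
// in display(): a quad covering the viewport, textured with the frame
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0,1); glVertex2f(-1,-1);    // t is flipped because image rows usually start at the top
glTexCoord2f(1,1); glVertex2f( 1,-1);
glTexCoord2f(1,0); glVertex2f( 1, 1);
glTexCoord2f(0,0); glVertex2f(-1, 1);
glEnd();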
As Marius already suggested, implement texture mapping first. It's rather straightforward; any texture-mapping tutorial will do.
Rendering video frames is not OpenGL's strong point, and you should avoid unnecessary uploads as much as you can, since each one may involve a client-to-server memory copy (from application memory to the GL), which is really costly (takes too much time), or may simply take up too much memory. Anyway, if you really have to do it, just generate as many textures as you need with glGenTextures, load the frames into them with glTexImage2D, and then flip through the frames with a simple loop, one texture per displayed frame.
P.S. Judging by your application's name, "YUV Player", you may also need to convert the input data, since OpenGL mostly uses RGB, not YUV.
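A sketch of that flipping loop, building on the previous snippet (frameTextures[] is assumed to hold the 50 preloaded textures):
GLuint frameTextures[50];
int currentFrame = 0;
void idle(void)
{
    currentFrame = (currentFrame + 1) % 50;   // advance to the next preloaded frame
    glutPostRedisplay();                      // ask GLUT to call display() again
}
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, frameTextures[currentFrame]);
    // ...draw the textured quad exactly as in the previous snippet...
    glutSwapBuffers();
}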

wxWidgets OpenGL shaders, trouble getting anything to draw

I've been trying to incorporate shaders and OpenGl into a wxWidgets program. I've used the links below:
http://nehe.gamedev.net/article/glsl_an_introduction/25007/
http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/
Now I've been trying, in a test program, to use the shaders provided by the lighthouse3d tutorial and recreate the output (a blue teapot spinning slowly on a white background). I can't seem to get anything to draw, though, and all I can see is a black screen. My code so far is below (I'm going to ignore most of the shaders initially, as I'm 99% sure they're fine):
void BasicGLPane::render( wxPaintEvent& evt )
{
//wxGLCanvas::SetCurrent(*m_context);
wxPaintDC dc(this); // the DC must be a named object; an unnamed temporary is destroyed immediately
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//prepare2DViewport(0,0,getWidth()/2, getHeight());
glLoadIdentity();
gluLookAt(0.0,0.0,5.0,
0.0,0.0,-1.0,
0.0f,1.0f,0.0f);
glLightfv(GL_LIGHT0, GL_POSITION, lpos);
//glRotatef(a,0,1,1);
glutSolidTeapot(1);
glFlush();
//a+=0.1;
SwapBuffers();
}
void BasicGLPane::InitializeGLEW()
{
//prepare2DViewport(0,0,getWidth(), getHeight());
// The current canvas has to be set before GLEW can be initialized.
wxGLCanvas::SetCurrent(*m_context);
GLenum err = glewInit();
// If Glew doesn't initialize correctly.
if(GLEW_OK != err)
{
const GLubyte* errorString = glewGetErrorString(err); // glewGetString is for version queries, not error codes
std::cerr << "Error: " << errorString << std::endl;
wxMessageBox("GLEW is not initialized");
}
}
BasicGLPane::BasicGLPane(wxFrame* parent, int* args) :
wxGLCanvas(parent, wxID_ANY, args, wxDefaultPosition, wxDefaultSize, wxFULL_REPAINT_ON_RESIZE)
{
m_context = new wxGLContext(this);
// To avoid flashing on MSW
SetBackgroundStyle(wxBG_STYLE_CUSTOM);
}
I've had some thoughts as to why I'm not getting any output. One is that it may be something to do with m_context: I have to set the current context for wxWidgets before I can run GLEW. There are also a number of properties that are initialized in the tutorial whose equivalents I'm not calling in my wxWidgets version, and I'm wondering if I should. These are:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(320,320);
glutCreateWindow("MM 2004-05");
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(changeSize);
glutKeyboardFunc(processNormalKeys);
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
But I'm quite keen to avoid using glut and have managed to avoid it up until now. The only reason I've previously added it is to try and replicate the tutorial's behaviour.
Edit:
I'm going to add a bit more as I have noticed one or two bits of odd behaviour. If I call this function in my draw:
void BasicGLPane::prepare2DViewport(int topleft_x, int topleft_y, int bottomrigth_x, int bottomrigth_y)
{
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white background
glEnable(GL_TEXTURE_2D); // textures
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glViewport(topleft_x, topleft_y, bottomrigth_x-topleft_x, bottomrigth_y-topleft_y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(topleft_x, bottomrigth_x, bottomrigth_y, topleft_y);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
I can get the background to change colour when I change the window size. I should also mention that it is NOT refreshing every frame; it only draws one frame and then won't call the render function again until I change the window size.
Your code looks good so far. One thing you find in a lot of tutorials, but which is bad practice, is that there apparently is some initialization happening. There is not: OpenGL is not initialized. It is a state machine, and you're supposed to set state when you need it. The lines
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
are perfectly happy in the drawing function. You also need to set up a projection. In tutorials you will often find it set in the window-resize handler. Please don't fall into this bad habit: projection and viewport are drawing state, so set them in the drawing function.
If you're using OpenGL 3 (core profile) or later, you must supply at least a vertex and a fragment shader. In the older versions each shader stage is optional, and there are built-in variables that provide common ground for communication between the fixed-function and programmable pipelines. However, I strongly advise against mixed operation: always use shaders, and use both a vertex and a fragment shader. In the long term they make things so much easier.
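A sketch of what that advice looks like in the render handler above (GetClientSize and SwapBuffers are the usual wxGLCanvas calls; gluPerspective is just one arbitrary choice of projection):
void BasicGLPane::render(wxPaintEvent& evt)
{
    wxGLCanvas::SetCurrent(*m_context);
    wxPaintDC dc(this);

    // viewport and projection are drawing state: set them every frame
    const wxSize size = GetClientSize();
    glViewport(0, 0, size.GetWidth(), size.GetHeight());
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)size.GetWidth() / size.GetHeight(), 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, -1.0,  0.0, 1.0, 0.0);
    glutSolidTeapot(1.0);

    SwapBuffers();
}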
Turns out I didn't need the gluLookAt in my render.

C++ OpenGL/GLUT failing to draw

I am trying to simply draw a triangle in a window. I've drawn shapes before in previous code and have looked up common issues such as failing to flush or not clearing the color buffer.
No matter what I seem to try though, I can't get anything to draw on screen, even after I've simplified my code to basically look exactly like my previous (working!) code. All I have is a main and a render:
// Declarations //
void Render(void); //Call the drawing functions
int main(int argc, char *argv[])
{
glutInit(&argc,argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutInitWindowPosition(20,20);
glutCreateWindow("Triangle Test");
//prepare for drawing
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
//now draw
glutDisplayFunc(Render);
glutMainLoop();
}
// ---- Render Function ----
void Render(void)
{
// Draw a triangle
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_LINE_STRIP);
glVertex2f(100.0f, 20.0f);
glVertex2f(0.0f, 20.0f);
glVertex2f(20.0f, 50.0f);
glEnd();
glFlush();
}
On run, it draws a window with the background color I set (in this case black) and nothing else. I'm completely stumped. All of the other questions on Stack Overflow seem to be resolved by things I already have in here (i.e. glFlush), and it's virtually identical to my old code, which draws fine. Any ideas?
You're drawing a line strip that's bigger than your view volume: with no projection matrix set, the default clip volume spans only -1 to +1 on each axis, so vertices like (100, 20) are clipped away. You need to either set your matrices so you see a larger area, draw a smaller polygon, or draw a filled shape by drawing a triangle instead of a line strip.
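A sketch of the first option, setting the matrices so that the posted coordinates land inside the view volume (the 0..500 range simply matches the 500x500 window created in main):
void Render(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    // map the projection to window-like coordinates so the
    // vertices below fall inside the view volume
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 500.0, 0.0, 500.0);

    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_LINE_STRIP);
    glVertex2f(100.0f, 20.0f);
    glVertex2f(0.0f, 20.0f);
    glVertex2f(20.0f, 50.0f);
    glEnd();
    glFlush();
}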