I've been trying to incorporate shaders and OpenGL into a wxWidgets program. I've used the links below:
http://nehe.gamedev.net/article/glsl_an_introduction/25007/
http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/
Now, in a test program, I've been trying to use the shaders provided by the lighthouse3d tutorial and recreate its output (a blue teapot spinning slowly on a white background). I can't seem to get anything to draw, though; all I can see is a black screen. My code so far is below (I'm going to ignore most of the shader code initially, as I'm 99% sure it's fine):
void BasicGLPane::render( wxPaintEvent& evt )
{
    //wxGLCanvas::SetCurrent(*m_context);
    wxPaintDC(this);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //prepare2DViewport(0,0,getWidth()/2, getHeight());
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,
              0.0, 0.0, -1.0,
              0.0f, 1.0f, 0.0f);
    glLightfv(GL_LIGHT0, GL_POSITION, lpos);
    //glRotatef(a,0,1,1);
    glutSolidTeapot(1);
    glFlush();
    //a+=0.1;
    SwapBuffers();
}
void BasicGLPane::InitializeGLEW()
{
    //prepare2DViewport(0,0,getWidth(), getHeight());
    // The current canvas has to be set before GLEW can be initialized.
    wxGLCanvas::SetCurrent(*m_context);
    GLenum err = glewInit();
    // If GLEW doesn't initialize correctly.
    if (GLEW_OK != err)
    {
        std::cerr << "Error: " << glewGetErrorString(err) << std::endl;
        wxMessageBox("GLEW is not initialized");
    }
}
BasicGLPane::BasicGLPane(wxFrame* parent, int* args) :
    wxGLCanvas(parent, wxID_ANY, args, wxDefaultPosition, wxDefaultSize, wxFULL_REPAINT_ON_RESIZE)
{
    m_context = new wxGLContext(this);
    // To avoid flashing on MSW
    SetBackgroundStyle(wxBG_STYLE_CUSTOM);
}
I have some thoughts as to why I'm not getting any output. One is that it's something to do with m_context: I have to set the current context for wxWidgets before I can run GLEW. There are also a number of properties that the tutorial initializes with the functions below; I'm not using them in my wxWidgets version and I'm wondering if I should. These are:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(320,320);
glutCreateWindow("MM 2004-05");
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(changeSize);
glutKeyboardFunc(processNormalKeys);
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
But I'm quite keen to avoid using GLUT and have managed to avoid it up until now; the only reason I added it previously was to try and replicate the tutorial's behaviour.
Edit:
I'm going to add a bit more, as I've noticed one or two bits of odd behaviour. If I call this function in my draw function:
void BasicGLPane::prepare2DViewport(int topleft_x, int topleft_y, int bottomright_x, int bottomright_y)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // White background
    glEnable(GL_TEXTURE_2D);              // textures
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glViewport(topleft_x, topleft_y, bottomright_x - topleft_x, bottomright_y - topleft_y);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(topleft_x, bottomright_x, bottomright_y, topleft_y);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
I can get the background to change colour when I change the window size. I should also mention that it's NOT refreshing every frame: it draws one frame and then won't call the render function again until I change the window size.
Your code looks good so far. One thing you find in a lot of tutorials, but which is bad practice, is that there's apparently some "initialization" happening. This is not the case: OpenGL is not initialized, it's a state machine, and you're supposed to set state when you need it. The lines
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
are perfectly fine in the drawing function. You also need to set up a projection. In tutorials you'll often find it set in the window resize handler. Please don't fall into this bad habit: projection and viewport are drawing state, so set them in the drawing function.
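For example, a minimal sketch of such a drawing function, adapted to the class in the question (the gluPerspective parameters and clear colour are just placeholders for whatever your scene needs):
void BasicGLPane::render(wxPaintEvent& evt)
{
    wxGLCanvas::SetCurrent(*m_context);
    wxPaintDC dc(this);

    // Viewport and projection are drawing state: set them every frame.
    const wxSize size = GetClientSize();
    glViewport(0, 0, size.GetWidth(), size.GetHeight());
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)size.GetWidth() / size.GetHeight(), 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... draw the scene here ...

    SwapBuffers();
}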
If you're using OpenGL 3 (core profile) or later, you must supply at least a vertex and a fragment shader. In the older versions each shader stage is optional, and there are built-in variables to provide common ground for communication between the fixed-function and programmable pipelines. However, I strongly advise against mixed operation: always use shaders, and use both a vertex and a fragment shader. In the long term they make things so much easier.
It turns out I didn't need the gluLookAt call in my render function.
I am quite a newbie in OpenGL and all my knowledge comes from tutorials on the Internet. For now I can program a rotating cube lit by a point source. My new goal is to implement antialiasing - possibly without any magical techniques like a hypothetical glEnable(Smooth_Lines) function. The method I'm using currently does not work.
My first attempt was to try the glAccum function and implement jittering anti-aliasing - I don't know if this is impossible on my Radeon graphics card or if I just wrote something wrong.
My second attempt is to use glGenFramebuffers (and the rest of the family). But now (after ~10 hours of research) I just don't have any strength left to continue. Can you tell me where I should implement the antialiasing process? Or which functions should I use? Here is a cut-down version of my whole code:
class Window;
static Window* ptr;

class Window{
    GLuint vbo[4], program;
    GLint att[5], mvp, mvp2;

    void init_res(){
        a0 = glutGet(GLUT_ELAPSED_TIME);
        GLfloat cube[] = ...        // something
        GLfloat colors[] = ...      // still something
        GLushort elements[] = ...   // oh you can just guess how it should look :)
        // now I am binding buffers and sending the cube data to the shaders
        program = glCreateProgram();
        // little more code here
    }

    void Draw(){
        // here is the subfunction for drawing the whole cube
    }

    void onDisplay(){
        // here is the main function which is passed to glutDisplayFunc
        Draw();
        // should I add something here?
        glutSwapBuffers();
    }

    Window(int s_w, int s_h){
        glutInit(&ac, av);
        glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH);
        glutInitWindowSize(s_w, s_h);
        glutCreateWindow("My Rotating Cube");
        glewInit();
        init_res();
        ptr = this;
        glutDisplayFunc(Display);
        glutReshapeFunc(Reshape);
        glutIdleFunc(Idle);
        glutKeyboardFunc(Key);
        glEnable(GL_LINE_SMOOTH_HINT);
        glEnable(GL_POINT_SMOOTH);
        glEnable(GL_POLYGON_SMOOTH);
        glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
        glEnable(GL_BLEND);
        glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_MULTISAMPLE);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // here is a lot of garbage - you can call it 'voodoo programming'
        glutMainLoop();
        free_resources();
    }
};
int main(int argc, char** argv){
    ac = argc;
    av = argv;
    Window *okno;
    okno = new Window(800, 600);
}
For now, everything in this program works (well, before the cuts of course :P ). I did not attach the shaders - they are as simple as possible (with lighting and added cube borders).
Any ideas how to implement anti-aliasing here?
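One standard way to get antialiasing without any "voodoo" is multisampling (MSAA): request a multisampled default framebuffer when creating the window and enable GL_MULTISAMPLE, instead of the old GL_*_SMOOTH states. A minimal sketch against the constructor above, assuming freeglut (glutSetOption is freeglut-specific):
glutInit(&ac, av);
glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
glutSetOption(GLUT_MULTISAMPLE, 4);   // ask for 4 samples per pixel (freeglut only)
glutInitWindowSize(s_w, s_h);
glutCreateWindow("My Rotating Cube");
glEnable(GL_MULTISAMPLE);             // usually enabled by default on a multisampled context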
I am reading Schaum's Outlines: Computer Graphics. The book says that a simple graphics pipeline is something like this: geometric representation --> transformation --> scan conversion
(though the author has decided to teach the scan conversion chapter before the transformation chapter). I wish to learn this simple pipeline through an example in OpenGL. Suppose I wish to create a line with end coordinates (150,400) and (700,100) in a window of size (750,500). The code below works very well. All I am asking the experts is to explain the steps in sequence: when does transformation happen and when does scan conversion? I know it may sound stupid, but I really need to get this straight. I am just an adult beginner learning graphics on my own as a hobby.
My guess is that scan conversion is not happening in the program here; it is done by OpenGL automatically between the glBegin and glEnd calls. Am I right?
#include <GL/glut.h>

void init(void)
{
    glClearColor(0.5, 0.2, 0.3, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(0.5, 0.7, 0.3, 0.0);
    glLineWidth(3);
}

void display(void)
{
    glBegin(GL_LINES);
    glVertex2i(50, 400);
    glVertex2i(700, 100);
    glEnd();
    glutSwapBuffers();
}
void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, (GLdouble)w, 0.0, (GLdouble)h);
}
int main (int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowPosition(100, 150);
    glutInitWindowSize(750, 500); // aspect ratio of 3/2
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop(); // this is when the frame buffer is displayed on the screen
    return 0;
}
All stages are done within the OpenGL implementation (mostly in hardware). You specify state and data; then GL will - speaking in terms of old GL 1.0 - assemble the data into vertices, pass every vertex through the transformation stage, rasterize the resulting primitives into fragments (this is the scan conversion), perform per-fragment tests (which may discard some fragments), and update the resulting pixels on the render target.
There is no point in user code that corresponds to "one stage" of the pipeline - the pipeline has no callbacks, and usually as many stages as possible are working at the same time.
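To map this onto the question's code, roughly (a sketch in comments):
glVertex2i(50, 400);  // geometric representation: you only hand GL the endpoints
// transformation: each vertex is transformed by the modelview and projection
// matrices (the gluOrtho2D call in reshape) and then mapped through the viewport
// scan conversion: GL rasterizes the line between the transformed endpoints into
// fragments - this happens inside the implementation, not in your code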
I want to read the pixels from the back buffer, but all I get so far is a black screen (the clear color).
The thing is that I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program which will draw the image for me.
My init function looks like this:
// No main function, so no real argv/argc
char fakeParam[] = "nothing";
char *fakeargv[] = { fakeParam, NULL };
int fakeargc = 1;
glutInit( &fakeargc, fakeargv );

GLenum err = glewInit();
if (GLEW_OK != err)
{
    MessageBoxA(NULL, "Failed to initialize OpenGL", "ERROR", NULL);
}
else
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    // Not sure if this call is needed since I don't use a GLUT window to render to...
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
}
Then in my render function I do:
void DisplayFunc(void)
{
    /* Clear the buffer, clear the matrix */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    // TEAPOT
    glTranslatef(0.0f, 0.0f, -5.0f);              // Translate back 5 units
    glRotatef(rotation_degree, 1.0f, 1.0f, 0.0f); // Rotate according to our rotation_degree value
    glFrontFace(GL_CW);
    glutSolidTeapot(1.0f);                        // Render a teapot
    glFrontFace(GL_CCW);

    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, (GLsizei)1024, (GLsizei)768, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    int r = glGetError();
}
This is basically all I do. At the end of the last function is where I'm trying to read all the pixels, but the output is just a black image. glGetError() doesn't report any errors.
Does anyone have any idea what the problem could be?
I want to read the pixels from the back buffer. But all I get so far is a black screen (the clear color).
The thing is that I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program which will draw the image for me.
It doesn't work like that. The back buffer is not some kind of off-screen rendering area; it's part of the on-screen window. Actually, the whole double-buffer concept only makes sense for on-screen windows: each pixel of a double-buffered window has two color values but only one depth, stencil, etc., and upon buffer swap just the pointers to the back and front pixel planes are exchanged. But because we're still talking about a window, when rasterizing, all fragments go through the pixel ownership test, i.e. they are checked for whether they are actually visible on screen. If not, they're not rendered.
But your problems go further: you don't even create a window, so you don't have an OpenGL context at all. Your OpenGL calls have no effect whatsoever, and glReadPixels doesn't return you anything because there's nothing to read from.
The bad news is that the only way to get a context with GLUT is by creating a window. The good news is that you don't have to use GLUT. People, why don't you get this: GLUT is not part of OpenGL; it's a quick and dirty framework for writing small tutorials, nothing more.
What you want is either:
not a window, but a PBuffer, i.e. an off-screen drawable that doesn't go through pixel ownership tests,
or
a hidden window with an OpenGL context created on it, and in that context a Frame Buffer Object as an off-screen render target.
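For the second route, a minimal sketch of the FBO setup (using the GL 3.0 / ARB_framebuffer_object entry points that GLEW exposes; the 1024x768 size matches the glReadPixels call above, and error checking is omitted):
GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color renderbuffer to render into and read back from
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, 1024, 768);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

// Depth renderbuffer, since GL_DEPTH_TEST is enabled
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

// ... render the teapot as before, then read from the FBO instead of GL_BACK:
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, 1024, 768, GL_RGB, GL_UNSIGNED_BYTE, pixels);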
Try calling glFlush before glReadPixels.
Also, where do you set the size of your window?
I just started working with OpenGL, but I ran into a problem after implementing a Font system.
My plan is to simply visualize several Pathfinding Algorithms.
Currently OpenGL gets set up like this (OnSize gets called manually once on window creation):
void GLWindow::OnSize(GLsizei width, GLsizei height)
{
    // set size
    glViewport(0, 0, width, height);
    // orthographic projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    m_uiWidth = width;
    m_uiHeight = height;
}

void GLWindow::InitGL()
{
    // enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    // choose a smooth shading model
    glShadeModel(GL_SMOOTH);
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
}
In theory I don't need blending, because I will only use untextured quads to visualize obstacles, and lines etc. to draw paths... So everything will be untextured except the fonts...
The Font class has push and pop functions that look like this (if I remember right, my font system is based on a NeHe tutorial that I was following quite a while ago):
inline void GLFont::pushScreenMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1.0, 1.0);
    glPopAttrib();
}

inline void GLFont::popProjectionMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}
So, the problem:
If I don't draw any text I can see the quads I want to draw, but they are quite dark, so there must be something wrong with my general OpenGL matrix properties.
If I draw text (so the font-related push and pop functions get called) I can't see any quads.
The question:
How do I solve this problem? Some background information on why this happens would also be nice, because I am still a beginner/student who has just started.
If your quads are untextured, you will run into undefined behaviour. What will probably happen is that the previously bound texture will be used, and the colour at texture coordinate (0,0) will be applied, which could be what is causing them to be invisible.
Really, you need to disable texturing before trying to draw untextured quads, using glDisable(GL_TEXTURE_2D). Again, if you don't, it'll just use the previous texture and texture coordinates, which (without seeing your draw() loop) I'm assuming to be undefined.
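Something along these lines, as a sketch (the draw* helpers are hypothetical stand-ins for your own drawing code):
// Untextured geometry: plain colours only
glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
drawObstacleQuads();   // hypothetical helper
drawPathLines();       // hypothetical helper

// Fonts need texturing again
glEnable(GL_TEXTURE_2D);
font.pushScreenMatrix();
// ... draw the text ...
font.popProjectionMatrix();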
Hey all, I'm very new to OpenGL (I just started seriously programming with it today) and I'm trying to use it to give my SDL games a 3D boost. I've set up a small test program below:
#include <SDL/SDL.h>
#include <gl/gl.h>

int main(int argc, char *argv[])
{
    SDL_Event event;
    float theta = 0.0f;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_FULLSCREEN);

    glViewport(0, 0, 800, 600);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glMatrixMode(GL_PROJECTION);
    glMatrixMode(GL_MODELVIEW);

    int done;
    for(done = 0; !done;)
    {
        SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 255, 0, 0));
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, 0.0f);
        glRotatef(theta, 0.0f, 0.0f, 1.0f);

        glBegin(GL_TRIANGLES);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(0.0f, 1.0f);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(0.87f, -0.5f);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(-0.87f, -0.5f);
        glEnd();

        theta += 10.0f;
        SDL_Flip(screen);
        SDL_GL_SwapBuffers();

        SDL_PollEvent(&event);
        if(event.key.keysym.sym == SDLK_ESCAPE)
            done = 1;
    }
}
My problem is that the red background I'm trying to render never appears; only the OpenGL triangle is rendered.
Thanks in advance to anyone who can help me. It's much appreciated.
There's one simple rule about OpenGL: it doesn't play well with others. What happens in your case is that the double-buffer swap (initiated by SDL_GL_SwapBuffers) will in some way replace everything in the window that was not rendered by OpenGL.
Just draw everything using OpenGL.
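For the red background specifically, the OpenGL-only equivalent is to clear to red instead of calling SDL_FillRect (a sketch of the loop body; the rest stays as it is):
glClearColor(1.0f, 0.0f, 0.0f, 0.0f);               // red clear colour instead of SDL_FillRect
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the triangle as before ...
SDL_GL_SwapBuffers();                                // and drop the SDL_Flip call entirely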
You fill the back buffer on one line with SDL_FillRect, then you clear it on the next with glClear. Have you tried swapping the order of the operations?
Not that I disagree with the accepted answer; in general trying to mix software rendering methods with OpenGL is a recipe for confusion at best, but you might get lucky in this case.
As for rendering textured quads, you should be able to work it out from NeHe lesson 6. People complain about NeHe, but it's a reasonable guide for getting started. Just don't use it as an example of good coding or of efficient modern OpenGL usage. Start here and move on to more complex stuff later.
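The core of that lesson boils down to something like this (a sketch; tex is assumed to be a texture object you have already created and uploaded):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);   // 'tex' assumed loaded elsewhere
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();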
If you're using C++, the SFML library might be a better option (it has C bindings too, though I haven't tried those). It plays nicely with OpenGL and has functions to work cooperatively alongside GL; as far as I understand it, SFML's own drawing functions use GL to render. That said, I do suggest that you do your rendering only with GL calls, as noted above.
Your SDL_FillRect isn't shown as red because you call glClear with GL_COLOR_BUFFER_BIT set afterwards.