Why are those lines appearing in my shape?
I'm initializing OpenGL like this:
glDisable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glDisable(GL_COLOR_MATERIAL);
glEnable(GL_BLEND);
glEnable(GL_POLYGON_SMOOTH);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClearColor(0, 0, 0, 0);
And drawing the shape like this:
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1, 0, 0);
glBegin(GL_POLYGON);
glVertex2f(-5, -5); // bottom left
glVertex2f(5, -5); // bottom right...
glVertex2f(6, 0);
glVertex2f(5, 5);
glVertex2f(-5, 5);
glEnd();
Doesn't matter if it's clockwise or CCW.
I think disabling GL_POLYGON_SMOOTH would fix that, but you'd lose the antialiasing. FSAA would work as an alternative, but it'd be slower.
Edit: looking around, there are a lot of examples out there using glBlendFunc( GL_SRC_ALPHA_SATURATE, GL_ONE );
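From what I can tell, that is the classic per-polygon antialiasing setup from the OpenGL programming guide; here is a rough sketch of how those examples use it (it apparently needs a pixel format with destination alpha, and the polygons drawn sorted front to back with depth testing off):
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
glDisable(GL_DEPTH_TEST);
// ... draw the polygons sorted front to back ...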
GL_POLYGON_SMOOTH is an antiquated and slow method of polygon anti-aliasing. It also results in the problem you see above.
Using the multisample buffer extension (MSAA) is the best way to perform fast anti-aliasing on modern machines.
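For example, if you create the context with SDL (just a sketch; with GLUT or raw WGL the samples are requested when choosing the pixel format instead), you ask for the multisample buffer before the window exists and then drop GL_POLYGON_SMOOTH entirely:
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
// ... create the window and GL context ...
glDisable(GL_POLYGON_SMOOTH);  // not needed with MSAA
glEnable(GL_MULTISAMPLE);      // usually on by default when samples were requested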
Related
I am trying to develop a working EFIS display written in C++, OpenGL and the X-Plane SDK for an aircraft in X-Plane. I do not have very much experience with C++ and OpenGL. However, I do know X-Plane fairly well, and I know how to use the X-Plane data for moving each element. What I do not know is how to code the EFIS display so that it draws all of the elements efficiently. I only know the very basics of drawing an OpenGL GL_QUAD and binding a texture to it, but this seems to be a very low-level way of doing things.
What I would like to be able to do is create the GUI for the EFIS in a more efficient way, as there are a lot of textured elements that need to be drawn.
This is an example of what I would like to build for X-Plane:
Here is the code I have currently written that loads in 1 image texture and binds it to a GL_QUAD.
static int my_draw_tex(
    XPLMDrawingPhase inPhase,
    int inIsBefore,
    void* inRefcon)
{
    // Note: if the tex size is not changing, glTexSubImage2D is faster than glTexImage2D.
    // The drawing part.
    XPLMSetGraphicsState(
        0,  // No fog, equivalent to glDisable(GL_FOG);
        1,  // One texture, equivalent to glEnable(GL_TEXTURE_2D);
        0,  // No lighting, equivalent to glDisable(GL_LIGHT0);
        0,  // No alpha testing, e.g. glDisable(GL_ALPHA_TEST);
        1,  // Use alpha blending, e.g. glEnable(GL_BLEND);
        0,  // No depth read, e.g. glDisable(GL_DEPTH_TEST);
        0); // No depth write, e.g. glDepthMask(GL_FALSE);

    //---------------------------------------------- HORIZON -----------------------------------------//
    glPushMatrix();

    // Bind the Texture
    XPLMBindTexture2d(texName[HORIZON], 0);
    glColor3f(1, 1, 1);

    glBegin(GL_QUADS);
    // Initial coordinates for the horizon background
    int arry[] = { 838, 465, 838, 2915, 2154, 2915, 2154, 465 };
    // Coordinates for the image
    glTexCoord2f(0, 0); glVertex2f(arry[0], arry[1]);
    glTexCoord2f(0, 1); glVertex2f(arry[2], arry[3]);
    glTexCoord2f(1, 1); glVertex2f(arry[4], arry[5]);
    glTexCoord2f(1, 0); glVertex2f(arry[6], arry[7]);
    glEnd();

    /*glDisable(GL_SCISSOR_TEST);*/
    glPopMatrix();

    return 1;
}
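One way I could imagine cutting down the repetition, while staying with the same immediate-mode approach, is to factor the quad drawing into a small helper and call it once per element. This is only a sketch; draw_textured_quad is an illustrative name, not part of the X-Plane SDK:
static void draw_textured_quad(int tex, const float v[8])
{
    // v holds the four corner positions, in the same order as arry[] above
    XPLMBindTexture2d(tex, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(v[0], v[1]);
    glTexCoord2f(0, 1); glVertex2f(v[2], v[3]);
    glTexCoord2f(1, 1); glVertex2f(v[4], v[5]);
    glTexCoord2f(1, 0); glVertex2f(v[6], v[7]);
    glEnd();
}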
If someone could help with this, I would greatly appreciate it.
I'm learning OpenGL and I have a problem with my program where I'm supposed to make the solar system.
First of all here's the code I use to setup my ModelView Matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(20, 1, 0, 0);
glTranslatef(0, -20, -60);
And then I draw the orbits using line loops and the sun is a gluSphere:
glPushMatrix();
glColor3f(1, 0.4f, 0);
glTranslatef(0, -2, 0);
gluSphere(gluNewQuadric(), 4, 30, 30);
glPopMatrix();
And here's the result:
But then, when I "zoom in" using this code:
if (key == 'w')
{
    glTranslatef(0, 1, 2.4);
}
else if (key == 's')
{
    glTranslatef(0, -1, -2.4);
}
this happens:
The lines stay in front of the sphere. I know it's probably something dumb I'm doing, but I'm just starting to learn and this is really slowing me down.
Thanks!
You probably don't have the depth test turned on.
glEnable(GL_DEPTH_TEST);
You may also need to fiddle with the depth test parameters, though usually the default setting is sufficient.
glDepthFunc(GL_LESS);
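If that alone doesn't fix it, two related things are worth checking (the relevant code isn't shown in the question, so this is just a guess): the depth buffer has to be cleared every frame along with the color buffer, and the window/context must have been created with a depth buffer in the first place.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth as well as color each frame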
I'd also like to take this time to strongly recommend that you stop using OpenGL's Immediate Mode and OpenGL's Fixed Function Pipeline, and learn Modern OpenGL.
I was trying to create an OpenGL version of an old 8-bit plasma effect animation from DOS, but I am stuck. Since almost every OpenGL program for Win32 includes some code to generate a palette, I thought it would not be that hard to apply palette animation to my old program.
My goal is to generate a texture of color indices that never changes, plus a palette that rotates. After digging around the web this weekend I am still not able to get it working. I cannot even display a texture with a single color index, so something is already wrong at that stage (once that works, I can build the palette-cycling mechanism).
I can force the context into palette mode by using PFD_TYPE_COLORINDEX and draw some random pixels with glIndexi. I read that glDrawPixels and glReadPixels are slow, and that the latter is not very accurate when reading pixels back from the framebuffer (due to inaccurate positioning caused by rounding errors or something like that).
I tried the GL_COLOR_INDEX format. I also tried:
glPixelTransferi(GL_MAP_COLOR, true);
glPixelMapfv(GL_PIXEL_MAP_I_TO_R, ...);
glTexImage2D...
Some of the code I tried so far (latest changes):
init part:
void* rawplasma;
GLuint plastexture;
rawplasma = (void*) malloc(256*256);
memset(rawplasma, rand()%256, 256*256);
glEnable( GL_TEXTURE_2D );
glGenTextures(1, &plastexture);
glBindTexture(GL_TEXTURE_2D, plastexture);
glTexImage2D( GL_TEXTURE_2D, 0, GL_COLOR_INDEX, 256, 256, 0, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, rawplasma );
update/draw:
float Rmap[256];
float Gmap[256];
float Bmap[256];
float Amap[256];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
memset(rawplasma,rand()%256,256*256); //check if it works
glTexImage2D( GL_TEXTURE_2D, 0, GL_COLOR_INDEX, 256, 256, 0, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, rawplasma );
glBindTexture(GL_TEXTURE_2D, plastexture );
/*
glPixelTransferi( GL_MAP_COLOR, GL_TRUE );
glPixelMapfv(GL_PIXEL_MAP_I_TO_R,mapSize,Rmap);
glPixelMapfv(GL_PIXEL_MAP_I_TO_G,mapSize,Gmap);
glPixelMapfv(GL_PIXEL_MAP_I_TO_B,mapSize,Bmap);
glPixelMapfv(GL_PIXEL_MAP_I_TO_A,mapSize,Amap);
glPixelTransferi(GL_MAP_COLOR,GL_TRUE);
*/
glBegin(GL_QUADS);
glTexCoord2f(1.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 0.0); glVertex2f( 1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f(-1.0, 1.0);
glEnd();
glFlush();
SwapBuffers(hDC);
Or should I use glColorTableEXT in combination with GL_COLOR_INDEX8_EXT? I read somewhere that paletted textures are not supported anymore. I found a link which mentions the Paletted Texture extension: http://www.cs.rit.edu/~ncs/Courses/570/UserGuide/OpenGLonWin-20.html
This is what I want (but then in OpenGL):
http://www.codeforge.com/read/174459/PalAnimDemo.h__html
I am not looking for ES/Shader implementations (I am just a beginner ;)). DirectDraw might be easier, I think, but I want to try OpenGL.
Since almost every OpenGL program for Win32 includes some code to generate a palette
No, definitely not. If you mean the PIXELFORMATDESCRIPTOR, that is not a palette definition. Color index mode was a major PITA to work with in OpenGL, and no current implementation actually supports it.
I am not looking for ES/Shader implementations (I am just a beginner ;))
But that's exactly what you should use; shaders are mandatory in modern OpenGL anyway, so if you aim for modern systems you'll need one regardless. A simple fragment shader that performs a lookup into a 1D texture, turning an index into a color, is exactly what you need.
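For illustration only, here is a minimal sketch of such a shader written as a C string (the uniform names indices and palette are made up; it assumes the index image is uploaded as an ordinary single-channel 2D texture and the 256-entry palette as a 1D texture):
// Hypothetical GLSL 1.20-style fragment shader; pair it with fixed-function
// vertex processing or a trivial vertex shader.
static const char* paletteFrag =
    "uniform sampler2D indices;  // 256x256 image of palette indices\n"
    "uniform sampler1D palette;  // 256-entry color table\n"
    "void main() {\n"
    "    float idx = texture2D(indices, gl_TexCoord[0].xy).r;\n"
    "    gl_FragColor = texture1D(palette, idx);\n"
    "}\n";
Rotating the palette then comes down to re-uploading (or offsetting into) the small 1D texture each frame, while the index texture never changes.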
I have the following:
glHint(GL_POINT_SMOOTH, GL_NICEST);
glEnable(GL_POINT_SMOOTH);
The issue is that when I attempt to draw a pixel:
void DrawPoints(float x1, float y1)
{
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    glPointSize(1);
    glBegin(GL_POINTS);
    glVertex2f(x1, y1);
    glEnd();
}
I get points on the screen that are two pixels wide and two pixels tall. The width problem was solved by making sure x1 always ends in .5; forcing y1 to end in .5 did not fix the height issue. The points are always two pixels tall despite the point size being set to one.
How could I solve this?
EDIT:
Took a screen shot of the issue in question. It is drawing out a sine wave on the screen.
EDIT 2:
Here's the full initialization code:
if(SDL_Init(SDL_INIT_EVERYTHING) < 0) {
    fprintf(stderr,"%s:%d\n SDL_Init call failed.\n",__FILE__,__LINE__);
    return false;
}
if((Surf_Display = SDL_SetVideoMode(WWIDTH, WHEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL)) == NULL) {
    fprintf(stderr,"%s:%d\n SDL_SetVideoMode call failed.\n",__FILE__,__LINE__);
    return false;
}
// Init GL system
glClearColor(0, 0, 0, 1);
glClearDepth(1.0f);
glViewport(0, 0, WWIDTH, WHEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WWIDTH, WHEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glHint(GL_POINT_SMOOTH, GL_NICEST);
glHint(GL_LINE_SMOOTH, GL_NICEST);
glHint(GL_POLYGON_SMOOTH, GL_NICEST);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glLoadIdentity();
Making sure that glEnable(GL_BLEND); has been called did not help either.
Another note, calling SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 2); had no effect on the pixel height either.
I found the answer to what went wrong.
This code caused the error:
glHint(GL_POINT_SMOOTH, GL_NICEST);
glHint(GL_LINE_SMOOTH, GL_NICEST);
glHint(GL_POLYGON_SMOOTH, GL_NICEST);
These are not valid targets for glHint.
This code fixed it and got rid of a GL_INVALID_ENUM error that OpenGL was throwing; I found that one while debugging a whole other issue. Serves me right for not checking error states!
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
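Since the bad enum only showed up while debugging something else, it can be worth sprinkling an error check into the init code during development. A minimal sketch (assumes stdio is available for the printout):
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    fprintf(stderr, "GL error: 0x%04X\n", err);
}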
I'm in the process of writing a wrapper for some OpenGL functions. The goal is to wrap the context used by the game Neverwinter Nights, in order to apply post-processing shader effects. After learning OpenGL (this is my first attempt to use it) and much playing with DLLs and redirection, I have a somewhat working system.
However, when the post-processing fullscreen quad is active, all texturing and transparency drawn by the game are lost. This shouldn't be possible, because all my functions take effect after the game has completely finished its own rendering.
The code does not use renderbuffers or framebuffers (both refused to compile on my system in any way, with or without GLEW or GLee, despite being supported and usable by other programs). Eventually, I put together this code to handle copying the texture from the buffer and rendering a fullscreen quad:
extern "C" SEND BOOL WINAPI hook_wglSwapLayerBuffers(HDC h, UINT v)
{
if ( frameCount > 250 )
{
frameCount++;
if ( frameCount == 750 ) frameCount = 0;
if ( nwshader->thisframe == NULL )
{
createTextures();
}
glBindTexture(GL_TEXTURE_2D, nwshader->thisframe);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, nwshader->width, nwshader->height, 0);
glClearColor(0.0f, 0.5f, 0.0f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_ONE, GL_ZERO);
glEnable(GL_BLEND);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho( 0, nwshader->width , nwshader->height , 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 1.0f);
glVertex2d(0, 0);
glTexCoord2f(0.0f, 0.0f);
glVertex2d(0, nwshader->height);
glTexCoord2f(1.0f, 0.0f);
glVertex2d(nwshader->width, nwshader->height);
glTexCoord2f(1.0f, 1.0f);
glVertex2d(nwshader->width, 0);
glEnd();
glMatrixMode( GL_PROJECTION );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
} else {
frameCount++;
}
if ( h == grabbedDevice )
{
Log->logline("Swapping buffer on cached device.");
}
return wglSwapLayerBuffers(h,v);
}
This code functions almost perfectly and causes no notable slow-down. However, when it is active (I added the frameCount condition to turn it on and off every ~5 seconds), all alpha and texturing are completely ignored by the game renderer. I'm not turning off any kind of blending or texturing before this function (the only OpenGL calls are to create the nwshader->thisframe texture).
I was able to catch a few screenshots of what's happening:
Broken A: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenA.png
Broken B: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenB.png
(note, in B, the smoke in the back is not broken, it is correctly transparent. So is the HUD.)
Broken Interior: http://i4.photobucket.com/albums/y145/peachykeen000/transparency_broken.png
Correct Interior (for comparison): http://i4.photobucket.com/albums/y145/peachykeen000/transparency_proper.png
The drawing of the quad also breaks menus, turning the whole thing into a black surface with a single white box. I suspect it is a problem with either depth or how the game is drawing certain objects, or a state that is not being reset properly. I've used GLintercept to dump a full log of all calls in a frame, and didn't see anything wrong (the call to wglSwapLayerBuffers is always last).
Being brand new to working with OpenGL, I really have no clue what's going wrong (or how to fix it) and nothing I've tried has helped. What am I missing?
I don't quite understand how your code is supposed to integrate with the Neverwinter Nights code. However...
It seems like you're most likely changing some setting that the existing code didn't expect to change.
Based on the description of the problem, I'd try removing the following line:
glDisable(GL_TEXTURE_2D);
That line disables textures, which certainly sounds like the problem you're seeing.
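If removing that line alone doesn't restore everything, a more defensive option (only a sketch, not something tested against this particular game) is to save and restore the whole fixed-function state and the matrices around the overlay drawing, so the game's renderer never sees any of the changes:
glPushAttrib(GL_ALL_ATTRIB_BITS);   // saves enable flags, blend func, bound texture, etc.
glMatrixMode(GL_PROJECTION); glPushMatrix();
glMatrixMode(GL_MODELVIEW);  glPushMatrix();
// ... copy the frame, set blend/depth/texture state, draw the fullscreen quad ...
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);  glPopMatrix();
glPopAttrib();                      // undoes every state change made above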