SDL OpenGL Rendering Issue - C++

I am just learning how to use SDL and OpenGL. I've been through the tutorials over at SDLTutorials.com. I am now trying to take things further and create a loading screen for an application: an OOP menu system.
I am starting off really simple to test things out. The main backdrop of the window is done via a texture class. The two relevant functions are defined here:
void Texture::Bind() {
    glBindTexture(GL_TEXTURE_2D, TextureID);
}

void Texture::RenderQuad(int X, int Y, int Width, int Height) {
    Bind();
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(X, Y, 0);
        glTexCoord2f(1, 0); glVertex3f(X + Width, Y, 0);
        glTexCoord2f(1, 1); glVertex3f(X + Width, Y + Height, 0);
        glTexCoord2f(0, 1); glVertex3f(X, Y + Height, 0);
    glEnd();
}
I have also created a Menu class that will basically be the parent of any menu created in the system; classes to be developed would be things like a click button or an edit box. So far, the only relevant function in class Menu is:
void Menu::OnRender() {
    if (Visible) {
        if (MenuTexture == NULL) {
            glColor4f(1., G, B, A);
            glBegin(GL_QUADS);
                glVertex2d(X, Y);
                glVertex2d(X + Width, Y);
                glVertex2d(X + Width, Y + Height);
                glVertex2d(X, Y + Height);
            glEnd();
        } else {
            MenuTexture->RenderQuad(X, Y, Width, Height);
        }
    }
}
Finally, my application creates one texture that it binds to the whole window, i.e. the background.
void App::OnRender() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    MainScreen.RenderQuad(0, 0, winWidth, winHeight); // Texture object with bkg.jpg
    MainMenu.OnRender();                              // Menu object, 150x300 and all black
    SDL_GL_SwapBuffers();
}
I then create a menu item and try to draw it over the backdrop. If I leave the initialized colors black, the entire screen turns black almost instantly: for probably one frame, the first cycle through, the menu is drawn and the black square is placed on the window, and then everything turns black. If I set 1.0 for the red channel, the entire picture looks as if some sort of red filter had been placed over it, and the menu box draws as an opaque red square.
I've been through several rendering examples, but I must be missing something. Essentially, laying out in one line what is being done: I bind the texture, define the coordinates of the quad and of the texture binding, and then render the quad. I've also tried putting the color call in my menu inside the glBegin()/glEnd() pair, defining the color at all four vertices.
[SOLVED] Thanks to my chosen answer: this fixed the issue. However, it didn't render the Menu object with the correct color until I switched the mode back to GL_MODULATE. I'm not sure why yet; I'll be playing around with it and reading up on this to learn more. Thank you!
Changes were made only to App::OnRender():
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
MainScreen.RenderQuad(0, 0, winWidth, winHeight);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
MainMenu.OnRender();

GL_TEXTURE_ENV_MODE defaults to GL_MODULATE, which multiplies incoming texel values by the current color.
If the current color is black (RGB(0,0,0)), then that will render all your textures black. The same goes for single-channel colors: RGB(1,0,0) * RGB(x,y,z) == RGB(x,0,0).
Try using GL_DECAL.
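For reference, a minimal sketch of the other common fix, not taken from the thread: keep GL_MODULATE everywhere and reset the current color to white before textured drawing, since white is the identity under modulation (the class names are from the question above):

void App::OnRender() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   // white * texel == texel under GL_MODULATE
    MainScreen.RenderQuad(0, 0, winWidth, winHeight);
    glDisable(GL_TEXTURE_2D);            // an untextured menu quad should not sample the last texture
    MainMenu.OnRender();
    glEnable(GL_TEXTURE_2D);             // re-enable for textured draws next frame
    SDL_GL_SwapBuffers();
}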

Related

Programming an EFIS for X-Plane

I am trying to develop a working EFIS display written in C++, OpenGL, and the X-Plane SDK for an aircraft in X-Plane. I do not have very much experience with C++ and OpenGL; however, I do know X-Plane fairly well, and I know how to use the X-Plane data for moving each element. What I do not know is how to code the EFIS display to draw all of the elements in an efficient way. I only know the very basics of drawing an OpenGL GL_QUAD and binding a texture to it, but this seems to be a very low-level way of doing things.
What I would like to be able to do is create the GUI for the EFIS in a more efficient way, as there are a lot of texture elements that need to be drawn.
This is an example of what I would like to build for X-Plane: [screenshot of an EFIS display]
Here is the code I have currently written, which loads one image texture and binds it to a GL_QUAD.
static int my_draw_tex(XPLMDrawingPhase inPhase, int inIsBefore, void* inRefcon)
{
    // Note: if the tex size is not changing, glTexSubImage2D is faster than glTexImage2D.
    // The drawing part.
    XPLMSetGraphicsState(
        0,  // No fog, equivalent to glDisable(GL_FOG);
        1,  // One texture, equivalent to glEnable(GL_TEXTURE_2D);
        0,  // No lighting, equivalent to glDisable(GL_LIGHT0);
        0,  // No alpha testing, e.g. glDisable(GL_ALPHA_TEST);
        1,  // Use alpha blending, e.g. glEnable(GL_BLEND);
        0,  // No depth read, e.g. glDisable(GL_DEPTH_TEST);
        0); // No depth write, e.g. glDepthMask(GL_FALSE);

    //---------------------------------------------- HORIZON -----------------------------------------//
    glPushMatrix();
    // Bind the texture
    XPLMBindTexture2d(texName[HORIZON], 0);
    glColor3f(1, 1, 1);
    // Screen coordinates for the horizon background quad
    int arry[] = { 838, 465, 838, 2915, 2154, 2915, 2154, 465 };
    glBegin(GL_QUADS);
        // Texture coordinates mapped to the quad corners
        glTexCoord2f(0, 0); glVertex2f(arry[0], arry[1]);
        glTexCoord2f(0, 1); glVertex2f(arry[2], arry[3]);
        glTexCoord2f(1, 1); glVertex2f(arry[4], arry[5]);
        glTexCoord2f(1, 0); glVertex2f(arry[6], arry[7]);
    glEnd();
    glPopMatrix();
    return 1;
}
If someone could help with this, I would greatly appreciate it.
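One commonly suggested direction, sketched here as an illustration rather than an answer from the thread: batch all elements that share one texture (e.g. a texture atlas) into a single client-side vertex array, so many quads go out in one draw call instead of one glBegin/glEnd pair per element. QuadBatch and its members are illustrative names; the XPLMSetGraphicsState/XPLMBindTexture2d setup shown above is assumed to have run already:

#include <vector>

// Collect many textured quads and submit them with a single draw call.
struct QuadBatch {
    std::vector<float> verts; // x,y per vertex
    std::vector<float> uvs;   // u,v per vertex

    void addQuad(float x, float y, float w, float h,
                 float u0, float v0, float u1, float v1) {
        const float v[] = { x, y,   x, y + h,   x + w, y + h,   x + w, y };
        const float t[] = { u0, v0, u0, v1,     u1, v1,         u1, v0 };
        verts.insert(verts.end(), v, v + 8);
        uvs.insert(uvs.end(), t, t + 8);
    }

    void draw() const {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts.data());
        glTexCoordPointer(2, GL_FLOAT, 0, uvs.data());
        glDrawArrays(GL_QUADS, 0, (GLsizei)(verts.size() / 2));
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }
};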

Line gets duplicated instead of moved

I'm using a widget which inherits QGLWidget, and below the widget I have 2 buttons:
button1: +
button2: -
This is my code:
void GLWidget::paintGL() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, _w, 0, _h, -1, 1);
    glViewport(_camera_pos_x, 0, _w, _h);
    glBegin(GL_LINES);
        glVertex2i(50, 50);
        glVertex2i(200, 200);
    glEnd();
    update();
}
I want the screen to move 10 pixels to the right each time I press the + button, which means the line should move to the left by 10 pixels.
This is the code that runs when I press the button:
void right() { _camera_pos_x += 10; }
_camera_pos_x is just an int member which is initialized to 0.
When I press the button, another line is rendered 10 pixels to the right, and I see 2 lines.
What's wrong with my code?
By the way, I think I'm using old OpenGL code; is there a better way to render a line?
First of all, note that drawing with glBegin/glEnd sequences has been deprecated for more than 10 years.
Read about the Fixed Function Pipeline and see Vertex Specification for a state-of-the-art way of rendering; a short sketch is at the end of this answer.
glViewport specifies the transformation from normalized device coordinates to window coordinates. If you want to change the position of the line, then you have to draw the line at a different position; changing the viewport is not the right tool for that. Keep your viewport as large as the window:
glViewport(0, 0, _w, _h);
The default framebuffer is never cleared, so the rendering is always drawn on top of the rendering of the previous frames. That is why you "see 2 lines". Use glClear to clear the framebuffer at the beginning of the rendering:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Change the model view matrix to set the position of the line:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, _w, 0, _h, -1, 1);
glViewport(0, 0, _w, _h);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(_camera_pos_x, 0.0f, 0.0f);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glBegin(GL_LINES);
    glVertex2i(50, 50);
    glVertex2i(200, 200);
glEnd();
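As referenced at the top of this answer, here is a hedged sketch of the modern, shader-based way to draw the same line. It assumes a GL 3.3 context and a function loader such as GLEW (QGLWidget may need to be explicitly configured for such a context); all names here are illustrative, not from the question:

#include <GL/glew.h> // or any other GL function loader

static const char* vsrc =
    "#version 330 core\n"
    "layout(location = 0) in vec2 pos;\n"
    "uniform mat4 mvp;\n"
    "void main() { gl_Position = mvp * vec4(pos, 0.0, 1.0); }\n";
static const char* fsrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0); }\n";

static GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);           // error checking omitted for brevity
    return s;
}

GLuint prog, vao;

void initLine() {
    // One line from (50,50) to (200,200), matching the question's vertices.
    const float pts[] = { 50.0f, 50.0f, 200.0f, 200.0f };
    GLuint vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(pts), pts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, vsrc));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fsrc));
    glLinkProgram(prog);
}

void drawLine(const float mvp[16]) { // e.g. orthographic projection * translation
    glUseProgram(prog);
    glUniformMatrix4fv(glGetUniformLocation(prog, "mvp"), 1, GL_FALSE, mvp);
    glBindVertexArray(vao);
    glDrawArrays(GL_LINES, 0, 2);
}

With this setup, pressing the button only changes the translation part of the mvp matrix (e.g. an orthographic projection multiplied by a translation of -_camera_pos_x); the vertex data never has to be re-submitted.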
You must clear the old contents of the framebuffer before drawing the new image. Put a glClear(GL_COLOR_BUFFER_BIT) at the beginning of paintGL().

OpenGL Scale Single Pixel Line

I would like to make a game that is internally 320x240, but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.
I have achieved this by setting the internal resolution via glOrtho():
glOrtho(0, 320, 240, 0, 0, 1);
And then I scale up the output resolution by a factor of 3, like this:
glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);
I draw rectangles like this:
glBegin(GL_LINE_LOOP);
    glVertex2f(rect_x, rect_y);
    glVertex2f(rect_x + rect_w, rect_y);
    glVertex2f(rect_x + rect_w, rect_y + rect_h);
    glVertex2f(rect_x, rect_y + rect_h);
glEnd();
It works perfectly at 320x240 (not scaled): [screenshot]
When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn directly on the final 960x720 canvas. The result is 1px lines in a 3px world :(
How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.
All you are doing is scaling up the coordinates of the primitives you render. So, your code says to render a line from, for example, (20, 20) to (40, 40). And OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).
But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.
Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw it to the window's actual resolution.
You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
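A hedged sketch of that texture-plus-FBO setup (GL 3.0-style framebuffer calls; fbo, lowResTex, drawScene, and drawFullscreenQuad are illustrative names, not from the question):

GLuint fbo = 0, lowResTex = 0;

void initLowRes() {
    // Color texture that will receive the 320x240 image.
    glGenTextures(1, &lowResTex);
    glBindTexture(GL_TEXTURE_2D, lowResTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 320, 240, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // crisp pixels,
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // no smoothing

    // Framebuffer object with the texture as its color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lowResTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void render() {
    // Pass 1: draw the scene at 320x240 into the FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 320, 240);
    glClear(GL_COLOR_BUFFER_BIT);
    drawScene();                  // placeholder for the game's 320x240 drawing

    // Pass 2: draw the texture scaled up to the window.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, 960, 720);
    glBindTexture(GL_TEXTURE_2D, lowResTex);
    drawFullscreenQuad();         // placeholder: textured quad covering the window
}

Alternatively, glBlitFramebuffer can copy the FBO straight to the window with GL_NEAREST filtering, skipping the quad entirely.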
As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture, and I do not know of any workaround that works. In such a case, the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course, that is much, much slower than direct rendering to texture. So here is the deal:
1. Set the low-res view. Do not change the resolution of your window, just the glViewport values. Then render your scene in the low resolution (in just a fraction of the screen space).
2. Copy the rendered screen into a texture.
3. Set the target-resolution view.
4. Render the texture. Do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before!!! Otherwise you would get flickering.
Here is the C++ source for this:
void gl_draw()
{
    // render resolution and multiplier
    const int xs = 320, ys = 200, m = 2;

    // [low res render pass]
    glViewport(0, 0, xs, ys);
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    // 50 random lines (100 vertices); RandSeed/Random() are the author's own PRNG helpers
    RandSeed = 0x12345678;
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_LINES);
    for (int i = 0; i < 100; i++)
        glVertex2f(2.0 * Random() - 1.0, 2.0 * Random() - 1.0);
    glEnd();

    // [multiplied resolution render pass]
    static bool _init = true;
    static GLuint txrid = 0;  // texture id (static so it survives between frames)
    BYTE map[xs * ys * 3];    // RGB
    // init texture
    if (_init)                // you should also delete the texture on exit of the app ...
    {
        // create texture
        _init = false;
        glGenTextures(1, &txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // GL_COPY is not a valid env mode
        glDisable(GL_TEXTURE_2D);
    }
    // copy low res screen to CPU memory
    glReadPixels(0, 0, xs, ys, GL_RGB, GL_UNSIGNED_BYTE, map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, txrid);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
    // set multiplied resolution view
    glViewport(0, 0, m * xs, m * ys);
    glClear(GL_COLOR_BUFFER_BIT);
    // render low res screen as texture
    glBegin(GL_QUADS);
        glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
        glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, +1.0);
        glTexCoord2f(1.0, 1.0); glVertex2f(+1.0, +1.0);
        glTexCoord2f(1.0, 0.0); glVertex2f(+1.0, -1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // swap buffers only here !!!
}
And a preview: [screenshot]
I tested this on some Intel HD graphics (god knows which version) I had at my disposal, and it works (while the standard render-to-texture approaches do not).

glColor not working, random color appearing

There's something wrong in my code somewhere, but for any number of primitives that I draw, despite calling glClearColor and then picking a color with glColor3f, the colors that appear are completely random...
So in my Rendering class I cycle through all the objects and call their drawing methods; for primitives they look like:
inline void PrimitiveDrawer::drawWireframePrism(Vector3 pos, float radius, Vector3 col) {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glColor3f(col.x, col.y, col.z);
    glLineWidth(3);
    glBegin(GL_LINE_LOOP);
    ...
    glEnd();
}
But no matter what color I select, I always get different ones... The interesting thing is that all primitive lines I draw with this method take on the color of the models that they bound (they are meant to be bounding volumes for meshes)... Could it have to do with the model loaders I am using?
This is affecting every shape (outside the ones around the models), where every GL_LINE takes on the same colour (green for some reason), including the glutBitmapCharacter text that I am trying to draw... That's the main thing that bothers me, as I'd like to pick the colour for the text drawing. Currently I am doing:
void renderBitmapString(float x, float y, void *font, char *string)
{
    char *c;
    glRasterPos2f(x, y);
    for (c = string; *c != '\0'; c++) {
        glutBitmapCharacter(font, *c);
    }
}

void drawText(char text[20], float x, float y) {
    glPushMatrix();
    setOrthographicProjection();
    glLoadIdentity();
    glClearColor(0, 0, 0, 0);
    glColor4f(0, 0, 1, 1);
    renderBitmapString(x, y, (void *)font, text);
    resetPerspectiveProjection();
    glPopMatrix();
}
But the text comes up green instead of blue?
glClearColor has nothing to do with glColor. glClearColor sets the color used by a call to glClear(GL_COLOR_BUFFER_BIT) to clear the framebuffer.
Colors bleeding over from other objects sounds to me like you forgot to disable texturing. Add a glDisable(GL_TEXTURE_2D); after you're done drawing textured stuff, as in the sketch below.
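A minimal sketch of that ordering, reusing the question's renderBitmapString (drawTexturedModels is a placeholder for the poster's own model-drawing calls):

glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);                   // neutral color for textured models under GL_MODULATE
drawTexturedModels();                          // placeholder: binds textures and draws the meshes

glDisable(GL_TEXTURE_2D);                      // stop sampling the last bound texture
glColor4f(0.0f, 0.0f, 1.0f, 1.0f);             // now glColor actually controls the output
renderBitmapString(x, y, (void *)font, text);  // the text comes out blue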