I'm trying to learn OpenGL but haven't yet got the hang of it; I've stumbled at the first hurdle, where I try to display a bright red square but the image comes out as a maroon-coloured square. (I apologize, but I cannot post pictures as I don't have enough reputation. :( )
I've been using the SOIL library (http://www.lonesock.net/soil.html) to make the task of loading textures simpler, and I am fairly sure that this is where the problem lies.
I understand the most obvious answer is to not use SOIL and to learn raw OpenGL before trying extensions, and I do intend to do this. However, I would still like this problem solved, for peace of mind.
My personal assumption is that I have probably enabled some sort of shading somewhere, or that there is some quirk of OpenGL or SOIL that forces the shade of the texture to change; however, I am not experienced enough to track it down.
Below is what I believe to be the relevant code.
void displayBackground()
{
    GetTexture("resources/red.png");

    glBegin(GL_QUADS);
        glTexCoord2f(0, 0);     glVertex2f(0, 0);
        glTexCoord2f(480, 0);   glVertex2f(480, 0);
        glTexCoord2f(480, 480); glVertex2f(480, 480);
        glTexCoord2f(0, 480);   glVertex2f(0, 480);
    glEnd();

    glDisable(GL_TEXTURE_2D);
}
And below is the SOIL-specific code, which as far as I can tell should load a solid red texture and bind it as the active OpenGL texture:
GLuint GetTexture(std::string Filename)
{
    GLuint tex_ID;

    tex_ID = SOIL_load_OGL_texture(
        Filename.c_str(),
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        SOIL_FLAG_POWER_OF_TWO
        | SOIL_FLAG_MIPMAPS
        | SOIL_FLAG_COMPRESS_TO_DXT
        | SOIL_FLAG_DDS_LOAD_DIRECT
    );

    if(tex_ID > 0)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex_ID);
        return tex_ID;
    }
    else
        return 0;
}
Thank you in advance for any insight into where I have possibly gone wrong.
@Nazar554
I'm assuming this is what you mean by the viewport? Sorry, I'm aware this is very basic OpenGL stuff and I probably sound rather stupid, but you've got to start somewhere, right? :P
/** OpenGL Initial Setup **/

// Pixel format descriptor describing the pixel layout of a given surface
PIXELFORMATDESCRIPTOR pfd;
std::memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW |
              PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER_DONTCARE;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 16;
pfd.iLayerType = PFD_MAIN_PLANE;

HDC hdc = GetDC(hwnd);                     // gets the device context of hwnd; a device context is a set of graphics objects that define how to draw to the given device
int format = ChoosePixelFormat(hdc, &pfd); // chooses the best pixel format for the device context, given the pfd to be used
SetPixelFormat(hdc, format, &pfd);

HGLRC hglrc;
hglrc = wglCreateContext(hdc); // creates an OpenGL rendering context suitable for drawing on the device specified by hdc
wglMakeCurrent(hdc, hglrc);    // makes hglrc the thread's current context; subsequent OpenGL calls are made on hdc

glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // Red, Green, Blue, Alpha (additive colour). Does not need to be updated every cycle
glOrtho(0, 900, 600, 1.0, -1.0, 1.0); // sets the coordinate system
Put glOrtho before glClearColor. Also, you need to select the projection matrix before calling glOrtho.
Use this:
glMatrixMode(GL_PROJECTION); // select projection matrix
glLoadIdentity(); // clear it
glOrtho(0, w, h, 0, 0, 1); // compute projection matrix, and multiply identity matrix by it
// w, h is your window size if you are doing 2D
glMatrixMode(GL_MODELVIEW); // select model matrix
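Putting that together with the initialisation code above, the ordering would be roughly as follows (a sketch only; the 900 and 600 are taken from the question's glOrtho call):

wglMakeCurrent(hdc, hglrc);           // context must be current before any GL calls

glMatrixMode(GL_PROJECTION);          // select projection matrix
glLoadIdentity();                     // clear it
glOrtho(0, 900, 600, 0, 0, 1);        // top-left origin for 2D, 900x600 window

glMatrixMode(GL_MODELVIEW);           // back to the model matrix for drawing
glLoadIdentity();

glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // set the clear colour after the matrices, as noted above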
Also, if you are studying OpenGL, it's better to begin with a modern version (3.3+, or 2.1 without the old fixed-function stuff), not 1.2. They have a lot of differences, and it will be hard to unlearn everything you studied before. For beginners, freeglut or GLFW is simpler and more portable than pure Win32.
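For comparison, a minimal GLFW window setup might look like the sketch below (assuming GLFW 3; error handling mostly omitted, window size reused from the question):

#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(900, 600, "Demo", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // Same 2D projection setup as above
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 900, 600, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);

    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw here ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}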
Related
I'm trying to overlay small interactive info rectangles drawn with SDL's 2D renderer over a 3D scene drawn with OpenGL. Each works on its own, but not together: the 3D model ends up hidden.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

SDL_CreateWindowAndRenderer(m_width, m_height,
                            SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE,
                            &m_window, &m_renderer);
SDL_GLContext context = SDL_GL_CreateContext(m_window);

SDL_RenderClear(m_renderer);
SDL_RenderPresent(m_renderer);

// load vertex, fragment shader...

glClearColor(1.0, 1.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, m_indicesSize, GL_UNSIGNED_INT, BUFFER_OFFSET(0));

SDL_Rect rect;
rect.w = 50;
rect.h = 50;
rect.x = 100;
rect.y = 100;
SDL_SetRenderDrawColor(m_renderer, 255, 0, 0, 255);
SDL_RenderFillRect(m_renderer, &rect);
SDL_RenderPresent(m_renderer);
How can I solve this problem? Thanks.
You don't, at least for now.
Here's the (open) bug report about adding backend API state getters/setters to SDL_Renderer.
Alternatively, create an SDL_Renderer instance that uses the software renderer, upload the bitmaps coming out of that into an OpenGL texture, and composite that into your scene.
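A rough sketch of that alternative, assuming SDL 2.0.5+ (the ui, soft, and uiTex names here are illustrative, not from the question; byte order assumes a little-endian machine):

// Draw the UI into a CPU-side surface with the software renderer...
SDL_Surface* ui = SDL_CreateRGBSurfaceWithFormat(0, m_width, m_height, 32,
                                                 SDL_PIXELFORMAT_ABGR8888);
SDL_Renderer* soft = SDL_CreateSoftwareRenderer(ui);
SDL_SetRenderDrawColor(soft, 0, 0, 0, 0);   // transparent background
SDL_RenderClear(soft);
SDL_SetRenderDrawColor(soft, 255, 0, 0, 255);
SDL_RenderFillRect(soft, &rect);

// ...then upload it into a GL texture (uiTex, created elsewhere) and
// composite it over the 3D scene as a textured quad.
glBindTexture(GL_TEXTURE_2D, uiTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ui->w, ui->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, ui->pixels); // ABGR8888 matches GL_RGBA bytes on little-endian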
I have an OpenGL context on which I draw successfully using OpenGL.
I need to draw a specific rectangle of an IOSurface to this context.
What is the best way to do this on 10.8?
NOTE:
I know how to do this on 10.9 using Core Image (by creating a CIImage from the IOSurface and rendering it with [CIContext drawImage:inRect:fromRect:]).
However, this does not work well for me on 10.8 (each row of the image is displayed with a different offset, and the image is distorted diagonally).
Edit: Here is the code that works on 10.9 but not on 10.8:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CIImage* ciImage = [[CIImage alloc] initWithIOSurface:surface
                                                plane:0
                                               format:kCVPixelFormatType_32BGRA
                                              options:@{kCIImageColorSpace : (__bridge id)colorSpace}];

// Flip rect before passing to Core Image:
NSRect flippedFromRect = fromRect;
flippedFromRect.origin.y = IOSurfaceGetHeight(surface) - fromRect.origin.y - fromRect.size.height;

[ciContext drawImage:ciImage inRect:inRect fromRect:flippedFromRect];
CGColorSpaceRelease(colorSpace);
Here is a solution: wrap the IOSurface with an OpenGL texture and draw that texture to the screen. This assumes an API similar to [CIContext render:toIOSurface:bounds:colorSpace:], but with a vertically flipped OpenGL coordinate system.
// Draw surface on OpenGL context
{
    // Enable the rectangle texture extension
    glEnable(GL_TEXTURE_RECTANGLE_EXT);

    // 1. Create a texture from the IOSurface
    GLuint name;
    {
        CGLContextObj cgl_ctx = ...

        glGenTextures(1, &name);
        GLsizei surface_w = (GLsizei)IOSurfaceGetWidth(surface);
        GLsizei surface_h = (GLsizei)IOSurfaceGetHeight(surface);

        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
        CGLError cglError =
            CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_EXT, GL_RGBA,
                                   surface_w, surface_h,
                                   GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                                   surface, 0);
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
    }

    // 2. Draw the texture to the current OpenGL context
    {
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
        glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        glBegin(GL_QUADS);
            glColor4f(0.f, 0.f, 1.0f, 1.0f);
            glTexCoord2f((float)NSMinX(fromRect), (float)NSMinY(fromRect));
            glVertex2f((float)NSMinX(inRect), (float)NSMinY(inRect));
            glTexCoord2f((float)NSMaxX(fromRect), (float)NSMinY(fromRect));
            glVertex2f((float)NSMaxX(inRect), (float)NSMinY(inRect));
            glTexCoord2f((float)NSMaxX(fromRect), (float)NSMaxY(fromRect));
            glVertex2f((float)NSMaxX(inRect), (float)NSMaxY(inRect));
            glTexCoord2f((float)NSMinX(fromRect), (float)NSMaxY(fromRect));
            glVertex2f((float)NSMinX(inRect), (float)NSMaxY(inRect));
        glEnd();

        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
    }

    glDeleteTextures(1, &name);
}
If you need to draw in the display's color profile, you can explicitly call ColorSync, passing it your source profile and destination profile. It will return a "recipe" to perform the color correction. That recipe actually contains a linearization, a color conversion (a 3x3 conversion matrix), and a gamma.
FragmentInfo = ColorSyncTransformCopyProperty (transform, kColorSyncTransformFullConversionData, NULL);
If you like, you can combine all those operations into a 3D lookup table. That's actually what happens in the color management of many of the OS X frameworks and applications.
References:
Apple TextureUpload sample code
Draw IOSurfaces to another IOSurface
OpenGL Options for Advanced Color Management
When I render my text using TTF_RenderUTF8_Blended I obtain a solid rectangle on the screen. The color depends on the one I choose, in my case the rectangle is red.
My question
What am I missing? It seems like I'm not getting the proper alpha values from the surface generated with SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( ... )), or am I? Does anyone recognize this problem?
Additional information
If I use TTF_RenderUTF8_Solid or TTF_RenderUTF8_Shaded the text is drawn properly, but not blended of course.
I am also drawing other textures on the screen, so I draw the text last to ensure the blending will take into account the current surface.
Edit: SDL_Color g_textColor = {255, 0, 0, 0}; <-- I tried with and without the alpha value, but I get the same result.
I have tried to summarize the code without removing too much detail. Variables prefixed with "g_" are global.
Init() function
// This function creates the required texture.
bool Init()
{
    // ...

    g_pFont = TTF_OpenFont("../arial.ttf", 12);
    if(g_pFont == NULL)
        return false;

    // Write text to surface
    g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor)); //< Doesn't work
    // Note that Solid and Shaded do work properly if I uncomment them.
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid(g_pFont, "My first Text!", g_textColor));
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Shaded(g_pFont, "My first Text!", g_textColor, g_bgColor));
    if(g_pText == NULL)
        return false;

    // Prepare the texture format for the font
    GLenum textFormat;
    if(g_pText->format->BytesPerPixel == 4)
    {
        // alpha
        if(g_pText->format->Rmask == 0x000000ff)
            textFormat = GL_RGBA;
        else
            textFormat = GL_BGRA_EXT;
    }

    // Create the font's texture
    glGenTextures(1, &g_FontTextureId);
    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, g_pText->format->BytesPerPixel, g_pText->w, g_pText->h, 0, textFormat, GL_UNSIGNED_BYTE, g_pText->pixels);

    // ...
}
DrawText() function
// this function is called each frame
void DrawText()
{
    SDL_Rect sourceRect;
    sourceRect.x = 0;
    sourceRect.y = 0;
    sourceRect.h = 10;
    sourceRect.w = 173;

    // DestRect is null so the rect is drawn at 0,0
    SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);

    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 10.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(173.0f, 10.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(173.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
You've made a fairly common mistake. It's on the OpenGL end of things.
When you render the textured quad in DrawText(), you enable OpenGL's blending capability, but you never specify the blending function (i.e. how it should be blended)!
You need this code to enable regular alpha-blending in OpenGL:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
This info used to be on the OpenGL website, but I can't find it now.
That should stop it from coming out solid red. The reason the others worked is that they're not alpha-blended; they're actually just red-on-black images with no alpha, so the blending function doesn't matter. The blended one, however, contains only red pixels, with an alpha channel to make them less red.
I notice a few other small problems in your program though.
In the DrawText() function, you are blitting the surface using SDL and rendering with OpenGL. You should not use regular SDL blitting when using OpenGL; it doesn't work. So this line should not be there:
SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);
Also, this line leaks memory:
g_pText = SDL_DisplayFormatAlpha( TTF_RenderUTF8_Blended(...) );
TTF_RenderUTF8_Blended() returns a pointer to SDL_Surface, which must be freed with SDL_FreeSurface(). Since you're passing it into SDL_DisplayFormatAlpha(), you lose track of it, and it never gets freed (hence the memory leak).
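If you did want to keep the conversion, you would need to hold on to the intermediate surface so it can be freed (a sketch; "rendered" is just an illustrative local name):

SDL_Surface* rendered = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
g_pText = SDL_DisplayFormatAlpha(rendered);
SDL_FreeSurface(rendered); // free the intermediate surface to avoid the leak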
The good news is that you don't need SDL_DisplayFormatAlpha here because TTF_RenderUTF8_Blended returns a 32-bit surface with an alpha-channel anyway! So you can rewrite this line as:
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
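With those changes applied, DrawText() would look roughly like this (a sketch only; the same quad and texture as in your code, with the blend function set and the SDL blit removed):

void DrawText()
{
    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // regular alpha-blending

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 10.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(173.0f, 10.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(173.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}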
Hey all, I'm very new to OpenGL (just started seriously programming with it today) and I'm trying to use it to give my SDL games a 3D boost. I've set up a small test program below:
#include <SDL/SDL.h>
#include <gl/gl.h>

int main(int argc, char *argv[])
{
    SDL_Event event;
    float theta = 0.0f;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 32,
        SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_FULLSCREEN);

    glViewport(0, 0, 800, 600);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glMatrixMode(GL_PROJECTION);
    glMatrixMode(GL_MODELVIEW);

    int done;
    for(done = 0; !done;)
    {
        SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 255, 0, 0));

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, 0.0f);
        glRotatef(theta, 0.0f, 0.0f, 1.0f);

        glBegin(GL_TRIANGLES);
            glColor3f(0.83f, 0.83f, 0.0f); glVertex2f(0.0f, 1.0f);
            glColor3f(0.83f, 0.83f, 0.0f); glVertex2f(0.87f, -0.5f);
            glColor3f(0.83f, 0.83f, 0.0f); glVertex2f(-0.87f, -0.5f);
        glEnd();

        theta += 10.0f;

        SDL_Flip(screen);
        SDL_GL_SwapBuffers();

        SDL_PollEvent(&event);
        if(event.key.keysym.sym == SDLK_ESCAPE)
            done = 1;
    }
}
My problem is that the red background I'm trying to render is never shown; only the OpenGL triangle is rendered.
Thanks in advance to anyone who can help me. It's much appreciated.
There's one simple rule about OpenGL: it doesn't play well with others. What happens in your case is that the double-buffer swap (initiated by SDL_GL_SwapBuffers) replaces everything in the window that was not rendered by OpenGL.
Just draw everything using OpenGL.
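For the red background specifically, the OpenGL-only equivalent of the SDL_FillRect call is just a red clear colour (a sketch):

glClearColor(1.0f, 0.0f, 0.0f, 0.0f);               // red instead of black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clears the back buffer to red
// ... draw the triangle as before, then call only SDL_GL_SwapBuffers()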
You fill the back buffer on one line with SDL_FillRect, then you clear it on the next with glClear. Have you tried swapping the order of the operations?
Not that I disagree with the accepted answer; in general, trying to mix software rendering methods with OpenGL is a recipe for confusion at best, but you might get lucky in this case.
As for rendering textured quads, you should be able to work it out from NeHe lesson 6. People complain about NeHe, but it's a reasonable guide for getting started. Just don't use it as an example of good coding or of efficient modern OpenGL usage. Start here and move on to more complex stuff later.
If you're using C++, the SFML library might be a better option (it has C bindings too, though I haven't tried those). It plays nicely with OpenGL and has functions to work cooperatively alongside GL; as far as I understand it, SFML's own drawing functions use GL to render. That said, I suggest you do your rendering only with GL calls, as noted above.
Your SDL_FillRect isn't shown as red because you call glClear with GL_COLOR_BUFFER_BIT set afterwards.
For my last few projects I have been using some of the utility files that I found whilst looking at a few demos here.
Namely, a file called opengl.h (mainly used to manage shaders, a bit like GLEW) and another file, gl_font.
gl_font is a class they use to render fonts on screen using vertex buffer objects.
However, when I use it to render the framerate in my game, it draws everything except the skybox correctly. For some reason the skybox is rendered white, as seen here; if I do not render the font, it looks like this.
Here are some parts of the gl_font class that I think are most important:
void GLFont::begin()
{
    HWND hWnd = GetForegroundWindow();
    RECT rcClient;
    GetClientRect(hWnd, &rcClient);

    int w = rcClient.right - rcClient.left;
    int h = rcClient.bottom - rcClient.top;

    glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT);
    glDisable(GL_LIGHTING);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_fontTexture);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0f, w, h, 0.0f, -1.0f, 1.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
    drawTextBegin();
}
I have tried changing glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT); to glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT | GL_TEXTURE_BIT);, and the background texture returns, but then the font isn't rendered.
void GLFont::end()
{
    drawTextEnd();

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();

    glPopAttrib();
}
This is an image of the depth buffer when the font is rendered, and this is what it looks like when it is not.
Could anyone shed some light on this problem please?
Any help would be much appreciated!
Thanks.
It looks like begin() lacks a glPushMatrix() after glMatrixMode(GL_MODELVIEW), even though end() pops the modelview stack. This might cause the scene to be rendered incorrectly when some text is also rendered.
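A sketch of the fix, mirroring what begin() already does for the projection matrix:

glMatrixMode(GL_MODELVIEW);
glPushMatrix();   // save the scene's modelview matrix so end() has something to pop
glLoadIdentity();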
Didn't glGetError() report a GL_STACK_UNDERFLOW error?