glReadPixels on the stencil buffer always throws GL_INVALID_OPERATION - C++

I'm trying to figure out stencils. Right now I am just drawing some boxes with stencil values, then reading the value. Every time I call glReadPixels with GL_STENCIL_INDEX, I get GL_INVALID_OPERATION. Here is the code in question:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
GLfloat tempStencilVal = 3;
glGetError();
glReadPixels(10, g_window1Height-10, 1, 1, GL_STENCIL_INDEX, GL_FLOAT, &tempStencilVal);
if (glGetError() == GL_INVALID_OPERATION) {std::cout << "GL Invalid Operation\n";}
else {std::cout << "X: " << 10 << " Y: " << 10 << " S: " << tempStencilVal << "\n";}
I've tried 5 different data formats, 3 different glPixelStore modes, and gone over the list of glReadPixels Errors 7 times. (Yes, OGL 2.1) If I change STENCIL_INDEX to DEPTH_COMPONENT it works fine. The only thing I can't confirm is if I have a stencil buffer. Is there some initialization I'm missing or some glGet to check that?
Potentially relevant info: Win7 x64 SP1 | ASUS GTX650Ti | VS2012 Ultimate
Here is the code for the function to draw the boxes, in case that's causing it:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, g_window1Width, -g_window1Height, 0, 0.0, 50.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScaled(1.0, -1.0, -1.0);
glTranslated(0.0, 0.0, 0.5);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClearStencil(0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glStencilOp( GL_REPLACE, GL_REPLACE, GL_REPLACE );
glColor3ub(0, 100, 250);
glStencilFunc(GL_ALWAYS, 1, 1);
glBegin(GL_QUADS);
glVertex3d(0, 0, 0);
glVertex3d(0, 50, 0);
glVertex3d(50, 50, 0);
glVertex3d(50, 0, 0);
glEnd();
glStencilFunc(GL_ALWAYS, 1, 1);
glBegin(GL_QUADS);
glVertex3d(g_window1Width-50, 0, 0);
glVertex3d(g_window1Width, 0, 0);
glVertex3d(g_window1Width, 50, 0);
glVertex3d(g_window1Width-50, 50, 0);
glEnd();
This isn't the first time OGL has done the wrong thing for no apparent reason, but this breaks my plan for coding the interface.

To check if you do have a stencil buffer, you could try doing something with the values such as drawing another quad with glStencilFunc(GL_NOTEQUAL, 1, 1); with and without the stencil test enabled.
To find the actual format used, as you say with a glGet..., glGetFramebufferAttachmentParameteriv should give you the answer (with the default framebuffer bound). On plain OpenGL 2.1, the simpler glGetIntegerv(GL_STENCIL_BITS, ...) also reports how many stencil bits the default framebuffer has.
The stencil buffer is 8 bits (I don't think it can be anything else) so maybe change the format to GL_UNSIGNED_BYTE.
It's also possible to mix depth and stencil buffers with GL_DEPTH_STENCIL, for which you might use GL_UNSIGNED_INT_24_8. I don't know what format the default framebuffer has or if this will work in your case. If you're using a library such as GLUT, SDL or GLFW, that library is responsible for setting this up, and that's where you should look for configuring the default framebuffer.
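As a hedged sketch (not from the original post): on OpenGL 2.1 you can check how many stencil bits the default framebuffer actually has with the legacy GL_STENCIL_BITS query, and read a single stencil value as an unsigned byte. How the stencil bits get requested in the first place depends on whatever windowing code creates the context.
GLint stencilBits = 0;
glGetIntegerv(GL_STENCIL_BITS, &stencilBits); // legacy query, valid in GL 2.1
std::cout << "Stencil bits: " << stencilBits << "\n"; // 0 means there is no stencil buffer
if (stencilBits > 0) {
    GLubyte stencilVal = 0; // an 8-bit stencil buffer maps naturally to GL_UNSIGNED_BYTE
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(10, g_window1Height - 10, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, &stencilVal);
    std::cout << "S: " << (int)stencilVal << "\n";
}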

Related

Using glDepthFunc(GL_GREATER) would not draw anything

I'm running the following code to draw rectangles using GL_GREATER function,
but instead of getting the color of the farthest rectangle from the camera, I get a white screen.
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);
glOrtho(-1, 1, -1, 1, -1, 1);
glColor3f(1, 0, 0);
glPushMatrix();
glTranslatef(0, 0, -0.5);
glRectf(-1, -1, 1, 1);
glColor3f(0, 1, 0);
glTranslatef(0, 0, 1);
glRectf(-1, -1, 1, 1);
glColor3f(0, 0, 1);
glPopMatrix();
glRectf(-1, -1, 1, 1);
So I'm expecting to see the farthest rectangle's color on the screen, which is green (which is also weird because the zNear is -1 and using GL_LESS draws green instead of red - I don't understand why as well).
however using GL_GREATER I get a white screen instead of green.
What am I missing here?
By default the values in the depth buffer are in range [0, 1]. See glDepthRange.
When the depth buffer is cleared, then the depth values are set to 1 by default. See glClearDepth.
If every value in the depth buffer is 1 and the depth test is GL_GREATER, then the depth test will always fail, because no depth can be greater than 1.
The value used to clear the depth buffer can be changed with glClearDepth. Set the clear value to 0 instead of 1 before the buffer is cleared:
glClearColor(1, 1, 1, 1);
glClearDepth(0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
If you are flipping the comparison, you also have to flip the depth buffer clear value with glClearDepth. Set it to 0.

OpenGL Color & Cube is not working properly

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<GL/glut.h>
double cameraAngle;
void grid_and_axes() {
// draw the three major AXES
glBegin(GL_LINES);
//X axis
glColor3f(0, 1, 0); //100% Green
glVertex3f(-150, 0, 0);
glVertex3f(150, 0, 0);
//Y axis
glColor3f(0, 0, 1); //100% Blue
glVertex3f(0, -150, 0); // intentionally extended to -150 to 150, no big deal
glVertex3f(0, 150, 0);
//Z axis
glColor3f(1, 1, 1); //100% White
glVertex3f(0, 0, -150);
glVertex3f(0, 0, 150);
glEnd();
//some gridlines along the field
int i;
glColor3f(0.5, 0.5, 0.5); //grey
glBegin(GL_LINES);
for (i = -10; i <= 10; i++) {
if (i == 0)
continue; //SKIP the MAIN axes
//lines parallel to Y-axis
glVertex3f(i * 10, -100, 0);
glVertex3f(i * 10, 100, 0);
//lines parallel to X-axis
glVertex3f(-100, i * 10, 0);
glVertex3f(100, i * 10, 0);
}
glEnd();
}
void display() {
//codes for Models, Camera
//clear the display
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0, 0, 0, 0); //color black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //clear buffers to preset values
/***************************
/ set-up camera (view) here
****************************/
//load the correct matrix -- MODEL-VIEW matrix
glMatrixMode(GL_MODELVIEW); //specify which matrix is the current matrix
//initialize the matrix
glLoadIdentity(); //replace the current matrix with the identity matrix [Diagonals have 1, others have 0]
//now give three info
//1. where is the camera (viewer)?
//2. where is the camera looking?
//3. Which direction is the camera's UP direction?
//gluLookAt(0,-150,20, 0,0,0, 0,0,1);
gluLookAt(150 * sin(cameraAngle), -150 * cos(cameraAngle), 50, 0, 0, 0, 0, 0, 1);
/*************************
/ Grid and axes Lines
**************************/
grid_and_axes();
/****************************
/ Add your objects from here
****************************/
/*glColor3f(1, 0, 0);
glutSolidCone(20, 20, 20, 20);
glColor3f(0, 0, 1);
GLUquadricObj *cyl = gluNewQuadric();
gluCylinder(cyl, 10, 10, 50, 20, 20);
glTranslatef(0, 0, 50);
glColor3f(1, 0, 0);
glutSolidCone(10, 20, 20, 20);
*/
glColor3f(1, 0, 0);
glutSolidCube(1);
I am not getting any cube here.
However if I use any transformation property like scaling or rotate then I get the desired cube like
glColor3f(1, 0, 0);
glScalef(50,5,60);
glutSolidCube(1);
what is the problem?
Another problem I am facing that color doesn't work if i don't use transformation property like above mentioned. If I write:
glColor3f(1, 0, 0);
glutSolidCone(20, 20, 20, 20);
For above codes color doesn't work; i get the default colored cone
However if I change this two lines to these 3 lines then color works perfectly:
glColor3f(1,0,0);
glTranslatef(0, 0, 50);
glutSolidCone(10,20,20,20);
then color works; what is the problem? Please help
//ADD this line in the end --- if you use double buffer (i.e. GL_DOUBLE)
glutSwapBuffers();
}
void animate() {
//codes for any changes in Models, Camera
cameraAngle += 0.001; // camera rotates by 0.001 radians per frame
//codes for any changes in Models
//MISSING SOMETHING? -- YES: add the following
glutPostRedisplay(); //this will call the display AGAIN
}
void init() {
//codes for initialization
cameraAngle = 0; //angle in radian
//clear the screen
glClearColor(0, 0, 0, 0);
/************************
/ set-up projection here
************************/
//load the PROJECTION matrix
glMatrixMode(GL_PROJECTION);
//initialize the matrix
glLoadIdentity();
/*
gluPerspective() — set up a perspective projection matrix
fovy - Specifies the field of view angle, in degrees, in the y direction.
aspect ratio - Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
zNear - Specifies the distance from the viewer to the near clipping plane (always positive).
zFar - Specifies the distance from the viewer to the far clipping plane (always positive).
*/
gluPerspective(70, 1, 0.1, 10000.0);
}
int main(int argc, char **argv) {
glutInit(&argc, argv); //initialize the GLUT library
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
/*
glutInitDisplayMode - inits display mode
GLUT_DOUBLE - allows for display on the double buffer window
GLUT_RGBA - shows color (Red, green, blue) and an alpha
GLUT_DEPTH - allows for depth buffer
*/
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGB);
glutCreateWindow("Some Title");
init(); //codes for initialization
glEnable(GL_DEPTH_TEST); //enable Depth Testing
glutDisplayFunc(display); //display callback function
glutIdleFunc(animate); //what you want to do in the idle time (when no drawing is occurring)
glutMainLoop(); //The main loop of OpenGL
return 0;
}
I am not getting any cube here.
You do get a cube. It is just that tiny speck where the axes intersect. What else would you expect to see when you draw something 1 unit big, roughly 160 units away, with a 70 degree field of view?
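For illustration, a hedged sketch (not the answerer's code): either give the cube a visible size directly, or scale a unit cube up, as the question already does:
glColor3f(1, 0, 0);
glutSolidCube(50);      // a cube with 50-unit edges is clearly visible from ~160 units away
glPushMatrix();         // or scale the unit cube, isolated with push/pop
glScalef(50, 5, 60);
glutSolidCube(1);
glPopMatrix();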
Another problem I am facing that color doesn't work if i don't use transformation property like above mentioned.
[...] I get the default colored cone.
I've no idea what you even mean by that. The "default color" would be the initial value of GL's built-in color attribute - which is (1, 1, 1, 1) - white. With the code you have set up, you will get whichever color you set last. So the only guess I can make here is that you confused yourself by not properly taking GL's state machine into account.
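As a small illustrative sketch of that state machine (not code from the question): the current color sticks until you change it, and transforms in between do not affect it.
glColor3f(1, 0, 0);             // current color is now red
glutSolidCone(20, 20, 20, 20);  // drawn red
glTranslatef(0, 0, 50);         // a transform does not touch the current color
glutSolidCube(1);               // still drawn red
glColor3f(0, 0, 1);             // only another glColor call changes it
glutSolidCube(1);               // drawn blue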
But besides all that, you should not use that code at all - it uses the fixed-function pipeline and immediate-mode drawing, features which have been deprecated for a decade now and are not supported at all by modern core profiles of OpenGL. Trying to learn that stuff in 2017 is a waste of time. And by the way:
glutMainLoop(); //The main loop of OpenGL
Nope. Just NO! OpenGL does not have a "main loop". GLUT is not OpenGL. Honestly, this is all just horrible.

OpenGL Issue with Camera(?)

I'm learning OpenGL and I have a problem with my program where I'm supposed to make the solar system.
First of all here's the code I use to setup my ModelView Matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(20, 1, 0, 0);
glTranslatef(0, -20, -60);
And then I draw the orbits using line loops and the sun is a gluSphere:
glPushMatrix();
glColor3f(1, 0.4f, 0);
glTranslatef(0, -2, 0);
gluSphere(gluNewQuadric(), 4, 30, 30);
glPopMatrix();
And here's the result:
But then, when I "zoom in" using this code:
if (key=='w')
{
glTranslatef(0, 1, 2.4);
}
else if (key=='s')
{
glTranslatef(0, -1, -2.4);
}
this happens:
the lines stay in front of the sphere. I know it's probably something dumb I'm doing, but I'm just starting to learn and this is really slowing me down.
Thanks!
You probably don't have the depth test turned on.
glEnable(GL_DEPTH_TEST);
You may also need to fiddle with the depth test parameters, though usually the default setting is sufficient.
glDepthFunc(GL_LESS);
I'd also like to take this time to strongly recommend that you stop using OpenGL's Immediate Mode and OpenGL's Fixed Function Pipeline, and learn Modern OpenGL.
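As a hedged sketch of all the pieces depth testing needs, assuming a GLUT-style setup like the other snippets in this thread: request a depth buffer when the window is created, enable the test once, and clear the depth buffer every frame.
// at window creation: ask for a depth buffer
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
// once, after the context exists
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);   // the default comparison, shown for completeness
// every frame, before drawing
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);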

Messed Up OpenGL Depth Buffer?

I have something rather strange going on at the moment with my code. I am running this on a BlackBerry Playbook and it is OpenGL ES 1.1
EDIT 4: I deleted everything I have posted to simplify my question.
I took the code and simplified it to drawing two overlapping triangles. Here is the array containing the coordinates as well as an array containing colours:
GLfloat vertices[] =
{
// front
175.0f, 200.0f, -24.0f,
225.0f, 200.0f, -24.0f,
225.0f, 250.0f, -24.0f,
// back
200.0f, 200.0f, -25.0f,
250.0f, 200.0f, -25.0f,
250.0f, 250.0f, -25.0f
};
static const GLfloat colors[] =
{
/* front */ 1.0f,0.0f,0.0f,1.0f,1.0f,0.0f,0.0f,1.0f,1.0f,0.0f,0.0f,1.0f, //Red
/* back */ 0.0f,1.0f,0.0f,1.0f,0.0f,1.0f,0.0f,1.0f,0.0f,1.0f,0.0f,1.0f //Green
};
Please note that my coordinates are 0 to 1024 in the x direction and 0 to 600 in the y direction as well as 0 to -10000 in the z direction.
Here is my setup code which reflects this:
glClearDepthf(1.0f);
glClearColor(1.0f,1.0f,1.0f,1.0f);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glShadeModel(GL_SMOOTH);
glViewport(0, 0, surface_width, surface_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, surface_width, 0, surface_height, 0, 10000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
I have depth enabling in two places as I was trying to rule out the possibility that it was supposed to be used while a certain matrix mode was chosen.
Lastly here is my render code:
void render()
{
//Typical render pass
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLES, 0 , 6);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
//Use utility code to update the screen
bbutil_swap();
}
The issue is that no matter what I do, the green triangle is always overlaid on the red one. Changing z values either way has no effect on the finished image. I cannot figure this out.
By default, depth testing is disabled. You have to enable it with glEnable(GL_DEPTH_TEST). The reason why it is working when you enable culling is because the back facing triangles are not drawn, and since a cube is a convex polyhedron, no front-facing quad will ever overlap another front-facing quad. If you try to render a second cube, however, you will see depth problems as well, unless you enable depth testing.
I finally got it to work. The issue was with EGL setup code that I used that was provided. In bbutil.c (in my case .cpp) there is some code:
if(!eglChooseConfig(egl_disp, attrib_list, &egl_conf, 1, &num_configs)) {
bbutil_terminate();
return EXIT_FAILURE;
}
(that is not all the code in the file but its the important bit)
This basically bails out if the given attribute list is not supported. Higher up in the file, attrib_list is set as follows:
EGLint attrib_list[]= { EGL_RED_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_BLUE_SIZE, 8,
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_RENDERABLE_TYPE, 0,
EGL_NONE};
There is no depth buffer specified. If you look in the EGL spec, it says that no depth buffer is the default. Bingo, that's the problem. So I just modified it to look like this:
EGLint attrib_list[]= { EGL_RED_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_BLUE_SIZE, 8,
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_RENDERABLE_TYPE, 0,
EGL_DEPTH_SIZE, 24,
EGL_NONE};
Note the EGL_DEPTH_SIZE and the 24. This sets the depth buffer to 24 bits. On the PlayBook, asking for 32 throws a segmentation fault, although a 32-bit depth buffer is usually not supported anyway. Perhaps this will help someone out there trying to figure out why the provided bbutil code is causing the funny result I described as my problem.
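As a hedged sketch (not part of the original bbutil code), you can also confirm how many depth bits the chosen config actually provides with eglGetConfigAttrib:
EGLint depthBits = 0;
if (eglGetConfigAttrib(egl_disp, egl_conf, EGL_DEPTH_SIZE, &depthBits)) {
    fprintf(stderr, "depth buffer size: %d bits\n", depthBits); // 0 means no depth buffer
}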

Rendering SDL_TTF text onto OpenGL: red square instead of text

I've been attempting to render text onto an openGL window using SDL and the SDL_TTF library on windows XP, VS2010.
Versions:
SDL version 1.2.14
SDL TTF devel 1.2.10
openGL (version is at least 2-3 years old).
I have successfully created an openGL window using SDL / SDL_image and can render lines / polygons onto it with no problems.
However, moving on to text, it appears there is some flaw in my current program; I am getting the following result when trying this code here.
For those not willing to visit pastebin, here are the crucial code segments:
void drawText(char * text) {
glLoadIdentity();
SDL_Color clrFg = {0,0,255,0}; // set colour to blue (or 'red' for BGRA)
SDL_Surface *sText = TTF_RenderUTF8_Blended( fntCourier, text, clrFg );
GLuint * texture = create_texture(sText);
glBindTexture(GL_TEXTURE_2D, *texture);
// draw a polygon and map the texture to it, may be the source of error
glBegin(GL_QUADS); {
glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
glTexCoord2i(1, 0); glVertex3f(0 + sText->w, 0, 0);
glTexCoord2i(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
glTexCoord2i(0, 1); glVertex3f(0, 0 + sText->h, 0);
} glEnd();
// free the surface and texture, removing this code has no effect
SDL_FreeSurface( sText );
glDeleteTextures( 1, texture );
}
segment 2:
// create GLTexture out of SDL_Surface
GLuint * create_texture(SDL_Surface *surface) {
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// The SDL_Surface appears to have BGR_A formatting, however this ends up with a
// white rectangle no matter which colour i set in the previous code.
int Mode = GL_RGB;
if(surface->format->BytesPerPixel == 4) {
Mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, Mode, surface->w, surface->h, 0, Mode,
GL_UNSIGNED_BYTE, surface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return &texture;
}
Is there an obvious bit of code I am missing?
Thank you for any help on this subject.
I've been trying to learn openGL and SDL for 3 days now, so please forgive any misinformation on my part.
EDIT:
I notice that using
TTF_RenderUTF8_Shaded
TTF_RenderUTF8_Solid
throw a null pointer exception, meaning (I suspect) that there is an error within the actual text rendering function. I do not know how this relates to TTF_RenderUTF8_Blended returning a red square, but I suspect all troubles hinge on this.
I think the problem is in the glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) calls, which must be made every time the text is painted on the screen. And maybe the color conversion between the SDL surface and the GL texture is not right either.
I have combined create_texture and drawText into a single function that displays the text properly. That's the code:
void drawText(char * text, TTF_Font* tmpfont) {
SDL_Rect area;
SDL_Color clrFg = {0,0,255,0};
SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( tmpfont, text, clrFg ));
area.x = 0;area.y = 0;area.w = sText->w;area.h = sText->h;
SDL_Surface* temp = SDL_CreateRGBSurface(SDL_HWSURFACE|SDL_SRCALPHA, sText->w, sText->h, 32, 0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000); // 32-bit RGBA masks
SDL_BlitSurface(sText, &area, temp, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sText->w, sText->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, temp->pixels);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS); {
glTexCoord2d(0, 0); glVertex3f(0, 0, 0);
glTexCoord2d(1, 0); glVertex3f(0 + sText->w, 0, 0);
glTexCoord2d(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
glTexCoord2d(0, 1); glVertex3f(0, 0 + sText->h, 0);
} glEnd();
glDisable(GL_TEXTURE_2D);
SDL_FreeSurface( sText );
SDL_FreeSurface( temp );
}
screenshot
I'm initializing OpenGL as follows:
int Init(){
glClearColor( 0.1, 0.2, 0.2, 1);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho( 0, 600, 300, 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
if( glGetError() != GL_NO_ERROR ){
return false;
}
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_ALPHA);
}
I think you should just add glEnable(GL_BLEND), because the code for the text surface says TTF_RenderUTF8_Blended( fntCourier, text, clrFg ) and you have to enable the blending abilities of OpenGL.
EDIT
Okay, I finally took the time to put your code through a compiler. Most importantly, I compiled with -Werror so that warnings turn into errors:
GLuint * create_texture(SDL_Surface *surface) {
GLuint texture = 0;
/*...*/
return &texture;
}
I didn't see it at first, because this is C coder's 101 and quite unexpected: you must not return pointers to local variables! Once the function goes out of scope, the returned pointer points to nonsense. Why return a pointer at all? Just return the integer:
GLuint create_texture(SDL_Surface *surface) {
GLuint texture = 0;
/*...*/
return texture;
}
Because of this, you're also not going to delete the texture afterwards. You upload it to OpenGL, but then lose the reference to it.
Your code also misses a glEnable(GL_TEXTURE_2D); that's why you can't see any effect of the texture. However, your use of textures is suboptimal. The way you did it, you recreate a whole new texture each time you're about to draw that text. If that happens in an animation loop, you'll
1. run out of texture memory rather soon
2. slow it down significantly
(1) can be addressed by not generating a new texture name on each redraw.
(2) can be addressed by uploading new texture data only when the text changes, and by using glTexSubImage2D instead of glTexImage2D (of course, if the dimensions of the texture change, it must be glTexImage2D); see the sketch below.
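A hedged sketch of that texture-reuse idea (the names g_textTexture, g_texW and g_texH are illustrative, not from the original code): keep one texture object alive and only reallocate storage when the text surface changes size.
static GLuint g_textTexture = 0;
static int g_texW = 0, g_texH = 0;
void upload_text_surface(SDL_Surface *surface) {
    if (g_textTexture == 0)
        glGenTextures(1, &g_textTexture);           // create the texture name once
    glBindTexture(GL_TEXTURE_2D, g_textTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    if (surface->w != g_texW || surface->h != g_texH) {
        // first upload or size changed: (re)allocate storage
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
        g_texW = surface->w;
        g_texH = surface->h;
    } else {
        // same size: just replace the pixel data, no reallocation
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, surface->w, surface->h,
                        GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
    }
}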
EDIT, found another possible issue, but first fix your pointer issue.
You should make sure that you're using the GL_REPLACE or GL_MODULATE texture environment mode. With GL_DECAL or GL_BLEND you end up with red text on a red quad.
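A one-line sketch of setting that explicitly (GL_MODULATE is also the fixed-function default):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);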
The function in my previous post was leaking memory and the program was crashing after some time...
I improved this by separating the texture loading and displaying:
The first function must be called before the SDL loop. It loads a text string into memory:
Every string loaded must have a different txtNum parameter.
GLuint texture[100];
SDL_Rect area[100];
void Load_string(char * text, SDL_Color clr, int txtNum, const char* file, int ptsize){
TTF_Font* tmpfont;
tmpfont = TTF_OpenFont(file, ptsize);
SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid( tmpfont, text, clr ));
area[txtNum].x = 0;area[txtNum].y = 0;area[txtNum].w = sText->w;area[txtNum].h = sText->h;
glGenTextures(1, &texture[txtNum]);
glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sText->w, sText->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, sText->pixels);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
SDL_FreeSurface( sText );
TTF_CloseFont(tmpfont);
}
The second one displays the string, must be called in the SDL loop:
void drawText(float coords[3], int txtNum) {
glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS); {
glTexCoord2f(0, 0); glVertex3f(coords[0], coords[1], coords[2]);
glTexCoord2f(1, 0); glVertex3f(coords[0] + area[txtNum].w, coords[1], coords[2]);
glTexCoord2f(1, 1); glVertex3f(coords[0] + area[txtNum].w, coords[1] + area[txtNum].h, coords[2]);
glTexCoord2f(0, 1); glVertex3f(coords[0], coords[1] + area[txtNum].h, coords[2]);
} glEnd();
glDisable(GL_TEXTURE_2D);
}
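For completeness, a hedged usage sketch of the two functions above; the font path, color and coordinates are illustrative:
// before the SDL loop
char msg[] = "Hello, world";
SDL_Color white = {255, 255, 255, 0};
Load_string(msg, white, 0, "FreeSans.ttf", 24);   // load into slot 0
// inside the SDL loop, after clearing the screen
float pos[3] = {10.0f, 10.0f, 0.0f};
drawText(pos, 0);                                  // draw slot 0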