Displaying text with SDL_ttf, SDL2 and OpenGL - C++

I'm trying to display text using SDL2, SDL_ttf and OpenGL. A weird texture appears in the window: it has the right size and the right position, but you can't see any letters.
I've tried using SDL_CreateRGBSurface(), thinking it might be a cleaner way to retrieve the pixels, but that didn't work either. My surface is never NULL and always passes the validation test.
I call the get_font() function before the while() loop, and the displayMoney() function inside it, right after calling glClear(GL_COLOR_BUFFER_BIT).
SDL, TTF and OpenGL are initialized properly and I have created an OpenGL context. Here's the problematic code:
SDL_Surface* get_font()
{
    TTF_Font *font;
    font = TTF_OpenFont("lib/ariali.ttf", 35);
    if (!font) cout << "problem loading font" << endl;
    SDL_Color white = {150,200,200};
    SDL_Color black = {0,100,0};
    SDL_Surface* text = TTF_RenderText_Shaded(font, "MO", white, black);
    if (!text) cout << "text not loaded" << endl;
    return text;
}
void displayMoney(SDL_Surface* surface)
{
    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
    glEnable(GL_TEXTURE_2D);
    GLuint TextureID = 0;
    glGenTextures(1, &TextureID);
    glBindTexture(GL_TEXTURE_2D, TextureID);
    int Mode = GL_RGB;
    if(surface->format->BytesPerPixel == 4) {
        Mode = GL_RGBA;
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, Mode, 128, 64, 0, Mode, GL_UNSIGNED_BYTE, surface->pixels);
    glPushMatrix();
    glTranslated(100,100,0);
    glScalef(100,100,0);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 1); glVertex2f(-0.5f, -0.5f);
    glTexCoord2f(1, 1); glVertex2f(0.5f, -0.5f);
    glTexCoord2f(1, 0); glVertex2f(0.5f, 0.5f);
    glTexCoord2f(0, 0); glVertex2f(-0.5f, 0.5f);
    glEnd();
    glPopMatrix();
    glBindTexture(GL_TEXTURE_2D, 0);
}
#include <SDL2/SDL.h>
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace std;
#include <GL/gl.h>
#include <GL/glu.h>
#include <stb_image/stb_image.h>
#include <SDL2_ttf/SDL_ttf.h>
#include "init.h"

int main(int argc, char **argv) {
    SDL_Window* window = init();
    if (window == nullptr) {
        cout << "Error window init" << endl;
    }
    if (TTF_Init() < 0) {
        cout << "Error TTF init" << endl;
    }
    SDL_Surface* text = get_font();
    while (loop) {
        glClear(GL_COLOR_BUFFER_BIT);
        displayMoney(text);
        ...
        SDL_GL_SwapWindow(window);
There aren't any error messages. Also, instead of using my surface, I tested my code with an image loaded via stbi_load, and it worked perfectly well. The issue therefore seems to be with the SDL part.
EDIT: I've recently found out that the surface I get from my text has the following properties: Rmask = Gmask = Bmask = Amask = 0. This is obviously a problem, but I have no idea how to fix it...

As stated in the SDL_ttf documentation at https://www.libsdl.org/projects/SDL_ttf/docs/SDL_ttf.html#SEC42 :
Shaded: Create an 8-bit palettized surface and render the given text at high quality with the given font and colors. The 0 pixel value is background, while other pixels have varying degrees of the foreground color from the background color.
So your resulting surface is an indexed surface with an 8-bit palette, not RGBA (which is also indicated by the missing colour masks in the surface format, as you've noted). An RGBA surface with an alpha channel is produced by e.g. TTF_RenderText_Blended; alternatively, use a different texture format or perform a format conversion. You also need to pass the surface width/height to glTexImage2D instead of the 128/64 constants, as the surface size may vary.
There are also several resource leaks in the question's code: you create a new texture on every draw and never delete it (which is also unnecessary if the text isn't changing), and you never close the font with TTF_CloseFont.
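A minimal sketch of those fixes, keeping the asker's font path and leaving the rest of displayMoney() unchanged. Note the channel order of a Blended surface may still require GL_BGRA or an SDL_ConvertSurfaceFormat pass on some platforms, so treat this as a starting point, not a guaranteed drop-in:

SDL_Surface* get_font()
{
    TTF_Font* font = TTF_OpenFont("lib/ariali.ttf", 35);
    if (!font) { cout << "problem loading font" << endl; return nullptr; }
    SDL_Color white = {150, 200, 200, 255};
    // Blended rendering yields a 32-bit surface with proper RGBA masks,
    // unlike Shaded, which returns an 8-bit palettized surface.
    SDL_Surface* text = TTF_RenderText_Blended(font, "MO", white);
    if (!text) cout << "text not loaded" << endl;
    TTF_CloseFont(font); // don't leak the font
    return text;
}

// ...and in displayMoney(), size the texture from the surface itself:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);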

Related

Oculus 0.8 SDK Black Screen

I'm trying to make a very basic example of rendering to the Oculus using their SDK v0.8. All I'm trying to do is render a solid color to both eyes. When I run this, everything appears to initialize correctly. The Oculus shows the health warning message, but all I see is a black screen once the health warning message goes away. What am I doing wrong here?
#define GLEW_STATIC
#include <GL/glew.h>
#define OVR_OS_WIN32
#include <OVR_CAPI_GL.h>
#include <SDL.h>
#include <iostream>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);
    //Initialize GLEW
    glewExperimental = GL_TRUE;
    glewInit();
    // Initialize Oculus context
    ovrResult result = ovr_Initialize(nullptr);
    if (OVR_FAILURE(result))
    {
        std::cout << "ERROR: Failed to initialize libOVR" << std::endl;
        SDL_Quit();
        return -1;
    }
    // Connect to the Oculus headset
    ovrSession hmd;
    ovrGraphicsLuid luid;
    result = ovr_Create(&hmd, &luid);
    if (OVR_FAILURE(result))
    {
        std::cout << "ERROR: Oculus Rift not detected" << std::endl;
        SDL_Quit();
        return 0;
    }
    ovrHmdDesc desc = ovr_GetHmdDesc(hmd);
    std::cout << "Found " << desc.ProductName << "connected Rift device" << std::endl;
    ovrSizei recommenedTex0Size = ovr_GetFovTextureSize(hmd, ovrEyeType(0), desc.DefaultEyeFov[0], 1.0f);
    ovrSizei bufferSize;
    bufferSize.w = recommenedTex0Size.w;
    bufferSize.h = recommenedTex0Size.h;
    std::cout << "Buffer Size: " << bufferSize.w << ", " << bufferSize.h << std::endl;
    // Generate FBO for oculus
    GLuint oculusFbo = 0;
    glGenFramebuffers(1, &oculusFbo);
    // Create swap texture
    ovrSwapTextureSet* pTextureSet = nullptr;
    if (ovr_CreateSwapTextureSetGL(hmd, GL_SRGB8_ALPHA8, bufferSize.w, bufferSize.h, &pTextureSet) == ovrSuccess)
    {
        ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
        glBindTexture(GL_TEXTURE_2D, tex->OGL.TexId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }
    // Create ovrLayerHeader
    ovrEyeRenderDesc eyeRenderDesc[2];
    eyeRenderDesc[0] = ovr_GetRenderDesc(hmd, ovrEye_Left, desc.DefaultEyeFov[0]);
    eyeRenderDesc[1] = ovr_GetRenderDesc(hmd, ovrEye_Right, desc.DefaultEyeFov[1]);
    ovrLayerEyeFov layer;
    layer.Header.Type = ovrLayerType_EyeFov;
    layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft | ovrLayerFlag_HeadLocked;
    layer.ColorTexture[0] = pTextureSet;
    layer.ColorTexture[1] = pTextureSet;
    layer.Fov[0] = eyeRenderDesc[0].Fov;
    layer.Fov[1] = eyeRenderDesc[1].Fov;
    ovrVector2i posVec;
    posVec.x = 0;
    posVec.y = 0;
    ovrSizei sizeVec;
    sizeVec.w = bufferSize.w;
    sizeVec.h = bufferSize.h;
    ovrRecti rec;
    rec.Pos = posVec;
    rec.Size = sizeVec;
    layer.Viewport[0] = rec;
    layer.Viewport[1] = rec;
    ovrLayerHeader* layers = &layer.Header;
    SDL_Event windowEvent;
    while (true)
    {
        if (SDL_PollEvent(&windowEvent))
        {
            if (windowEvent.type == SDL_QUIT) break;
        }
        ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
        glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
        glViewport(0, 0, bufferSize.w, bufferSize.h);
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);
        SDL_GL_SwapWindow(window);
    }
    SDL_GL_DeleteContext(context);
    SDL_Quit();
    return 0;
}
There are a number of problems here
Not initializing ovrLayerEyeFov.RenderPose
Not using ovrSwapTextureSet correctly
Useless calls to SDL_GL_SwapWindow will cause stuttering
Possible undefined behavior when reading the texture while it's still bound for drawing
Not initializing ovrLayerEyeFov.RenderPose
Your main problem is that you're not setting the RenderPose member of the ovrLayerEyeFov structure. This member tells the SDK what pose you rendered at, and therefore how it should apply timewarp based on the current head pose (which might have changed since you rendered). By not setting this value you're basically giving the SDK a random head pose, which is almost certainly not a valid head pose.
Additionally, ovrLayerFlag_HeadLocked isn't needed for your layer type. It causes the Oculus to display the resulting image in a fixed position relative to your head. It might do what you want, but only if you properly initialize the layer.RenderPose members with the correct values (I'm not sure what those would be in the case of ovrLayerEyeFov, as I've only used the flag in combination with ovrLayerQuad).
What you should do is add the following right after the layer declaration to properly initialize it:
memset(&layer, 0, sizeof(ovrLayerEyeFov));
Then, inside your render loop you should add the following right after the check for a quit event:
ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
layer.RenderPose[0] = tracking.HeadPose.ThePose;
layer.RenderPose[1] = tracking.HeadPose.ThePose;
This tells the SDK that this image was rendered from the point of view where the head currently is.
Not using ovrSwapTextureSet correctly
Another problem in the code is that you're incorrectly using the texture set. The documentation specifies that when using the texture set, you need to use the texture pointed to by ovrSwapTextureSet.CurrentIndex:
ovrGLTexture* tex = (ovrGLTexture*)(&(pTextureSet->Textures[pTextureSet->CurrentIndex]));
...and then after each call to ovr_SubmitFrame you need to increment ovrSwapTextureSet.CurrentIndex then mod the value by ovrSwapTextureSet.TextureCount like so
pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
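Putting both fixes together, the per-frame sequence might look like this sketch (using only the calls already shown in the question and this answer; running stands in for your quit-event handling):

while (running)
{
    // Tell the SDK which head pose this frame is rendered from
    ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
    layer.RenderPose[0] = tracking.HeadPose.ThePose;
    layer.RenderPose[1] = tracking.HeadPose.ThePose;
    // Render into the texture the swap set currently designates
    ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[pTextureSet->CurrentIndex];
    glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
    glViewport(0, 0, bufferSize.w, bufferSize.h);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // detach before submitting (see below)
    ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);
    // Advance to the next texture in the set
    pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
}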
Useless calls to SDL_GL_SwapWindow will cause stuttering
The SDL_GL_SwapWindow(window); call is unnecessary and pointless, since you haven't drawn anything to the default framebuffer. Once you move away from drawing a solid color, this call will end up causing judder, since it will block until v-sync (typically at 60 Hz), causing you to sometimes miss the refresh of the Oculus display. Right now this is invisible because your scene is just a solid color, but later on, when you're rendering objects in 3D, it will cause intolerable judder.
You can use SDL_GL_SwapWindow if you
Ensure v-sync is disabled
Have a mirror texture available to draw to the window. (See the documentation for ovr_CreateMirrorTextureGL)
Possible framebuffer issues
I'm less certain about this one being a serious problem, but I would also suggest unbinding the framebuffer and detaching the Oculus-provided texture before sending it to ovr_SubmitFrame(), as I'm not certain that the behavior is well defined when reading from a texture attached to a framebuffer that is currently bound for drawing. It seems to have no impact on my local system, but undefined doesn't mean it doesn't work; it just means you can't rely on it to work.
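For example, something like this just before the submit (a sketch; the same idea appears in the loop above):

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0); // detach the texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);                                              // unbind the FBO
ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);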
I've updated the sample code and put it here. As a bonus, I've modified it so that it draws one color on the left eye and a different color on the right eye, as well as setting up the buffer so that each eye renders to its own half of it.

Rendering text - FreeType blank screen

I am using FreeType, and the only thing I have left to do in order to render text is to convert an FT_Bitmap to something that can be rendered with OpenGL. Can someone explain how to do this? I am using GLFW. The way I have tried it just gives a blank screen. Here is the code that I am using:
#include <exception>
#include <iostream>
#include <string>
#include <glew.h>
#include <GL/glfw.h>
#include <iterator>
#include "../include/TextRenderer.h"
#include <ft2build.h>
#include FT_FREETYPE_H
#include <stdexcept>
#include <freetype/ftglyph.h>

using std::runtime_error;
using std::cout;

TextRenderer::TextRenderer(int x, int y, FT_Face Face, std::string s)
{
    FT_Set_Char_Size(
        Face,   /* handle to face object           */
        0,      /* char_width in 1/64th of points  */
        16*64,  /* char_height in 1/64th of points */
        0,      /* horizontal device resolution    */
        0 );    /* vertical device resolution      */
    slot = Face->glyph;
    text = s;
    setsx(x);
    setsy(y);
    penX = x;
    penY = y;
    face = Face;
    //shaders
    GLuint v = glCreateShader(GL_VERTEX_SHADER);
    const char* vs = "void main(){ gl_Position = ftransform();}";
    glShaderSource(v,1,&vs,NULL);
    glCompileShader(v);
    GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
    const char* fs = "uniform sampler2D texture1; void main() { gl_FragColor = texture2D(texture1, gl_TexCoord[0].st); //And that is all we need}";
    glShaderSource(f,1,&fs,NULL);
    glCompileShader(f);
    Program = glCreateProgram();
    glAttachShader(Program,v);
    glAttachShader(Program,f);
    glLinkProgram(Program);
}
void TextRenderer::render()
{
    glUseProgram(Program);
    FT_UInt glyph_index;
    for ( int n = 0; n < text.size(); n++ )
    {
        /* retrieve glyph index from character code */
        glyph_index = FT_Get_Char_Index( face, text[n] );
        /* load glyph image into the slot (erase previous one) */
        error = FT_Load_Glyph( face, glyph_index, FT_LOAD_RENDER );
        draw(&face->glyph->bitmap, penX + slot->bitmap_left, penY - slot->bitmap_top );
        penX += *(&face->glyph->bitmap.width)+3;
        penY += slot->advance.y >> 6; /* not useful for now */
    }
}

void TextRenderer::draw(FT_Bitmap * bitmap, float x, float y)
{
    GLuint texture [0];
    glGenTextures(1, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, bitmap->width, bitmap->rows, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
    // int loc = glGetUniformLocation(Program, "texture1");
    // glUniform1i(loc, 0);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glEnable(GL_TEXTURE_2D);
    int height = bitmap->rows/10;
    int width = bitmap->width/10;
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0);
    glVertex2f(x, y);
    glTexCoord2f(1.0, 0.0);
    glVertex2f(x+width, y);
    glTexCoord2f(1.0, 1.0);
    glVertex2f(x+width, y+height);
    glTexCoord2f(0.0, 1.0);
    glVertex2f(x, y+height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
What I am using to initialize the text renderer:
FT_Library library;
FT_Face arial;
FT_Error error = FT_Init_FreeType( &library );
if ( error )
{
    throw std::runtime_error("Freetype failed");
}
error = FT_New_Face( library,
                     "C:/Windows/Fonts/Arial.ttf",
                     0,
                     &arial );
if ( error == FT_Err_Unknown_File_Format )
{
    throw std::runtime_error("font format not available");
}
else if ( error )
{
    throw std::runtime_error("Freetype font failed");
}
TextRenderer t(5,10,arial,"Hello");
t.render();
There are a lot of problems in your program that result from not understanding what each call that you make to OpenGL or FreeType does. You should really read the documentation for the libraries instead of stacking tutorials on top of each other.
Let's do this one by one
Fragment Shader
const char* fs = "uniform sampler2D texture1;
void main() {
    gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
    //And that is all we need}";
This shader doesn't compile (you should really check whether it compiles with glGetShaderiv and whether it links with glGetProgramiv). If you indent it correctly, you'll see that you've commented out the final } because it sits on the same line, after the //. So you should remove the comment, or end the comment with a \n.
Also, for newer versions of OpenGL using gl_TexCoord is deprecated but it works if you use a compatibility profile.
Vertex Shader
Just like in the fragment shader, there's deprecated functionality used, namely ftransform().
But the bigger problem is that you use gl_TexCoord[0] in the fragment shader without passing it through from the vertex shader. So, you need to add the line gl_TexCoord[0]=gl_MultiTexCoord0; in your vertex shader. (As you might have guessed that is also deprecated)
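With both fixes applied, the shader sources might look like this (still the deprecated fixed-function built-ins, so a compatibility profile is assumed):

const char* vs =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = ftransform();\n"
    "}";
const char* fs =
    "uniform sampler2D texture1;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(texture1, gl_TexCoord[0].st); // and that is all we need\n"
    "}";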
Texture passing
You are passing a pointer to bitmap to glTexImage2D, but bitmap is of type FT_Bitmap *; you need to pass bitmap->buffer instead.
You should also not generate a new texture for each letter every frame (especially not if you're never deleting it). Call glGenTextures only once (you could put it in your TextRenderer constructor, since you put all the other initialization stuff there).
Then there's the GLuint texture [0]; which should give you a compiler error. If you really need an array with one element then the syntax is GLuint texture [1];
So your final call would look something like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bitmap->width, bitmap->rows, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, bitmap->buffer);
Miscellaneous
int height=bitmap->rows/10;
int width=bitmap->width/10;
This is an integer division, and if your values for bitmap->width get smaller than 10 you'll get 0 as the result, which makes the quad you're trying to draw invisible (height or width of 0). If you have trouble getting the objects into view, just translate/scale them into view. The following is also deprecated, but if you keep using the other fixed-function calls it gives your window a coordinate system from [-100,-100] to [100,100] (lower-left to upper-right):
glLoadIdentity();
glScalef(0.01f, 0.01f, 1.0f);
You're also missing the coordinate conversion from FreeType to OpenGL: FreeType uses a coordinate system which starts at [0,0] in the top-left corner, where x is the offset to the right and y the offset to the bottom. So if you use these coordinates in OpenGL unchanged, everything will be upside-down.
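One simple way to handle that (a sketch, assuming the deprecated matrix stack you're already using; winW/winH are illustrative window dimensions) is to pick a projection whose origin is also the top-left corner, so FreeType's pen coordinates can be used directly:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, winW, winH, 0, -1, 1); // y grows downward, matching FreeType
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();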
If you do all that your result should look something like this (grey background to highlight where the polygons begin and end):
As for your general approach, rendering letter by letter while re-using and overwriting a single texture is inefficient. It would be better to allocate one larger texture and use glTexSubImage2D to write the glyphs into it. If FreeType re-rendering letters turns out to be a bottleneck, you could also write all the symbols you need into one texture at the beginning (for example the whole ASCII range) and then use that texture as a texture atlas, as in the sketch below.
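A rough sketch of the atlas idea, assuming an 8-bit FreeType bitmap and the same GL_RGB/GL_LUMINANCE combination as the call above; atlasW, atlasH and penX are illustrative, and a real implementation would also record each glyph's rectangle to use as texture coordinates later:

// Allocate the atlas once (e.g. in the constructor); NULL means storage only, no data yet
GLuint atlas;
glGenTextures(1, &atlas);
glBindTexture(GL_TEXTURE_2D, atlas);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, atlasW, atlasH, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);

// For each glyph FreeType renders, copy its bitmap into the atlas
glTexSubImage2D(GL_TEXTURE_2D, 0, penX, 0,
                bitmap->width, bitmap->rows,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, bitmap->buffer);
penX += bitmap->width; // real code would also pad glyphs and wrap rows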
My general advice would also be: if you don't really want to learn OpenGL, but just want some cross-platform rendering without bothering with the low-level stuff, consider using a rendering framework instead.

OpenGL renders texture all white

I'm attempting to render a .png image as a texture. However, all that is being rendered is a white square.
I give my texture a unique int ID called texID and read the pixel data into a buffer 'image' (declared in the .h file). I load my pixel buffer, do all of my OpenGL stuff, and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 when its constructor is called, therefore I doubt it is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"
Texture::Texture(int width, int height, string filename)
{
const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
printf(fnPtr);
w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
h = height;//UPDATE, these MUST be P.O.T.
unsigned error = lodepng::decode(image,w,h,fnPtr);//lodepng's decode function will load the pixel data into image vector
//display any errors with the texture
if(error)
{
cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
}
for(int i = 0; i<image.size(); i++)
{
printf("%i,", image.at(i));
}
printf("\nImage size is %i", image.size());
//image now contains our pixeldata. All ready for OpenGL to do its thing
//let's get this texture up in the video memory
texGLInit();
}
void Texture::texGLInit()
{
//WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
//i believe this is why it's been rendering white
glGenTextures(1, &textures);
printf("\ntexture = %u", textures);
glBindTexture(GL_TEXTURE_2D, textures);//evrything we're about to do is about this texture
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
//glDisable(GL_COLOR_MATERIAL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,w,h,0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
//we COULD free the image vectors memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
glEnable(GL_TEXTURE_2D);
printf("\nDrawing block at (%f, %f)",centerPoint.x, centerPoint.y);
glBindTexture(GL_TEXTURE_2D, textures);//bind the texture
//create a quick vertex array for the primitive we're going to bind the texture to
printf("TexID = %u",textures);
GLfloat vArray[8] =
{
centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2),//bottom left i0
centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top left i1
centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top right i2
centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)//bottom right i3
};
//create a quick texture array (we COULD create this on the heap rather than creating/destoying every cycle)
GLfloat tArray[8] =
{
0.0f,0.0f, //0
0.0f,1.0f, //1
1.0f,1.0f, //2
1.0f,0.0f //3
};
//and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
GLubyte iArray[6] =
{
0,1,2,
0,2,3
};
//Activate arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Give openGL a pointer to our vArray and tArray
glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
//Draw it all
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
//glDrawArrays(GL_TRIANGLES,0,6);
//Disable the vertex arrays
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
//done!
/*glBegin(GL_QUADS);
glTexCoord2f(0.0f,0.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glTexCoord2f(0.0f,1.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,1.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,0.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class' init, where I do a bit more OGL setup before this.
void init(void)
{
    printf("\n......Hello Guy. \n....\nInitilising");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0,XSize,0,YSize);
    glEnable(GL_TEXTURE_2D);
    myBlock = new Block(0,0,offset);
    glClearColor(0,0.4,0.7,1);
    glLineWidth(2); // Width of the drawing line
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    printf("\nInitialisation Complete");
}
Update: adding in the main function where I first setup my OpenGL window.
int main(int argc, char** argv)
{
    glutInit(&argc, argv);                      // GLUT Initialization
    glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE); // Initializing the Display mode
    glutInitWindowSize(800,600);                // Define the window size
    glutCreateWindow("Gem Miners");             // Create the window, with caption.
    printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
    init();                                     // All OpenGL initialization
    //-- Callback functions ---------------------
    glutDisplayFunc(display);
    glutKeyboardFunc(mykey);
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialUpKeys);
    //glutMouseFunc(mymouse);
    glutMainLoop();                             // Loop waiting for event
}
Here's the usual checklist for whenever textures come out white:
Is an OpenGL context created and bound to the current thread when attempting to load the texture?
Has a texture ID been allocated with glGenTextures?
Are the format and internal format parameters to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
YES: Supply all mipmap layers – optimally set glTexParameteri GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
NO: Turn off mipmap filtering by setting glTexParameteri GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
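A minimal setup that passes this checklist without mipmaps might look like the following sketch; w, h and the pixel data are assumed to come from your loader (note that with a std::vector like the image member above, the data pointer is &image[0] or image.data(), not &image):

GLuint tex = 0;
glGenTextures(1, &tex);            // must yield a non-zero ID on a current context
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // mipmap filtering off
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, &image[0]); // valid format/internal-format pair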

Rendering SDL_TTF text onto OpenGL - red square instead of text

I've been attempting to render text onto an OpenGL window using SDL and the SDL_ttf library, on Windows XP with VS2010.
Versions:
SDL version 1.2.14
SDL TTF devel 1.2.10
openGL (version is at least 2-3 years old).
I have successfully created an openGL window using SDL / SDL_image and can render lines / polygons onto it with no problems.
However, moving on to text, it appears that there is some flaw in my current program; I am getting the following result when trying this code here.
For those not willing to follow the pastebin link, here are the crucial code segments:
void drawText(char * text) {
    glLoadIdentity();
    SDL_Color clrFg = {0,0,255,0}; // set colour to blue (or 'red' for BGRA)
    SDL_Surface *sText = TTF_RenderUTF8_Blended( fntCourier, text, clrFg );
    GLuint * texture = create_texture(sText);
    glBindTexture(GL_TEXTURE_2D, *texture);
    // draw a polygon and map the texture to it, may be the source of error
    glBegin(GL_QUADS); {
        glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
        glTexCoord2i(1, 0); glVertex3f(0 + sText->w, 0, 0);
        glTexCoord2i(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
        glTexCoord2i(0, 1); glVertex3f(0, 0 + sText->h, 0);
    } glEnd();
    // free the surface and texture, removing this code has no effect
    SDL_FreeSurface( sText );
    glDeleteTextures( 1, texture );
}
segment 2:
// create GLTexture out of SDL_Surface
GLuint * create_texture(SDL_Surface *surface) {
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// The SDL_Surface appears to have BGR_A formatting, however this ends up with a
// white rectangle no matter which colour i set in the previous code.
int Mode = GL_RGB;
if(surface->format->BytesPerPixel == 4) {
Mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, Mode, surface->w, surface->h, 0, Mode,
GL_UNSIGNED_BYTE, surface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return &texture;
}
Is there an obvious bit of code I am missing?
Thank you for any help on this subject.
I've been trying to learn openGL and SDL for 3 days now, so please forgive any misinformation on my part.
EDIT:
I notice that using
TTF_RenderUTF8_Shaded
TTF_RenderUTF8_Solid
throws a null pointer exception, meaning (I suspect) that there is an error within the actual text rendering function. I do not know how this relates to TTF_RenderUTF8_Blended returning a red square, but I suspect all troubles hinge on this.
I think the problem is in the glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) functions, which must be called every time the text is painted on the screen. And maybe the color conversion between the SDL and GL surface is also not right.
I have combined create_texture and drawText into a single function that displays the text properly. Here's the code:
void drawText(char * text, TTF_Font* tmpfont) {
    SDL_Rect area;
    SDL_Color clrFg = {0,0,255,0};
    SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( tmpfont, text, clrFg ));
    area.x = 0; area.y = 0; area.w = sText->w; area.h = sText->h;
    SDL_Surface* temp = SDL_CreateRGBSurface(SDL_HWSURFACE|SDL_SRCALPHA, sText->w, sText->h, 32,
                                             0x000000ff, 0x0000ff00, 0x00ff0000, 0x000000ff);
    SDL_BlitSurface(sText, &area, temp, NULL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sText->w, sText->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, temp->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS); {
        glTexCoord2d(0, 0); glVertex3f(0, 0, 0);
        glTexCoord2d(1, 0); glVertex3f(0 + sText->w, 0, 0);
        glTexCoord2d(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
        glTexCoord2d(0, 1); glVertex3f(0, 0 + sText->h, 0);
    } glEnd();
    glDisable(GL_TEXTURE_2D);
    SDL_FreeSurface( sText );
    SDL_FreeSurface( temp );
}
I'm initializing OpenGL as follows:
int Init(){
    glClearColor( 0.1, 0.2, 0.2, 1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho( 0, 600, 300, 0, -1, 1 );
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if( glGetError() != GL_NO_ERROR ){
        return false;
    }
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_ALPHA);
}
I think you should just add glEnable(GL_BLEND), because the code for the text surface says TTF_RenderUTF8_Blended( fntCourier, text, clrFg ) and you have to enable the blending abilities of OpenGL.
EDIT
Okay, I finally took the time to put your code through a compiler. Most importantly, I compiled with -Werror so that warnings turn into errors:
GLuint * create_texture(SDL_Surface *surface) {
    GLuint texture = 0;
    /*...*/
    return &texture;
}
I didn't see it at first, because it's something like C coder's 101 and quite unexpected: you must not return pointers to local variables! Once the function goes out of scope, the returned pointer points to nonsense. Why do you return a pointer at all? Just return an integer:
GLuint create_texture(SDL_Surface *surface) {
    GLuint texture = 0;
    /*...*/
    return texture;
}
Because of this you're also never deleting the texture afterwards. You upload it to OpenGL, but then lose the reference to it.
Your code also misses a glEnable(GL_TEXTURE_2D); that's why you can't see any effect of the texture. However, your use of textures is suboptimal. The way you did it, you recreate a whole new texture each time you're about to draw the text. If that happens in an animation loop, you'll
(1) run out of texture memory rather soon, and
(2) slow things down significantly.
(1) can be addressed by not generating a new texture name on each redraw.
(2) can be addressed by uploading new texture data only when the text changes, and by using glTexSubImage2D instead of glTexImage2D (of course, if the dimensions of the texture change, it must be glTexImage2D), as in the sketch below.
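A hedged sketch of that caching pattern; textTexture, lastText and sText are illustrative names, and glTexImage2D is assumed to have allocated storage for the largest expected size beforehand:

// once, at startup:
GLuint textTexture;
glGenTextures(1, &textTexture);

// each frame:
glBindTexture(GL_TEXTURE_2D, textTexture);
if (text != lastText) { // re-upload only when the text actually changed
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sText->w, sText->h,
                    GL_RGBA, GL_UNSIGNED_BYTE, sText->pixels);
    lastText = text;
}
// ...then draw the textured quad as before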
EDIT, found another possible issue, but first fix your pointer issue.
You should make sure that you're using the GL_REPLACE or GL_MODULATE texture environment mode. If using GL_DECAL or GL_BLEND, you end up with red text on a red quad.
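For example, one line in your setup is enough (glTexEnvi with GL_MODULATE shown here; GL_REPLACE works the same way):

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);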
The function in my previous post was leaking memory, and the program was crashing after some time...
I improved this by separating the texture loading from the displaying:
The first function must be called before the SDL loop. It loads a text string into memory.
Every string loaded must have a different txtNum parameter.
GLuint texture[100];
SDL_Rect area[100];

void Load_string(char * text, SDL_Color clr, int txtNum, const char* file, int ptsize){
    TTF_Font* tmpfont;
    tmpfont = TTF_OpenFont(file, ptsize);
    SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid( tmpfont, text, clr ));
    area[txtNum].x = 0; area[txtNum].y = 0; area[txtNum].w = sText->w; area[txtNum].h = sText->h;
    glGenTextures(1, &texture[txtNum]);
    glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sText->w, sText->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, sText->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface( sText );
    TTF_CloseFont(tmpfont);
}
The second one displays the string and must be called in the SDL loop:
void drawText(float coords[3], int txtNum) {
    glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS); {
        glTexCoord2f(0, 0); glVertex3f(coords[0], coords[1], coords[2]);
        glTexCoord2f(1, 0); glVertex3f(coords[0] + area[txtNum].w, coords[1], coords[2]);
        glTexCoord2f(1, 1); glVertex3f(coords[0] + area[txtNum].w, coords[1] + area[txtNum].h, coords[2]);
        glTexCoord2f(0, 1); glVertex3f(coords[0], coords[1] + area[txtNum].h, coords[2]);
    } glEnd();
    glDisable(GL_TEXTURE_2D);
}

opengl - framebuffer texture clipped smaller than I set it?

I'm using OpenGL ES 2.0.
I'm using a framebuffer linked to a texture to compose an offscreen render (of some simplistic metaballs), and then I'm rendering that texture to the main back buffer.
Everything is looking great, except that the texture appears clipped, i.e. it is not the full window dimensions (short of about 128 pixels on one axis). Here's a screenshot: http://tinypic.com/r/9telwg/7
Any ideas what could cause this? I read here that I should set glViewport to the size of the texture, but that gives me a different aspect ratio, since the texture metaballsTexture is square (1024x1024) and my window is 768x1024. It also still remains a bit clipped, as it seems I can't get the frame buffer to be big enough, even though the texture is bigger than my window size. Below is my code. I call PrepareToAddMetaballs() during the render when I'm ready, then make successive calls to AddMetaball(), now rendered onto my offscreen FBO, then FinishedAddingMetaballs() when I'm done, and later call Render() to display the offscreen texture linked to the FBO onto the main back buffer.
#include "Metaballs.h"
#include "s3e.h"
#include "IwGL.h"
#include "Render.h"
#include "vsml.h"
#include <vector>
#include <string>
#include <iostream>
#include "1013Maths.h"
#define GL_RGBA8 0x8058
MetaBalls::MetaBalls() : metaballsTexture(NULL), metaballsShader(NULL) {
glGenFramebuffers(1, &myFBO);
metaballTexture[0] = NULL;
metaballTexture[1] = NULL;
metaballTexture[2] = NULL;
CRender::Instance()->CreateTexture("WaterCanvas.png", &metaballsTexture);
CRender::Instance()->CreateTexture("metaball.pvr", &metaballTexture[0]);
CRender::Instance()->CreateTexture("metaball-1.png", &metaballTexture[1]);
CRender::Instance()->CreateTexture("metaball-2.png", &metaballTexture[2]);
CRender::Instance()->CreateShader("Shaders/metaballs.fs", "Shaders/metaballs.vs", &metaballsShader);
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
// Attach texture to frame buffer
glBindTexture(GL_TEXTURE_2D, metaballsTexture->m_id);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, metaballsTexture->m_id, 0);
glClearColor(1,1,1,0);
glClear(GL_COLOR_BUFFER_BIT);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
std::string error = "Metaballs framebuffer incomplete";
std::cerr << error << std::endl;
throw error;
}
float w = PTM_DOWNSCALE(float(metaballsTexture->GetWidth()));
float h = PTM_DOWNSCALE(float(metaballsTexture->GetHeight()));
CRender::Instance()->BuildQuad(
tVertex( b2Vec3(0,0,0), b2Vec2(0,1) ),
tVertex( b2Vec3(w,0,0), b2Vec2(1,1) ),
tVertex( b2Vec3(w,h,0), b2Vec2(1,0) ),
tVertex( b2Vec3(0,h,0), b2Vec2(0,0) ),
buffer);
}
MetaBalls::~MetaBalls() {
CRender::Instance()->ReleaseShader(metaballsShader);
CRender::Instance()->ReleaseTexture(metaballsTexture);
CRender::Instance()->ReleaseTexture(metaballTexture[0]);
CRender::Instance()->ReleaseTexture(metaballTexture[1]);
CRender::Instance()->ReleaseTexture(metaballTexture[2]);
glDeleteFramebuffers(1, &myFBO);
}
void MetaBalls::PrepareToAddMetaballs(b2Vec3& paintColour) {
// bind render to texture
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
// Set our viewport so our texture isn't clipped (appears stretched and clipped)
// glViewport(0, 0, metaballsTexture->GetWidth(), metaballsTexture->GetHeight());
glClearColor(paintColour.x, paintColour.y, paintColour.z, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
}
void MetaBalls::FinishedAddingMetaballs() {
glBindFramebuffer(GL_FRAMEBUFFER, NULL);
// CRender::Instance()->SetWindowViewport();
}
void MetaBalls::AddMetaball(float x, float y, uint size) {
// render the metaball texture to larger texture
VSML::setIdentityMatrix(pTransform);
pTransform[12] = PTM_DOWNSCALE(x);
pTransform[13] = PTM_DOWNSCALE(y+4); // the +4 is for a bit of overlap with land
float oldview[16];
float identity[16];
VSML::setIdentityMatrix(identity);
memcpy(oldview, CRender::Instance()->GetViewMatrix(), sizeof(float)*16);
memcpy(CRender::Instance()->GetViewMatrix(),identity, sizeof(float)*16);
CRender::Instance()->DrawSprite(metaballTexture[size], pTransform, 1.0f, true);
memcpy(CRender::Instance()->GetViewMatrix(),oldview, sizeof(float)*16);
}
void MetaBalls::Render() {
VSML::setIdentityMatrix(pTransform);
pTransform[12] = PTM_DOWNSCALE(-128);
pTransform[13] = PTM_DOWNSCALE(-256);
// render our metaballs texture using alpha test shader
CRender::Instance()->BindShader(metaballsShader);
CRender::Instance()->BindTexture(0, metaballsTexture);
CRender::Instance()->SetMatrix(metaballsShader, "view", CRender::Instance()->GetViewMatrix());
CRender::Instance()->SetMatrix(metaballsShader, "world", pTransform);
CRender::Instance()->SetMatrix(metaballsShader, "proj", CRender::Instance()->GetProjMatrix());
CRender::Instance()->SetBlending(true);
CRender::Instance()->DrawPrimitives(buffer);
CRender::Instance()->SetBlending(false);
}
====================
EDIT
Aha! Got it. I haven't found this example anywhere, but I fixed it by adjusting the projection matrix. It was set to 1024x768 when it was working, but with a window size of 768x1024 the projection matrix was changing, as well as the viewport. By setting each to 1024x768 manually (I chose to use constants), the metaballs are rendered correctly offscreen with the proper aspect ratio. Their 1024x1024 texture is rendered as a billboard with that aspect ratio, nice and sharp. After I'm done, I restore both to what the rest of the application uses. Below is the working code:
#include "Metaballs.h"
#include "s3e.h"
#include "IwGL.h"
#include "Render.h"
#include "vsml.h"
#include <vector>
#include <string>
#include <iostream>
#include "1013Maths.h"
MetaBalls::MetaBalls() : metaballsTexture(NULL), metaballsShader(NULL) {
glGenFramebuffers(1, &myFBO);
metaballTexture[0] = NULL;
metaballTexture[1] = NULL;
metaballTexture[2] = NULL;
CRender::Instance()->CreateTexture("WaterCanvas.png", &metaballsTexture);
CRender::Instance()->CreateTexture("metaball.pvr", &metaballTexture[0]);
CRender::Instance()->CreateTexture("metaball-1.png", &metaballTexture[1]);
CRender::Instance()->CreateTexture("metaball-2.png", &metaballTexture[2]);
CRender::Instance()->CreateShader("Shaders/metaballs.fs", "Shaders/metaballs.vs", &metaballsShader);
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
// Attach texture to frame buffer
glBindTexture(GL_TEXTURE_2D, metaballsTexture->m_id);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, metaballsTexture->m_id, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
std::string error = "Metaballs framebuffer incomplete";
std::cerr << error << std::endl;
throw error;
}
float w = PTM_DOWNSCALE(float(metaballsTexture->m_width));
float h = PTM_DOWNSCALE(float(metaballsTexture->m_height));
CRender::Instance()->BuildQuad(
tVertex( b2Vec3(0,0,0), b2Vec2(0,1) ),
tVertex( b2Vec3(w,0,0), b2Vec2(1,1) ),
tVertex( b2Vec3(w,h,0), b2Vec2(1,0) ),
tVertex( b2Vec3(0,h,0), b2Vec2(0,0) ),
buffer);
// return to default state
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
MetaBalls::~MetaBalls() {
CRender::Instance()->ReleaseShader(metaballsShader);
CRender::Instance()->ReleaseTexture(metaballsTexture);
CRender::Instance()->ReleaseTexture(metaballTexture[0]);
CRender::Instance()->ReleaseTexture(metaballTexture[1]);
CRender::Instance()->ReleaseTexture(metaballTexture[2]);
glDeleteFramebuffers(1, &myFBO);
}
void MetaBalls::PrepareToAddMetaballs(b2Vec3& paintColour) {
// bind render to texture
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
// Set orthographic projection
cfloat w = SCREEN_WIDTH / PTM_RATIO;
cfloat h = SCREEN_HEIGHT / PTM_RATIO;
VSML::ortho(-w, 0, -h, 0, 0.0f, -1.0f, CRender::Instance()->m_Proj);
// Set our viewport so our texture isn't clipped
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glClearColor(paintColour.x, paintColour.y, paintColour.z, 0.1f);
glClear(GL_COLOR_BUFFER_BIT);
}
void MetaBalls::FinishedAddingMetaballs() {
glBindFramebuffer(GL_FRAMEBUFFER, NULL);
CRender::Instance()->SetWindowViewport();
}
void MetaBalls::AddMetaball(float x, float y, uint size) {
// render the metaball texture to larger texture
VSML::setIdentityMatrix(pTransform);
pTransform[12] = PTM_DOWNSCALE(x);
pTransform[13] = PTM_DOWNSCALE(y);
float oldview[16];
float identity[16];
VSML::setIdentityMatrix(identity);
memcpy(oldview, CRender::Instance()->GetViewMatrix(), sizeof(float)*16);
memcpy(CRender::Instance()->GetViewMatrix(),identity, sizeof(float)*16);
CRender::Instance()->DrawSprite(metaballTexture[size], pTransform, 1.0f, true);
memcpy(CRender::Instance()->GetViewMatrix(),oldview, sizeof(float)*16);
}
void MetaBalls::Render() {
VSML::setIdentityMatrix(pTransform);
pTransform[12] = PTM_DOWNSCALE(0);
pTransform[13] = PTM_DOWNSCALE(-256);
// render our metaballs texture using alpha test shader
CRender::Instance()->BindShader(metaballsShader);
CRender::Instance()->BindTexture(0, metaballsTexture);
CRender::Instance()->SetMatrix(metaballsShader, "view", CRender::Instance()->GetViewMatrix());
CRender::Instance()->SetMatrix(metaballsShader, "world", pTransform);
CRender::Instance()->SetMatrix(metaballsShader, "proj", CRender::Instance()->GetProjMatrix());
CRender::Instance()->SetBlending(true);
CRender::Instance()->DrawPrimitives(buffer);
CRender::Instance()->SetBlending(false);
}
Are you setting your viewport according to the texture's size? I didn't find any viewport setting in your code...
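For instance, a sketch using the names from the question (the asker's eventual fix above does the equivalent with constants):

glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
glViewport(0, 0, metaballsTexture->GetWidth(), metaballsTexture->GetHeight());
// ...render the metaballs to the FBO...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// then restore the window-sized viewport before drawing the main scene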