Drawing large text with GLUT? - c++

I have created a tic-tac-toe game for school, and I am basically finished. The only thing left is that when someone wins, I want a box to pop up on the screen saying, in big text, "X Wins" or "O Wins" depending on who won.
I've found that drawing text in OpenGL is very complicated. Since this isn't crucial to the assignment, I'm not looking for anything complicated and don't need it to look super nice; ideally I would only have to change my existing code slightly. Also, I want the size of the text to be variable-driven for when I resize the window.
This is what my text drawing function currently looks like. It draws it really small.
Note: mText is an int [2] data member that holds where I want to draw the text
void FullGame::DrawText(const char *string) const
{
    glColor3d(1, 1, 1);
    void *font = GLUT_BITMAP_TIMES_ROMAN_24;
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glRasterPos2d(mText[0], mText[1]);
    int len = (int)strlen(string); // strlen needs <cstring>
    for (int i = 0; i < len; i++)
    {
        glutBitmapCharacter(font, string[i]);
    }
    glDisable(GL_BLEND);
}

Short answer:
You are drawing bitmap characters, which cannot be resized since they are specified in pixels. Try using a stroke character or string with one of the following functions: glutStrokeCharacter, glutStrokeString. Stroke fonts can be resized using glScalef.
Long answer:
Actually, instead of printing the text character by character you could have used:
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24, "Text to appear!");
Here the font size is in pixels, so scaling will not work directly.
Please note that glutBitmapString was introduced in FreeGLUT, an open-source alternative to the OpenGL Utility Toolkit (GLUT) library. For more details on font rendering, see: http://freeglut.sourceforge.net/docs/api.php#FontRendering
I strongly advise using FreeGLUT rather than the original GLUT:
1 - GLUT's last release was before the year 2000, so it is missing features: http://www.lighthouse3d.com/cg-topics/glut-and-freeglut/
2 - Two of the most common GLUT replacements are OpenGLUT and FreeGLUT, both of which are open-source projects.
If you decide to use FreeGLUT, you need the following include:
#include <GL/freeglut.h>
Having said that, all you need is to scale the text, so you can use glutStrokeString, which is more flexible than a bitmap string, e.g. it can be resized visually. You can try:
glScalef(0.005,0.005,1);
glutStrokeString(GLUT_STROKE_ROMAN, (unsigned char*)"The game over!");
Hope that helps!

The easiest way is to generate a texture for each "Win" screen and render it on a quad. It sounds like you're not concerned about printing arbitrary strings, only 2 possible messages. You can draw the textures in Paint or whatever, and if you're not that worried about quality, the textures don't even have to be that big. Easy to implement and totally resizable.

Related

Speeding up drawing bitmap magnification within second bitmap with blend

The following code stretches a bitmap, blends it with an existing background, maintains transparent area of primary graphic and then displays the blend within a window (imgScreen). This works fine when the level of stretch is not large or when it is actually shrinking the initial bitmap. However when stretching the graphic it is very slow.
I have limited experience with C++ and this kind of graphics so perhaps there is another more efficient way to do this. The primary bitmap to be sized is always square. Any ideas are much appreciated..!
I was going to try not drawing the clipped area, but from tests it seems the initial stretch is what causes the slowdown... I'm also having trouble seeing how to calculate the non-clipped area... Drawing to controls seems wasteful, but it seems to be the only way to use built-in functions like StretchDraw and the alpha draw option.
std::auto_ptr<Graphics::TBitmap> bmap(new Graphics::TBitmap);
std::auto_ptr<Graphics::TBitmap> bmap1(new Graphics::TBitmap);
int s = newsize;
TRect sR = Rect(X, Y, X + s, Y + s);
TRect tR = Rect(0, 0, s, s);
bmap->SetSize(s, s);
bmap->Canvas->StretchDraw(Rect(0, 0, s, s), Form1->Image4->Picture->Bitmap); // scale
bmap1->SetSize(s, s);
bmap1->Canvas->CopyRect(tR, Form1->imgScreen->Canvas, sR); // background
bmap1->Canvas->Draw(0, 0, bmap.get()); // combine
Form1->imgTemp->Picture->Assign(bmap1.get());
Form1->imgScreen->Canvas->Draw(X, Y, Form1->imgTemp->Picture->Bitmap, alpha);
It displays correctly, but as the graphic gets larger the draw rate slows down quickly...

SDL Transparent Overlay

I would like to create a fake "explosion" effect in SDL. For this, I would like the screen to go from what it is currently, and fade to white.
Originally, I thought about using SDL_FillRect like so (where explosionTick is the current alpha value):
SDL_FillRect(screen , NULL , SDL_MapRGBA(screen->format , 255, 255 , 255, explosionTick ));
But instead of a reverse fading rectangle, it shows up completely white with no alpha. The other method I tried involved using a fullscreen bitmap filled with a transparent white (with an alpha value of 1), and blit it once for each explosionTick like so:
for(int a=0; a<explosionTick; a++){
SDL_BlitSurface(boom, NULL, screen, NULL);
}
But this ended up being too slow to run in real time.
Is there any easy way to achieve this effect without losing performance? Thank you for your time.
Well, you need blending, and AFAIK the only way SDL does it is with SDL_BlitSurface. So you just need to optimize that blit. I suggest benchmarking these options:
Try SDL_SetAlpha to use per-surface alpha instead of per-pixel alpha. In theory it's less work for SDL, so you may hope for some speed gain. But I never compared the two, and I have had problems with this in the past.
You don't really need a fullscreen bitmap; just repeat a thick row. It should be less memory-intensive, and maybe there is a cache gain. You can also probably fake some smoothness by doing half the lines on each pass (fewer pixels to blit, and it should still look like a whole-screen effect).
For optimal performance, verify that your bitmap is in the display format. Check SDL_DisplayFormatAlpha, or possibly SDL_DisplayFormat if you use per-surface alpha.

Proper Measurement of Characters in Pixels

I'm writing a text box from scratch in order to optimize it for syntax highlighting. However, I need to get the width of characters in pixels exactly, not the garbage Graphics::MeasureString gives me. I've found a lot of stuff on the web, specifically this; however, none of it seems to work, or it does not account for tabs. I need the fastest way to measure the exact dimensions of a character in pixels, including tab spaces. I can't seem to figure this one out...
Should mention I'm using C++, CLR/CLI, and GDI+
Here is my measuring function. In another function the RectangleF it returns is drawn to the screen:
RectangleF TextEditor::MeasureStringWidth(String^ ch, Graphics^ g, int xDistance, int lineYval)
{
    RectangleF measured;
    Font^ currentFont = gcnew Font(m_font, (float)m_fontSize);
    StringFormat^ stringFormat = gcnew StringFormat;
    RectangleF layout = RectangleF(xDistance, lineYval, 35, m_fontHeightPix);
    array<CharacterRange>^ charRanges = { CharacterRange(0, 1) };
    array<Region^>^ strRegions;
    stringFormat->FormatFlags = StringFormatFlags::DirectionVertical;
    stringFormat->SetMeasurableCharacterRanges(charRanges);
    strRegions = g->MeasureCharacterRanges(ch, currentFont, layout, stringFormat);
    if (strRegions->Length >= 1)
        measured = strRegions[0]->GetBounds(g);
    else
        measured = RectangleF(0, 0, 0, 0);
    return measured;
}
I don't really understand what the layoutRect parameter of MeasureCharacterRanges does. I modified the code from Microsoft's example so it only works with, and only measures, one character.
You should not be using Graphics for any text rendering.
Starting with .NET Framework 2.0, use of Graphics.MeasureString and Graphics.DrawString was deprecated in favor of a newly added helper class, TextRenderer:
TextRenderer.MeasureText
TextRenderer.DrawText
The GDI+ text renderer has been abandoned and hasn't gotten any improvements or fixes for over 10 years; it is also software-rendered.
GDI rendering (which TextRenderer is a simple wrapper over) is hardware-accelerated and continues to get rendering improvements (ligatures, Uniscribe, etc.).
Note: GDI+ text rendering is what Graphics.DrawString and Graphics.MeasureString wrap.
Here's a comparison of the measure results of Graphics and TextRenderer:
The GDI+ measurements aren't "wrong"; they do exactly what they intend: return the size the text would be if it were rendered as the original font author intended (which you can achieve using anti-aliased rendering).
But nobody really wants to look at text the way the font designer intended, because that causes things not to line up on pixel boundaries, making the text look fuzzy (i.e. as you see on a Mac). Ideally the text should be snapped to actual pixel boundaries (i.e. Windows' grid fitting).
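In the question's C++/CLI context, the TextRenderer route might look like this (a sketch, assuming a Windows Forms project; ch and currentFont are reused from the question's code, and NoPadding is my choice to strip GDI's default margin):

```cpp
// TextRenderer measures with GDI, so the result matches what
// TextRenderer::DrawText will actually render (C++/CLI, Windows Forms).
System::Drawing::Size measured = System::Windows::Forms::TextRenderer::MeasureText(
    ch, currentFont,
    System::Drawing::Size(Int32::MaxValue, Int32::MaxValue),
    System::Windows::Forms::TextFormatFlags::NoPadding); // drop default padding
```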
Bonus Reading
See my answer over here, with more information.
There's MeasureCharacterRanges that is like MeasureString, but more powerful and more accurate (and also slower).

What is the best way to detect mouse-location/clicks on object in OpenGL?

I am creating a simple 2D OpenGL game, and I need to know when the player clicks or mouses over an OpenGL primitive. (For example, on a GL_QUADS that serves as one of the tiles...) There doesn't seem to be a simple way to do this beyond brute force or opengl.org's suggestion of using a unique color for each of my primitives, which seems a little hacky. Am I missing something? Thanks...
My advice: don't use OpenGL's selection mode or OpenGL rendering (the brute force method you are talking about); use a CPU-based ray-picking algorithm if 3D. For 2D, as in your case, it should be straightforward: it's just a test of whether a 2D point is inside a 2D rectangle.
I would suggest the hacky method if you want a quick implementation (quick in coding time, I mean), especially if you don't want to implement a quadtree with moving objects. If you are using OpenGL immediate mode, it should be straightforward:
// Rendering part: draw each tile in a flat color that encodes its ID
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
for (unsigned i = 0; i < tileCount; ++i) {
    unsigned tileId = i + 1; // shift IDs by one so 0 stays the black background
    glColor3ub(tileId & 0xFF, (tileId >> 8) & 0xFF, (tileId >> 16) & 0xFF);
    renderTileWithoutColorNorTextures(i);
}
// Let's retrieve the tile ID under the mouse
// (note: glReadPixels expects window coordinates with a bottom-left origin)
unsigned char pixel[4] = {0, 0, 0, 0};
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
unsigned tileId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16); // ignore alpha
if (tileId != 0) { // we didn't pick the background
    tileId--;
    // we picked tile number tileId
}
// We don't want to show this to the user, so we clear the screen
glClearColor(...); // the color you want
glClear(GL_COLOR_BUFFER_BIT);
// Now, render your real scene
// ...
// And we swap
whateverSwapBuffers(); // might be glutSwapBuffers, glXSwapBuffers, ...
You can use OpenGL's glRenderMode(GL_SELECT) mode. Here is some code that uses it, and it should be easy to follow (look for the _pick method)
(and here's the same code using GL_SELECT in C)
(There have been cases - in the past - of GL_SELECT being deliberately slowed down on 'non-workstation' cards in order to discourage CAD and modeling users from buying consumer 3D cards; that ought to be a bad habit of the past that ATI and NVidia have grown out of ;) )

Rewriting a simple Pygame 2D drawing function in C++

I have a 2D list of vectors (say 20x20 / 400 points) and I am drawing these points on a screen like so:
for row in grid:
    for point in row:
        pygame.draw.circle(window, white, (point.x, point.y), 2, 0)
pygame.display.flip() # redraw the screen
This works perfectly; however, it's much slower than I expected.
I want to rewrite this in C++ and hopefully learn some stuff on the way (I am doing a unit on C++ at the moment, so it'll help). What's the easiest way to approach this? I have looked at DirectX and have so far followed a bunch of tutorials and drawn some rudimentary triangles. However, I can't find a simple draw-a-point call.
DirectX doesn't have functions for drawing just one point; it operates on vertex and index buffers only. If you want a simpler way to draw a single point, you'll need to write a wrapper.
For drawing lists of points you'll need DrawPrimitive(D3DPT_POINTLIST, ...). However, there will be no easy way to just plot a point: you'll have to prepare a buffer, lock it, fill it with data, then draw the buffer. Or you could use dynamic vertex buffers to optimize performance. There is a DrawPrimitiveUP call that is supposed to render primitives stored in system memory (instead of using buffers), but as far as I know it doesn't work with pure devices (it may silently discard primitives), so you'd have to use software vertex processing.
In OpenGL you have glVertex2f and glVertex3f. Your call would look like this (there might be a typo or syntax error; I didn't compile or run it):
glBegin(GL_POINTS);
glColor3f(1.0, 1.0, 1.0); // white
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        glVertex2f(points[y][x].x, points[y][x].y); // plot point
glEnd();
OpenGL is MUCH easier for playing around and experimenting with than DirectX. I'd recommend taking a look at SDL and using it in conjunction with OpenGL. Or you could use GLUT instead of SDL.
Or you could try Qt 4. It has very good 2D rendering routines.
When I first dabbled with game/graphics programming I became fond of Allegro. It's got a huge range of features and a pretty easy learning curve.