Anti-aliasing in Allegro 5 - C++

How do I make allegro 5 use anti-aliasing when drawing? I need diagonal lines to appear smooth. Currently, they are only lines of shaded pixels, and the edges look hard.

To enable anti-aliasing for the primitives:
// before creating the display:
al_set_new_display_option(ALLEGRO_SAMPLE_BUFFERS, 1, ALLEGRO_SUGGEST);
al_set_new_display_option(ALLEGRO_SAMPLES, 8, ALLEGRO_SUGGEST);
display = al_create_display(640, 480);
Note that anti-aliasing will only work for primitives drawn directly to the back buffer. It will not work anywhere else.
On OpenGL, your card must support the ARB_multisample extension.
To check if it was enabled (when using ALLEGRO_SUGGEST):
if (al_get_display_option(display, ALLEGRO_SAMPLE_BUFFERS)) {
    printf("With multisampling, level %i\n",
           al_get_display_option(display, ALLEGRO_SAMPLES));
} else {
    printf("Without multisampling.\n");
}

You have two options: line smoothing or multisampling.
You can activate line smoothing by using glEnable(GL_LINE_SMOOTH). Note that Allegro 5 may reset this when you draw lines through Allegro.
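Note that line smoothing only has an effect when blending is enabled as well; a minimal sketch of the GL state involved:
// Legacy GL line smoothing; requires alpha blending to have any effect.
glEnable(GL_LINE_SMOOTH);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);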
The other alternative is to create a multisampled display. This must be done before calling al_create_display. The way to do it goes something like this:
al_set_new_display_option(ALLEGRO_SAMPLE_BUFFERS, 1, ALLEGRO_REQUIRE);
al_set_new_display_option(ALLEGRO_SAMPLES, #, ALLEGRO_SUGGEST);
The # above should be the number of samples to use. How many? That's implementation-dependent, and Allegro doesn't help. That's why I used ALLEGRO_SUGGEST rather than REQUIRE for the number of samples. The more samples you use, the better quality you get. 8 samples might be a good value that's supported on most hardware.
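Putting the pieces together, here is a minimal sketch (assuming the primitives addon for al_draw_line; error checking omitted):
#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

int main()
{
    al_init();
    al_init_primitives_addon();

    // Request a multisampled back buffer before creating the display.
    al_set_new_display_option(ALLEGRO_SAMPLE_BUFFERS, 1, ALLEGRO_SUGGEST);
    al_set_new_display_option(ALLEGRO_SAMPLES, 8, ALLEGRO_SUGGEST);
    ALLEGRO_DISPLAY *display = al_create_display(640, 480);

    al_clear_to_color(al_map_rgb(0, 0, 0));
    // A diagonal line drawn directly to the back buffer, so it gets multisampled.
    al_draw_line(10, 10, 630, 470, al_map_rgb(255, 255, 255), 1);
    al_flip_display();
    al_rest(3);

    al_destroy_display(display);
    return 0;
}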

Related

QChart and using QGradients with accelerated OpenGL rendering

What is going wrong:
Currently my chart works completely fine: it has gradient and single-colored series, for example:
This works fine, but when I enable OpenGL acceleration (for more performance) on the 3 series using fooSeries->setUseOpenGL(true), the graph turns into this:
As you can see, the gradient series turn black, while the single-colored series turns white. The rounded caps and rounded joins also seem to be gone. I did some experimentation to see what may be causing it.
Attempted fixes/experimentation:
I color the series as follows:
// fooGradient is a QLinearGradient described elsewhere as an example.
QBrush fooGradientPenBrush = QBrush(fooGradient);
fooPen = new QPen(fooGradientPenBrush, 5, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin);
//There are actually 2 QPens for two separate gradients in the program, but this is just for the example.
QBrush barPenBrush = QBrush(QRgb(0xFFFFFF));
barPen = new QPen(barPenBrush, 3, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin);
Then I later attach these pens to their respective series:
fooSeries->setPen(*fooPen);
barSeries->setPen(*barPen);
Then they are attached to the chart. That's it. I will keep experimenting and looking at the documentation to see if I missed something; it may just be that the OpenGL acceleration only accepts solid colors, but that is said nowhere in the documentation that I can find. I'll leave a link to the setUseOpenGL documentation if anyone would like to take a look here.
After more research, it turns out I had missed an important detail in the documentation:
Pen styles and marker shapes are ignored for accelerated series. Only solid lines and plain scatter dots are supported. The scatter dots may be circular or rectangular, depending on the underlying graphics hardware and drivers.
I still wonder if there is a way to apply rounded caps, rounded joins, and the like to accelerated lines.
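For reference, a sketch of pen setup that stays within that documented restriction (chart and the series here are placeholders, not the code from the question):
// Accelerated series keep only the solid color and width; caps/joins are ignored.
QLineSeries *barSeries = new QLineSeries();
barSeries->setPen(QPen(QColor(255, 255, 255), 3, Qt::SolidLine));
barSeries->setUseOpenGL(true);  // fast path, but solid lines only
chart->addSeries(barSeries);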

Drawing large text with GLUT?

I have created a tic-tac-toe game for school, and I am basically finished. The only thing I have left is that when someone wins, I want a box to pop up on the screen and say in big text "X Wins" or "O Wins", depending on who won.
I've found that drawing text in OpenGL is very complicated. Since this isn't crucial to the assignment, I'm not looking for anything complicated and don't need it to look super nice. I would most likely just want to change my existing code. Also, I want the size of the text to be variable-driven for when I resize the window.
This is what my text drawing function currently looks like. It draws it really small.
Note: mText is an int [2] data member that holds where I want to draw the text
void FullGame::DrawText(const char *string) const
{
    glColor3d(1, 1, 1);
    void *font = GLUT_BITMAP_TIMES_ROMAN_24;
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glRasterPos2d(mText[0], mText[1]);
    int len = (int)strlen(string);
    for (int i = 0; i < len; i++)
    {
        glutBitmapCharacter(font, string[i]);
    }
    glDisable(GL_BLEND);
}
Short answer:
You are drawing bitmap characters, which cannot be resized since they are given in pixels. Try using a stroke character or string with one of the following functions: glutStrokeCharacter, glutStrokeString. Stroke fonts can be resized using glScalef.
Long answer:
Actually, instead of printing the text character by character you could have used:
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24, "Text to appear!");
Here the font size is in pixels, so scaling will not work directly.
Please note that glutBitmapString was introduced in FreeGLUT, an open source alternative to the OpenGL Utility Toolkit (GLUT) library. For more details on font rendering: http://freeglut.sourceforge.net/docs/api.php#FontRendering
I strongly advise using FreeGLUT rather than the original GLUT:
1 - GLUT's last release was before the year 2000, so it is missing some features: http://www.lighthouse3d.com/cg-topics/glut-and-freeglut/
2 - Two of the most common GLUT replacements are OpenGLUT and FreeGLUT, both of which are open source projects.
If you decide to use FreeGLUT you should have the following include:
#include <GL/freeglut.h>
Having said that, all you need is to scale the text, so you can use glutStrokeString, which is more flexible than a bitmap string, e.g., it can be resized visually. You can try:
glScalef(0.005,0.005,1);
glutStrokeString(GLUT_STROKE_ROMAN, (unsigned char*)"The game over!");
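A fuller sketch of a resizable replacement for the DrawText above, assuming FreeGLUT; the scale value is something you would derive from the current window size:
// Hypothetical scalable text helper using FreeGLUT stroke fonts.
void DrawBigText(double x, double y, double scale, const char *text)
{
    glColor3d(1, 1, 1);
    glPushMatrix();
    glTranslated(x, y, 0);        // position in your current coordinate system
    glScaled(scale, scale, 1);    // stroke glyphs are geometry, so they scale cleanly
    glutStrokeString(GLUT_STROKE_ROMAN, (const unsigned char *)text);
    glPopMatrix();
}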
Hope that helps!
The easiest way is to generate a texture for each "Win" screen and render it on a quad. It sounds like you're not concerned about printing arbitrary strings, only 2 possible messages. You can draw the textures in Paint or whatever, and if you're not that worried about quality, the textures don't even have to be that big. Easy to implement and totally resizable.
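If you go that route, the draw call is only a handful of fixed-function lines; texId and the quad position/size (x, y, w, h) are assumed to be set up already:
// Draw a pre-made "X Wins" texture on a quad (fixed-function GL).
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texId);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
glEnd();
glDisable(GL_TEXTURE_2D);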

How to draw D3DXFont using D3DXSprite in DirectX 9 as fast as possible?

I have lots of text to draw. If I call ID3DXFont::DrawText with the first parameter (the sprite) being NULL, I get terrible performance.
I heard that using D3DXFont in conjunction with D3DXSprite makes things much faster.
How does my application need to draw strings?
It draws every string with a pseudo-shadow. That means I draw each string 4 times in black, offset by:
x + 1, y + 1
x - 1, y + 1
x - 1, y - 1
x + 1, y - 1
and 1 time in the actual color. That makes for nice-looking, always readable strings. I even switched to pixel fonts for faster rendering.
Let's call a string drawn with a shadow like this a ShadowString.
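In code, that pattern looks something like the sketch below; DrawShadowString is a hypothetical helper (the sprite parameter can be NULL, or a batching sprite as discussed further down):
// Hypothetical helper showing the 4-offsets-plus-color pattern described above.
void DrawShadowString(ID3DXFont *font, ID3DXSprite *sprite,
                      const char *text, int x, int y, D3DCOLOR color)
{
    const int dx[4] = { 1, -1, -1, 1 };
    const int dy[4] = { 1,  1, -1, -1 };
    for (int i = 0; i < 4; ++i) {
        RECT r = { x + dx[i], y + dy[i], 0, 0 };
        font->DrawTextA(sprite, text, -1, &r, DT_NOCLIP, D3DCOLOR_XRGB(0, 0, 0));
    }
    RECT r = { x, y, 0, 0 };
    font->DrawTextA(sprite, text, -1, &r, DT_NOCLIP, color);
}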
Every frame I draw 256 (worst case scenario) of those ShadowStrings on screen.
I would like to know how to use sprites (or any other technique) to speed up drawing of those strings as much as possible. Right now I'm getting 30 FPS in the app, but my target is 120 minimum. And the problem is ONLY that text drawing.
Surely, you must profile your application before any optimization, but truth be told, D3DXFont/D3DXSprite and "fast" are mutually exclusive concepts. If they do not fit your performance requirements, just don't use them.
Use 3rd party libraries or make your own sprite/font renderer.
I recently answered how to do that here: How to draw line and font without D3DX9 in DirectX 9?
Also, Google for "sprite font", "sprite batching", "texture atlases", "TTF rendering". It is not very difficult if you are familiar with API (notably vertex buffers and texturing), and there are plenty of examples on web. Don't hesitate to look for D3D11 or OpenGL examples, principles are the same.
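For reference, the D3DXSprite batching the question alludes to (if you do stay with D3DXFont) is just wrapping all the DrawText calls in one ID3DXSprite Begin/End pair so the glyph quads are buffered and submitted together. In this sketch, device (IDirect3DDevice9*), font (ID3DXFont*), and the strings array are assumed to exist, and DrawShadowString is the hypothetical helper from above:
ID3DXSprite *sprite = NULL;
D3DXCreateSprite(device, &sprite);

// Each frame: buffer every glyph quad into a single sprite pass.
sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE);
for (int i = 0; i < numStrings; ++i)
    DrawShadowString(font, sprite, strings[i].text,
                     strings[i].x, strings[i].y, strings[i].color);
sprite->End();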

SDL Transparent Overlay

I would like to create a fake "explosion" effect in SDL. For this, I would like the screen to go from what it is currently, and fade to white.
Originally, I thought about using SDL_FillRect like so (where explosionTick is the current alpha value):
SDL_FillRect(screen , NULL , SDL_MapRGBA(screen->format , 255, 255 , 255, explosionTick ));
But instead of a reverse fading rectangle, it shows up completely white with no alpha. The other method I tried involved using a fullscreen bitmap filled with transparent white (with an alpha value of 1) and blitting it once for each explosionTick, like so:
for(int a=0; a<explosionTick; a++){
SDL_BlitSurface(boom, NULL, screen, NULL);
}
But this ended up being too slow to run in real time.
Is there any easy way to achieve this effect without losing performance? Thank you for your time.
Well, you need blending, and AFAIK the only way SDL does it is with SDL_BlitSurface. So you just need to optimize that blit. I suggest benchmarking these options:
try SDL_SetAlpha to use per-surface alpha instead of per-pixel alpha (see the sketch after this list). In theory it's less work for SDL, so you may hope for some speed gain, but I never measured it and have had problems with it in the past.
you don't really need a fullscreen bitmap; just repeat a thick row. It should be less memory-intensive, and maybe there is a cache gain. You can also probably fake some smoothness by doing half the lines on each pass (fewer pixels to blit, and it should still look like a global screen effect).
for optimal performance, verify that your bitmap is in the display format. Check SDL_DisplayFormatAlpha, or possibly SDL_DisplayFormat if you use per-surface alpha.
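A minimal sketch of the per-surface-alpha route (SDL 1.2 API; screen and explosionTick are from the question):
// One white surface in the display format, per-surface alpha, one blit per frame.
SDL_Surface *white = SDL_CreateRGBSurface(SDL_SWSURFACE,
                                          screen->w, screen->h, 32, 0, 0, 0, 0);
SDL_FillRect(white, NULL, SDL_MapRGB(white->format, 255, 255, 255));
SDL_Surface *boom = SDL_DisplayFormat(white);  // match the screen format
SDL_FreeSurface(white);

// Each frame:
SDL_SetAlpha(boom, SDL_SRCALPHA, explosionTick);  // fade strength 0..255
SDL_BlitSurface(boom, NULL, screen, NULL);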

How to magnify/stretch a texture with Matlab Psychtoolbox (OpenGL)?

Update: This only seems to be a problem on some computers. The normal, intuitive code works fine on my home computer, but the computer at work has trouble.
Home computer: (no problems)
Windows XP Professional SP3
AMD Athlon 64 X2 3800+ Dual Core 2.0 GHz
NVIDIA GeForce 7800 GT
2 GB RAM
Work computer: (this question applies to this computer)
Windows XP Professional SP3
Intel Pentium 4 2.8 GHz (dual core, I think)
Intel 82945G Express Chipset Family
1 GB RAM
Original post:
I'm trying to apply a very simple texture to a part of the screen using Psychtoolbox in Matlab with the following code:
win = Screen('OpenWindow', 0, 127); % open window and obtain window pointer
tex = Screen('MakeTexture', win, [255 0;0 255]); % get texture pointer
% draw texture. Args: command, window pointer, texture pointer, source
% (i.e. the entire 2x2 matrix), destination (a 100x100 square), rotation
% (none) and filtering (nearest neighbour)
Screen('DrawTexture', win, tex, [0 0 2 2], [100 100 200 200], 0, 0);
Screen('Flip', win); % flip the buffer so the texture is drawn
KbWait; % wait for keystroke
Screen('Close', win); % close screen
Now I would expect to see this (four equally sized squares):
But instead I get this (right and bottom sides are cut off and top left square is too large):
Obviously the destination rectangle is a lot bigger than the source rectangle, so the texture needs to be magnified. I would expect this to happen symmetrically like in the first picture and this is also what I need. Why is this not happening and what can I do about it?
I have also tried using [128 0 1152 1024] as a destination rectangle (as it's the square in the center of my screen). In this case, all sides are 1024, which makes each involved rectangle a power of 2. This does not help.
Increasing the size of the checkerboard results in a similar situation where the right- and bottommost sides are not showed correctly.
Like I said, I use Psychtoolbox, but I know that it uses OpenGL under the hood. I don't know much about OpenGL either, but maybe someone who does can help without knowing Matlab.
Thanks for your time!
While I don't know much (read: any) Matlab, I do know that textures are very picky in OpenGL. Last I checked, OpenGL requires textures to be square, with power-of-two sides (e.g. 128 x 128, 256 x 256, 512 x 512).
If they aren't, OpenGL is supposed to pad the file with appropriate white pixels where needed to meet this condition, although it can be a crapshoot depending on which system you are running it on.
I suggest making sure that your checkerboard texture fits these requirements.
Also, I can't quite tell from your posted code, but OpenGL expects you to map the corners of your texture to the corners of the object you intend to texture.
Another bit of advice: maybe try a linear filter instead of nearest neighbour. It's computationally heavier but results in a better image, though it probably won't matter in the end.
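For what it's worth, in raw OpenGL those filter choices map to per-texture parameters like the following; Psychtoolbox presumably sets the equivalent internally from the filter argument of Screen('DrawTexture', ...), so this is just to show what happens underneath (texId is assumed):
// Underlying GL state for the filtering discussed above.
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);    // vs GL_NEAREST
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // avoid edge bleed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);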
While this help is not Matlab-specific, I hope it is useful.
Without knowing a lot about the Psychtoolbox, but having dealt with graphics and user interfaces a lot in MATLAB, the first thing I would try would be to fiddle with the fourth input to Screen (the "source" input). Try shifting each corner by half-pixel and whole-pixel values. For example, the first thing I would try would be:
Screen('DrawTexture', win, tex, [0 0 2.5 2.5], [100 100 200 200], 0, 0);
And if that didn't seem to do anything, I would next try:
Screen('DrawTexture', win, tex, [0 0 3 3], [100 100 200 200], 0, 0);
My reasoning for this advice: I've noticed sometimes that images or GUI controls in my figures can appear to be off by a pixel, which I can only speculate is some kind of round-off error when scaling or positioning them.
That's the best advice I can give. Hope it helps!