SDL TTF_RenderText_Blended fails randomly - OpenGL

EDIT: Even though the problem still exists, I haven't been able to reproduce it frequently enough to examine it more closely. See more info at the end of the question.
I started to develop a game, and I am currently writing a basic library for it. I'm using the D programming language with SDL-2 and OpenGL 3 (via the Derelict3 bindings) on Linux Mint 13 (Maya). The compiler is the DMD64 D Compiler v2.067.1, and I rebuild the binary each time with 'rdmd'.
To render (changing) text, I create glyphs on-demand. The piece of code I use for this is:
class Font {
    ...
    Texture render(char c) {
        if(!(c in rendered)) rendered[c] = texture(to!string(c));
        return rendered[c];
    }

    Texture texture(string text) {
        SDL_Color color = {255, 255, 255, 255};
        auto bitmap = TTF_RenderText_Blended(
            font,
            std.string.toStringz(text),
            color
        );
        if(!bitmap) {
            throw new TTFError(
                "TTF_RenderText_Blended: " ~
                to!string(TTF_GetError()) ~ ": '" ~ text ~ "'"
            );
        }
        auto texture = new Texture(bitmap);
        SDL_FreeSurface(bitmap);
        return texture;
    }
The problem is that this fails purely at random. Sometimes it works without any problems. Interestingly, when it fails to render a glyph, it keeps failing on that same glyph over and over again. Here is example output from catching the exception I throw:
...
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
...
(I'm printing the score to the screen; the other numbers show up fine except those few.) The numbers TTF_RenderText_Blended fails to render vary from run to run, and as mentioned, from time to time it renders all the numbers.
One detail: the static strings I render before entering the game loop have never failed to render so far, only the single letters I use for changing text.
I'm pretty much out of ideas, and I haven't found anything related to this problem by searching the web. Any ideas on where to look for a solution are very much appreciated.
CURRENT SITUATION: I updated the compiler to DMD 2.067.1 and the problem remains (compilers used so far: 2.066.1, 2.067.1). The whole, let's say, project family is on GitHub at the moment:
https://github.com/mkoskim/games
The text glyph rendering function is located in this file:
https://github.com/mkoskim/games/blob/master/engine/ext/font.d
...and it is used from here:
https://github.com/mkoskim/games/blob/master/engine/ext/gui/label.d
The problem occurs mainly/most frequently in the pacman game (although very seldom just right now):
https://github.com/mkoskim/games/tree/master/testbench/pacman
If you want to try it out, first, read the (hopefully complete enough) installation instructions:
https://github.com/mkoskim/games/blob/master/INSTALL
The project is made for 64-bit Linux Mint Maya, and it is currently not that user-friendly or portable from a build perspective. Pacman is the only demo that (hopefully) works without a game controller. After successfully installing the required libraries and tools, you can build it with the command:
games/testbench/pacman$ make default

I ran into the exact same issue, and for me it was fixed by keeping the SDL_RWops structure used to create the font (with TTF_OpenFontRW) alive for the whole lifetime of the TTF_Font created from it. I saw you're creating the font with TTF_OpenFontRW as well, so I assume this will fix it for you too. It looks like SDL_ttf relies on the SDL_RWops being kept alive; otherwise it reads freed memory.
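For illustration, here is a minimal sketch of that ownership pattern (plain C/C++ against the SDL2_ttf API; the struct and function names are made up, the SDL calls are real). The key points are passing freesrc = 0 and closing the stream only after the font:

#include <SDL.h>
#include <SDL_ttf.h>

struct LoadedFont {
    SDL_RWops *rw;   // must stay alive as long as the font below
    TTF_Font  *font;
};

LoadedFont openFont(const void *buf, int len, int ptsize) {
    LoadedFont lf;
    lf.rw = SDL_RWFromConstMem(buf, len);
    lf.font = TTF_OpenFontRW(lf.rw, 0, ptsize); // freesrc = 0: we keep ownership
    return lf;
}

void closeFont(LoadedFont &lf) {
    TTF_CloseFont(lf.font); // close the font first...
    SDL_RWclose(lf.rw);     // ...then the stream it was reading from
}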

I know this question is a little bit outdated, but I may have had a similar problem.
I fixed it by simply calling SDL_DestroyTexture() every time before I used TTF_RenderText_Blended() :)
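If I understand that correctly, the idea is roughly this (a hypothetical sketch; cachedTexture is a made-up name for a texture kept from the previous frame):

if (cachedTexture) {
    SDL_DestroyTexture(cachedTexture); // release the stale texture first
    cachedTexture = NULL;
}
SDL_Surface *bitmap = TTF_RenderText_Blended(font, text, color);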

Related

Borland C++ console functions

I'm studying now and I got this homework / these tasks to do:
1) If you press CTRL + L, all numeric symbols should change color.
2) If you press CTRL + S, you get the length of the word to the left of the cursor.
I found this function: int bioskey(int cmd);
So now I can check whether a key is pressed, but how do I change the color of only the numeric symbols, or read words from the console to get their length?
Some of us still remember MS-DOS (let it rest in peace or pieces...)
If you are really in MS-DOS, then you cannot expect the console to change the colors of only specific areas for you. You need to do that yourself. The problem is we do not know anything about your project background, so we do not know what your stuff is and how it is represented/rendered/output/input, etc...
I assume EGA/VGA BIOS text mode is used, so you can exploit direct access to the VRAM. You need to set a pointer to the address B800:0000 and handle it as an array where each character on screen has 2 BYTEs: one is the color attribute and the other is the ASCII code (not sure in which order anymore)...
So for already rendered stuff you just:
loop through the whole screen
usually 80x25x2 bytes
test each ASCII code for an alphanumeric value
so ASCII code >= '0' and code <= '9' for numbers, or add all the stuff you consider alphanumeric, like code > ' ' and code <= '9'.
change the colors of the selected characters
just by changing the attribute byte.
When you put it together for numbers it will look like this:
char far *scr=(char far*)0xB8000000; // B800:0000 -> color text-mode VRAM
int x,y,a;
for (a=0,y=0;y<25;y++)
 for (x=0;x<80;x++,a+=2)
  if ((scr[a+0]>='0')&&(scr[a+0]<='9'))
  {
   scr[a+1]=0x0A; // attribute with the different color here (anything but the default 7)
  }
If it does not work, then try swapping scr[a+0] and scr[a+1]. If an exception occurs, then you are not in MS-DOS and you do not have access to the VRAM. In that case use DOSBox or a driver that allows access to memory, like dllportio...
For more info see some more or less related Q&As:
Display an array of color in C
What is the best way to move an object on the screen?
If you have problems with the CTRL+key detection (not sure if the built-in function in TC++ allows CTRL, it was too long ago), then you can exploit the BIOS or even hook the keyboard ISR; see the sketch below. The second link above has a keyboard ISR handler present... You can port it to C++ or google it; there must be a lot of examples out there, especially for TP 7.0 (which is Pascal, but easily portable to TC++).
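For the CTRL detection itself, here is a rough untested sketch using bioskey() (Turbo/Borland C++, bios.h; the handler bodies are placeholders): cmd 1 tests for a waiting key, cmd 0 reads it, and cmd 2 returns the shift-state flags, where bit 2 means a CTRL key is held.

#include <bios.h>

void poll_keys(void)
{
    if (!bioskey(1)) return;  // no key waiting in the BIOS buffer
    int key   = bioskey(0);   // low byte = ASCII code, high byte = scan code
    int shift = bioskey(2);   // shift-state flags
    if (shift & 0x04)         // bit 2 set -> CTRL held
    {
        if ((key & 0xFF) == 12) { /* CTRL+L: recolor the numeric symbols */ }
        if ((key & 0xFF) == 19) { /* CTRL+S: word length left of the cursor */ }
    }
}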

wxWidgets wxPen size changes unexpectedly

I've used the following code to draw on an image using a wxMemoryDC.
To do so I used a wxPen and changed its settings as in the following code. The code compiles and runs perfectly in a Windows environment. But on Ubuntu it draws the lines, and the pen size stays correct only for a very short time, after which the pen size becomes very small (as shown in the image). It is not an error in the m_penSize variable, because that always prints the correct value. Why does this behave so strangely on Ubuntu when it works correctly on Windows?
(m_graphics is the memoryDC here)
if (x < m_backgroundImage.GetWidth() && y < m_backgroundImage.GetHeight()) {
    m_graphics.SelectObject(m_maskImage);
    wxPen* pen;
    if (m_isDrawing) {
        pen = wxThePenList->FindOrCreatePen(*wxRED, m_penSize);
        printf("Pen size is %d", m_penSize);
    }
    else {
        pen = wxThePenList->FindOrCreatePen(*wxBLACK, m_penSize);
    }
    if (m_pentype != Circle) {
        pen->SetCap(wxCAP_PROJECTING);
    }
    m_graphics.SetPen(*pen);
    m_graphics.DrawLine(m_lastX, m_lastY, x, y);
    m_graphics.SelectObject(wxNullBitmap);
}
On Windows it is shown correctly.
On Linux the pen size changes unexpectedly.
Your help is greatly appreciated.
If the same code behaves differently in wxMSW and wxGTK, then it's probably a bug in wxWidgets itself. However, to fix it, it needs to be reproduced in some simple-to-test way, ideally by making the smallest possible change to the wxWidgets drawing sample and opening a ticket with that change attached as a patch.
To simplify the code as much as possible, I'd recommend:
Getting rid of wxThePenList and just creating the pen directly (see the sketch after this list). It's unlikely that the bug is there, but who knows.
Checking whether it's due to the SetCap() call; this is the most likely candidate IMHO.
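For the first point, a minimal untested sketch of what that could look like, reusing m_isDrawing, m_penSize and m_pentype from the question's code:

wxPen pen(m_isDrawing ? *wxRED : *wxBLACK, m_penSize); // stack pen, no wxThePenList
if (m_pentype != Circle)
    pen.SetCap(wxCAP_PROJECTING); // comment this out to test the second point
m_graphics.SetPen(pen);
m_graphics.DrawLine(m_lastX, m_lastY, x, y);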

SDL_RenderCopy() has strange behavior on Raspberry PI

This is driving me up the wall...
I've got a very simple SDL2 program.
It has an array of 3 SDL_Texture pointers.
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes, and a user event that an SDL_Timer puts in the event queue every second), I just do the following for the timer-triggered event:
idx = (idx + 1) % 3; // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This is systematic:
If I alternate the first 2 images, they are both fine.
If I alternate the 2nd and 3rd images, the 3rd image doesn't show.
If I use more than 3 images, then images 3 and up show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name using SDL_SaveBMP() after it was loaded, to make sure it was loaded into memory OK. The new file is bit-for-bit identical to the original.
This program, without modifications and with the same .bmp files, works fine on OS X (Xcode 5) and Windows (VC++ 2012 Express).
The problem only shows up on the Raspberry Pi.
I have placed explicit error checks on every call that can return a result/error code (not shown in the samples above for brevity), but all of them report "no error".
I have used the latest stable sources from www.libsdl.org and compiled them as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on?
P.S.
Keyboard input doesn't seem to work on my Pi either, but I haven't delved into that yet.
Answering myself, as I finally figured it out...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory split is 64 MB, which is not enough for large displays and double buffering. As suggested in the README, I increased this to 128 MB, and that solved the black-image problem.
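For reference, this is the usual GPU memory split setting in the Pi firmware configuration (assuming the standard /boot/config.txt location; reboot afterwards):

gpu_mem=128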
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
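Adding an account to the input group is a standard usermod call (replace youruser with the actual account name, then log out and back in):

sudo usermod -a -G input youruser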
In short: Caught by my own (too) quick skimming of the documentation.

Write information into an empty window with C++

So I'm building this game engine thingy, and found it to be VERY hard to create some kind of overlay with debug information in the main game window with D3D11, or to draw text at all, so I thought I'd create another window to contain my debug data.
I got the window created fine and all, but I have no idea how to write my debug info into it. I do not want to use the Windows Forms designer, as that would convert my project into a CLR project, which I do not want.
I have been googling now for at least 3 hours (honest) and tried various solutions, but none of them really seemed practical to use / they were not working.
The debug info I'd like to write originates from global float values. An example would be CAM_POS_X, which holds a floating-point value indicating which X coordinate the camera is currently at.
Something like this is desired:
|SiriusAlpha 0.1 Debug window_ |
|Current X position: CAM_POS_X|
|Current Y position: CAM_POS_Y|
|Current Z position: CAM_POS_Z|
|Current YAW: CAM_YAW______|
|Current PITCH: CAM_PITCH___|
|Current FPS: CUR_FPS_______|
Not all of these values are necessarily floating-point variables. They could be strings, doubles, integers or even booleans.
If anyone would be willing to explain to me how to do this in D3D11, so I could skip the whole debug-window shenanigans, I'd be even happier.
Otherwise, I'd be delighted if somebody could explain to me how this is done.
Have you tried TextOut()? Read the article on MSDN. You should already have a device context; the rest is quick and easy.
The TextOut function writes a character string at the specified location, using the currently selected font, background color, and text color.
Printing to a string is trivial.
wchar_t buf[128]; // wide-character buffer for the formatted text
swprintf(buf, 128, L"Current X position: %f", CAM_POS_X);
TextOutW(yourDC, screenXPos, screenYPos, buf, (int)wcslen(buf)); // length in characters, not bytes
I haven't tested this, but from the MSDN documentation this should work fine.
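One caveat worth adding: the device context has to come from somewhere. A hypothetical sketch, with hwndDebug standing in for the debug window's handle:

HDC dc = GetDC(hwndDebug);                   // DC for the debug window
TextOutW(dc, 10, 10, buf, (int)wcslen(buf)); // draw the formatted string
ReleaseDC(hwndDebug, dc);                    // give the DC back when done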

Characters overlapping when they have changed color and are printed backwards

As you can see, the upper dark X's are cut off even though there is space for them.
This happens because they have changed color and are printed backwards (from right to left).
Is this a bug, faulty code, a bad setup on my system or (I doubt it) like it is supposed to be?
Here is the code that generates this output:
#include <Windows.h>
#include <iostream>

// Move the console cursor to column x, row y.
void moveTo(int x, int y) {
    COORD kord = { (SHORT)x, (SHORT)y };
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), kord);
}

// Set the text attribute (color) for subsequent writes.
void setColor(WORD attributes) {
    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), attributes);
}

int main() {
    // Row 0: default color (7), printed right to left.
    for (int i = 9; i + 1; i--) {
        moveTo(i, 0);
        std::cout.put('X');
    }
    // Row 1: default color, printed left to right.
    for (int i = -10; i; i++) {
        moveTo(i + 10, 1);
        std::cout.put('X');
    }
    setColor(8); // dark gray
    // Row 2: color 8, printed right to left -- these are the X's that get cut.
    for (int i = 9; i + 1; i--) {
        moveTo(i, 2);
        std::cout.put('X');
    }
    // Row 3: color 8, printed left to right.
    for (int i = -10; i; i++) {
        moveTo(i + 10, 3);
        std::cout.put('X');
    }
    setColor(7); // back to the default color
    // Rows 4 and 5: the same two passes again with color 7.
    for (int i = 9; i + 1; i--) {
        moveTo(i, 4);
        std::cout.put('X');
    }
    for (int i = -10; i; i++) {
        moveTo(i + 10, 5);
        std::cout.put('X');
    }
    std::cin.get();
    return 0;
}
This is a bug in Windows.
As mentioned in the comments by Hans Passant:
I repro too, VS2008 on Win7. Cool bug. Changing the console font fixes it.
Let's use this for bug isolation. I recognize this font as Petite Terminal, which implies you both most likely configured this project as a Win32 Console Application. The additional repro with GCC confirms this hypothesis, and we will assume, from a practical standpoint, that all of you were getting a 32-bit console application running inside a Windows terminal.
The question becomes why it's writing exactly one additional column of pixels in the context of the default terminal font, color 8, and backwards writing into a console screen buffer.
Specifically, let's break this problem up into its component pieces:
1. When a write is issued, a character is written to a location in the terminal array.
2. When the default color (7) is selected, pixels do not overflow into other buffers within the array.
3. When color 8 is selected, an additional column of pixels is written to the next region of the buffer, which is only visible when the text is recited backwards.
Because of the presence of overspill in (3), this is a bug.
Quoting Raymond Chen:
The console rendering model assumes each character fits neatly inside
its fixed-sized cell. When a new character is written to a cell, the
old cell is overprinted with the new character, but if the old
character has overhang or underhang, those extra pixels are left
behind since they "spilled over" the required cell and infected
neighbor cells. Similarly, if a neighboring character "spilled over",
those "spillover pixels" would get erased.
The set of fonts that could be used in the console window was trimmed
to the fonts that were tested and known to work acceptably in console
windows. For English systems, this brought us down to Lucida Console
and Terminal.
...
"Well, that's stupid. You should've stopped me from choosing a font
that so clearly results in nonsense."
And that's what we did.
Not that I'm blaming Raymond on this one, but he authoritatively illustrates this as a "can't happen."
The selection and testing of console fonts for Windows should have caught this. The fact that it's even an issue at all is an aberration.