I'm writing a text box from scratch in order to optimize it for syntax highlighting. However, I need to get the width of characters in pixels, exactly, not the garbage Graphics::MeasureString gives me. I've found a lot of material on the web, specifically this, but none of it seems to work, or it does not account for tabs. I need the fastest way to measure the exact dimensions of a character in pixels, including tab spaces. I can't seem to figure this one out...
I should mention I'm using C++/CLI on the CLR, with GDI+.
Here is my measuring function. In another function the RectangleF it returns is drawn to the screen:
RectangleF TextEditor::MeasureStringWidth(String^ ch, Graphics^ g, int xDistance, int lineYval)
{
    RectangleF measured;
    Font^ currentFont = gcnew Font(m_font, (float)m_fontSize);
    StringFormat^ stringFormat = gcnew StringFormat;

    // Layout rectangle the single character is measured within.
    RectangleF layout = RectangleF((float)xDistance, (float)lineYval, 35, m_fontHeightPix);

    // Measure only the first (and only) character of the string.
    array<CharacterRange>^ charRanges = { CharacterRange(0, 1) };
    array<Region^>^ strRegions;

    stringFormat->FormatFlags = StringFormatFlags::DirectionVertical;
    stringFormat->SetMeasurableCharacterRanges(charRanges);

    strRegions = g->MeasureCharacterRanges(ch, currentFont, layout, stringFormat);
    if (strRegions->Length >= 1)
        measured = strRegions[0]->GetBounds(g);
    else
        measured = RectangleF(0, 0, 0, 0);

    return measured;
}
I don't really understand what MeasureCharacterRanges' layoutRect parameter does. I modified the code from Microsoft's example so it only works with, or only measures, one character.
You should not be using Graphics for any text rendering.
Starting with .NET Framework 2.0, use of Graphics.MeasureString and Graphics.DrawString was deprecated in favor of a newly added helper class, TextRenderer:
TextRenderer.MeasureText
TextRenderer.DrawText
The GDI+ text renderer has been abandoned and hasn't gotten any improvements or fixes in over 10 years; it is also software-rendered.
GDI rendering (which TextRenderer is a thin wrapper over) is hardware-accelerated and continues to get rendering improvements (ligatures, Uniscribe, etc.).
Note: GDI+ text rendering is what Graphics.DrawString and Graphics.MeasureString wrap.
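As a minimal C++/CLI sketch of the suggested approach (the helper name and flag choice are my assumptions, not part of the original answer), you could measure a single character, or a tab, like this:
// Requires a reference to System.Windows.Forms.dll.
using namespace System;
using namespace System::Drawing;
using namespace System::Windows::Forms;

// Hypothetical helper: measure one character (or "\t") with TextRenderer.
// NoPadding removes the extra left/right padding GDI text adds by default;
// ExpandTabs makes a tab character report its expanded width.
Size MeasureGlyph(Graphics^ g, String^ ch, Font^ font)
{
    TextFormatFlags flags = static_cast<TextFormatFlags>(
        TextFormatFlags::NoPadding | TextFormatFlags::ExpandTabs);
    return TextRenderer::MeasureText(g, ch, font,
                                     Size(Int32::MaxValue, Int32::MaxValue),
                                     flags);
}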
Here's a comparison of the measure results of Graphics and TextRenderer:
The GDI+ measurements aren't "wrong"; they do exactly what they intend - return the size the text would be if it were rendered as the original font author intended (which you can achieve using anti-aliased rendering):
But nobody really wants to look at text the way the font designer intended, because that causes glyphs to not line up on pixel boundaries, making the text look fuzzy (e.g. as you see on a Mac). Ideally the text should be snapped to line up on actual pixel boundaries (i.e. Windows' grid fitting).
Bonus Reading
See my answer over here, with more information.
There's MeasureCharacterRanges that is like MeasureString, but more powerful and more accurate (and also slower).
We are writing a piece of software which downloads tiles from the internet from WMS servers (these are map servers, and they provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they give you just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How to identify whether an image contains only 1 color (white), or not.
What we have tried till now is the following:
Create a QImage, loop over every pixel of it, and see if it differs from white. This is extremely slow, and since we want this to be a more or less real-time application, this idea sadly does not work.
Check whether the downloaded image's size is the same as an empty image's size, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do a "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the users' perspective that tiles are just appearing and disappearing ...
Request transparent images from the WMS servers, but due to some OpenGL mishaps when rendering, these images appear as plain black on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit for those suggesting to sample pixels only from a few points in the map:
The two images above (yes, the left image contains a very tiny piece of Norway in the corner) would be wrongly eliminated if we assumed the image is entirely white based on sampling only a few points, in case none of those points happens to touch any color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way is to uncompress the PNG bytes and check each pixel in a tight loop.
The most usual cause of an image-processing routine being "slow" is making a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you raw image bytes via the scanLine method or the bits method:
Something like this might work:
const int bytes_per_line = qimage.bytesPerLine();
const int height = qimage.height();

// A row of 0xff bytes to compare each scan line against
// (assumes a 32-bit format where white is 0xFFFFFFFF, e.g. RGB32/ARGB32).
unsigned char white_row[MAX_WIDTH * 4];
memset(white_row, 0xff, sizeof(white_row));

bool allWhite = true;
for (int row = 0; allWhite && (row < height); row++)
{
    // constScanLine avoids the deep copy (detach) that non-const scanLine can trigger.
    const unsigned char* row_data = qimage.constScanLine(row);
    allWhite = !memcmp(row_data, white_row, bytes_per_line);
}
The above loop terminates pretty fast the moment a non-white pixel is encountered.
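A self-contained variant of the same idea (the function name and the choice to convert to ARGB32 first are my assumptions) could look like this:
#include <QImage>
#include <cstring>
#include <vector>

// Hypothetical helper: returns true if every pixel of the tile is opaque white.
// Converting to ARGB32 first guarantees 4 bytes per pixel, all 0xff for white.
bool isAllWhite(const QImage& tile)
{
    const QImage img = tile.convertToFormat(QImage::Format_ARGB32);
    const int width_bytes = img.width() * 4;   // compare only real pixels, not padding
    const std::vector<unsigned char> white_row(width_bytes, 0xff);

    for (int row = 0; row < img.height(); ++row)
    {
        if (std::memcmp(img.constScanLine(row), white_row.data(), width_bytes) != 0)
            return false;   // found a non-white pixel: keep this tile
    }
    return true;
}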
I was using the following pattern to record an enhanced meta-file for a later playback:
POINT pts[] = {
//.....
};
::SelectObject(hEnhDC, ::GetStockObject(LTGRAY_BRUSH));
::Polygon(hEnhDC, pts, _countof(pts));
Now I'm forced to use GDI+ to provide anti-aliasing, so I'm trying to convert that code sample:
Gdiplus::Point pts[] = {
    //...
};

Gdiplus::Graphics grx(hEnhDC);
Gdiplus::Pen pen(Gdiplus::Color(255, GetRValue(clrPen), GetGValue(clrPen), GetBValue(clrPen)), PEN_THICKNESS);

// 'brush' is what I'm missing - it should correspond to the LTGRAY_BRUSH stock object
grx.FillPolygon(&brush, pts, _countof(pts));
grx.DrawPolygon(&pen, pts, _countof(pts));
The issue is: how do I convert a stock-object HBRUSH from ::GetStockObject(LTGRAY_BRUSH) into a GDI+ Brush object?
EDIT: Guys, thank you for all your suggestions, and I apologize for not providing more details. This question is not about getting the RGB color triplet from the stock brush. I can do all that with the GetSysColor function, or with the LOGBRUSH approach you showed below.
The trick lies in the first sentence above. I am recording an enhanced metafile that may be played back on a separate computer, so I cannot hard-code colors into it.
Let me explain. Say, the first GDI example (let's simplify it down to a triangle with a gray fill):
POINT pts[] = {
{100, 100,},
{100, 120,},
{120, 100,},
};
::SelectObject(hEnhDC, ::GetStockObject(LTGRAY_BRUSH));
::Polygon(hEnhDC, pts, _countof(pts));
If I then call GetEnhMetaFileBits on that meta-file, I'll get the following data:
So as you see, the EMR_SELECTOBJECT record in that recorded meta-file specifies LTGRAY_BRUSH = 0x80000001, which will be properly substituted with the actual color when that meta-file is played back on the target system.
And that's what I'm trying to achieve here with GDI+. For some reason it only seems to support hard-coded color triplets in its Brush class. That's why I asked.
Otherwise, one solution is to parse the enhanced meta-file's raw data. (For GDI+ it is a much more complex structure, though, that also involves parsing EMR_GDICOMMENT records.) And then substitute the needed color on the target system before the GDI+ meta-file is played back. But that involves writing a lot of code, which I was trying to avoid at this stage ...
I’m afraid you can’t easily convert it.
A simple workaround is to create a GDI+ solid brush with the same color.
See this spec for the color values of the GDI stock objects; that particular brush has color #C0C0C0.
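For illustration, a minimal sketch of that workaround (it queries the stock brush's LOGBRUSH at record time instead of typing the constant, though the resulting metafile still carries a fixed color; grx and pts are from the question's snippet):
// Look up the color of the stock brush on this machine...
LOGBRUSH lb = {};
::GetObject(::GetStockObject(LTGRAY_BRUSH), sizeof(lb), &lb);

// ...and build an equivalent GDI+ solid brush from it.
Gdiplus::SolidBrush brush(Gdiplus::Color(255,
    GetRValue(lb.lbColor), GetGValue(lb.lbColor), GetBValue(lb.lbColor)));

grx.FillPolygon(&brush, pts, _countof(pts));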
I have created a tic-tac-toe game for school, and I am basically finished. The only thing I have left is that when someone wins, I want a box to pop up on the screen and say in big text "X Wins" or "O Wins", depending on who won.
I've found that drawing text in OpenGL is very complicated. Since this isn't crucial to the assignment, I'm not looking for anything complicated, and I don't need it to look super nice. I would mostly like to just modify my existing code. Also, I want the size of the text to be variable-driven so it scales when I resize the window.
This is what my text drawing function currently looks like. It draws it really small.
Note: mText is an int [2] data member that holds where I want to draw the text
void FullGame::DrawText(const char *string) const
{
    glColor3d(1, 1, 1);
    void *font = GLUT_BITMAP_TIMES_ROMAN_24;

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);

    // mText holds the raster position where the text should start.
    glRasterPos2d(mText[0], mText[1]);

    int len = (int)strlen(string);
    for (int i = 0; i < len; i++)
    {
        glutBitmapCharacter(font, string[i]);
    }

    glDisable(GL_BLEND);
}
Short answer:
You are drawing bitmap characters, which cannot be resized since they are specified in pixels. Try using a stroke character or string with one of the following functions: glutStrokeCharacter, glutStrokeString. Stroke fonts can be resized using glScalef.
Long answer:
Actually, instead of printing the text character by character you could have used:
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24, "Text to appear!");
Here the font size is in pixels, so scaling will not work with direct usage.
Please note that glutBitmapString is introduced in FreeGLUT, which is an open source alternative to the OpenGL Utility Toolkit (GLUT) library. For more details in font rendering: http://freeglut.sourceforge.net/docs/api.php#FontRendering
I strongly advise using FreeGLUT rather than the original GLUT:
1 - GLUT's last release was before the year 2000, so it has some missing features: http://www.lighthouse3d.com/cg-topics/glut-and-freeglut/
2 - Two of the most common GLUT replacements are OpenGLUT and freeGLUT, both of which are open source projects.
If you decide using FreeGlut you should have the following include:
#include <GL/freeglut.h>
Having said that, all you need is to scale the text, so you can use glutStrokeString, which is more flexible than a bitmap string, e.g. it can be resized visually. You can try:
glScalef(0.005,0.005,1);
glutStrokeString(GLUT_STROKE_ROMAN, (unsigned char*)"The game over!");
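Since stroke text is drawn at the current model-view position, a minimal sketch (the position and scale values here are just placeholders) would wrap it in a translate/scale pair, for example:
glPushMatrix();
    // Move to where the text should appear, in your current coordinate system.
    glTranslated(mText[0], mText[1], 0.0);
    // Scale the stroke font down; tie this factor to the window size so the
    // text grows and shrinks when the window is resized.
    glScalef(0.005f, 0.005f, 1.0f);
    glutStrokeString(GLUT_STROKE_ROMAN, (unsigned char*)"X Wins");
glPopMatrix();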
Hope that helps!
The easiest way is to generate a texture for each "win" screen and render it on a quad. It sounds like you're not concerned about printing arbitrary strings, only two possible messages. You can draw the textures in Paint or whatever, and if you're not that worried about quality, the textures don't even have to be that big. Easy to implement and totally resizable.
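As a rough sketch of that idea (assuming the texture has already been created and uploaded with glTexImage2D, and that x0/y0/x1/y1 are the quad corners in your coordinate system):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, winTexture);   // hypothetical texture id
glColor3d(1, 1, 1);

glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2d(x0, y0);
    glTexCoord2f(1.0f, 1.0f); glVertex2d(x1, y0);
    glTexCoord2f(1.0f, 0.0f); glVertex2d(x1, y1);
    glTexCoord2f(0.0f, 0.0f); glVertex2d(x0, y1);
glEnd();

glDisable(GL_TEXTURE_2D);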
I'm using DirectWrite to render some text to a window. Everything seems to work except positioning when using different font sizes: I'd expect two texts with font sizes v1 and v2, both drawn at (x, y) = (0, 0), to be at the top left, but as you can see:
neither "Test" nor "X" are really at the top left.
Is there a way to make this work?
Welcome to the world of fonts. Fonts are probably the most difficult thing to work with, because there are surprises in the fonts themselves (there are so many new standards that are supposed to solve everything but just add confusion, because almost no font supports them 100%; even some 'classic' fonts have partial or bad information in them). GDI, GDI+, and DirectDraw don't draw fonts at exactly the same pixel positions because of math, coordinate rounding, anti-aliasing... (you get one more bonus if you do the math with FreeType).
When you try to print the font there are other problems. So the only way around this, for me, is: don't even try to draw fonts at exact pixel coordinates. Do your job of drawing fonts, pictures, and lines on the screen so that they render well, and do your best to convert them to printing coordinates for export, but never expect to control individual pixels in fonts; everything is a rounded approximation.
PS: Don't trust the internal fields in fonts. On Arial they are good; on other fonts some are missing or initialised to zero, and the "fun" part is that it's not always the same fields that are missing - it depends on the font. You can only use the fields if you test them first, font by font. Yes, fonts are fantastic!
The term #evilruff is referring to is called 'internal leading'. You can use IDWriteFontFace::GetMetrics or possibly IDWriteFontFace::GetDesignGlyphMetrics to get this value (for GetMetrics, the value you're looking for is most likely metrics.ascent - metrics.capHeight).
The values here are in font design units, not pixels (of any sort). You can convert these values to em units by dividing by metrics.designUnitsPerEm; font sizes in DirectWrite are specified as the em size (in DIPs), so if you multiply the values in ems by the font size, you get the values in DIPs, and from there pixels.
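A small sketch of that conversion (fontFace and fontEmSize are assumed to come from your existing DirectWrite setup):
DWRITE_FONT_METRICS metrics = {};
fontFace->GetMetrics(&metrics);

// The metrics.ascent - metrics.capHeight value mentioned above, converted
// from design units to DIPs for the font size in use.
float internalLeading =
    (float)(metrics.ascent - metrics.capHeight) * fontEmSize / metrics.designUnitsPerEm;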
I'm assuming you are using an IDWriteTextLayout in conjunction with DrawTextLayout (rather than creating your own DWRITE_GLYPH_RUN). IDWriteTextLayout aligns glyphs to their layout cell (including the full ascent and line gap), not to the glyph ink, and this is true of pretty much all text layout engines, be they web browsers, word processors, or simple edit controls. If they did not (instead aligning to the top of the letters), then diacritics in words like Ťhis would be clipped.
If you always want to align to the ink, create an IDWriteTextLayout, call IDWriteTextLayout::GetOverhangMetrics, and then call DrawTextLayout with an origin equal to the negative of DWRITE_OVERHANG_METRICS::left and top. If you want to always align to the cap height (so that "hello" and "Hello" both draw at the same vertical coordinate), then Eric's approach will work.
I would like to create a fake "explosion" effect in SDL. For this, I would like the screen to go from what it is currently, and fade to white.
Originally, I thought about using SDL_FillRect like so (where explosionTick is the current alpha value):
SDL_FillRect(screen , NULL , SDL_MapRGBA(screen->format , 255, 255 , 255, explosionTick ));
But instead of a reverse-fading rectangle, it shows up completely white with no alpha. The other method I tried involved using a fullscreen bitmap filled with a transparent white (with an alpha value of 1) and blitting it once for each explosionTick, like so:
for(int a=0; a<explosionTick; a++){
SDL_BlitSurface(boom, NULL, screen, NULL);
}
But this ended up being too slow to run in real time.
Is there any easy way to achieve this effect without losing performance? Thank you for your time.
Well, you need blending, and AFAIK the only way SDL does it is with SDL_BlitSurface. So you just need to optimize that blit. I suggest benchmarking these:
try to use SDL_SetAlpha to use per-surface alpha instead of per-pixel alpha (see the sketch after this list). In theory it's less work for SDL, so you may hope for some speed gain. But I never compared them, and I had some problems with this in the past.
you don't really need a fullscreen bitmap, just repeat a thick row. It should be less memory-intensive and there may be a cache gain. Also, you can probably fake some smoothness by doing half the lines on each pass (fewer pixels to blit, and it should still look like a global screen effect).
for optimal performance, verify that your bitmap is in the display format. Check SDL_DisplayFormatAlpha, or possibly SDL_DisplayFormat if you use per-surface alpha.
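For the first point, a minimal SDL 1.2 sketch of the per-surface-alpha fade (the surface creation details are assumptions) could look like this:
// Create a plain white surface once, converted to the display format.
SDL_Surface* tmp = SDL_CreateRGBSurface(SDL_SWSURFACE,
                                        screen->w, screen->h, 32, 0, 0, 0, 0);
SDL_FillRect(tmp, NULL, SDL_MapRGB(tmp->format, 255, 255, 255));
SDL_Surface* boom = SDL_DisplayFormat(tmp);
SDL_FreeSurface(tmp);

// Each frame: set the per-surface alpha to the current fade level and blit once.
SDL_SetAlpha(boom, SDL_SRCALPHA, (Uint8)explosionTick);
SDL_BlitSurface(boom, NULL, screen, NULL);
SDL_Flip(screen);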