Is there a C++ library that takes a string and a font file and returns the pixel representation of that string using that font? For example, I wrote a short PHP script that draws a single letter using Courier and then pulls out the individual pixels:
I can convert that to an array of color intensity codes and use it, but it means I need to hardcode every character I want to use, in every font, and I lose things like ligatures and intelligent kerning that only come up when multiple characters are drawn together. Is there a way in C++ to just do this directly, given the TTF file for the font I want to use? I'm using Linux, so I can't depend on Windows API functions like GetGlyphOutline.
There's a C library that can be used to do exactly this: FreeType 2. It is relatively easy to use, and if you want to keep things portable and relatively lightweight, FreeType 2 is the way to do it. On a Linux system it is probably already installed.
Also, cross-platform GUI toolkits normally provide some kind of "font" class that can be used to do what you want. For example, in Qt 4 you could use QFont to draw text onto a QImage and then extract the individual pixels. Operating systems (the ones that have a concept of a "font") normally provide some kind of font-manipulation API as well.
This cannot be so hard, but I simply can't manage it. Neither Google nor Stack Overflow nor the documentation of Ubuntu or Ghostscript was helpful.
I am generating PostScript from C++. I place the text word by word to handle line wrapping. To decide where to place the next word and whether it fits into the current line, I rely on FreeType to measure the "advance" of each glyph.

The text is a mix of normal text and source code, so I have two fonts involved. I chose Helvetica for normal text and Courier for source code, since both are readily available in PostScript and don't need to be embedded. The problematic part of my PostScript output is not significantly more complicated than
(Helvetica) findfont 11 scalefont setfont
40 100 moveto (hello world) show
123 100 moveto (hello again) show % I care for the first number
Of course, there is a proper EPS header etc.

I did not manage to locate the font files on my Ubuntu 16.04 system, so I downloaded best guesses from free font websites. It turns out that they apparently differ from those used by my PostScript interpreter. At least, after converting to a PDF with epstopdf (which comes with LaTeX as far as I know), I see that my Helvetica is too wide and my Courier too narrow, so that word spacing is off, up to the point that long words overlap with the subsequent word.
My question: how can I get font width measurements matching those of the postscript interpreter?
I am not even sure whether the question is well-posed, but somehow I do assume that there is one and only one reference Helvetica font, so that PostScript output looks the same on all systems and printers.
Making freetype load the correct fonts would probably be the easiest solution, but I do not know how to find the files.
A source for downloading the exactly matching fonts would also solve the problem, although having them twice would be odd.
Even better, asking a PostScript interpreter like Ghostscript for the ground truth would be preferable, but the Ghostscript library documentation is sparse and I did not find any examples.

I could create a PostScript file that prints the width of the text obtained with textwidth, convert it to a PDF, and extract the text. That would be ugly and slow, and I'd like to go for a proper C++ solution.
Progress in any of these or maybe other directions would be absolutely great!
The fonts you are using should have a .afm (Adobe Font Metrics) file, which you can read the font metrics from if it's a PostScript font. It's also true that the 'base 13' fonts should be the same in terms of metrics across all PostScript implementations. Of course, if you are using a TrueType font to get the metrics from, then they may well differ from a PostScript font.
You haven't said which PostScript interpreter you are using; it may be that it's not using a standard font, but my guess is you are using a TrueType font from your Ubuntu system which doesn't quite match the PostScript fonts used by your 'interpreter'. If memory serves, you can look in /etc/fonts/fonts.conf to see where your fonts are stored.
FWIW Ghostscript ships with implementations of the base 13 fonts which are matched to the Adobe fonts; PostScript interpreters should match those. We don't, however, ship the AFM files, but you can load the fonts into Fontographer, or use FreeType, or simply get the advance width by using stringwidth (not textwidth) in a PostScript program.
I wouldn't have said Ghostscript's documentation is 'sparse'. Difficult to find what you want, maybe, but there's lots of documentation there. Use.htm alone, the basic information, is a 265 KB HTML file.
The final alternative, of course, is to embed the fonts you are actually using in the PostScript program itself; then you know that they match the metrics you used to create the PostScript in the first place. As with PDF, this is highly recommended, especially for fonts outside the base 13, as it's the only way to get reliable output.
I've read Rendering Vector Art on the GPU on rendering shapes that are defined by quadratic/cubic Bezier curve boundaries. I was hoping to build off of this to create text that fills in as if it were stroked by a pen or brush somehow. (Any advice on how to do this is welcome.)
However, I'm a little unsure of where to get my hands on fonts / shapes that have the format specified in this paper (arrays of points representing quadratic/cubic Beziers).
Does anyone know of a way of getting font/vector drawings that are in this format? The authors of the paper mention TrueType fonts, but according to
TrueType Font Parsing in C
it looks like parsing TrueType fonts might involve a lot more than that. I know there are also formats like .svg, but I'm not sure where to start with that, since an SVG holds so much more than what I'm looking to get out of it.
As an example, is there some type of file format that I could convert a .svg file or truetype file to, perhaps by using something inkscape's export function, such that the resulting format would be possible to parse for an array of points and control points?
I accepted an answer below, but for anyone interested in this, you should check out
https://github.com/quarnster/TTF
It's pretty much exactly what I was looking for. The code works great, but it's a bit hard to understand how to use it. It makes more sense if you read about the TTF format, for example in An Introduction to TrueType Fonts: A Look Inside the TTF Format.
I suggest using the cross platform library FreeType (http://www.freetype.org/). FreeType loads font files and, among other things, provides the bounding curves of glyphs in the typeface. Specifically, you should look into the function FT_Outline_Decompose, which gives exactly what you want.
Having obtained the text pixels or glyphs via GDI/GDI+, how do I convert them to a 3D mesh? Does any library or source code exist that can be used for this?

PS: I know about D3DXCreateText, but I'm using OpenGL...
If you work with OpenGL, you can try FTGL. It allows you to generate different polygon meshes from fonts, including extruded meshes, as well as render them:
http://ftgl.sourceforge.net/docs/html/ftgl-tutorial.html
but I am not sure how portable this library is, especially for OpenGL ES...
Using GDI is definitely not among the best ways to go if you need to obtain glyphs for the text. You could use the FreeType library instead (http://www.freetype.org), which is open source and portable. It can produce both bitmap and vectorized representations of glyphs. You initialize a single FT_Library instance in your program, which is later used to work with multiple fonts. After loading a font from a file (TrueType, OpenType, PostScript and some other formats) you'll be able to obtain the geometrical parameters of specific characters and use them to create textures or build primitives with your preferred rendering API, OpenGL included.
I am looking for a method, software or library for simple image analysis.
The input image will be a white-colored background, and some random small black dots on it.
I need to generate a .txt file that represents these dots' coordinates. That is, if there are three dots in the image, the output will be a text file that somehow represents three coordinates: (x1,y1), (x2,y2), and (x3,y3).

I have searched the web for hours and didn't find anything appropriate; all I found were complex programs for image processing.
I've been told that it's easy to write code for this mission in MATLAB, but I'm unfamiliar with MATLAB.
Can this be done easily with C++, Java or C#?
Any good libraries?
It is quite simple in any language. Depending on the form of your input, you probably need to go over all of it (assuming it is a simple matrix, simply have two nested loops, one for the x coordinate and one for the y coordinate). Whenever you encounter a black dot, simply output the current indices, which are the x and y coordinates of the dot.

As for libraries, anything other than something to decode your input into such a matrix (e.g. a JPEG decoder) would be overkill.
I don't think you need an image-processing library for this kind of problem (somebody correct me if I am wrong), since those libraries focus on image manipulation rather than recognition. What you will need is knowledge of the image format you are supporting (how it is stored, how it is interpreted, etc.) and basic C file-system functions.

For example, if you are expecting an uncompressed format like BMP, you simply calculate the padding for each scan line, then read the scan lines one by one and each pixel in the line one by one, keeping two counters: one for the row and one for the column. If the pixel is not white, then you have your coordinate.
This is something which should be very easy for you to do without any external software; something like
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        if (pixels[y][x] == BLACK)
            printf("(%d, %d)\n", x, y);
    }
}
would work.
The bitmap file format is quite easy to read.
http://en.wikipedia.org/wiki/BMP_file_format
You could just stream the bytes into an array using this info. I've written a few BMP readers; it is a trivial matter.
Also, although I cannot vouch for its ease of use as I've never used it before, I've heard that EasyBMP works fine too.
The CImg library should help you. From the CImg FAQ:
1.1. What is the CImg Library?

The CImg Library is an open-source C++ toolkit for image processing. It mainly consists in a (big) single header file CImg.h providing a set of C++ classes and functions that can be used in your own sources, to load/save, manage/process and display generic images. It's actually a very simple and pleasant toolkit for coding image processing stuffs in C++: just include the header file CImg.h, and you are ready to handle images in your C++ programs.
I recently saw that SFML can load a font from memory by receiving a const char*.
How does this represent a font?
I also saw that the arial.hpp file contains only a huge array of numbers (chars), which you can feed into the LoadFont function.

The Font class in SFML also holds an image, but I don't know how it gets set, since there's no load/set function for it, and images are made out of unsigned chars, not char arrays like the Arial font data.
How does all this stuff fit together, and how do I create and load a font?
(sfml specific steps would be nice also)
As far as I can tell, there is no LoadFont function in SFML. There are Font::LoadFromFile and Font::LoadFromMemory. I'll assume you're talking about those.
From the documentation for Font::LoadFromMemory:
Load the font from a file in memory.
It is for cases where you have loaded something into memory yourself, i.e. when you are not using the normal filesystem. Maybe you have all of your data in .zip files, so standard file IO won't be useful. You load the font into a block of memory (the aforementioned array of bytes) and pass it to this function.
The 2.0 documentation is more complete, as it lists the font formats that are accepted.