Pixel text to 3D mesh conversion library - C++

I can get the text pixels or glyphs via GDI/GDI+; how can I convert them to a 3D mesh? Does any library or source code exist that can be used?
PS: I know about D3DXCreateText, but I'm using OpenGL...

If you work with OpenGL, you can try FTGL. It allows you to generate different polygon meshes from fonts, including extruded meshes, as well as render them:
http://ftgl.sourceforge.net/docs/html/ftgl-tutorial.html
but I am not sure how portable this library is, especially for OpenGL ES...
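For illustration, here is a minimal sketch of generating extruded 3D text with FTGL (the class is FTExtrudeFont in current releases, FTGLExtrdFont in older ones; the font path and sizes below are placeholders):

    #include <FTGL/ftgl.h>

    int main()
    {
        // Assumes an OpenGL context is already current (e.g. via GLUT or SDL).
        FTExtrudeFont font("arial.ttf");  // placeholder font path
        if (font.Error())
            return 1;

        font.FaceSize(80);    // glyph size
        font.Depth(20.0f);    // extrusion depth; this is what makes it 3D

        font.Render("Hello"); // emits the extruded polygon mesh
        return 0;
    }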

Using GDI is definitely not among the best ways to go if you need to obtain glyphs for the text. You could use the FreeType library instead (http://www.freetype.org), which is open-source and portable. It can produce both bitmap and vectorized representations of the glyphs. You will have to initialize a single FT_Library instance in your program, which is later used to work with multiple fonts. After loading a font from a file (TrueType, OpenType, PostScript and some other formats) you'll be able to obtain the geometrical parameters of specific characters and use them to create textures or build primitives with your preferred rendering API, OpenGL in this case.
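As a hedged sketch of that workflow, the following loads one glyph's vector outline with FreeType (the font path and size are placeholders); the contours can then be triangulated or extruded into a 3D mesh:

    #include <ft2build.h>
    #include FT_FREETYPE_H

    int main()
    {
        FT_Library library;               // one instance per program
        if (FT_Init_FreeType(&library))
            return 1;

        FT_Face face;
        if (FT_New_Face(library, "arial.ttf", 0, &face)) // placeholder path
            return 1;

        FT_Set_Pixel_Sizes(face, 0, 48);  // nominal glyph size

        // Load 'A' without rasterizing, so glyph->outline keeps the vectors.
        if (FT_Load_Char(face, 'A', FT_LOAD_NO_BITMAP))
            return 1;

        FT_Outline* outline = &face->glyph->outline;
        // outline->n_contours contours over outline->n_points points;
        // outline->points holds coordinates and outline->tags marks
        // on-curve vs. control points. FT_Outline_Decompose() walks the
        // outline as line/conic/cubic segments, ready for triangulation
        // or extrusion into a mesh.

        FT_Done_Face(face);
        FT_Done_FreeType(library);
        return 0;
    }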

Related

The best way to export 3D mesh animation in order to use it in OpenGL

Suppose that I have a 3D model with animation in, say, Blender. I need to export this model to some file and then use it in an OpenGL app, i.e. without hardcoding animations, but reading them from a file. What format is the best solution?
OpenGL doesn't support any format directly, but the classic OBJ file format lines up very well with drawing with vertex arrays. Basically, OBJ lists all vertices independently of the geometry. This way, several objects can share the same points. All kinds of groupings are also possible.
Also, it is one of the earliest formats to support a wide range of spline curves & surfaces, including Bezier, B-Splines & NURBS.
A basic description can be found here:
http://en.wikipedia.org/wiki/Wavefront_.obj_file
The complete OBJ spec can be found here:
http://www.martinreddy.net/gfx/3d/OBJ.spec
It's not as modern as WebGL, but it's simple, human-readable and widely supported.
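To make the vertex-array fit concrete, here is a hedged, minimal OBJ loader that handles only plain "v" and triangular "f" records (no v/vt/vn triplets, normals, materials or groups):

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Reads "v x y z" and "f a b c" lines; OBJ indices are 1-based,
    // so they are shifted down for OpenGL element arrays.
    bool loadObj(const char* path, std::vector<Vec3>& vertices,
                 std::vector<unsigned int>& indices)
    {
        std::FILE* f = std::fopen(path, "r");
        if (!f) return false;
        char line[256];
        while (std::fgets(line, sizeof(line), f)) {
            Vec3 v;
            unsigned int a, b, c;
            if (std::sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3)
                vertices.push_back(v);
            else if (std::sscanf(line, "f %u %u %u", &a, &b, &c) == 3) {
                indices.push_back(a - 1);
                indices.push_back(b - 1);
                indices.push_back(c - 1);
            }
        }
        std::fclose(f);
        return true;
    }

The two arrays then feed straight into glVertexPointer(3, GL_FLOAT, 0, vertices.data()) and glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, indices.data()).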
What format is the best solution?
OpenGL doesn't care about file formats, so feel free to choose whatever suits your needs best. Due to the rise of WebGL, I started dumping whole Blender scenes into collections of JSON-formatted files.

Loading PNG textures to OpenGL

I'm working on a game, and all of my graphics use magenta as the transparent/magic color.
They are all 32-bit, and the magenta is just for convenience.
Anyway, I would appreciate it if someone could advise me on which library I should use to load my images (I need to load them in GL_RGBA format, both as the internal format and as the format of the supplied pixels).
If only PNG support is necessary, use libpng. DevIL is supposed to be easy, but it's somewhat bloated (it does a lot more than just load images) and internally calls OpenGL functions, which can mess with your own OpenGL logic.
I personally prefer SDL_image, since I'm using SDL in my projects anyway. While not immediately obvious, the SDL_BlitSurface() function can do the conversion from whatever IMG_Load() returns to the necessary pixel format.
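A hedged sketch of that conversion, assuming SDL 1.2 and a little-endian machine (the masks below lay pixels out as R, G, B, A bytes, matching GL_RGBA / GL_UNSIGNED_BYTE):

    #include <SDL.h>
    #include <SDL_image.h>
    #include <SDL_opengl.h>

    // Load any format IMG_Load understands and upload it as an RGBA
    // texture. Assumes an active OpenGL context.
    GLuint loadTexture(const char* path)
    {
        SDL_Surface* loaded = IMG_Load(path);
        if (!loaded) return 0;

        SDL_Surface* rgba = SDL_CreateRGBSurface(SDL_SWSURFACE,
            loaded->w, loaded->h, 32,
            0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);

        // Disable blending on the source so the alpha channel is
        // copied verbatim instead of being blended away.
        SDL_SetAlpha(loaded, 0, 255);
        SDL_BlitSurface(loaded, NULL, rgba, NULL);

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, rgba->w, rgba->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);

        SDL_FreeSurface(rgba);
        SDL_FreeSurface(loaded);
        return tex;
    }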
DevIL can load virtually every file format and can directly create OpenGL textures. It is the easiest way to go.
You should also use a file format which supports an alpha channel (PNG, TGA, ...). Using a "magic color" in 32-bit images is really outdated!
Apart from the other answers mentioning SDL and DevIL, there are two more options to consider:
Use libpng directly. This will probably have the smallest impact on code size if that matters, since you get no bloat for other formats you're not using, no DLLs, etc. (see the sketch after this answer).
Use operating-system texture loading. This can be a nice way to reduce dependencies if you prefer using OS features over external libraries. GDI+ in Windows XP and up has built-in texture loading for a few formats like PNG and JPEG, and I don't know for certain, but other OSs might have similar features. It's pretty simple to hook GDI+ up into OpenGL, and then the OS takes care of your texture loading!
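For the libpng option, newer libpng versions (1.6 and later) ship a simplified read API that keeps the code short; a hedged sketch:

    #include <png.h>
    #include <cstring>
    #include <vector>

    // Decode a PNG straight into a tightly packed RGBA byte buffer,
    // ready for glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, ...).
    bool loadPngRgba(const char* path, std::vector<unsigned char>& pixels,
                     unsigned& width, unsigned& height)
    {
        png_image image;
        std::memset(&image, 0, sizeof(image));
        image.version = PNG_IMAGE_VERSION;

        if (!png_image_begin_read_from_file(&image, path))
            return false;

        image.format = PNG_FORMAT_RGBA;      // force 32-bit RGBA output
        pixels.resize(PNG_IMAGE_SIZE(image));

        if (!png_image_finish_read(&image, NULL, pixels.data(), 0, NULL)) {
            png_image_free(&image);
            return false;
        }
        width  = image.width;
        height = image.height;
        return true;
    }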
There is a very minimalist one-file example of loading a PNG into OpenGL here:
http://tfc.duke.free.fr/coding/src/png.c
Another option is OpenCV.
It does a lot more than just texture loading, but odds are good you'll find use of its other features as well.

Drawing strings in OpenGL

What's the quickest, easiest way to draw text in standard OpenGL?
Text is surprisingly involved in OpenGL.
Take a look at this example from NeHe.
OpenGL does not support drawing text. You need to use some library to render text to a bitmap, and then you can use OpenGL to render the bitmap. FreeType 2 and Pango are good cross-platform low-level solutions. Game-programming libraries such as ClanLib and GUI libraries such as Qt may also have their own ways of rendering text.
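A hedged sketch of that pipeline with FreeType 2: rasterize one glyph and upload its 8-bit coverage bitmap as a (legacy) alpha texture. The font path and size are placeholders, and a current GL context is assumed:

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    GLuint glyphToTexture(const char* fontPath, char c)
    {
        FT_Library lib;
        FT_Face face;
        if (FT_Init_FreeType(&lib)) return 0;
        if (FT_New_Face(lib, fontPath, 0, &face)) return 0;
        FT_Set_Pixel_Sizes(face, 0, 32);

        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) return 0; // rasterize
        FT_Bitmap* bmp = &face->glyph->bitmap; // 8-bit coverage values

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are not 4-byte padded
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bmp->width, bmp->rows, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, bmp->buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return tex;
    }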
It depends on the framework you are working with, as the answer above says. For example, SDL is multi-platform, and one can draw text using a dedicated library alongside SDL:
http://gameprogrammingtutorials.blogspot.com/2010/02/sdl-tutorial-series-part-6-displaying.html
If you're using GLUT, look at the glutStrokeString and glutBitmapString functions in the GLUT documentation.
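A tiny usage sketch (glutBitmapString is a freeglut extension; the raster position is a placeholder):

    #include <GL/freeglut.h>

    // Call from the display callback with an orthographic projection.
    void drawLabel()
    {
        glRasterPos2f(10.0f, 20.0f);  // placeholder screen position
        glutBitmapString(GLUT_BITMAP_HELVETICA_18,
                         (const unsigned char*)"Hello, OpenGL");
    }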
Use textures. Each character is a textured quad, and the texture coordinates enclose the specific character.
Then you can refine this using display lists, generating the raster representation of the string at runtime, outlining, blending...
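A hedged sketch of the texture-coordinate math, assuming the font texture is a 16x16 grid of ASCII glyph cells and is already bound:

    #include <GL/gl.h>

    // Draw one character as a textured quad; the cell is selected by
    // the character's position in the 16x16 grid.
    void drawChar(char c, float x, float y, float size)
    {
        const float cell = 1.0f / 16.0f;  // cell size in UV space
        float u = (c % 16) * cell;        // column in the grid
        float v = (c / 16) * cell;        // row (flip if your texture
                                          // origin is bottom-left)
        glBegin(GL_QUADS);
        glTexCoord2f(u,        v + cell); glVertex2f(x,        y);
        glTexCoord2f(u + cell, v + cell); glVertex2f(x + size, y);
        glTexCoord2f(u + cell, v);        glVertex2f(x + size, y + size);
        glTexCoord2f(u,        v);        glVertex2f(x,        y + size);
        glEnd();
    }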
You can use a platform-specific OpenGL API (e.g. wglUseFontOutlines), but it depends on display lists, which are deprecated in modern (3.x core profile) OpenGL.
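Since the original question was about 3D meshes, it is worth noting that wglUseFontOutlines actually produces extruded 3D glyph geometry. A Windows-only sketch (the font name and extrusion depth are placeholders):

    #include <windows.h>
    #include <GL/gl.h>

    // Build extruded 3D meshes for the first 256 glyphs of a font into
    // display lists; requires a current WGL context.
    GLuint buildFontOutlines()
    {
        HDC hdc = wglGetCurrentDC();
        HFONT font = CreateFontA(48, 0, 0, 0, FW_NORMAL, FALSE, FALSE,
                                 FALSE, ANSI_CHARSET, OUT_TT_PRECIS,
                                 CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
                                 FF_DONTCARE, "Arial"); // placeholder font
        SelectObject(hdc, font);

        GLuint base = glGenLists(256);
        GLYPHMETRICSFLOAT gmf[256];     // per-glyph extents and advances
        wglUseFontOutlines(hdc, 0, 256, base,
                           0.0f,        // deviation from the true outline
                           0.2f,        // extrusion depth (the 3D part)
                           WGL_FONT_POLYGONS, gmf);
        return base;
        // Draw with: glListBase(base);
        //            glCallLists(5, GL_UNSIGNED_BYTE, "Hello");
    }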
OpenGL does not support text rendering directly. You have a variety of options:
- Some OS bindings, such as WGL and AGL, do have limited font support (mostly they render system fonts into bitmaps for use in OpenGL).
- The GLUT toolkit (and similar toolkits) also has some font support (bitmap and stroke).
- You can use a library such as FreeType (mostly a font renderer; you still may wish to use something like Pango for text layout).
- You can use simple textured quads (this is effectively what Quake 1 did).
It depends on what framework you are using. One common method is to render the text to an offscreen buffer and use that as a texture.

Hardware accelerated Unicode text rendering

I want to write a hardware-accelerated text renderer, using FreeType 2 to load the fonts, find the correct glyphs and their sizes, etc.
My plan to do this is to have a large texture containing glyphs (for a given font, size, etc.) in video memory, and a table for each texture defining information about the contents of the texture in system memory.
I can then use the table to build a vertex buffer to render the text.
The problem I'm facing is the construction of the texture: it is not practical to create a texture for every glyph in Unicode, there are just too many. For ASCII, in the past I just built the texture in an image editor and filled out the table as needed myself in advance; here, however, I will need some kind of dynamic system that fetches the glyphs that are needed, but also efficiently caches them to avoid repeated uploads of the same glyph to VRAM (some sort of least-recently-used scheme, I guess).
Another problem is that not all glyphs are the same size. I could split the texture up into a grid big enough for the largest glyphs (which I need some way to accurately work out), which makes fitting glyphs onto the texture and replacing them with new ones (based on the least-recently-used policy or similar) easy; however, that leaves a lot of wasted space, and I'm not sure how to pack them more efficiently without running into fragmentation problems as glyphs are swapped in and out...
Also, I assume updating the texture could stall the graphics hardware if the texture is still being used for some previously drawn text. Is this a correct assumption, and how can I avoid it if that's the case?
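A hedged sketch of the fixed-cell cache described above, using a least-recently-used policy (the glyph rasterization and the glTexSubImage2D upload are left to the caller):

    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <utility>

    struct GlyphSlot { int cellX, cellY; }; // plus per-glyph UVs/metrics

    class GlyphCache {
    public:
        GlyphCache(int cols, int rows) : cols_(cols), rows_(rows) {}

        // Returns the atlas cell for a code point, evicting the least
        // recently used glyph when the atlas is full.
        GlyphSlot acquire(uint32_t codepoint)
        {
            auto it = map_.find(codepoint);
            if (it != map_.end()) {     // hit: move to front of LRU list
                lru_.splice(lru_.begin(), lru_, it->second);
                return it->second->second;
            }
            GlyphSlot slot;
            if ((int)lru_.size() < cols_ * rows_) {  // free cell left
                int n = (int)lru_.size();
                slot = { n % cols_, n / cols_ };
            } else {                                 // evict LRU entry
                slot = lru_.back().second;
                map_.erase(lru_.back().first);
                lru_.pop_back();
            }
            // Caller rasterizes the glyph (e.g. with FreeType) and
            // uploads it into this cell with glTexSubImage2D.
            lru_.emplace_front(codepoint, slot);
            map_[codepoint] = lru_.begin();
            return slot;
        }

    private:
        using Entry = std::pair<uint32_t, GlyphSlot>;
        int cols_, rows_;
        std::list<Entry> lru_;
        std::unordered_map<uint32_t, std::list<Entry>::iterator> map_;
    };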
Text rendering is a much more complex issue than "pasting" some glyphs... Not just more complex, it is very complex: kerning, ligatures, spacing, bidirectional text, vowel marks, and much more...
Why wouldn't you just create the text using a normal text-rendering library like Pango, create a bitmap, and display it as a bitmap on your 3D object (if I understand what you need)?
EDIT: Simple HTML-like markup can be rendered with Pango as well: http://library.gnome.org/devel/pango/unstable/PangoMarkupFormat.html
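A hedged sketch of that route with PangoCairo: render markup into a Cairo image surface whose pixel buffer can then be uploaded as a texture (note Cairo stores premultiplied ARGB, i.e. BGRA bytes on little-endian machines):

    #include <pango/pangocairo.h>

    // Renders Pango markup into an ARGB32 image surface. The caller
    // owns the returned surface and can read its pixels with
    // cairo_image_surface_get_data() before destroying it.
    cairo_surface_t* renderMarkup(const char* markup, int width, int height)
    {
        cairo_surface_t* surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
        cairo_t* cr = cairo_create(surface);

        PangoLayout* layout = pango_cairo_create_layout(cr);
        pango_layout_set_markup(layout, markup, -1);

        PangoFontDescription* desc =
            pango_font_description_from_string("Sans 24"); // placeholder
        pango_layout_set_font_description(layout, desc);
        pango_font_description_free(desc);

        cairo_set_source_rgba(cr, 1, 1, 1, 1);  // white text
        pango_cairo_show_layout(cr, layout);

        g_object_unref(layout);
        cairo_destroy(cr);
        cairo_surface_flush(surface);
        return surface;
    }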
Cairo supports hardware-accelerated rendering to many surface types.
There is a library called FontForge which uses Cairo for rendering, but I haven't tried it myself. You should check it out and let me know how it goes :-)

DirectX Font tutorial that doesn't use GDI

Does anyone have any tutorials/info for creating and rendering fonts in native DirectX 9 that doesn't use GDI (e.g. doesn't use ID3DXFont)?
I'm reading that this isn't the best solution (due to accessing GDI), but what is the 'right' way to render fonts in DX?
ID3DXFont is a great thing for easy-to-use, early debug output. However, it does use GDI for font rasterization (not hardware accelerated), and there is a significant performance hit (try it, it's actually very noticeable). As of DirectX 11, though, fonts will be rendered with Direct2D and be hardware accelerated.
The fastest way to render text is to use what are called "bitmap fonts". I would explain how to do this, except that there are a lot of different ways to implement this technique, each differing in complexity and capability. It can be as simple as a system that loads a pre-created texture and draws the letters from that, or a system that silently registers a font with Windows and creates a texture in memory at load time (the engine I developed with a friend did this, and it was very slick). Either way, you should see a very noticeable performance increase with bitmap fonts.
Why isn't this a good solution?
Mixing GDI rendering and D3D rendering into the same window is a bad idea.
However, ID3DXFont does not do that. It uses GDI to rasterize the glyphs into a texture, and uses that texture to render the actual text.
About the only alternative would be using another library (e.g. FreeType) to rasterize glyphs into a texture, but I'm not sure if that would result in any substantial benefits.
Of course, for simple (e.g. non-Asian) fonts you could rasterize all glyphs into a texture beforehand, then use that texture to draw text at runtime. This way the runtime does not need any font-rendering library; it just draws quads using the texture. This approach does not scale well to large font sizes or fonts with lots of characters, and it would not handle complex typography very well (e.g. where letters have to be joined, etc.).
With DirectX, the correct way to render standard fonts is with GDI.
However, if:
- you want to support cross-platform font rendering,
- you need proper support for internationalization, including Far Eastern languages where maintaining a glyph for every character in a font is impractical,
- and/or you want to distribute your own fonts and render them without "installing" them...
then libfreetype might be what you are looking for. I don't claim it's easy: it's a lot more complex than using the native font API.
Personally I think that ID3DXFont is the way to go.
If you really want to make your own font routines, I suggest you look at:
http://creators.xna.com/en-us/utilities/bitmapfontmaker
You can use this to create a bitmap with all the characters printed on it. Then it's just a matter of loading the texture and blitting the relevant chars onto the screen in the right place. (This is what XNA uses for its font drawing.)
It's a lot more work, but you don't need the font to be installed on the target PC, and you have the advantage of being able to go into Photoshop and edit the font's appearance there.