I have some experience rendering text in OpenGL, where the approach is to use font glyphs (which can be created through libraries like FreeType) and then draw individual letters by pulling them from a texture using texture coordinates. However, I wanted to try a more creative font and was thinking about how to render cursive fonts.
I noticed a lot of the fonts on here are cursive, and I have been unable to find any information on rendering cursive fonts. I realize that a texture-lookup approach will probably not work in the same way. I am looking for some advice to point me in a direction, since I have been unable to find one.
https://www.fontsquirrel.com/fonts/list/popular
The only thing I can think of is to come up with a set of Bezier curves to represent the entire sentence and then render it. This would make it hard to render text dynamically, since the curves would need to be connected in real time to maintain smoothness.
Has anyone had any success with rendering cursive fonts?
Cursive fonts work in the same way as non-cursive fonts do. The only difference is that the axis-aligned bounding boxes of neighboring glyphs might overlap. In theory, this can happen with any font - not only cursive ones.
A glyph is not characterized only by its width and height. It also carries additional information that is used for laying out text. In FreeType, these are bearings and advance distances (see the documentation). The advance distance tells you how much the text cursor will advance, i.e., where the next glyph will start. The bearing tells you how much space to leave blank between the current text cursor and the actual glyph. For very skewed glyphs, this bearing might be negative, i.e., the glyph starts left of the cursor position. Similarly, the advance distance may be smaller than the glyph width. This allows neighboring glyphs to intertwine.
Btw, different APIs call these font metrics differently. E.g., DirectWrite calls them overhangs.
Effectively what you have to do is define a relationship between characters, and use visual variations of the same characters so that they appear linked.
For example, a cursive lowercase 'L' next to a cursive lowercase 'O' will link from the bottom of the L to the top of the O; whereas if the letter after the L was, let's say 'i', then it would have to link the characters at the bottom.
If you're intending on some predetermined text, then you can of course also just manually arrange things. If you're looking for a real-time solution, you would need the above method.
Take a look at this page and perhaps some of the libraries found in the "implementations" section, which should point you in the direction of ready-to-use assets.
Related
I'm working with OpenGL and Qt. I render a scene in an OpenGLWidget. When hovering over objects in the scene, I would like to display a box near the selected object with some text. I have already implemented the selection of the object.
I thought of two possible approaches.
Place a widget (such as a QLabel) above the OpenGLWidget in
which the scene is rendered.
Render the text in a quad directly in OpenGL.
Which of the two approaches would you recommend, and could you please give me some suggestions on implementation? Alternatively, you could recommend another approach. Thanks!
Hi @Artic, I am not a Qt expert so I can't give you information on widgets, but I can give you some pointers for creating a label with OpenGL. Giving a full implementation is tricky here because it depends a lot on how you want to display the text, but I'll try to outline some of your options.
To render text in OpenGL most people go with a technique known as bitmap fonts, see more here:
https://learnopengl.com/In-Practice/Text-Rendering
The concept of bitmap fonts is fairly straightforward: all characters are pre-rasterized to a texture, and then you sample from the part of the texture corresponding to each character you need. You build your label out of quads, each textured with the part of the bitmap you sampled for that character.
Signed distance fields essentially use the same technique, but the pre-rasterized texture of characters is rendered using signed distance fields, which deal with some of the issues that standard bitmap fonts have.
In basic terms, SDF works by generating a special texture, or image, of the font that stores the distance from the edge of each character to its centre, using the colour channels of the image to record the data.
If you use signed distance fields it won't be enough to just sample from your bitmap, fonts rendered this way require extra work (typically done using a shader program) to produce the correct rendering.
Once you have a way of generating a label you can decide if you want to display it in screen space or in world space.
If you want to display it in world space (where the label is hovering over the model or item) you will need to do more work if you want that label to always face the camera and this technique is called billboarding.
You could also render your text "on the fly" if you just want to render some text to the screen in screen space. You can use a library like SDL_ttf.
See: http://lazyfoo.net/tutorials/SDL/16_true_type_fonts/index.php
In this example you use SDL_ttf to render a string of text to a surface with dimensions of your choosing, you can then create an OpenGL texture from that surface and render it to the screen.
Sorry if this information is a bit broad, I would need a more specific question to give you further implementation details.
For an implementation, I would evaluate the pros and cons based on what you need. If you haven't implemented a system for rendering text before, it's probably best to stick with something simple. There are more techniques for text rendering than I have listed here, such as turning text into polygons, and other libraries which attempt to deal with some of the issues of traditional font rendering techniques, but you probably don't need anything complicated.
For a recommendation on which to use, I would go with the technique you feel most comfortable with. Typically, doing things from scratch in OpenGL will take more time, but it can provide you with a nicer set of functionality to use in the future. However, if Qt already has something nice for rendering a label (such as the widget you mentioned), it is probably worth taking the time to learn how to use it, as it may yield faster results, and you don't want to reinvent the wheel if you don't need to. On that note, though, doing things from scratch with OpenGL can be very rewarding and greatly improve your understanding, since you have to get familiar with how things are done when you don't have a layer of abstraction to depend on. Ultimately it depends on you. Good luck!
You could use tool tips in Qt. The string will appear when the OpenGLWidget is hovered over. You can change the text of the tool tip based on the mouse location in the scene, using the tool tip method showText():
QToolTip::showText(const QPoint &position, const QString &text, QWidget *w);
There are more options for the showText() method, which can be found in Qt's tool tip documentation. Also, here are more examples on how to use Qt tool tips.
I'm writing a simple bitmap font renderer for OpenGL and I'd like to render some Unicode as well. However, in many fonts some characters are missing and are rendered as squares. These consequently waste space in my texture and I'd like to get rid of them. Is there a WinAPI function to detect whether a certain character will be rendered as a tofu square using a certain font?
I'm using GDI, I make an offscreen bitmap using CreateDIBSection, then get a font using CreateFontIndirect and render glyphs using ExtTextOutW.
So far I was thinking about detecting the tofu in rasterized form (by comparing pixels), which sort of works, but I guess it is not very nice. It also requires me to find some Unicode character that the given font surely does not have, in order to get a "template" to compare against. I guess I would get into trouble sooner or later that way.
I had to do something similar to implement my own font-linking and font-fallback when building glyph maps for a 3D renderer. I used Uniscribe to convert the string of characters to a string of glyph indexes into the font. Then I used ScriptGetFontProperties to get the wgDefault property for the font and compared the value to each of the glyph indexes.
The pure GDI equivalent would be to use something like GetCharacterPlacement to convert the text to glyph indexes (and then you use ExtTextOut with ETO_GLYPH_INDEX), but I don't think that does all the key shaping and reordering that Uniscribe would do. For starters, I don't think that'll work for anything beyond the BMP.
You don't want to graphically match to the missing symbol glyph because it's not always a square. For example, in some fonts, it's a black diamond with a question mark in it.
I want to develop a font engine so my GUIs look identical in all platforms. I've come to a pickle here as I want to make sure I approach it in the most productive angle, yet an angle that gives me the ability to implement as much as possible on my own (for learning purposes).
I just want an outline of how I should do it, maybe give some example paths that I can follow.
I was researching Bezier curves, but I don't think that alone is a good approach, because I don't see how drawing only lines can scale up properly - the letters would be empty. I was also looking into implementing it with TTF font files, but having upscaling and downscaling depend on the image size didn't seem practical, mainly because of memory consumption.
Also provide some advantages/disadvantages with your approach.
The curves define the boundary of the font glyph, from which you determine where to fill color. It is just like how a solid polygon is defined by line segments on its boundary.
I saw that D3DX9 is not recommended by Microsoft and not shipped with Windows SDK.
I would like to adopt this recommendation.
I'm missing my line drawing utility.
How can I replace ID3DXLine and ID3DXFont in "new" DirectX9?
Generally, lines and fonts are sprites. ID3DXLine and ID3DXFont use the ID3DXSprite interface under the hood. (Of course, there are other options too, but the sprite approach is the most widely used.)
Drawing sprites
So, firstly, you will need either a 3rd-party or your own sprite renderer. Typically, development of a "bedroom" sprite engine consists of these stages:
drawing a bunch of simple colored quads (two triangles forming a rectangle). There are different techniques, but even the simplest "all-in-one vertex buffer" approach is not so bad. More advanced techniques include instancing, point sprites, geometry shaders and tessellation tricks (the last two are not applicable in DX9). But don't even try to draw a million sprites with a million draw calls ;)
texturing those quads. You will need a bitmap loader. If you don't want to use D3DX at all, you can pick the open-source FreeImage library, for example, or write your own loader.
optimizing rendering using batching. Sort your sprites to minimize the number of draw calls and/or minimize context state changes.
optimizing texturing using texture atlases. You will need to solve a rectangle packing problem (there are already plenty of implementations on the web, or pick up your math book) and roll out some kind of texture atlas format.
You can choose on what stage you stop. Later, you can go back and continue.
Drawing lines
Then, for straight lines, you will simply draw a thin rectangular sprite. The user will input values such as the beginning, end and thickness of the line, and you will need to do some simple math to calculate the position and rotation of this sprite. The sprite can be just colored or have a texture: for dotted lines, striped lines, lines with pink fluffy kittens, etc. Then, you can implement curved lines as a set of straight lines. You can optionally add sprites to the ends of a line, such as arrows.
Drawing text
For text, things can be very complicated (and I will tell only about sprite fonts here):
each character is a little sprite
you draw texture of a letter over it
you have a texture with those letters, and sample it using a dictionary. The dictionary is a map from a character (or character code) to the texture coordinates where its picture is situated, along with additional info such as spacing, kerning, etc.
you can have a pre-baked (offline) texture atlas with all letters of all fonts in all font sizes you need, along with the dictionary. Obviously, you cannot have all letters of all languages on the planet in your resource cache.
you can bake each character as needed at runtime and add it to your cache (texture atlas + dictionary)
To get characters from a font file such as .ttf into a bitmap (image), you can use a library. FreeType is the best open-source one I know. Parsing fonts yourself can be... complicated.
You can then mix it all together and draw lines with a text texture. Or draw text surrounded by a frame of lines. Or a sprite with text above it. Or a GUI. All those things will be sprites.
...or just not bother
If you're still using DirectX 9, do you really need to bother with the Windows SDK removing the D3DX stuff? Maybe you can continue developing with the DirectX SDK and D3DX if it works for you? Note that if, for some reason, you decide to move to DX11, there is DirectXTK, which partially replaces the D3DX11 stuff. Still, your own or a 3rd-party solution will probably be more flexible and suitable for you. There are many other applications of sprites in 3D graphics, such as billboarding, GUI, particles, etc. And as always, reinventing the wheel is a fun and positive experience ;)
Hope it helps. Happy coding!
Why not try and use DirectX 11?
Otherwise, OpenGL is supported on almost any platform.
I would recommend trying SDL; it has helper methods for most 2D stuff you can imagine.
I am trying to visualize a CAD geometry where GL_QUADS is used for the geometry and glutBitmapCharacter to annotate it with text.
The GL_QUADS hide the text partially (e.g. 33, 32, ... here) for some view orientations (picture 1).
If I use glDisable(GL_DEPTH_TEST) to get the text displayed properly, the text that is supposed to annotate the back surfaces is also displayed (picture 2).
My objective is to annotate the visible front surfaces without being obscured but having the annotation on the back surfaces not shown.
(I am able to solve this by slightly offsetting the annotation normal to the quad, but this will cause me some other issues in my program, so I don't prefer this solution)
Could somebody please suggest a solution?
Well, as I expect you already know, it looks like the text is getting cut off because of the way it's positioned/oriented - it is drawing from a point and from right-to-left on the screen.
If you don't want to offset it (as you already mentioned, though I still suggest it, as it's the simple solution), then one way might be to rotate the text the same way the object is being rotated. This would (I'd expect) simply be a matter of drawing the text at the same place you draw each quad (thus using the same matrix). Of course, the text won't be as legible then. This solution also requires a different method of rendering the text, such as FreeType fonts.
EDIT 2: another solution would be texture-mapped text
Could somebody please suggest a solution?
You need to implement a collision detection engine.
If the point in 3D space at which the label must be displayed is not obscured, render the text with the depth test disabled. This will fix your problem completely.
As far as I can tell, there's no other way to solve the problem if you want to keep letters oriented towards the viewer - no matter what you do, there will always be a good chance of them being partially obscured by something else.
Since you need a very specific kind of collision detection (detecting the visibility of a point), you could try to solve this problem using the select buffer. On the other hand, detecting ray/triangle collisions (see gluUnProject/gluProject) isn't too hard to implement, although on complex scenes things will quickly get more complicated and you'll need to implement a scene graph and use algorithms similar to octrees.