I searched around a bit and couldn't find anything clear-cut on whether and how you can render TrueType fonts using OpenGL (with SDL as the API, if it makes any difference).
So I was wondering if anybody with experience knows whether it is possible and how, or could point me to some other good source or explanation.
If it's not possible, which I suspect is the case, any other suggestions for working with fonts in OpenGL would be greatly appreciated.
OpenGL itself deals only with points, lines and triangles. Anything going beyond that functionality must be implemented by the user. So no, there's no direct support for font rendering in OpenGL.
One can of course use OpenGL to rasterize glyphs, by various methods:
- A very widespread method is texture-mapped fonts, i.e. each (used) glyph of a font is rendered into a texture atlas.
- One can use OpenGL primitives to rasterize glyph curves directly, though this is a tricky subject.
- Use shaders to implement vector textures.
- Use shaders to implement distance maps (distance maps are not unlike texture-mapped fonts, but with a greatly reduced memory footprint).
Look at this. Here is a more recent example of using FreeType with OpenGL.
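To make the texture-mapped approach concrete, here is a minimal sketch that uses FreeType to rasterize a single glyph into an OpenGL texture. The function name and parameters are illustrative, a real renderer would pack many glyphs into one atlas texture rather than creating one texture per glyph, and the code assumes the bitmap pitch equals its width (true in practice for 8-bit gray glyphs):

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GL/gl.h>

// Rasterize one glyph with FreeType and upload it as a GL texture.
// Hypothetical helper for illustration only.
GLuint makeGlyphTexture(const char* fontPath, char c, int pixelHeight)
{
    FT_Library ft;
    FT_Face face;
    if (FT_Init_FreeType(&ft) || FT_New_Face(ft, fontPath, 0, &face))
        return 0;
    FT_Set_Pixel_Sizes(face, 0, pixelHeight);
    if (FT_Load_Char(face, c, FT_LOAD_RENDER))      // render to 8-bit gray
        return 0;

    FT_Bitmap& bmp = face->glyph->bitmap;           // coverage values

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          // rows aren't 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bmp.width, bmp.rows, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return tex;
}
```

Each glyph is then drawn as a textured quad with blending enabled, positioned using the glyph's advance and bearing metrics.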
I am wondering how to implement a depth-of-field / circle-of-confusion effect using OpenGL. Is there any built-in method or library that supports it?
You will not find anything "built in" to OpenGL that will give you what you are looking for. You will have to implement this effect through a shader, which is fairly straightforward.
An article on how to achieve this effect is freely available here:
Nvidia article on depth of field techniques
You can compute various DOF approximations. For a simple one, render near objects into one texture and far objects into another. In a further pass, blur the texture holding the far objects, then combine both textures and draw the result on a screen-space quad. This has little to do with physically correct DOF, but real-time graphics relies on little tricks like this all the time; the visual outcome just has to be convincing.
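As a rough sketch of that combine pass, here is a GLSL fragment shader embedded in a C++ raw string. All uniform names are made up for illustration, and the blur of the far texture is assumed to have happened in an earlier pass:

```cpp
// Final combine pass: blend the sharp and blurred scene by depth.
const char* dofCombineFrag = R"(
#version 120
uniform sampler2D sharpTex;   // scene rendered sharp
uniform sampler2D blurTex;    // far scene, blurred in a previous pass
uniform sampler2D depthTex;   // scene depth
uniform float focusDepth;     // depth value that should stay in focus
uniform float focusRange;     // how quickly blur kicks in with distance

void main()
{
    vec2  uv   = gl_TexCoord[0].st;
    float d    = texture2D(depthTex, uv).r;
    // 0 = in focus, 1 = fully blurred; a crude circle-of-confusion stand-in.
    float blur = clamp(abs(d - focusDepth) / focusRange, 0.0, 1.0);
    gl_FragColor = mix(texture2D(sharpTex, uv),
                       texture2D(blurTex, uv), blur);
}
)";
```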
I've been using XNA for essentially all of my programming so far and would like to move on to OpenGL (along with SFML for IO, creating the window, etc.) with C++. For starters I'd like to create a tile-based game, and I've mostly looked at LazyFoo's tutorials.
I just have two questions:
1. How should I draw the tiles? Should I use immediate drawing, vertex arrays, VBOs, or what? VBOs feel like overkill for this, but I'm not sure. It's very tempting to use immediate drawing, but apparently it's deprecated. Maybe it's fine for this purpose, since it's 2D and only a bunch of quads.
2. I'd like a lot of different tiles, so they won't all fit into a single texture without making it massive. I've read that using bindTexture isn't very cheap and that I should therefore avoid unnecessary calls. I thought that maybe I could create a manager for my textures and stitch them all together into one big texture and bind that, but then its dimensions become an issue.
Don't use immediate mode! It's cumbersome to work with and has been removed from recent OpenGL versions. Use Vertex Arrays, ideally through VBOs. In the end they're much easier to use, believe me.
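For illustration, here is a minimal sketch of drawing a tile map from a single VBO, in the legacy vertex-array style that fits this era of OpenGL. It assumes an extension loader such as GLEW, and all names are illustrative:

```cpp
#include <GL/glew.h>
#include <vector>

// Interleaved vertex: position (x, y) followed by texture coords (u, v).
struct TileVertex { float x, y, u, v; };

// Build one quad (two triangles, six vertices) per tile and upload
// everything in one VBO, so the whole map is a single draw call.
GLuint buildTileVBO(const std::vector<TileVertex>& verts)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TileVertex),
                 verts.data(), GL_STATIC_DRAW);
    return vbo;
}

void drawTiles(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(TileVertex), (void*)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(TileVertex),
                      (void*)(2 * sizeof(float)));
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

With all tiles in one buffer, updating an animated tile is just a glBufferSubData on its six vertices.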
Regarding the switching of textures: that becomes a concern when optimizing texture-switch patterns in very complex scenes. In your case it will hardly matter at all.
Update
Right now you worry about things without even having used them. That's worse than premature optimization. I suggest you first get a good grip on OpenGL, then start worrying about state-switch management.
With regards to the texture atlas: this is usually done by stitching textures into groups of power-of-two sized textures. For example, in a tile-based game you might have a particular tile set (say, tiles for an ice world) grouped together on 2 or 3 textures. When you want to render them, you determine which tiles are visible, then bind each texture once and draw all visible tiles that use it.
This requires quite a lot of set-up time to get right; you need to keep information on each sub-texture of the atlas so you can find the right texture and render the appropriate region of it whenever a tile is referenced. You also need a good way of grouping rendering operations so that they occur while the appropriate texture is bound.
Like datenwolf said, I wouldn't focus too much on complicated texture systems early on; eager binding of textures will be plenty fast enough until you get further down the road.
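As a small illustration of the bookkeeping involved, here is one way to compute the UV rectangle of a tile inside a square atlas. The names are made up, and real atlases often add a few pixels of padding around each tile to avoid bleeding under texture filtering:

```cpp
// Texture coordinates of tile number `index` inside a square atlas
// holding `tilesPerRow` x `tilesPerRow` equally sized tiles.
struct UVRect { float u0, v0, u1, v1; };

UVRect atlasUV(int index, int tilesPerRow)
{
    float step = 1.0f / tilesPerRow;    // size of one tile in UV space
    int col = index % tilesPerRow;
    int row = index / tilesPerRow;
    return { col * step, row * step,
             (col + 1) * step, (row + 1) * step };
}
```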
I would like to know if there is a way to generate a single static image of a 3D object (one object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles of the object were used to generate each pixel of the rendered image. I cite OpenGL and DirectX because they are widely used graphics APIs, but if somebody knows other ways of achieving this at high speed I would be interested in those answers too. I currently use my own software implementation of the rendering pipeline to keep track of this relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin-rendering techniques) that graphics cards offer.
Thanks very much for your help
Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multisampling, the multisample buffer for that render target becomes a list of the primitives that contribute to each pixel.
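On the GLSL side this could look like the following fragment shader sketch, which writes the shaded colour to attachment 0 and the primitive ID to an integer texture (e.g. GL_R32I) bound as attachment 1. The input name is illustrative:

```cpp
// Fragment shader writing the normal output plus the triangle's ID to a
// second render target (attach a GL_R32I texture as GL_COLOR_ATTACHMENT1).
const char* idPassFrag = R"(
#version 330 core
in vec4 shadedColor;                     // from the vertex shader (illustrative)
layout(location = 0) out vec4 fragColor; // normal rendered image
layout(location = 1) out int  primId;    // triangle index per pixel

void main()
{
    fragColor = shadedColor;
    primId    = gl_PrimitiveID;          // index of the current triangle
}
)";
```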
Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: render the picture as usual to one target, and render the triangle indices to a second, off-screen target.
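A small sketch of the index-to-colour bookkeeping this approach needs (plain C++, names illustrative). Note that lighting, blending, dithering and multisampling must be off for the ID pass so the colours reach the buffer unmodified:

```cpp
#include <cstdint>

// Encode a triangle index as an RGB8 colour and back again.
// Works for up to 2^24 triangles.
struct RGB8 { uint8_t r, g, b; };

RGB8 indexToColor(uint32_t i)
{
    return { uint8_t(i & 0xFF),
             uint8_t((i >> 8) & 0xFF),
             uint8_t((i >> 16) & 0xFF) };
}

uint32_t colorToIndex(RGB8 c)
{
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```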
I would like to draw a simple 2D stickman on the screen. I also want it to be anti-aliased.
The problem is that I want to use a bones system, which will be written once I know how to draw the stickman itself based on the joint positions. This means I can't use sprites - I want my stickman to be fully controllable in code.
It would be great if it will be possible to draw curves too.
Drawing a 3D stickman using a model would also be great if not better. The camera will be positioned like it's 2D, but I would still have depth. The problem is that I only have experience in Maya, and exporting and vertex weighting of the model in OpenGL seems like a mess...
I tried to find libraries for 2D anti-aliased drawing, or to enable multi-sampling and draw normally, but I had no luck. I also tried OpenGL's native line anti-aliasing, but it seems deprecated and the line joins look bad...
I don't want it to be too complicated because, well, it shouldn't be - it's just the first part of my program, and it's drawing a stickman...
I hope you guys can help me, I'm sure you know better than me :)
You could enable GL_LINE_SMOOTH (note: GL_SMOOTH is the shade-model constant, not the line-antialiasing switch). To check whether your device supports your required width for smooth lines, query glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range), as in the snippet below.
If you want your code to be generic, you can also use antialiased textures.
Take a look at this link
http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node62.html
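For reference, a minimal snippet of the legacy line-smoothing setup described above; the blending state is required or GL_LINE_SMOOTH has no visible effect:

```cpp
#include <GL/glew.h>

void enableSmoothLines()
{
    glEnable(GL_LINE_SMOOTH);                        // antialias line edges
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);                              // smoothing needs blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    GLfloat range[2];                                // supported smooth widths
    glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);
}
```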
One way to get antialiasing everywhere is to create the GL context through a library that can request an antialiased (multisampled) context, for example SDL. As for the stickman, you can draw him with coloured polygons.
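A minimal sketch of requesting a multisampled context through SDL (SDL2 API shown; the window title and sample count are arbitrary):

```cpp
#include <SDL.h>
#include <SDL_opengl.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Ask for a multisampled (MSAA) context before creating the window.
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);   // 4x MSAA

    SDL_Window* win = SDL_CreateWindow("stickman",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    glEnable(GL_MULTISAMPLE);   // all geometry drawn is now antialiased

    // ... draw the stickman with GL_LINES / polygons here ...

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```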
I've read that FBOs can be used for fast image manipulation using OpenGL drawing operations. Does anyone know the basics of how to do this, or have some very simple example code illustrating it?
Before you can use FBOs for image manipulation you need to know how to handle OpenGL, as an FBO can simply be used as a render target (an output buffer for rendering operations). Once you're fluent with OpenGL and know some shader programming, you can do virtually everything with images in an FBO, and do it extremely fast.
A simpler approach might be to employ CUDA (NVidia) or Stream Computing (ATI) to harness a GPU's power for image manipulation, because these APIs are much closer to regular array-based C++ programming. Image manipulation may be somewhat slower that way than with OpenGL, but still way faster than with traditional CPU driven code.
Framebuffer Objects (FBOs) are just a basic tool; on their own they don't manipulate images. If you know how to render your image manipulations to the screen in OpenGL, you can use FBOs to render them off-screen instead. They are in fact useful for this task, since you are not limited by the resolution of your screen and don't have to distract the user with thousands of flashing images. The manipulation itself, however, happens in OpenGL, typically in the fragment shader.
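A minimal sketch of creating such an off-screen render target (core GL 3.0 names via GLEW; older drivers expose the same calls with an EXT suffix, and the function name here is made up):

```cpp
#include <GL/glew.h>

// Create an FBO with a colour texture attachment to render into.
GLuint makeRenderTarget(int w, int h, GLuint* texOut)
{
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;                            // incomplete; check formats

    glBindFramebuffer(GL_FRAMEBUFFER, 0);    // back to the default buffer
    *texOut = tex;
    return fbo;
}
```

Bind the returned FBO, draw a full-screen quad with your image-manipulation shader, and the result lands in the attached texture, ready to be read back or fed into the next pass.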
Visit the OpenGL forum to get some advice on how to start with the OpenGL basics. They also have quite a few links to sample code.