Background
I work on the game Bitfighter. We're still compatible with OpenGL 1.1 and compile for OSX, Windows, and Linux.
We use vector graphics for everything, including text rendering and manipulation. We use a slightly modified variation of 'FontStrokeRoman' from GLUT, which is just a bunch of static lines. We like this because it performs very well and is easy to rotate/scale/manipulate. We also allow for in-game chat, so text is drawn on the fly.
Problem
We want to use more/different fonts.
We've found several other fonts we like, but they are all TTF-type fonts that are built as polygons (with curves, etc.) instead of strokes or spines. This brings up a few problems:
We'd have to use textures (which we've avoided so far in the game)
They're not easily resizable/rotatable/etc.
Performance is (theoretically?) significantly worse
We've experimented with converting the TTF fonts to polygon point arrays and then triangulating the fill. This, however, has been difficult to render nicely; pixel hinting/anti-aliasing seems difficult to do in this case.
We've even experimented with skeletonizing the polygon fonts using libraries like 'campskeleton' so we can output a vector-stroke font, without much good-looking success.
What should we do?
I know this question is somewhat general. We want to keep performance and text manipulation abilities but be able to use better fonts. I am open to any suggestion in any direction.
Some solutions could be answers to the following:
How do we properly anti-alias polygon-based text and still keep performance?
Do textures really perform worse than static point arrays, or is that a faulty assumption?
Can textured fonts be resized/rotated in a fast manner?
Something entirely different?
After some convincing (thanks Serge) and trying out several of the suggestions here (including FTGL, which uses FreeType), we have settled on:
Font-Stash which uses stb_truetype
This seemed perfect for our game. Rendering performance was comparable to the vector-based stroke font we used - textured quads really are not that slow once generated, and the caching from Font-Stash helps immensely. Also, using stb_truetype meant we didn't need another external dependency in the game across all platforms.
For what it's worth, this solution was roughly an order of magnitude faster than using FTGL for TrueType fonts, probably because of the caching.
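For reference, here is roughly what the setup looks like (a minimal sketch against the current upstream Font-Stash API; the font path, atlas size, and text size are placeholders, and the fork we used may differ slightly):

    // Minimal sketch of Font-Stash usage with its OpenGL back-end.
    // Assumes an OpenGL context is already current; paths/sizes are placeholders.
    #define FONTSTASH_IMPLEMENTATION
    #include "fontstash.h"
    #define GLFONTSTASH_IMPLEMENTATION
    #include "glfontstash.h"

    void drawChatLine(float x, float y, const char* msg)
    {
        static FONScontext* fs = NULL;
        static int font = -1;
        if (fs == NULL)
        {
            fs = glfonsCreate(512, 512, FONS_ZERO_TOPLEFT);       // 512x512 glyph atlas texture
            font = fonsAddFont(fs, "sans", "fonts/OpenSans.ttf"); // placeholder font path
        }
        fonsSetFont(fs, font);
        fonsSetSize(fs, 24.0f);            // pixel size
        fonsDrawText(fs, x, y, msg, NULL); // glyphs are rasterized once, then served from the atlas
    }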
Thanks for the suggestions and pointers.
Update
The Font-Stash link above was a fork of the original here. The original has since been updated, has added some of the features from the fork, and allows different rendering back-ends.
I'm no OpenGL expert, or graphical expert in general.
And, really, Raptor, I -hate- to be the guy that says this, because I know how you feel right now.
I really hate to say it, but honestly, I'd just give TTF fonts a shot, using their textures and polygons as they were intended. I doubt that such a usage would truly be detrimental to your performance these days. It would likely be more valuable to save time by using them as is, rather than spending it experimenting with some clever solution that better fits your preferences.
I doubt that text drawing using polygons/textures would be detrimental, whatsoever, to your performance. This especially applies to today's computers. How many years back, technologically, do you intend to support with your application? Or perhaps you wish to run it on the Raspberry Pi as well? Or on other mobile platforms that do not have high graphical capabilities? Support for any of these could, perhaps, invalidate my claims.
But, like I said, I really hate to be the guy that suggests you trudge forward as is. Because I've been there, asking for performance advice (sometimes over even tinier things) and just groaning when someone says 'forget about it, the compiler will handle it' or 'computers are so advanced you shouldn't worry about it'. I honestly hope that someone else comes in with experience and a good answer for you. However, if not, I just want to let you know: I cannot foresee the possibility that using TTF, as it was intended, would be detrimental whatsoever to your gameplay performance.
Did you try outline glyph decomposition using FreeType's FT_Outline_Decompose?
FreeType is the tool of choice for font rendering and glyph outline extraction. Hinting is supported, and rendering modes allow you to specify that you want anti-aliasing, hinting, monochrome targets, etc.
FreeType also has a built-in glyph/bitmap cache mechanism which may be useful.
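For illustration, a minimal sketch of what the decomposition call looks like (the PolyBuilder type, the font path, and the flattening inside the Bezier callbacks are placeholders):

    // Minimal sketch of glyph outline extraction with FT_Outline_Decompose.
    // PolyBuilder is a placeholder for whatever structure collects the contours.
    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include FT_OUTLINE_H

    struct PolyBuilder { /* accumulate contour points here */ };

    static int moveTo(const FT_Vector* to, void* user)  { /* start a new contour at *to */ return 0; }
    static int lineTo(const FT_Vector* to, void* user)  { /* straight segment to *to */ return 0; }
    static int conicTo(const FT_Vector* ctl, const FT_Vector* to, void* user)
    { /* quadratic Bezier: flatten into line segments */ return 0; }
    static int cubicTo(const FT_Vector* c1, const FT_Vector* c2, const FT_Vector* to, void* user)
    { /* cubic Bezier: flatten into line segments */ return 0; }

    bool extractGlyph(const char* ttfPath, FT_ULong ch, PolyBuilder* out)
    {
        FT_Library lib;
        FT_Face    face;
        if (FT_Init_FreeType(&lib))                    return false;
        if (FT_New_Face(lib, ttfPath, 0, &face))       return false;
        FT_Set_Pixel_Sizes(face, 0, 48);               // nominal size; outlines scale freely
        if (FT_Load_Char(face, ch, FT_LOAD_NO_BITMAP)) return false;

        FT_Outline_Funcs funcs = { moveTo, lineTo, conicTo, cubicTo, 0, 0 }; // shift = 0, delta = 0
        int err = FT_Outline_Decompose(&face->glyph->outline, &funcs, out);

        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return err == 0;
    }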
Note: OGFLT seems to bridge the gap between FreeType and OpenGL, although I never had the chance to use it.
Related
I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they using OpenGL and shaders, maybe?
If you are still interested in the subject I'm working on a similar project. The effects were obtained by using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shaders strings might not be extremely readable due to the extensive use of the ternary operators (needed to customize the appearance) but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have realized that Android phones have (had?) such animations when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
It seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to a buffer within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion.
But it would take a lot of work to add all the features that Cathode has, not to mention making them run quickly.
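As a starting point, the "deformed grid" part is mostly CPU-side vertex math; something like the following sketch (the distortion strength is an arbitrary guess, and the grid would then be drawn textured with the rendered terminal):

    // Rough sketch of a barrel-distorted grid for a CRT-like curvature effect.
    // Each vertex keeps its original texture coordinate but gets a radially
    // pushed-out position; 'strength' is an arbitrary tuning value.
    #include <vector>

    struct GridVertex { float x, y, u, v; };

    std::vector<GridVertex> buildCrtGrid(int cols, int rows, float strength /* e.g. 0.12f */)
    {
        std::vector<GridVertex> verts;
        for (int j = 0; j <= rows; ++j)
        {
            for (int i = 0; i <= cols; ++i)
            {
                float u = (float)i / cols;          // texture coordinate, 0..1
                float v = (float)j / rows;
                float cx = u * 2.0f - 1.0f;         // centered, -1..1
                float cy = v * 2.0f - 1.0f;
                float r2 = cx * cx + cy * cy;       // squared distance from the screen center
                float scale = 1.0f + strength * r2; // push the edges outward -> curved look
                GridVertex gv = { cx * scale, cy * scale, u, v };
                verts.push_back(gv);
            }
        }
        return verts; // draw as textured quads/triangles with the terminal texture bound
    }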
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
If you want to work quickly without killing the CPU, the GPU is the best solution, so use pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that does not matter, since I used pixel shaders.
I suggest writing this in Qt4 and adding pixel shader effects to the QWidget.
I started working at this company that uses a 2D OpenGL implementation to show our system's data (which runs on Windows). The whole system was built with C++ (using C++Builder 2007). Thing is, all the text they print there is pixelized when you zoom in, which I think happens because the text is a bitmap:
From what I know they use the same font files as Windows does. I asked around here about why this happens, and the answer I got is that the guy who implemented it (who doesn't work at the company anymore) said fonts in OpenGL are hard and this was the best he could do, or something like that.
My question is: is there any simple and effective way to make the text also a vector (the same way those lines in the picture are)? So when I zoom the camera, which happens a lot, it doesn't pixelize. I have little knowledge of OpenGL, and if you have some guide and/or tutorial related to this to point me in the right direction, I'd be very thankful. Basically any material would be great.
Most OpenGL text rendering libraries come down to this: creating bitmaps for the fonts. This means you are going to have problems with scaling and aliasing unless you do some hacks.
One of the popular hacks is Valve's approach: Chris Green. 2007. "Improved Alpha-Tested Magnification for Vector Textures and Special Effects." You use a signed distance field algorithm to generate your font bitmap, which then lets you smooth the text outlines at any scale during rendering. Wikidot has a C++ implementation of distance field generation.
If you stick to NVidia-specific hardware, you can try the NVidia path rendering extension, which allows you to render vector graphics directly on the GPU. Remember, it is an NVidia-only thing.
But in general, the signed distance field approach is the smoothest and easiest to implement.
BTW, freetype-gl uses Valve's approach and also the modern pipeline.
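The sampling side of the distance field approach is only a few lines of shader code. A minimal sketch, assuming the atlas stores the signed distance in its alpha channel (uniform and varying names are placeholders):

    // Minimal sketch of the fragment shader used to sample a signed-distance glyph atlas.
    // 0.5 is the glyph edge; fwidth() gives a screen-space smoothing width for anti-aliasing.
    static const char* kSdfFragmentShader =
        "#version 120\n"
        "uniform sampler2D u_sdfAtlas;  // distance packed into alpha\n"
        "uniform vec4 u_color;\n"
        "varying vec2 v_uv;\n"
        "void main()\n"
        "{\n"
        "    float dist  = texture2D(u_sdfAtlas, v_uv).a;\n"
        "    float w     = fwidth(dist);\n"
        "    float alpha = smoothstep(0.5 - w, 0.5 + w, dist);\n"
        "    gl_FragColor = vec4(u_color.rgb, u_color.a * alpha);\n"
        "}\n";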
You can try freetype-gl; it's a library for font rendering in OpenGL.
The issue with using fonts in OpenGL is that they are handled inconsistently across platforms, and that they have minimal support. If you're willing to go with a helper library for OpenGL (SDL comes to mind), then this behaviour will likely be wrapped, meaning that you merely need to provide a suitable font file for them to use.
You may try out FTOGL4, fonts for OpenGL 4.
I've always been inspired by dynamic, futuristic-like user interfaces. The best I can describe is a graphic interface such as in the latest Iron Man movies.
Although I wouldn't build a full-blown application, I would like to make little snippets of animations that I plan to make interactive. And maybe put them together someday to make something bigger. Admittedly, I will use it for audio manipulation in the future, but anyway, that's not the point, since it's the animations/graphics I'm unsure of.
I know it's possible to make those kinds of animations in Adobe After Effects. I'm just having a hard time thinking of the processes (artistic and programmatic) needed to proceed.
While researching this on my own, I have acquired basic experience with OGRE 3D and Blender. I've imported and compiled meshes in OGRE and have been able to do basic things like move the meshes around, which is about it.
I'm beginning to think I may be approaching this the wrong way, and that there are better tools, or that 3D is overkill for those kinds of animations when 2D would suffice and maybe provide a smoother experience.
I'm having trouble understanding the process and am wondering two things:
1.) The main thing I'm having trouble understanding is how to get still graphics to make animations. Do the meshes keep the timeline from a program like Blender, and then a graphics engine like OGRE reads the timeline and plays it back?
Most importantly:
2.) Do I even need graphics (meshes)? Most of the interface is thin-border boxes, text, and shapes in transparent LED-like colors that can move around dynamically to create that futuristic effect.
Please share your opinions, suggestions, and anything you think might help me develop that kind of sexy eye candy! Thanks.
When you look at awesome futuristic UIs in movies, they are usually made of
basic primitives
desaturated colors, and/or one color tone
transparency
a cool font or two
high-tech text, graphs or similar
simple animations to make things look "alive", blinking lights/text and similar
a touch interface, of course
Maybe you can't do a lot about the touch interface, but the rest is really not hard graphics wise, it's a matter of carefully crafted artwork and combining simple elements in a cool way.
Also I would look into Adobe Photoshop and fancy texturing rather than Blender and fancy modelling, as you are looking for a fancy 2D UI, and detailed 3D models will not be that important. Playing around in photoshop (well, or GIMP if you want a free alternative) can help you develop your art skills, and help you get that high-tech, sci-fi look on a 2D surface.
You know, I would go as far as to suggest making some sci-fi wallpapers in the style you are after before trying to solve this problem in code. I think you will find that photo manipulation skills and an eye for art will help you here. And for god's sake, look at those movies (Iron Man, Minority Report, etc.) that have the UIs you are aiming at, and analyze what exactly they are. Decompose them like I did in the list above.
As for the "which tools should I use?", I say the answer to that is fairly simple:
OpenGL
Photoshop (or GIMP if you are a starving student etc.)
A compiler & toolchain
A code editor/IDE
A cup
I see this is tagged C++, which is an excellent choice of programming language if I may say so.
Ogre is a full blown 3D engine, which is fine, but not exactly targeted at what you want to use it for. You might find that you struggle to get what you want done (disclaimer: I have not tried this in Ogre, and it might work well for this. Then again, when did you last see Ogre used in an audio manipulation program?). My advice is to learn good, simple OpenGL. That would give you complete power over your UI, not get in your way or limit you in any way. It is also cross platform, well documented, and used by tons of developers all over the world (also for audio manipulation applications). I can't see how you could possibly go wrong with it. The fun part is that it probably won't take you long to get advanced enough in it to start developing some pretty nice UIs. As I mentioned, it's more of an art problem than a coding problem.
The cup is for the coffee, by the way. :)
The easiest and most efficient way is to keep all your graphics data (meshes, animations, effects) in "media files" and load and play them at runtime. That way you'll be able to easily change your game without changing the code.
For example, say you have a Diablo-like game and you want to turn it into a futuristic style. You just need to rewrite some player and AI scripts and modify meshes/effects/sounds/animations. But if you've done those in code, it will be a new game from scratch.
I would suggest Ogre, but you already used that, so in my opinion you are on the right track.
Look up 'billboards' in the Ogre documentation for the LED and 2D stuff.
How can I make a screensaver in C++ that fades an image in and out at random places on the screen with a specified time delay on the fade out?
Multimonitor support would be awesome.
If you have working code or know where I can get it, that would be great. Otherwise, point me in the right direction. I'm looking for a method that gives a smooth fade that isn't laggy or flickery. The screensaver is for Windows XP.
I don't know C++, but I do know AS3, JavaScript, and PHP. So I was hoping to relate some of that knowledge to C++.
What should I use to compile it?
First off, if you're starting out in C++, don't start with a Windows-specific compiler like Visual C++. Grab a nice cross-platform IDE that comes with a compiler, like Eclipse or Code::Blocks. Like any project, you are going to want to split it up into smaller tasks you can complete in stages. Sounds like you have a couple of hurdles.
Lack of C++ Knowledge (we were all here once)
Lack of knowledge about images (very common affliction)
Lack of experience (we'll work on this)
DO NOT let others discourage you. You CAN do this, and probably faster than you think possible. DO read a book or two about C++, you can get by for one project without knowing much but you WILL get frustrated often. Let's break up your project into a set of small goals. As you complete each goal your confidence in C++ will rise.
Image blending
Windows Screen Saver w/ multi-monitor support
Screen Canvas (directx, opengl, bitmaps?)
Timers
First, let's look at the problem of image blending. I assume that you'll want to "fade" the image in question into whatever Windows background you have active. If you're going to have a solid-color background, you can just do it by changing the alpha transparency of the image in question between canvas refreshes. Essentially you'll want to blend the color values of each pixel in the two images based on your refresh timer. In more direct terms, to find the red, green, and blue elements for any resultant pixel (P3), see the formulas and the short code sketch that follow:
N = timer ticks per interval (seconds/milliseconds/etc)
T = ticks that have occurred this interval
P1r = red pixel element from image 1
P2r = red pixel element from image 2
P3r = resultant red pixel element for blended image
P1g = green pixel element from image 1
P2g = green pixel element from image 2
P3g = resultant green pixel element for blended image
P1b = blue pixel element from image 1
P2b = blue pixel element from image 2
P3b = resultant blue pixel element for blended image
P3r = (T/N * P1r) + ((N-T)/N * P2r)
P3g = (T/N * P1g) + ((N-T)/N * P2g)
P3b = (T/N * P1b) + ((N-T)/N * P2b)
(The weights T/N and (N-T)/N already sum to 1, so no further division is needed.)
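In code, that blend is just a weighted average per channel. A small sketch assuming 8-bit RGB pixels (the Pixel struct is illustrative):

    // Direct translation of the blend above for 8-bit RGB pixels.
    struct Pixel { unsigned char r, g, b; };

    Pixel blendPixel(Pixel p1, Pixel p2, int T, int N)
    {
        float w1 = (float)T / (float)N;  // weight of image 1
        float w2 = 1.0f - w1;            // weight of image 2; the two weights sum to 1
        Pixel out;
        out.r = (unsigned char)(w1 * p1.r + w2 * p2.r);
        out.g = (unsigned char)(w1 * p1.g + w2 * p2.g);
        out.b = (unsigned char)(w1 * p1.b + w2 * p2.b);
        return out;
    }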
Now let's look at the problems of Windows screen savers and multi-monitor support. These are really two separate problems. Windows screensavers are really just regular compiled .exe files (renamed to .scr) with a certain callback function. You can find many tutorials on setting up your screensaver functions on the net. Multi-monitor support will actually be a concern when you set up your Screen Canvas, which we'll discuss next.
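If you go the classic route, the skeleton is tiny: link against scrnsave.lib (plus comctl32.lib) and implement three entry points. A minimal sketch, with drawing and configuration left out:

    // Minimal Windows screensaver skeleton using the stock scrnsave library.
    #include <windows.h>
    #include <scrnsave.h>

    LRESULT WINAPI ScreenSaverProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_CREATE:
            // load images, start timers here
            return 0;
        case WM_DESTROY:
            // free resources here
            PostQuitMessage(0);
            return 0;
        }
        return DefScreenSaverProc(hwnd, msg, wParam, lParam); // default exit-on-input behavior, etc.
    }

    BOOL WINAPI ScreenSaverConfigureDialog(HWND, UINT, WPARAM, LPARAM) { return FALSE; } // no settings dialog
    BOOL WINAPI RegisterDialogClasses(HANDLE) { return TRUE; }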
When I refer to the Screen Canvas, I am referring to the area upon which your program will output visual data. All image rendering apps or tutorials will basically refer to this same concept. If you find this particularly interesting, please consider a graduate program or learning course in computer vision. Trust me, you will not regret it. Anyway, considering the basic nature of your app, I would recommend against using OpenGL or DirectX, just because each has its own layer of app-specific knowledge you'll need to acquire before they are useful. On the other hand, if you want built-in 3D niceties like double buffering and GPU offloading, I'd go with OpenGL (more platform-agnostic). Lots of fun image libraries come as gimmes as well.
As for multi-monitor support, this is a tricky but not complicated issue. You're basically just going to set your canvas bounds (screen boundary) to match the geometry of your multiple monitors. It shouldn't be an issue with multiple monitors of the same resolution, but it may get tricky with mismatched monitor resolutions (you might have canvas areas not displayed on any screen, etc.). There are well-known algorithms and workarounds for these issues, but perhaps that is beyond the scope of this question.
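As a concrete starting point, the bounding rectangle of all monitors (the "virtual screen") is available from GetSystemMetrics; per-monitor rectangles would come from EnumDisplayMonitors. A small sketch:

    // Bounding rectangle of the combined desktop across all monitors.
    // Note the origin can be negative if a secondary monitor sits left of or above the primary.
    #include <windows.h>

    RECT getVirtualScreenBounds()
    {
        RECT r;
        r.left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
        r.top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
        r.right  = r.left + GetSystemMetrics(SM_CXVIRTUALSCREEN);
        r.bottom = r.top  + GetSystemMetrics(SM_CYVIRTUALSCREEN);
        return r;
    }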
Finally, as to the timers required, this should be the easiest part. Windows and Linux handle time in their own strange ways (WHY doesn't MS implement strptime?), so if you are interested in portability I'd use a third-party library. Otherwise just use the basic Windows SetTimer() functionality and have your image rendered in the callback function.
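A tiny sketch of that callback style (the 16 ms interval, roughly 60 ticks per second, and the timer id are arbitrary examples):

    // SetTimer with a TimerProc callback; each tick advances the fade and requests a repaint.
    #include <windows.h>

    VOID CALLBACK fadeTick(HWND hwnd, UINT /*msg*/, UINT_PTR /*timerId*/, DWORD /*tickCount*/)
    {
        // advance T, recompute the blended frame...
        InvalidateRect(hwnd, NULL, FALSE); // ...then trigger WM_PAINT to show it
    }

    void startFadeTimer(HWND hwnd) { SetTimer(hwnd, 1 /* timer id */, 16 /* ms */, fadeTick); }
    void stopFadeTimer(HWND hwnd)  { KillTimer(hwnd, 1); }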
Advanced topic: hey, if you're still reading, there are a number of interesting algorithmic improvements you can make to this program. For instance, with your timer going off a set number of times each second, you could cache the first few blended images and save some processing time (our eyes are not terribly good at differentiating between changing color gradients). If you are using OpenGL, you could cache sets of blended images as display lists (gotta use all that graphics card memory for something, right?). Fun stuff, good luck!
"I dont know C++, but I do know AS3, Javascript, and PHP. So I was hoping to relate some of that knowledge to C++."
Oh boy, I suppose you're going to be surprised. To learn C++:
Buy at least one or two very good books (see here). Do not buy books that aren't very good. They'll teach you habits that you would have to unlearn later in order to make further progress.
Expect having to do plenty of hands-on code writing with a lot of reading in between. In the first few weeks, the compiler will spit legions of incomprehensible error messages at you, your code will keep crashing, and seasoned C++ programmers looking at your code will throw up their hands in disgust. And it's always your fault.
Plan a lot of time to learn. And I mean a real lot. No matter how much time you devote, it will take at least a couple of months until you upgrade from "dummy" to "novice".
Imagine having to hammer at it for a couple of years in order to become a real professional, constantly reading books and articles, devoting plenty of time to newsgroups and discussion forums, learning from others, banging your head against every wall surrounding your desk.
Be prepared to learn something you haven't known before at least once every week even after a decade of programming in C++ for a living.
Just for a moment, imagine that I might not be overstating this.
Edit for clarification: I have up-voted both Hunter Davis' and Elemental's answers, because they're very good, pretty much to the point, and I generally support their encouragement despite my rant up there. In fact, I do not doubt that many are able to hack something together in C/C++ when given a skeleton example, even if they don't actually know much C++. Go ahead and try; there's a good chance you'll come up with a screensaver within a reasonable time. But. Being able to "hack something together in C/C++" is far from "having learned C++". (And I mean very far.)
C++ is an incredibly complex, mean, and stubborn beast. I also consider it almost breathtakingly beautiful, but in that it probably mirrors the view from a very high mountain: it's pretty hard to get to the point where you're able to appreciate the beauty.
C++ is a complex language and all that sbi indicates is probably true (certainly after 10 years of commercial C++ programming there is still more to learn), BUT I think that if you are confident in another language it should only take a couple of days to:
a) Install one of the compilers mentioned here (I would suggest VC as a quick way into Windows programming)
b) Find some sample code of a screen saver that does something simple using the windows GDI (there is some code within the MS documentation on screen savers)
c) Modify this code to do what you want.
In terms of your limited requirements, I think you will find that the Windows GDI+ libraries have sufficient power to do the alpha fades you require smoothly.
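For example, one way to do the fade with GDI+ is to scale the image's alpha channel through a ColorMatrix when drawing. A minimal sketch, assuming GdiplusStartup has already been called (link gdiplus.lib):

    // Draw an image at a given opacity with GDI+; 'alpha' runs from 0.0 to 1.0.
    #include <windows.h>
    #include <gdiplus.h>

    void drawImageFaded(Gdiplus::Graphics& g, Gdiplus::Image& img, int x, int y, float alpha)
    {
        Gdiplus::ColorMatrix cm = {{
            {1, 0, 0, 0,     0},
            {0, 1, 0, 0,     0},
            {0, 0, 1, 0,     0},
            {0, 0, 0, alpha, 0},   // scale only the alpha channel
            {0, 0, 0, 0,     1}
        }};
        Gdiplus::ImageAttributes attrs;
        attrs.SetColorMatrix(&cm, Gdiplus::ColorMatrixFlagsDefault, Gdiplus::ColorAdjustTypeBitmap);

        int w = img.GetWidth(), h = img.GetHeight();
        Gdiplus::Rect dest(x, y, w, h);
        g.DrawImage(&img, dest, 0, 0, w, h, Gdiplus::UnitPixel, &attrs);
    }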
I'd look at Microsoft Visual C++ Express. It's free and pretty capable. You may need to hack it, or get an older version, to produce unmanaged executables.
There are a lot of tutorials available for the rest.
For those not familiar with Core Image, here's a good description of it:
http://developer.apple.com/macosx/coreimage.html
Is there something equivalent to Apple's CoreImage/CoreVideo for Windows? I looked around and found the DirectX/Direct3D stuff, which has all the underlying pieces, but there doesn't appear to be any high level API to work with, unless you're willing to use .NET AND use WPF, neither of which really interest me.
The basic idea would be create/load an image, attach any number of filters that can be chained together, forming a graph, and then render the image to an HDC, using the GPU to do most of the hard work. DirectX/Direct3D has these pieces, but you have to jump through a lot of hoops (or so it appears) to use it.
There are a variety of tools for working with shaders (such as RenderMonkey and FX-Composer), but no direct equivalent to CoreImage.
But stacking fragment shaders on top of each other is not very hard, so if you don't mind learning OpenGL it would be quite doable to build a framework that applies shaders to an input image and draws the result to an HDC.
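The skeleton of such a framework is mostly ping-ponging between two FBO-attached textures, one fragment shader ("filter") per pass. A rough sketch, assuming a current OpenGL context and already-compiled shader programs whose vertex stage passes gl_Vertex/gl_MultiTexCoord0 through; the final texture would still need to be blitted or read back to the HDC, and the sampler name is an assumption:

    // Rough sketch of chaining fragment-shader filters over an input texture
    // (GL 2.x-era calls via GLEW). Resource cleanup is omitted for brevity.
    #include <GL/glew.h>
    #include <vector>

    static void drawFullScreenQuad()
    {
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }

    GLuint applyFilterChain(GLuint inputTex, int w, int h, const std::vector<GLuint>& programs)
    {
        GLuint tex[2], fbo[2];
        glGenTextures(2, tex);
        glGenFramebuffers(2, fbo);
        for (int i = 0; i < 2; ++i)
        {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
        }

        GLuint src = inputTex;
        int dst = 0;
        glViewport(0, 0, w, h);
        for (size_t i = 0; i < programs.size(); ++i)     // each program is one filter pass
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
            glUseProgram(programs[i]);
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, src);
            glUniform1i(glGetUniformLocation(programs[i], "u_input"), 0); // assumed sampler name
            drawFullScreenQuad();
            src = tex[dst];                              // output of this pass feeds the next
            dst = 1 - dst;
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glUseProgram(0);
        return src;                                      // texture holding the filtered image
    }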
Adobe's new Pixel Bender is the closest technology out there. It is cross-platform -- it's part of the Flash 10 runtime, as well as the key pixel-oriented CS4 apps, namely After Effects and (soon) Photoshop. It's unclear, however, how much is currently exposed for embedding in other applications. In the most extreme case it should be possible to embed it by embedding a Flash view, but that is more overhead than would obviously be ideal.
There is also at least one smaller-scale 3rd party offering: Conduit Pixel Engine. It is commercial, with no licensing price clearly listed, however.
I've now got a solution to this. I've implemented an ImageContext class, a special Image class, and a Filter class that allow functionality similar to Apple's CoreImage. All three use OpenGL (I gave up trying to get this to work on DirectX due to image quality issues; if someone knows DirectX well, contact me, because I'd love to have a DX version) to render image(s) to a context and use the filters to apply their effects (as GLSL fragment shaders). There's a brief write-up here:
ImageKit
with a screen shot of an example filter and some sample source code.