Is there any way to build rich animation with C++?
I have been using OpenCV for object detection, and I want to show the detected object with rich animation. Is there an easy way to do this?
I know Flash can be used to easily build rich animation, but can Flash be reliably integrated with C++, and how?
Also, can OpenGL help me with this? To my knowledge, OpenGL is good for 3D rendering, but I am more interested in showing 2D animations on an image, so I am not sure whether this is the right way to go.
Another question: how are the animations in augmented reality realized? What kind of libraries do they use?
Thank you in advance.
It's difficult to tell whether this answer will be relevant, but depending on what sort of application you are creating, you may be able to use Simple DirectMedia Layer (SDL).
This is a cross-platform 2D and 3D (via OpenGL) media library for C, C++ and many other compatible languages.
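For the detection-overlay use case, a minimal SDL2 sketch could look like the following. The window size, colors, and box are placeholders; in practice the rectangle would come from your OpenCV detector, and the camera frame would be uploaded as an SDL_Texture.

```
// Minimal SDL2 sketch: open a window and draw a rectangle where a detection
// was found. A real pipeline would upload the camera frame as a texture and
// animate the rectangle per frame.
#include <SDL.h>

int main(int, char**)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window* win = SDL_CreateWindow("Detections", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
        SDL_RenderClear(ren);

        // Hypothetical detection box; in practice this comes from OpenCV.
        SDL_Rect box = {200, 150, 120, 80};
        SDL_SetRenderDrawColor(ren, 0, 255, 0, 255);
        SDL_RenderDrawRect(ren, &box);

        SDL_RenderPresent(ren);
        SDL_Delay(16);   // roughly 60 fps
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```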
It appears to me that you wish to produce an animated demo of your processing results. If I am wrong, let me know.
The simplest way to produce a demo of a vision algorithm is to dump the results to a separate image file after each processed frame. After the processing run, those image files are assembled into a video using e.g. mencoder. I used this procedure to prepare this.
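A rough sketch of that frame-dumping approach with OpenCV; the input file name and the detection rectangle are placeholders:

```
// Dump one annotated image per processed frame, then assemble a video.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap("input.avi");          // or a camera index
    cv::Mat frame;
    int i = 0;
    while (cap.read(frame)) {
        cv::Rect detection(100, 80, 60, 40);    // placeholder for your result
        cv::rectangle(frame, detection, cv::Scalar(0, 255, 0), 2);

        char name[64];
        std::snprintf(name, sizeof(name), "frame_%05d.png", i++);
        cv::imwrite(name, frame);               // one image file per frame
    }
    // Afterwards assemble the video, e.g.:
    //   ffmpeg -framerate 25 -i frame_%05d.png demo.mp4
    // (mencoder's image-sequence input works similarly)
    return 0;
}
```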
Of course, your program can also render its output with OpenGL; many people working on 3D reconstruction do that. However, in my opinion that would be overkill for simple 2D detection, and producing Flash would be even more so.
Related
I have a vectorized Adobe Illustrator image that I would like to animate using custom XYZ input (in this case simulated plot points that I would like to visualize over time, using a hand-drawn picture/wireframe model) from e.g. a C++ program or perhaps even a JavaScript application. Is there any (fairly straightforward) strategy to achieve this, e.g. using OpenGL or some other (open-source) tool?
If you want to draw vectorized images you need a vector image renderer. The easiest way is to use Flash, since it has some of the best support for vector drawing and a really strong scripting language for all sorts of things (animating content based on input, etc.), even 3D graphics.
The hard way is to use a custom C++ library to draw vector graphics in OpenGL or DirectX. I can only speak of gameswf (an open-source player for Flash files) and Scaleform; these two support SWF files exported from Flash. If you only need a renderer without any animation, there should be plenty of libraries out there (check out this thread).
I started working at a company that uses a 2D OpenGL implementation to show our system's data (which runs on Windows). The whole system was built with C++ (using C++Builder 2007). The thing is, all the text they draw is pixelized when you zoom in, which I think happens because the text is a bitmap:
From what I know they use the same font files as Windows does. I asked around about why this happens, and the answer I got is that the guy who implemented it (who doesn't work at the company anymore) said fonts in OpenGL are hard and this was the best he could do, or something like that.
My question is: is there any simple and effective way to make the text a vector as well (the same way the lines in the picture are), so that it doesn't pixelize when I zoom the camera, which happens a lot? I have little knowledge of OpenGL, so if you have a guide and/or tutorial to point me in the right direction I'd be very thankful. Basically any material would be great.
Most OpenGL text-rendering libraries come down to the same thing: creating bitmaps for the fonts. This means you are going to have problems with scaling and aliasing unless you do some hacks.
One of the popular hacks is Valve's approach: Chris Green. 2007. "Improved Alpha-Tested Magnification for Vector Textures and Special Effects." You use a signed distance field algorithm to generate your font bitmap, which then lets you smooth the text outlines when scaling during rendering. Wikidot has a C++ implementation of distance field generation.
If you stick to NVIDIA-specific hardware, you can try the NVIDIA path rendering extension (NV_path_rendering), which allows you to render vector graphics directly on the GPU. Remember, it is an NVIDIA-only thing.
But in general, the signed distance field approach is the smoothest and easiest to implement.
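For reference, generating the distance field itself does not take much code. Here is a minimal brute-force sketch in plain C++, assuming an 8-bit coverage bitmap of the glyph as input; real implementations, such as the one linked above, use faster algorithms:

```
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Brute-force signed distance field: for every texel, find the distance to the
// nearest pixel whose coverage differs from its own, then map it into [0,255]
// with 128 on the glyph outline. O(w*h*r^2), fine for offline tool use.
std::vector<uint8_t> makeSDF(const std::vector<uint8_t>& alpha,
                             int w, int h, int radius)
{
    std::vector<uint8_t> sdf(w * h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            bool inside = alpha[y * w + x] > 127;
            float best = static_cast<float>(radius);
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    bool otherInside = alpha[ny * w + nx] > 127;
                    if (otherInside != inside)
                        best = std::min(best, std::sqrt(float(dx * dx + dy * dy)));
                }
            }
            float signedDist = inside ? best : -best;        // + inside, - outside
            float norm = 0.5f + 0.5f * signedDist / radius;  // map into [0,1]
            sdf[y * w + x] = static_cast<uint8_t>(norm * 255.0f + 0.5f);
        }
    }
    return sdf;
}
```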
BTW, freetype-gl uses Valve's approach and also the modern pipeline.
You can try freetype-gl; it's a library for font rendering in OpenGL.
The issue with using fonts in OpenGL is that they are handled inconsistently across platforms and have minimal built-in support. If you're willing to use a helper library on top of OpenGL (SDL comes to mind), this behaviour will likely be wrapped for you, meaning that you merely need to provide a suitable font file.
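For example, with SDL2 plus the SDL_ttf helper library the rasterization step is wrapped for you. A rough sketch, assuming an already-created SDL_Renderer and a .ttf file on disk; note that the result is still a bitmap, so for crisp zooming you would re-render the text at the point size you actually need:

```
// Sketch, assuming SDL2 + SDL2_ttf and an already-created SDL_Renderer.
// TTF_Init() must have been called once at startup.
#include <SDL.h>
#include <SDL_ttf.h>

void drawText(SDL_Renderer* ren, const char* fontPath, int ptSize,
              const char* msg, int x, int y)
{
    TTF_Font* font = TTF_OpenFont(fontPath, ptSize);   // re-open/re-render at the
    if (!font) return;                                  // size you actually need
    SDL_Color white = {255, 255, 255, 255};
    SDL_Surface* surf = TTF_RenderText_Blended(font, msg, white);
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, surf);

    SDL_Rect dst = {x, y, surf->w, surf->h};
    SDL_RenderCopy(ren, tex, nullptr, &dst);

    SDL_DestroyTexture(tex);
    SDL_FreeSurface(surf);
    TTF_CloseFont(font);
}
```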
You may try out FTOGL4, the fonts for OpenGL 4.
I'm going to develop a mathematical model of traffic simulation and will need to visualise it somehow. The model will be in C++.
I'd like someone to recommend how I can visualise the resulting data file, e.g. paint cars, roads, etc. The choice of language is not important, but it should be easy to pick up.
OS: Win32
UPD:
It'd better be 2D rather than 3D,
but actually it doesn't matter.
The best-quality and most general software I've seen is Graphviz.
http://www.graphviz.org/
I've heard lots of good things about VTK (though I haven't yet had the occasion to use it myself).
The Wiki contains lots of C++ examples.
Although I do not know how (or even if) it interfaces with C++, you may be interested in Processing for quickly building visualizations.
If you want 2D more than 3D and you know C++, then Qt, and notably its Qt Graphics View framework, could work well; see the sketch below.
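A minimal sketch of that idea, assuming Qt 5 with the Widgets module. A real simulation would keep the item pointers and move them each timestep (e.g. from a QTimer) according to your data file:

```
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGraphicsRectItem>
#include <QPainter>
#include <QPen>
#include <QBrush>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QGraphicsScene scene(0, 0, 800, 200);   // world coordinates of the scene
    scene.addRect(0, 90, 800, 20, QPen(Qt::NoPen), QBrush(Qt::darkGray)); // road
    // One "car"; in a real simulation you would keep the item pointer and
    // call setPos() every timestep based on the simulation output.
    QGraphicsRectItem* car =
        scene.addRect(0, 95, 30, 10, QPen(Qt::black), QBrush(Qt::red));
    car->setPos(100, 0);

    QGraphicsView view(&scene);
    view.setRenderHint(QPainter::Antialiasing);
    view.show();
    return app.exec();
}
```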
I've always been inspired by dynamic, futuristic user interfaces. The best example I can describe is a graphical interface such as the ones in the latest Iron Man movies.
Although I wouldn't build a full-blown application, I would like to make little snippets of animation that I plan to make interactive, and maybe put them together someday into something bigger. Admittedly, I will use this for audio manipulation in the future, but anyway, that's not the point, since it's the animations/graphics I'm unsure about.
I know it's possible to make those kinds of animations in Adobe After Effects. I'm just having a hard time figuring out the process (artistically and programmatically) to proceed with.
While researching this on my own I have acquired basic experience with OGRE 3D and Blender. I've imported and compiled meshes in OGRE and have been able to do basic things like move the meshes around, which is about it.
I'm beginning to think I may be approaching this the wrong way: perhaps there are better tools, or 3D is overkill for these kinds of animations when 2D would suffice and maybe provide a smoother experience.
I'm having trouble understanding the process and am wondering two things:
1.) The main thing I'm having trouble understanding is how to turn still graphics into animations. Do the meshes keep the timeline from a program like Blender, and then a graphics engine like OGRE reads that timeline and plays it back?
Most importantly:
2.) Do I even need graphics (meshes)? Most of the interface is thin-bordered boxes, text, and shapes in transparent LED-like colors that can move around dynamically to create that futuristic effect.
Please share your opinions, suggestions, and anything else you think might help me develop that kind of sexy eye candy! Thanks.
When you look at awesome futuristic UIs in movies, they are usually made of
basic primitives
desaturated colors, and/or one color tone
transparency
a cool font or two
high-tech text, graphs or similar
simple animations to make things look "alive", blinking lights/text and similar
a touch interface, of course
Maybe you can't do a lot about the touch interface, but the rest is really not hard graphics-wise; it's a matter of carefully crafted artwork and combining simple elements in a cool way.
Also, I would look into Adobe Photoshop and fancy texturing rather than Blender and fancy modelling, as you are after a fancy 2D UI and detailed 3D models will not be that important. Playing around in Photoshop (or GIMP if you want a free alternative) can help you develop your art skills and get that high-tech, sci-fi look on a 2D surface.
You know, I would go as far as to suggest making some sci-fi wallpapers in the style you are after before trying to solve this problem in code. I think you will find that photo-manipulation skills and an eye for art will help you here. And for god's sake, look at those movies (Iron Man, Minority Report, etc.) that have the UIs you are aiming at, and analyze what exactly they are. Decompose them like I did in the list above.
As for the "which tools should I use?", I say the answer to that is fairly simple:
OpenGL
Photoshop (or GIMP if you are a starving student etc.)
A compiler & toolchain
A code editor/IDE
A cup
I see this is tagged C++, which is an excellent choice of programming language if I may say so.
Ogre is a full blown 3D engine, which is fine, but not exactly targeted at what you want to use it for. You might find that you struggle to get what you want done (disclaimer: I have not tried this in Ogre, and it might work well for this. Then again, when did you last see Ogre used in an audio manipulation program?). My advice is to learn good, simple OpenGL. That would give you complete power over your UI, not get in your way or limit you in any way. It is also cross platform, well documented, and used by tons of developers all over the world (also for audio manipulation applications). I can't see how you could possibly go wrong with it. The fun part is that it probably won't take you long to get advanced enough in it to start developing some pretty nice UIs. As I mentioned, it's more of an art problem than a coding problem.
The cup is for the coffee, by the way. :)
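To make the "learn good, simple OpenGL" advice concrete, here is a rough sketch that draws one translucent, thin-bordered panel of the kind those movie UIs are built from. It uses GLFW for the window and old-style fixed-function GL purely for brevity; a modern app would use shaders:

```
// Build with something like: g++ hud.cpp -lglfw -lGL  (adjust per platform)
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(800, 600, "HUD sketch", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClearColor(0.02f, 0.03f, 0.05f, 1.0f);   // near-black background
        glClear(GL_COLOR_BUFFER_BIT);

        glEnable(GL_BLEND);                        // transparency
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Translucent fill of a "panel", in normalized device coordinates
        glColor4f(0.2f, 0.8f, 1.0f, 0.15f);
        glBegin(GL_QUADS);
        glVertex2f(-0.5f, -0.3f); glVertex2f(0.5f, -0.3f);
        glVertex2f(0.5f, 0.3f);   glVertex2f(-0.5f, 0.3f);
        glEnd();

        // Thin bright border around it
        glColor4f(0.2f, 0.8f, 1.0f, 0.9f);
        glBegin(GL_LINE_LOOP);
        glVertex2f(-0.5f, -0.3f); glVertex2f(0.5f, -0.3f);
        glVertex2f(0.5f, 0.3f);   glVertex2f(-0.5f, 0.3f);
        glEnd();

        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```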
The easiest and most efficient way is to keep all your graphics data (meshes, animations, effects) in "media files" and load and play them at runtime. That way you'll be able to change your game easily without changing the code.
For example, say you have a Diablo-like game and you want to give it a futuristic style. You just need to rewrite some player and AI scripts and modify the meshes/effects/sounds/animations. But if you had done all of that in code, it would be a new game from scratch.
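As a tiny illustration of that data-driven idea (the manifest format below is made up for the sketch), the code only knows how to read a table; what actually appears on screen is decided entirely by the data files:

```
// Minimal sketch of data-driven content: entity -> mesh + animation read from
// a text manifest. The file format here ("name mesh anim" per line) is made up.
#include <fstream>
#include <sstream>
#include <string>
#include <unordered_map>

struct Appearance { std::string mesh, animation; };

std::unordered_map<std::string, Appearance> loadManifest(const std::string& path)
{
    std::unordered_map<std::string, Appearance> table;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string name, mesh, anim;
        if (ls >> name >> mesh >> anim)
            table[name] = {mesh, anim};   // e.g. "player player_future.mesh idle"
    }
    return table;
}
// Re-skinning the game then means editing the manifest, not recompiling.
```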
I would suggest Ogre, but you have already used that, so in my opinion you are on the right track.
Look up 'billboards' in the Ogre documentation for the LED and 2D stuff; a sketch follows below.
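A rough sketch of what a billboard set looks like in Ogre 1.x code; the sceneMgr pointer and the sample "Examples/Flare" material are assumptions, so substitute your own scene manager and material:

```
// Sketch only: one billboard set of additive "LED" sprites, Ogre 1.x style.
#include <OgreSceneManager.h>
#include <OgreBillboardSet.h>
#include <OgreBillboard.h>

void addLeds(Ogre::SceneManager* sceneMgr)
{
    Ogre::BillboardSet* leds = sceneMgr->createBillboardSet("HudLeds");
    leds->setMaterialName("Examples/Flare");     // sample additive material
    leds->setDefaultDimensions(0.5, 0.5);        // world-space quad size

    Ogre::Billboard* b = leds->createBillboard(Ogre::Vector3(0, 2, 0));
    b->setColour(Ogre::ColourValue(0.2f, 0.9f, 1.0f));

    sceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(leds);
}
```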
For those not familiar with Core Image, here's a good description of it:
http://developer.apple.com/macosx/coreimage.html
Is there something equivalent to Apple's Core Image/Core Video for Windows? I looked around and found the DirectX/Direct3D stuff, which has all the underlying pieces, but there doesn't appear to be any high-level API to work with, unless you're willing to use .NET and WPF, neither of which really interests me.
The basic idea would be to create/load an image, attach any number of filters that can be chained together to form a graph, and then render the image to an HDC, using the GPU to do most of the hard work. DirectX/Direct3D has these pieces, but you have to jump through a lot of hoops (or so it appears) to use them.
There are a variety of tools for working with shaders (such as RenderMonkey and FX-Composer), but no direct equivalent to CoreImage.
But stacking fragment shaders on top of each other is not very hard, so if you don't mind learning OpenGL, it would be quite doable to build a framework that applies shaders to an input image and draws the result to an HDC.
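A sketch of what the core of such a framework could look like: two textures are used as ping-pong render targets, and each filter is one pre-compiled GLSL fragment shader program. drawFullScreenQuad() is an assumed helper (not a GL call) that draws a viewport-covering textured quad, and GLEW initialization and resource cleanup are omitted:

```
#include <GL/glew.h>
#include <vector>

void drawFullScreenQuad();   // assumed helper: draws a textured, viewport-filling quad

// Apply a chain of fragment-shader "filters" to inputTex and return the texture
// holding the final result. Two FBOs alternate as source/destination.
GLuint applyFilterChain(GLuint inputTex, int w, int h,
                        const std::vector<GLuint>& filterPrograms)
{
    GLuint tex[2], fbo[2];
    glGenTextures(2, tex);
    glGenFramebuffers(2, fbo);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[i], 0);
    }

    GLuint src = inputTex;
    int dst = 0;
    glViewport(0, 0, w, h);
    for (GLuint prog : filterPrograms) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   // render into destination texture
        glUseProgram(prog);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, src);             // previous stage as input
        drawFullScreenQuad();
        src = tex[dst];
        dst = 1 - dst;                                 // swap ping-pong targets
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return src;   // texture with the final filtered image (cleanup omitted)
}
```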
Adobe's new Pixel Bender is the closest technology out there. It is cross-platform: it's part of the Flash 10 runtime, as well as the key pixel-oriented CS4 apps, namely After Effects and (soon) Photoshop. It's unclear, however, how much of it is currently exposed for embedding in other applications at this point. In the most extreme case it should be possible to embed it by embedding a Flash view, but that is more overhead than would obviously be ideal.
There is also at least one smaller-scale third-party offering: the Conduit Pixel Engine. It is commercial, however, with no licensing price clearly listed.
I've now got a solution to this. I've implemented an ImageContext class, a special Image class, and a Filter class that together provide functionality similar to Apple's Core Image. All three use OpenGL (I gave up trying to get this to work on DirectX due to image-quality issues; if someone knows DirectX well, contact me, because I'd love to have a DirectX version) to render images to a context and use the filters to apply their effects (as GLSL fragment shaders). There's a brief write-up here:
ImageKit
with a screen shot of an example filter and some sample source code.