I'm taking a look at Graphviz (gvc) to embed the creation of some graphs in an MFC app that I'm working on.
As far as I can tell, it's pretty simple to render to a PNG file, but I want to render to a GDI+ context directly, without having to write a temporary file to disk first, which seems to be the only option. Is this possible?
Regards Candag
Yes, it's possible if you write your own renderer plug-in; see http://www.graphviz.org/doc/libguide/libguide.pdf . It's already been done for X11 (see http://www.graphviz.org/doc/info/output.html#d:xlib), so you could use that as inspiration; probably 'all' you'd have to do is translate the Xlib primitives into GDI(+) primitives.
That said, for me it wasn't worth it; I just render to a temporary file and read that in. It's not as nice conceptually, but it makes no difference to the user, and implementing and debugging the renderer mentioned above would be a significant amount of work. I suspect that for the use cases where Graphviz's output is good enough, the optimisation of having a native GDI renderer isn't worth it.
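For reference, the temp-file route is only a handful of calls. Here's a minimal sketch using the gvc API and GDI+; the fixed file name, the missing error handling, and the CMyView class are simplifications for illustration, and GdiplusStartup must have been called at application start-up:

```cpp
#include <graphviz/gvc.h>
#include <gdiplus.h>

// Render a DOT string to a temporary PNG with gvc.
void RenderDotToPng(const char* dot, const char* pngPath)
{
    GVC_t* gvc = gvContext();
    Agraph_t* g = agmemread(dot);            // parse the DOT source
    gvLayout(gvc, g, "dot");                 // run the layout engine
    gvRenderFilename(gvc, g, "png", pngPath);
    gvFreeLayout(gvc, g);
    agclose(g);
    gvFreeContext(gvc);
}

// In an MFC paint handler, load the PNG and draw it with GDI+.
void CMyView::OnDraw(CDC* pDC)
{
    Gdiplus::Graphics gfx(pDC->GetSafeHdc());
    Gdiplus::Image img(L"graph.png");        // hypothetical temp-file path
    gfx.DrawImage(&img, 0, 0);
}
```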
I'm trying to figure out how to use WebRender to render to an image (say PNG).
If I understand correctly, I would need to somehow (?) acquire an Rc<dyn gl::Gl> context so I can pass it to create_webrender_instance* and do the rendering with it. Then, I would somehow (?) get the image data back out of the GL thingy and save it with a crate like image.
I noticed that the WebRender debugging tool wrench has a headless mode which seems to do something similar to what I need. However, it uses unsafe, confusing, and poorly documented functions like gl::GlFns::load_with, and it's really hard for me to understand what's going on and which bits I need.
How can I render a PNG image using WebRender? I am looking for a solution that works on many platforms, and without creating a window.
*Corresponds to Renderer::new in the latest version of WebRender at time of writing. I am using the master branch.
I'm developing a game (actually my first game, so I'm new to this world). I'm using OpenGL with the NDK and C++ for the rendering part, and I call it from Java via JNI. I'm stuck on textures, since I need to use PNGs with an alpha channel and TTF fonts for some text.
I can include libpng, but since I'm using the experimental Gradle plugin, I don't know how to add the library and use it. I saw that the library can be precompiled and added, but apparently only for one architecture. I may be wrong, but I think that if I add the library's source code and compile it along with my program, it will be built for all the architectures I need (MIPS, 64-bit ARM, x86, 64-bit x86, ARM). Alternatively, I was thinking of pre-converting the PNGs to raw RGBA and using those buffers directly with OpenGL, but again, I don't know how to do that.
As for the TTF issue, I'm drawing a complete blank; if you have any advice, I would greatly appreciate it.
Thanks for your help.
You can build the whole FreeType engine into your code, or you can just use what's already part of Android: use Canvas to render glyphs to a Bitmap. You can find an example of this in Android Breakout. The game is written in Java rather than C++, but the Java-language GLES code is just a thin wrapper around the native stuff, so it's pretty similar.
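If you do go the FreeType route, rasterising a glyph into a buffer that glTexImage2D() can consume only takes a few calls. A minimal sketch, with the font path, pixel size, and error handling left as placeholders (it also assumes the glyph bitmap's pitch equals its width; copy row by row otherwise):

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GLES2/gl2.h>

// Rasterise one glyph with FreeType and upload it as a GLES texture.
GLuint GlyphToTexture(const char* ttfPath, char c)
{
    FT_Library lib;
    FT_Face face;
    FT_Init_FreeType(&lib);
    FT_New_Face(lib, ttfPath, 0, &face);
    FT_Set_Pixel_Sizes(face, 0, 48);            // 48 px tall, placeholder
    FT_Load_Char(face, c, FT_LOAD_RENDER);      // render to an 8-bit bitmap

    FT_Bitmap& bmp = face->glyph->bitmap;
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // glyph rows are byte-aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bmp.width, bmp.rows,
                 0, GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return tex;
}
```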
There's a pretty good blog post about GLES text on Android here.
On a similar note, you already have a copy of libpng in your app. You can call through the Bitmap API to use it.
If your goal is an entirely self-contained native app, then the approach of calling into Canvas/Bitmap isn't viable. I don't think that's a particularly useful goal, however. You're better off separating the "game engine" from the game logic, e.g. with platform-specific "decode PNG" and "pass this pile of RGBA pixels into glTexImage2D()" functions and a platform-agnostic "use texture N", as sketched below.
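To make that split concrete, here is a sketch of what the platform-agnostic half might look like; UploadRgbaTexture is a hypothetical name, and the platform-specific decoder (libpng, the Bitmap API via JNI, or anything else) just has to hand it a decoded RGBA buffer:

```cpp
#include <GLES2/gl2.h>
#include <cstdint>

// Platform-agnostic half: takes decoded RGBA pixels, returns a texture.
// Where the pixels come from is the platform-specific half's problem.
GLuint UploadRgbaTexture(const uint8_t* pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```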
Taking that one step further, your best approach is to use an existing graphics engine or game engine, and focus on creating the game rather than writing the engine. Learning about engines by writing one is a worthwhile endeavor, but if your actual goal is to write a game then you should focus on the game itself.
Need help with an interesting task. I need to write a C++ program that draws a graph and saves it as a BMP graphics file. I know how to initialize the BMP, but I can't work out how to draw the graph into it. I'd appreciate practical and theoretical help, or a link to an article on the subject.
P.S. I apologize for my bad English :I
There are a lot of ways to do graphics in Windows. The lowest-level and most fundamental is to use the Win32 APIs that employ the GDI (Graphics Device Interface), which is built into Windows. With GDI calls you can paint anything to the screen, and the same GDI calls can be used to paint on an in-memory bitmap that is off screen. To get started in this direction, search the net for Win32 tutorials.
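As a rough starting point, here is what off-screen drawing with GDI looks like; the drawing calls are placeholders, and the resulting pixel buffer is in the same bottom-up, 4-byte-row-aligned layout a BMP file expects:

```cpp
#include <windows.h>

// Draw into an off-screen 24-bit DIB with ordinary GDI calls.
// 'bits' ends up pointing at the raw BGR pixel rows, ready to be
// written out under the BITMAPFILEHEADER/BITMAPINFOHEADER you
// already know how to set up.
void DrawGraphOffscreen(int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;      // positive = bottom-up DIB
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HDC hdc = CreateCompatibleDC(NULL);
    HBITMAP dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    HGDIOBJ old = SelectObject(hdc, dib);

    // White background, then a simple polyline standing in for the graph.
    RECT r = { 0, 0, width, height };
    FillRect(hdc, &r, (HBRUSH)GetStockObject(WHITE_BRUSH));
    MoveToEx(hdc, 10, height / 2, NULL);
    LineTo(hdc, width / 2, 10);
    LineTo(hdc, width - 10, height / 2);

    GdiFlush();   // make sure GDI has finished writing into 'bits'

    // ... write 'bits' to your BMP file here ...

    SelectObject(hdc, old);
    DeleteObject(dib);
    DeleteDC(hdc);
}
```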
I recommend the FreeImage library (http://freeimage.sourceforge.net/). It's a simple and fast library with no extra baggage; you can manipulate graphics files by treating them like a 2D array.
Also, they have a nice PDF document about the API; you don't need tutorials to use it, just read the API reference and you will get it. Pro tip: don't put spaces before special characters like ",!?.".
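To illustrate the 2D-array style of use mentioned above, here's a minimal FreeImage sketch that plots a diagonal line and saves it as a BMP (the sizes and the "plot" are placeholders):

```cpp
#include <FreeImage.h>

// Build an image in memory with FreeImage and save it as a BMP.
int main()
{
    FreeImage_Initialise();

    FIBITMAP* dib = FreeImage_Allocate(640, 480, 24);  // 24-bit RGB

    // Treat the image like a 2D array: plot y = x as a red diagonal.
    RGBQUAD red = { 0, 0, 255, 0 };   // rgbBlue, rgbGreen, rgbRed, reserved
    for (int x = 0; x < 480; ++x)
        FreeImage_SetPixelColor(dib, x, x, &red);

    FreeImage_Save(FIF_BMP, dib, "graph.bmp", 0);
    FreeImage_Unload(dib);

    FreeImage_DeInitialise();
    return 0;
}
```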
You might want to take a look at the graphviz package.
I am trying to play a .flv file in a GLUT window using OpenGL and C++ on Linux, but I'm not sure where to start.
Is it possible to do this? If so, how?
Make sure you mean .flv not .swf.
It's quite easy: decode the video with something like libavcodec and use the raw frames as textures.
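As a rough sketch of that approach (this uses the FFmpeg 5+ send/receive API; error handling, end-of-stream draining, and clean-up are omitted, and in a real GLUT program you'd drive this from the idle/display callbacks):

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <GL/gl.h>
#include <vector>

// Decode video frames one by one and upload each as an RGBA texture.
void PlayToTexture(const char* path, GLuint tex)
{
    AVFormatContext* fmt = nullptr;
    avformat_open_input(&fmt, path, nullptr, nullptr);
    avformat_find_stream_info(fmt, nullptr);

    const AVCodec* dec = nullptr;
    int vid = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);

    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vid]->codecpar);
    avcodec_open2(ctx, dec, nullptr);

    AVFrame* frame = av_frame_alloc();
    AVPacket* pkt = av_packet_alloc();
    SwsContext* sws = nullptr;
    std::vector<uint8_t> rgba;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vid &&
            avcodec_send_packet(ctx, pkt) >= 0 &&
            avcodec_receive_frame(ctx, frame) >= 0) {

            // Convert whatever the codec produced (usually YUV) to RGBA.
            sws = sws_getCachedContext(sws, frame->width, frame->height,
                                       (AVPixelFormat)frame->format,
                                       frame->width, frame->height,
                                       AV_PIX_FMT_RGBA, SWS_BILINEAR,
                                       nullptr, nullptr, nullptr);
            rgba.resize((size_t)frame->width * frame->height * 4);
            uint8_t* dst[1] = { rgba.data() };
            int stride[1] = { frame->width * 4 };
            sws_scale(sws, frame->data, frame->linesize, 0,
                      frame->height, dst, stride);

            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frame->width,
                         frame->height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                         rgba.data());
            // ... draw a textured quad in the display callback ...
        }
        av_packet_unref(pkt);
    }
    // Clean-up of ctx/fmt/frame/pkt/sws omitted for brevity.
}
```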
If you really want to do this, check out the source code of Gnash; they have a renderer that uses OpenGL. However, rendering is just a small part of the job: you also have to decode audio/video, run ActionScript, etc., in order to play a Flash file.
It's so complicated that even Adobe didn't manage to get it right :)
If you want to play just some video, look at #Banthar's answer, otherwise:
OpenGL is a no-frills drawing API. It gives you the computer equivalent of "pens and brushes" to draw on some framebuffer. Period. No higher level functionality than that.
Flash is a really complex thing. It consists of a vector-geometry object system and a script engine (ActionScript), and it provides sound and video de-/compression, etc. All of this must be supported by a SWF player. At the moment there's only one fully featured SWF player, and that's the one Adobe makes. There are free alternatives (Lightspark, Gnash), but they are behind the official Flash Player by several major versions.
So the most viable approach would be to load the Flash Player browser plugin in your program through the plugin interface, provide it with what a browser provides to a plugin (DOM, HTTP transport, etc.), and have the plugin render to an offscreen buffer which you then copy to the OpenGL context. But that's not very efficient.
TL;DR: Complicated as sh**, and probably not worth the effort.
For those not familiar with Core Image, here's a good description of it:
http://developer.apple.com/macosx/coreimage.html
Is there something equivalent to Apple's CoreImage/CoreVideo for Windows? I looked around and found the DirectX/Direct3D stuff, which has all the underlying pieces, but there doesn't appear to be any high-level API to work with, unless you're willing to use .NET AND WPF, neither of which really interests me.
The basic idea would be to create/load an image, attach any number of filters that can be chained together to form a graph, and then render the image to an HDC, using the GPU to do most of the hard work. DirectX/Direct3D has these pieces, but you have to jump through a lot of hoops (or so it appears) to use them.
There are a variety of tools for working with shaders (such as RenderMonkey and FX-Composer), but no direct equivalent to CoreImage.
But stacking fragment shaders on top of each other is not very hard, so if you don't mind learning OpenGL, it would be quite doable to build a framework that applies shaders to an input image and draws the result to an HDC.
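For instance, a CoreImage-style filter chain maps naturally onto "ping-pong" rendering between two framebuffer-attached textures. Every name below is illustrative, DrawFullScreenQuad is a hypothetical helper, and the FBOs and textures are assumed to be created elsewhere:

```cpp
#include <GL/glew.h>   // for the FBO entry points
#include <vector>

// CoreImage-style chain: each "filter" is a fragment shader program.
// Ping-pong between two FBO-attached textures, sampling the previous
// pass's result and writing the next.
GLuint ApplyFilterChain(GLuint inputTex,
                        const std::vector<GLuint>& filterPrograms,
                        const GLuint fbo[2], const GLuint colorTex[2])
{
    GLuint src = inputTex;
    int dst = 0;
    for (GLuint prog : filterPrograms) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
        glUseProgram(prog);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, src);
        glUniform1i(glGetUniformLocation(prog, "uImage"), 0);
        DrawFullScreenQuad();        // hypothetical helper
        src = colorTex[dst];         // this pass's output feeds the next
        dst = 1 - dst;
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return src;  // texture holding the final image; draw/blit it to the HDC
}
```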
Adobe's new Pixel Bender is the closest technology out there. It's cross-platform: it's part of the Flash 10 runtime, as well as the key pixel-oriented CS4 apps, namely After Effects and (soon) Photoshop. It's unclear, however, how much of it is exposed for embedding in other applications at this point. In the most extreme case it should be possible to embed it via a Flash view, but that's more overhead than would obviously be ideal.
There is also at least one smaller-scale third-party offering: Conduit Pixel Engine. It is commercial, however, with no licensing price clearly listed.
I've now got a solution to this. I've implemented an ImageContext class, a special Image class, and a Filter class that together provide functionality similar to Apple's CoreImage. All three use OpenGL to render images to a context and apply the filters' effects as GLSL fragment shaders. (I gave up trying to get this working on DirectX due to image quality issues; if someone knows DirectX well, contact me, because I'd love to have a Dx version.) There's a brief write-up here:
ImageKit
with a screen shot of an example filter and some sample source code.