Is Cairo accelerated on the OpenGL backend? - opengl

By this I mean: does Cairo draw lines, shapes, and everything else using OpenGL-accelerated primitives, or not? And if not, is there a library that does?

The OpenGL backend certainly accelerates some functions, but there are many it can't. The fact that it's written against GL 2.1 (and thus can't use the more advanced features of 3.x or 4.x hardware) means there is a lot it simply cannot accelerate.
If you are willing to limit yourself to NVIDIA hardware, NVIDIA just came out with the NV_path_rendering extension, which provides a lot of the 2D functionality you would find with Cairo. Indeed, it's possible that you could write a Cairo backend for it. The path rendering extension is only available on GeForce 8xxx hardware and above.
It's nifty in that it's focused on the vertex pipeline. It doesn't do things like gradients or colors or whatever. That's good, because it leaves the fragment shader free for you to use, which means you get to do pretty much whatever you want ;)

Cairo is designed to have a flexible backend for rendering. It can use OpenGL for rendering, though support is still listed as "experimental" at this point. For details, see using cairo with OpenGL.
It can also output to the X Window System, Quartz, Win32, image buffers, PostScript, PDF, SVG, and more.
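For illustration, here is a minimal sketch of setting up the experimental cairo-gl backend. It assumes a cairo built with --enable-gl and an already-created X11 Display and GLX context (setup not shown); check cairo-gl.h in your build for the exact entry points.

    /* Minimal sketch, assuming cairo was built with --enable-gl.
     * Display and GLXContext come from <X11/Xlib.h> and <GL/glx.h>;
     * include them if your cairo-gl.h doesn't already. */
    #include <cairo.h>
    #include <cairo-gl.h>

    cairo_surface_t *make_gl_surface(Display *dpy, GLXContext ctx,
                                     int width, int height)
    {
        cairo_device_t *device = cairo_glx_device_create(dpy, ctx);
        cairo_surface_t *surface =
            cairo_gl_surface_create(device, CAIRO_CONTENT_COLOR_ALPHA,
                                    width, height);
        cairo_device_destroy(device); /* the surface holds its own reference */
        return surface;
    }

    /* Draw with the ordinary cairo API; rasterization goes through GL. */
    void draw_circle(cairo_surface_t *surface)
    {
        cairo_t *cr = cairo_create(surface);
        cairo_set_source_rgb(cr, 0.2, 0.4, 0.9);
        cairo_arc(cr, 128, 128, 64, 0, 2 * 3.14159265);
        cairo_fill(cr);
        cairo_destroy(cr);
    }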

Is it a big deal switching from OpenGL 3.0 to OpenGL ES 2.0?

If I am currently developing a game for windows using SDL and GLEW (for OpenGL 3.0+) and I later want to port my game to Android, will I have to rewrite the majority of my code to convert from OpenGL 3.0 to OpenGL ES 2.0? Are there any programs that do this for me? Is it a big deal switching from OpenGL to OpenGL ES?
Not at all, it is very easy to convert.
The only differences are shader variables and constants, and suffixes like GL_RGBA8 becoming GL_RGBA8_OES. However, there are limits in OpenGL ES. For instance, you can use only GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT as the index data type, not GL_UNSIGNED_INT, which means you cannot address more than 65,536 vertices in one draw call. It is not a big deal, although you should refer to the official OpenGL ES manual: https://www.khronos.org/opengles/sdk/docs/man/
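For example, an ES 2.0-safe draw call uses 16-bit indices; a minimal sketch (`indexBuffer` is assumed to have been created with glGenBuffers elsewhere):

    // ES 2.0 only guarantees GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT here;
    // GL_UNSIGNED_INT requires the OES_element_index_uint extension.
    GLushort indices[] = { 0, 1, 2, 2, 1, 3 };
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);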
Refer to the link OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences by coffeeandcode
It really depends on your code
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, etc.
If you use any of those features you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3d" is provided for you. You're required to write all projection, lighting, texture references, etc yourself.
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
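To make the contrast concrete, here is a hedged sketch of the same triangle both ways (`program`, `mvpLocation`, `mvpMatrix`, and `vbo` are assumed to be set up elsewhere):

    // Deprecated fixed-function style: the driver does the matrix math.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);
    glBegin(GL_TRIANGLES);
    glColor3f(1, 0, 0); glVertex3f(-1, -1, 0);
    glColor3f(0, 1, 0); glVertex3f( 1, -1, 0);
    glColor3f(0, 0, 1); glVertex3f( 0,  1, 0);
    glEnd();

    // ES 2.0 style: you supply the matrix, the shaders, and the buffers.
    glUseProgram(program);
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpMatrix);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3);

Porting mostly means replacing every block like the first with code like the second, plus writing the vertex and fragment shaders the first block never needed.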
There is an open source library, Regal, which I think was started by NVIDIA, that is supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.

Replicating Cathode retro terminal effect?

I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they using OpenGL and shaders, maybe?
If you are still interested in the subject I'm working on a similar project. The effects were obtained by using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be extremely readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have realized that Android phones have (had?) such animations when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
Cathode seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion (a shader sketch follows below).
But it would take a lot of work to add all the features that cathode has, not to mention to run them quickly.
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
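For flavor, here is a minimal sketch (not Cathode's actual code) of the usual approach: render the terminal text into a texture, then draw a fullscreen quad through a fragment shader that applies barrel distortion and scanlines. The shader is embedded as a C++ raw string; the uniform and varying names are made up for the example.

    static const char *crtFragmentShader = R"(
        #version 120
        uniform sampler2D terminal;   // the pre-rendered text
        varying vec2 uv;              // 0..1 across the fullscreen quad

        void main() {
            // Barrel-distort the coordinates to fake a curved tube.
            vec2 c = uv * 2.0 - 1.0;            // center at (0,0)
            c *= 1.0 + 0.1 * dot(c, c);         // push edges outward
            vec2 warped = c * 0.5 + 0.5;

            // Darken alternating lines for a scanline effect.
            float scan = 0.9 + 0.1 * sin(warped.y * 800.0);
            vec3 col = texture2D(terminal, warped).rgb * scan;

            // Black outside the curved screen area.
            if (warped.x < 0.0 || warped.x > 1.0 ||
                warped.y < 0.0 || warped.y > 1.0)
                col = vec3(0.0);

            gl_FragColor = vec4(col, 1.0);
        }
    )";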
If you want it to run fast without killing the CPU, the GPU is the best solution, which means pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders there too.
One suggestion: write this in Qt4 and add pixel shader effects to the QWidget.

drawing a pixelarray fast and efficient on linux

How can I draw a pixel array very fast in C++? I've seen many questions like this on Stack Overflow; they are all answered with:
use GDI (Windows)
use OpenGL
...
But there must be some way; after all, OpenGL itself does it somehow!
I'm writing a little raytracer and need to draw every pixel many times per second. OpenGL can do it, platform-independently and fast, so how can I achieve that without OpenGL? And "without OpenGL" does not mean:
use SDL (slow)
use this or that library
Please suggest only platform-native methods, or the library closest to that. If it is possible (I know it is), how can I do this?
Platform-independent solutions are preferred.
Drawing graphics on Linux you either have to use X11, or OpenGL (and in the near future, Wayland may be another option). On Linux there is no "native" way of doing graphics, because the Linux kernel doesn't care about graphics APIs. It provides interfaces (DRM) on top of which graphics systems are then implemented in user space. If you just want to splat pixels on the screen without caring about windows, you could also mmap the framebuffer device (e.g. /dev/fb0), but you normally don't want that, because nobody wants his screen clobbered by some program he can't move or hide.
Drawing single points is inefficient no matter which API is used, due to the protocol overhead.
So X11 it is. The best bet is the MIT-SHM extension, which lets you alter pixels in a shared-memory buffer that the X11 server then blits as a whole. Doing this with pure Xlib functions is annoyingly cumbersome; it is effectively what SDL wraps up nicely for you. Roughly, it looks like this:
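(A minimal sketch, assuming `dpy`, `win`, `gc`, `visual`, `depth`, `width`, and `height` come from your usual X11 setup; error handling omitted.)

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, visual, depth, ZPixmap,
                                  NULL, &shminfo, width, height);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = (char *)shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);

    // The raytracer writes pixels straight into img->data, then, per frame:
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False);
    XSync(dpy, False);   // one blit per frame, not one request per pixel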
The other option is OpenGL. OpenGL is not a library! It's a system-level API that gives you almost direct access to the GPU, and it integrates nicely with X11. Yes, the API is provided through a library that's being loaded, but technically that library is just a "wrapper" or "interface" to the actual driver. Drawing single points with OpenGL makes no sense either, but you can batch up several points into a list (using a vertex array) and then process that list. So the idea is to collect all the incoming points between two display refresh intervals and draw them in one single batch, for example:
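(A hedged sketch using an old-style client-side vertex array for brevity; GL headers and context setup are assumed.)

    #include <vector>

    std::vector<float> points;   // x,y pairs collected since the last frame

    void addPoint(float x, float y) {
        points.push_back(x);
        points.push_back(y);
    }

    void drawFrame() {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, points.data());
        glDrawArrays(GL_POINTS, 0, (GLsizei)(points.size() / 2));
        glDisableClientState(GL_VERTEX_ARRAY);
        points.clear();          // start collecting the next batch
    }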
Platform-independent solutions are preferred.
Why are you asking about native APIs then? By definition there can be no platform-independent native API: either you're native, or you're platform-independent.
And in your particular scenario I think SDL would be the best solution, because it offers just the right kind of abstraction and program-side interface for a raytracer. Just FYI: virtual machines like QEMU use SDL.
Or you use OpenGL, which is a truly platform-neutral API with wide support.
Drawing graphics on Linux you either have to use X11, or OpenGL.
This is absolutely false! Counterexample: there are platforms that don't run X11, yet they display pixels (e.g. fonts).
Side note: OpenGL usually depends on X11 (it's possible, albeit hard, to run OpenGL without X11).
As #datenwork says, there's at least 2 other ways to draw pixels:
The framebuffer device (fbdev), an abstraction for interfacing with graphics hardware. Very old, designed by Martin Schaller; see the kernel docs. Source code is here. Also see here. Here's the simplest possible framebuffer driver. (A usage sketch follows at the end of this answer.)
The Direct Rendering Manager (DRM), a kernel subsystem that provides an API for userland apps to send commands/data directly to the GPU. (Seems suspiciously similar to what OpenGL does, but idk!) Source code is here. Here's a DRM example that initializes a simple display pipeline.
Both of these are part of the kernel, so they're lower-level than X11, which is not part of the kernel. Both can draw arbitrary pixels (e.g. penguins). Note, though, that both are Linux kernel interfaces, so unlike OpenGL they won't carry over to other platforms.
See this for more on how to draw stuff on Linux.
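As promised, a minimal fbdev sketch that maps the framebuffer and writes one pixel directly. Run it from a text console (not under X11/Wayland); the device node and the 32-bpp pixel layout assumed here vary by system.

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        size_t size = (size_t)finfo.line_length * vinfo.yres;
        uint8_t *fb = (uint8_t *)mmap(0, size, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);

        /* Paint one gray pixel at (100, 100), assuming 32 bits per pixel. */
        uint32_t *pixel = (uint32_t *)(fb + 100 * finfo.line_length + 100 * 4);
        *pixel = 0x00808080;

        munmap(fb, size);
        close(fd);
        return 0;
    }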

Mixing DirectX and OpenGL

I want to be able to render into an OpenGL render window using DirectX, because the features I'm after are only supported in DirectX.
I have heard that this was possible a few years ago, and I'm hoping it still is.
I'd imagine it will involve pointing DirectX to the correct part of VRAM and the correct depth buffer.
Also a tutorial or simply an explanation would be extremely useful.
At least NVIDIA has the NV_DX_interop extension, which lets you use Direct3D 9 buffers/textures/surfaces directly as OpenGL buffers/textures/renderbuffers (i.e., the other way around from what you describe). But I don't have any experience with it, and I don't know whether it is widely supported or actually works well.
It would be more interesting to know which features you think are only available in Direct3D. Maybe we can show you how to achieve them with OpenGL, as there are not many features (if any) that are available in Direct3D and not in OpenGL. Although if you have an ATI card, being available and actually working correctly may sometimes be two separate things.
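For concreteness, here is a hedged sketch of the WGL_NV_DX_interop flow. It is NVIDIA-specific; the wglDX* entry points are loaded via wglGetProcAddress, and `d3dDevice`/`d3dSurface` stand in for an existing Direct3D 9 device and render target (setup not shown).

    HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

    GLuint glTex;
    glGenTextures(1, &glTex);
    HANDLE interopObject = wglDXRegisterObjectNV(
        interopDevice, d3dSurface, glTex,
        GL_TEXTURE_2D, WGL_ACCESS_READ_ONLY_NV);

    // Each frame: let D3D render into the surface, then lock it for GL.
    wglDXLockObjectsNV(interopDevice, 1, &interopObject);
    //   ... draw with glTex bound as an ordinary GL texture ...
    wglDXUnlockObjectsNV(interopDevice, 1, &interopObject);

    // Teardown:
    wglDXUnregisterObjectNV(interopDevice, interopObject);
    wglDXCloseDeviceNV(interopDevice);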
Mixing OpenGL and Direct3D will not work, and AFAIK it never did. May I ask which features of Direct3D you require that OpenGL doesn't offer?
You can, at least on NVIDIA! Check NV_DX_interop. However, every DirectX feature is supported in OpenGL, as OpenGL is more raw/low-level than DirectX; just the way a feature is implemented may differ. Again, tell us which feature of DirectX you want and I can give you hints on how to reimplement it.
One specific example of a difference between OpenGL and Direct3D, which is something I am researching myself: DirectShow (part of the DirectX family) provides video capture filters for my Panasonic DVCPRO-HD based cameras. I would like to use the live streams collected via that API as inputs into OpenGL-based libraries such as OpenFrameWorks. I am looking to see how efficient I can make this transfer. The existing OpenFrameWorks video API uses a slightly older, deprecated DirectShow API, and uses a brute-force strategy for getting pixels from DirectShow over to OpenFrameWorks.

Rendering Vector Graphics in OpenGL? [duplicate]

This question already has answers here: Displaying SVG in OpenGL without intermediate raster (5 answers). Closed 5 years ago.
Is there a way to load a vector graphics file and then render it using OpenGL? This is a vague question as I don't know much about file formats for vector graphics. I know of SVG, though.
Turning it to raster isn't really helpful as I want to do real time zooming in on the objects.
I see most of the answers are about Qt somehow, even though the original question doesn't mention it. Here's my answer in terms of OpenGL alone (which also benefits greatly from the passage of time, as it could not have been given in 2010):
Since 2011, the state of the art is Mark Kilgard's baby, NV_path_rendering, which is currently only a vendor (Nvidia) extension, as you might have guessed from its name. There is a lot of material on it:
https://developer.nvidia.com/nv-path-rendering Nvidia hub, but some material on the landing page is not the most up-to-date
http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/opengl/gpupathrender.pdf Siggraph 2012 paper
http://on-demand.gputechconf.com/gtc/2014/presentations/S4810-accelerating-vector-graphics-mobile-web.pdf GTC 2014 presentation
http://www.opengl.org/registry/specs/NV/path_rendering.txt official extension doc
NV_path_rendering is now used by Google's Skia library behind the scenes, when available. (Nvidia contributed the code in late 2013 and 2014.)
You can of course load SVGs and such (https://www.youtube.com/watch?v=bCrohG6PJQE), and the extension also supports the PostScript syntax for paths. You can also mix path rendering with other OpenGL (3D) stuff, as demoed at:
https://www.youtube.com/watch?v=FVYl4o1rgIs
https://www.youtube.com/watch?v=yZBXGLlmg2U
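In code, the extension's core "stencil, then cover" pattern looks roughly like this (a hedged sketch based on the extension spec; NVIDIA-only, with entry points loaded like any other extension, e.g. via GLEW):

    const char *svgPath = "M100,180 L40,10 L190,120 L10,120 L160,10 Z"; // a star

    GLuint path = glGenPathsNV(1);
    glPathStringNV(path, GL_PATH_FORMAT_SVG_NV,
                   (GLsizei)strlen(svgPath), svgPath);

    // Pass 1: mark the covered samples in the stencil buffer.
    glStencilFillPathNV(path, GL_COUNT_UP_NV, 0x1F);

    // Pass 2: shade wherever the stencil is non-zero, then reset it.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NOTEQUAL, 0, 0x1F);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    glColor3f(0.2f, 0.6f, 1.0f);   // or bind your own fragment shader
    glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

    glDeletePathsNV(path, 1);

Because the zoom is just a transform applied before the stencil/cover passes, the path is re-rasterized at full resolution every frame, which is exactly what you want for real-time zooming.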
An upstart with even less (or downright no) vendor support or academic glitz is NanoVG, which is currently developed and maintained (https://github.com/memononen/nanovg). Given the number of 2D-over-OpenGL libraries that have come and gone over time, you're taking a big bet using something not supported by a major vendor, in my humble opinion.
This isn't an implementation, but very relevant to your question and viewers.
Chapter 25. Rendering Vector Art on the GPU
https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch25.html
Let me expand on Greg's answer.
It's true that Qt has a SVG renderer class, QSvgRenderer. Also, any drawing that you do in Qt can be done on any "QPaintDevice", where we're interested in the following "paint devices":
A Qt widget;
In particular, a GL-based Qt widget (QGLWidget);
A Qt image
So, if you decide to use Qt, your options are:
Stop using your current method of setting up the window (and GL context), and start using QGLWidget for all your rendering, including the SVG rendering. This might be a pretty small change, depending on your needs. QGLWidget isn't particularly limiting in its capabilities.
Use QSvgRenderer to render to a QImage, then put the data from that QImage into a GL texture (as you normally would), and render it any way you want (e.g. onto a quad drawn with GL_QUADS). This might have worse performance than the other method, but it requires the least change to your code; a sketch follows below.
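A hedged sketch of that second option (Qt 5 names; under Qt 4 you would convert the image with QGLWidget::convertToGLFormat instead of using QImage::Format_RGBA8888; GL headers come from your existing OpenGL setup):

    #include <QImage>
    #include <QPainter>
    #include <QSvgRenderer>

    GLuint svgToTexture(const QString &file, int w, int h) {
        QSvgRenderer renderer(file);
        QImage image(w, h, QImage::Format_RGBA8888);
        image.fill(Qt::transparent);

        QPainter painter(&image);
        renderer.render(&painter);   // CPU-side rasterization of the SVG
        painter.end();

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, image.constBits());
        return tex;
    }

Re-render the QImage at a higher resolution when the user zooms in, or the texture will look blurry.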
Wondering what QGLWidget does exactly? Well, when you issue Qt rendering commands to a QGLWidget, they're translated to GL calls for you. And this also happens when the rendering commands are issued by the SVG renderer. So in the end, your SVG is going to end up being rendered via a bunch of GL primitives (lines, polygons, etc).
This has a disadvantage. Different video cards implement OpenGL slightly differently, and Qt does not (and cannot) account for all those differences. So, for example, if your user has a cheap on-board Intel video card that doesn't support OpenGL antialiasing, your SVG will also look aliased (jaggy) if you render it directly to a QGLWidget. Going through a QImage avoids such problems.
You can use the QImage method when you're zooming in real time, too; it just depends on how fast you need it to be. You may need careful optimizations, such as reusing the same QImage and enabling clipping for your QPainter.
Qt has good support for directly rendering SVG images using OpenGL functionality (see the documentation for QSvgRenderer).
I hope that helps.
OpenGL has primitives like GL_LINES and GL_LINE_STRIP for drawing lines in space, if that's what you mean. Edit: this site has some information: http://www.falloutsoftware.com/tutorials/gl/gl2p5.htm