Using WebRender to render a PNG image

I'm trying to figure out how to use WebRender to render to an image (say PNG).
If I understand correctly, I would need to somehow (?) acquire an Rc<dyn gl::Gl> context so I can pass it to create_webrender_instance* and do the rendering with it. Then, I would somehow (?) get the image data back out of the GL context and save it with a crate like image.
I noticed that the WebRender debugging tool wrench has a headless mode which seems to do something similar to what I need. However, it uses unsafe, confusing, and poorly documented functions like gl::GlFns::load_with, and it's really hard for me to understand what's going on and which bits I need.
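For what it's worth, my rough mental model of the whole pipeline looks like the sketch below. This is only what I imagine, not working code: make_headless_gl_context() is a made-up placeholder (I assume it would be backed by something like glutin's headless context, surfman, or OSMesa), and all of the actual WebRender setup and display-list building is elided.

    use std::rc::Rc;
    use gleam::gl;

    fn main() {
        let (width, height) = (640u32, 480u32);

        // Hypothetical helper: create *some* headless GL context (glutin,
        // surfman, OSMesa, ...) and make it current on this thread.
        let ctx = make_headless_gl_context(width, height);

        // Wrap the context's symbol loader in gleam's trait object; this is
        // the Rc<dyn gl::Gl> that the Renderer constructor wants.
        let gl: Rc<dyn gl::Gl> = unsafe {
            gl::GlFns::load_with(|symbol| ctx.get_proc_address(symbol) as *const _)
        };

        // ... set up WebRender with gl.clone(), build a display list,
        //     generate a frame and render it here ...

        // Read the finished framebuffer back. GL's origin is bottom-left,
        // so flip the rows before saving.
        let pixels = gl.read_pixels(0, 0, width as i32, height as i32,
                                    gl::RGBA, gl::UNSIGNED_BYTE);
        let img = image::RgbaImage::from_raw(width, height, pixels)
            .expect("pixel buffer has unexpected size");
        image::imageops::flip_vertical(&img)
            .save("out.png")
            .expect("failed to write PNG");
    }

Is this roughly the right shape, and if so, what is a sane way to get that headless context and the Rc<dyn gl::Gl> out of it?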
How can I render a PNG image using WebRender? I am looking for a solution that works on many platforms and does not require creating a window.
*Corresponds to Renderer::new in the latest version of WebRender at the time of writing. I am using the master branch.

Related

Supersampling AA with PyOpenGL and GLFW

I am developing an application with OpenGL+GLFW and Linux as a target platform.
The default rasterizing has VERY strong aliasing. I have implemented FXAA on top of my pipeline and I still get pretty strong aliasing, especially when there's some kind of animation or movement: the edges of meshes flicker. This literally renders the whole project useless.
So, I thought I would also add supersampling, and I have been trying to implement it for two weeks already and still can't make it work. I'm starting to think it's not possible with the PyOpenGL+GLFW+Ubuntu 18.04 combination.
So, the question is, can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from different passes to the hard drive, so I thought I would do something like this:
1. Render the image at 2x/3x resolution to a texture.
2. Save the texture buffer to an array.
3. Get the average pixel value from each 2x2/3x3/4x4 block of this array.
4. Save it to the hard drive.
Obviously, it's going to be slower than multisampling with an OpenGL extension and require more memory, but I don't need high fps and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I would be glad of any advice.
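For completeness, the averaging in step 3 is just a plain box filter over the high-resolution buffer. Sketched below in Rust only to keep the arithmetic explicit; it is the same in any language, and the tightly packed RGB8 layout is an assumption on my part:

    // A plain box filter: average every `factor` x `factor` block of a
    // tightly packed RGB8 buffer rendered at `factor` times the target size.
    // (Buffer layout is an assumption; adjust for RGBA or row padding.)
    fn downsample_rgb(src: &[u8], out_w: usize, out_h: usize, factor: usize) -> Vec<u8> {
        let src_w = out_w * factor;
        let mut dst = Vec::with_capacity(out_w * out_h * 3);
        for oy in 0..out_h {
            for ox in 0..out_w {
                for c in 0..3 {
                    let mut sum: u32 = 0;
                    for sy in 0..factor {
                        for sx in 0..factor {
                            let (x, y) = (ox * factor + sx, oy * factor + sy);
                            sum += src[(y * src_w + x) * 3 + c] as u32;
                        }
                    }
                    dst.push((sum / (factor * factor) as u32) as u8);
                }
            }
        }
        dst
    }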

Pango layout flow around container (image)

I'm using Pango for text layout without the cairo backend (currently testing with the win32 backend), and I'd like to know whether Pango is capable of flowing a layout around an image, or any given container, or maybe inside a custom container.
Something like this: Flow around image
I have checked many examples and the Pango API and didn't find such a feature. Maybe I'm missing something, or Pango does not have this feature.
As I said in this answer, you can't. I went through the source code; Pango's graphics handling is primitive to the point of uselessness. Unless there's been some major reworking in the past year, which the release notes don't indicate, it's probably the same now.
The image you provide as an example is only available as PDF at the moment, a format that requires every line, word and glyph to be hard-positioned on the page. While it is theoretically possible to check the alpha channel of the image and wrap the text around the actual image instead of the block that contains it, this has not (to the best of my knowledge) ever been implemented in a dynamic output system.
Pango, specifically, cannot even open "holes" in the text for graphics to be added later and, at the code level, doesn't even have the concept of a multi-line cell - hence a line being the size of its largest component.
Your best bet is to look at WebKit for more complex displays. I, for one, have pretty much given up on Pango and it seems to be getting less popular.

graphviz render to gdiplus

I'm taking a look at Graphviz (gvc) to embed the creation of some graphs in an MFC app that I am working on.
As far as I can see, it's pretty simple to render to a PNG file, but I wanted to render to a GDI+ context for display, without having to write a temporary file to disk first (which seems to be the only option). Is this possible?
Regards Candag
Yes, it's possible, if you write your own renderer plug-in. See http://www.graphviz.org/doc/libguide/libguide.pdf . It's already been done for X11 (see http://www.graphviz.org/doc/info/output.html#d:xlib), so you can probably use that as inspiration; probably 'all' you'd have to do is translate the Xlib primitives into GDI(+) primitives.
That said, for me it wasn't worth it: I just render to a temporary file and read that back in. It's not as nice conceptually, but for the user it doesn't make any difference, and it would be a significant amount of work to implement and debug the renderer mentioned above. I suspect that for the use cases where the output of Graphviz is good enough, the optimisation of having a native GDI renderer isn't worth it...

Chromium/WebKit render to OpenGL texture

For the last few days I have been looking around the Chromium and WebKit source code, reading wikis, and watching Google videos. What I want to do is take what WebKit renders and place it into a GL texture, but I need to have different DOM nodes in different textures. I have a few questions, and I'm not sure whether I should use Chromium or implement my own simple browser. Chromium obviously has many nice features, but it is very large and extensive. I also figure that its algorithms for splitting render layers are unpredictable (I want pretty much full control).
Where should I look in WebKit's or Chromium's source to find where the raster data is output? It would be convenient if I could get access to Chromium's render layer raster data before it is composited. But as I said, the render layers would probably be mixed in a way I didn't want them to be.
Is WebKit GPU-accelerated? In that case I should be able to access the data directly. I know Chromium+Blink is, but I can't find out whether WebKit on its own is.
How much work is it to put together a simple browser?
P.S. I can't use Awesomium because I need to render different DOM nodes/subtrees into different textures. Chromium Embedded Framework doesn't appear to have support for DOM manipulation either, and I believe it just renders the entire page and gives you the raster data.

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is this great lib I want to use called libCinder. I looked through its docs but can't tell whether (and how) it is possible to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red/white/blue circles on it, and get an RGB/HSL/any char* pointer to the raw image data out of it, without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for the video streaming I would prefer to use ffmpeg, so that is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How can I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension: framebuffer objects (GL_EXT_framebuffer_object). A tutorial for using this extension can be found here. You will have to modify renderer.cpp to use this extension's commands.
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which will use Mesa's functions instead of Windows's or Mac's.