X11 Compositor / Layer manager - C++

I have n applications with GUIs made in different technologies.
This is what I want to do -
Render all application windows off-screen using a compositor (if I am using the term correctly).
Then combine them into a single layer to display, after applying several operations like re-sizing, changing opacity, rotation angle, etc.
Language of implementation: C++ with Xlib
Can someone give me an idea of how I should proceed with this?
Also, I have tried doing this before and succeeded with some help from Stack Overflow -
[ X11 layer manager ]
Create n layers, one for each application, onto which applications draw.
Have a layer manager which can perform operations on each of these layers
(like re-sizing, changing opacity, etc.) and then combine them to form a
single layer.
Is there an advantage in terms of performance with the first approach (rendering the application output myself rather than letting each application draw on its own)? And how exactly can this be achieved?
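As a rough illustration of the first approach, here is a minimal sketch, assuming the XComposite and XRender extensions are available; the window handles, the fixed 50% opacity, and the overall structure are placeholders rather than a working compositor:

```cpp
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xrender.h>

// Redirect one client window off-screen and paint its contents, blended at
// 50% opacity, into our own "canvas" window using XRender.
void composite_window(Display* dpy, Window client, Window canvas)
{
    // Ask the server to render 'client' into an off-screen pixmap instead of the screen.
    XCompositeRedirectWindow(dpy, client, CompositeRedirectManual);
    Pixmap pix = XCompositeNameWindowPixmap(dpy, client);

    XWindowAttributes attr;
    XGetWindowAttributes(dpy, client, &attr);

    XRenderPictureAttributes pa = {};
    Picture src = XRenderCreatePicture(dpy, pix,
                      XRenderFindVisualFormat(dpy, attr.visual), 0, &pa);
    Picture dst = XRenderCreatePicture(dpy, canvas,
                      XRenderFindVisualFormat(dpy, DefaultVisual(dpy, DefaultScreen(dpy))),
                      0, &pa);

    // Opacity: composite through a 50% alpha mask. Scaling/rotation would be
    // applied to 'src' beforehand with XRenderSetPictureTransform().
    XRenderColor half = {0, 0, 0, 0x8000};
    Picture mask = XRenderCreateSolidFill(dpy, &half);

    XRenderComposite(dpy, PictOpOver, src, mask, dst,
                     0, 0, 0, 0, 0, 0, attr.width, attr.height);

    XRenderFreePicture(dpy, mask);
    XRenderFreePicture(dpy, src);
    XRenderFreePicture(dpy, dst);
    XFreePixmap(dpy, pix);
}
```

Link with -lX11 -lXcomposite -lXrender. A real compositor would typically redirect all children of the root window and repaint in response to Damage events rather than compositing a single window once.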

Related

Rendering with OpenGL on a web server

I have an application that runs on Nintendo 3DS -- it uses a variant of OpenGL to render 3D animation. The user is able to store these scenes online as data files. That is, only the data needed to render the scene is stored - the image frames are rendered on the device.
Additionally I would like for people to be able to view these scenes online. One way might be to render them in the browser via WebGL, but I'm worried about the amount of time and memory this would require. I would rather have the server render the scenes into movie files which can be played from a web page.
I don't have a lot of experience with server side programming - is it possible for a server program to render frames to an OpenGL context? They would be offscreen framebuffers since there is no screen.
Any suggestions on an approach to doing that? I've used PHP mostly for web programming, but it seems like that is not feasible for this. Ideally I'd like to write a C++ program which runs on the server; that way I could re-use code from the 3DS. Is that possible? Where can I read about doing this?
Server-side rendering is possible, and it gives the user more consistent results than relying on WebGL behaving identically across different browsers and platforms (in addition to the time/memory concerns you already mentioned). Users with capable browsers and hardware won't gain anything from it, though, so you'll want to consider who your users are and what platforms they're using.
For Windows-based servers, using OpenGL (with off-screen framebuffers) and "no screen" presents a challenge: you need to start with a window to establish a graphics context. (On Linux there may be a provision for a "windowless" graphics context.) You will also need to decide how to manage GPU resources for rendering, since the GPU has limits on how many concurrent rendering requests it can serve before slowing down and/or failing to allocate resources (e.g. framebuffer memory).
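A rough sketch of that "start with a window" step on Windows, using classic WGL calls; the window is never shown, error handling is omitted, and the names are placeholders:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Create an invisible window purely to obtain a device context and a GL context.
HGLRC createHiddenContext()
{
    HWND hwnd = CreateWindowA("STATIC", "offscreen", WS_POPUP,
                              0, 0, 16, 16, NULL, NULL,
                              GetModuleHandleA(NULL), NULL);
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);
    // From here, create an off-screen framebuffer (FBO), render into it,
    // and read the frames back with glReadPixels() for encoding.
    return ctx;
}
```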
One alternative might be to use the Mesa (software OpenGL) implementation - it won't be as fast, but in theory it scales with added server CPU and memory, which matches how most web servers scale out: Mesa offscreen rendering info
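For the Mesa route, a minimal off-screen sketch using the OSMesa interface might look like this (the resolution and clear color are placeholders; in practice the buffer would be handed to a video encoder):

```cpp
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

// Render into a plain memory buffer with no window or display server involved.
int main()
{
    const int width = 640, height = 480;
    std::vector<unsigned char> buffer(width * height * 4);

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesa context setup failed\n");
        return 1;
    }

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();                       // make sure rendering has completed

    // 'buffer' now holds the RGBA frame; feed it to an encoder (e.g. ffmpeg) here.
    OSMesaDestroyContext(ctx);
    return 0;
}
```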
It looks like, once the renderer is written, spawning the C++ executable with arguments from PHP is trivial - although you may wish to route any long-running renderings to a separate rendering server to keep your web server responsive.

Setup OpenGL for multiple monitors

I am beginning OpenGL programming on a Windows 7 computer, and my application is made up of fullscreen windows with a separate window and thread for each monitor. What steps do I have to take to get one continuous scene? I am still confused about many OpenGL concepts and how I should handle this. Is it basically the same as single-monitor rendering, except with extra view-matrix and context work, or is it more complicated?
EDIT:
I found a website with information, but it is vague and without example code:
http://www.rchoetzlein.com/theory/2010/multi-monitor-rendering-in-opengl/
My first question would be: why do you need two different OpenGL windows?
Have you considered the solution that the games industry has already been using? Many 3D applications and games that support multi-monitor setups don't actually manage their own separate windows, but let the GPU manage rendering across multiple screens. I used this in a project this year to have an Oculus Rift view and a spectator view on a TV screen. I didn't manage two OpenGL scenes, just two different "cameras" (sketched at the end of this answer).
http://www.amd.com/en-us/innovations/software-technologies/eyefinity
http://www.nvidia.com/object/3d-vision-surround-technology.html
Pros
Easier to code for. You just treat your code as being one scene, no weird scene management needed.
Graceful degradation. If your user only has one screen instead of two, your app will still behave just fine, minus a few UI details.
Better performance (anecdotal). In my own project I found better performance than with two separate 3D windows.
Cons
Lack of control. You're at the mercy of the driver vendors. For example, NVIDIA Surround requires that the GPUs be set up in SLI, for whatever reason.
Limited support. Only relatively new graphics cards support this multi-monitor technology.
Works best when the screens have the same resolution. Dealing with different aspect ratios, or even different resolutions of the same aspect ratio, can be difficult.
Inconvenient. The user will have to set their computer up in multi-monitor mode, when they may prefer a different configuration.
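For what it's worth, the "one scene, two cameras" idea boils down to something like the following per-frame sketch (old fixed-function calls for brevity; drawScene() and the camera transforms are placeholders), assuming a single window that spans both monitors:

```cpp
#include <GL/gl.h>

void drawScene();   // placeholder: your own scene-drawing code

// Per-frame drawing for a single window stretched across both monitors:
// each half gets its own viewport and its own "camera" (view transform).
void drawBothMonitors(int windowWidth, int windowHeight)
{
    const int halfWidth = windowWidth / 2;

    // Left monitor: left half of the spanning window.
    glViewport(0, 0, halfWidth, windowHeight);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);       // placeholder left-camera transform
    drawScene();

    // Right monitor: right half of the same window, a different camera.
    glViewport(halfWidth, 0, windowWidth - halfWidth, windowHeight);
    glLoadIdentity();
    glTranslatef(-1.0f, 0.0f, -5.0f);      // placeholder right-camera transform
    drawScene();
}
```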

Cocos2d: is it better to have single layer and add nodes or is it better to have multiple layers?

I've got this little game of mine, and it consists of a main game screen and quite a few "windows" which appear on top of the main screen.
In most cases it's just one window (which covers over 90% of the screen); sometimes a window will open up another one.
Right now, my main screen is a layer and each window I have is a CCNode I add and remove from the layer.
I am not really using any touch detection on my nodes. If I want something touchable it will be a CCMenu.
I do have plenty of CCSprites added, if that has anything to do with it.
I'm wondering if this is a good way to go, performance-wise? Or, in other words, whether there's a rationale for changing what I have so that, say, each window is its own layer.
As far as I know, the two most important benefits of using multiple layers instead of a single layer are touch detection and z-ordering:
Touch detection: Using multiple layers makes it easier to employ touch-detection logic that makes use of the layer hierarchy, since the cocos2d engine passes each touch event to the layers one after another based on that hierarchy.
z-ordering: For scenarios where certain sprites are always in front of other sprites, using multiple layers makes it much easier to enforce the z-ordering, rather than having to tinker with the zOrder parameter on a single layer.
IMO there is not much difference performance-wise between a single layer and multiple layers. However, if you have plenty of sprites on screen at one time, and especially lots of repeating sprites, I would highly recommend CCSpriteBatchNode (previously known as Texture Atlas or CCSpriteSheet), which is the recommended cocos2d way of improving performance with a large number of sprites. So a single layer with sprite batch nodes performs better than multiple layers with individual sprites added directly to the layers.
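The original question is about cocos2d for iOS (Objective-C), but the same idea in the C++ port (the cocos2d-x 2.x API, used here only as an illustration; the atlas file name and sizes are placeholders) looks roughly like this:

```cpp
#include "cocos2d.h"
USING_NS_CC;

// All sprites sharing one texture go through a single CCSpriteBatchNode,
// so they are drawn in one GL call instead of one call per sprite.
void addBatchedSprites(CCLayer* layer)
{
    CCSpriteBatchNode* batch = CCSpriteBatchNode::create("sprites.png");
    layer->addChild(batch);

    for (int i = 0; i < 50; ++i) {
        // Each sprite uses a sub-rectangle of the shared atlas texture.
        CCSprite* sprite = CCSprite::createWithTexture(batch->getTexture(),
                                                       CCRectMake(0, 0, 32, 32));
        sprite->setPosition(ccp(20.0f * i, 100.0f));
        batch->addChild(sprite);   // children of the batch node are rendered together
    }
}
```

The point is simply that every child of the batch node shares one texture and is submitted to the GPU in a single draw.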

Controlling the individual pixels of a projector

I need to control the individual pixels of a projector (an InFocus IN3104) whose native resolution is 1024x768. I would like to know which subset of functions in C, or which API, would let me do this, either by:
Functions that control the individual pixels of the adapter (not the pixels of a window).
A pixel-perfect, 1:1 map from an image file (1024x768) to the adapter set at the native resolution of the projector.
In a related question (How can I edit individual pixels in a window?) the answerer Caladain states "Things have come a bit from the old days of direct memory manipulation." I feel I need to go back to that to achieve my goal.
I don't know enough about the "graphics pipeline" to know which API or software tool to use. I'm overwhelmed by the number of technologies that come up when I search this topic. I program in R, which easily interfaces to C, but I would welcome suggestions of subsets of functions in OpenGL or C++ or any other technology.
Or even a full-blown (rendering) application which will map the image without applying any transformation.
For example, even MS Paint has View > Bitmap, but some transformation gets applied and I don't get pixel-perfect rendering. This projector has a DisplayLink digital input, and I've also tried tweaking the timing parameters when using the VESA inputs, so I don't think the transformation happens in the projector. In any case, MS Paint would not be flexible enough for me.
Platform: Linux or Windows.
I don't see a reason why a full-screen window, e.g. using SDL, wouldn't work. Normal bitmapped graphics are always 1:1; there shouldn't be any weird scaling going on behind your back for a full-screened window.
Since SDL is portable, you should be able to run the same code in Windows or Linux (or any other supported platform).
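A minimal sketch of that, using the classic SDL 1.2 API (SDL2 would use a streaming texture instead), assuming the projector's native 1024x768 and a simple test pattern:

```cpp
#include <SDL/SDL.h>

// Full-screen software surface at the native resolution, written pixel by pixel.
int main(int, char**)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Surface* screen = SDL_SetVideoMode(1024, 768, 32,
                                           SDL_FULLSCREEN | SDL_SWSURFACE);
    if (!screen) { SDL_Quit(); return 1; }

    if (SDL_MUSTLOCK(screen)) SDL_LockSurface(screen);
    Uint32* pixels = static_cast<Uint32*>(screen->pixels);
    for (int y = 0; y < 768; ++y)
        for (int x = 0; x < 1024; ++x)
            pixels[y * (screen->pitch / 4) + x] =
                SDL_MapRGB(screen->format, x % 256, y % 256, 0);  // test pattern
    if (SDL_MUSTLOCK(screen)) SDL_UnlockSurface(screen);

    SDL_Flip(screen);       // push the frame to the display
    SDL_Delay(5000);        // hold it for a few seconds
    SDL_Quit();
    return 0;
}
```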
The usual approach to this problem on current systems is:
Set graphics card to desired resolution
Create borderless full screen window
Draw whatever you want
There's really not much to gain from low-level access, although it would certainly be possible.

How to draw opengl graphics from different threads?

I want to make an OpenGL application that shows some 3D graphics and a command line. I would like to make them separate threads, because they are both heavy processes. I thought that I could approach this with 2 different viewports, but I would like to know how to handle the threads in OpenGL.
According to what I've been reading, OpenGL is asynchronous, and calling its functions from different threads can be very problematic. Is there an approach I could use for this problem? Ideally, I would like to draw the command line on top of the 3D graphics with some transparency effect... (this is impossible with viewports, I guess)
It is important that the solution is portable.
Thanks!
It's possible you can achieve what you want using overlays.
Overlays are a somewhat dated feature, but they should still be supported in most setups. Basically, an overlay is a separate GL context which is rendered in the same window as another layer, drawing on top of whatever was drawn in the window with its original context.
You can read about it here.
I think, rather than trying to create two threads to draw to the screen, you need to use the MVC pattern and make your model thread-safe. Then you can have one thread that grabs the necessary info from the model each frame and throws it on screen, and then two other threads, one for the graphics and one for the command-line, which manage only the model.
So for instance, you have a Simulation class that has your 3D graphics stuff, and then a CommandLine class that has your command line. Each of these classes does not use OpenGL at all; only manages the data, such as where things are in 3d-space, and in the case of the command-line a queue of the lines on-screen. Then the opengl thread can query thread-safe functions of these classes each frame, to get the necessary info. So for example, it grabs the positions of the 3d things, draws them on-screen, then grabs the lines to display on the command line and draws them.
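A bare-bones sketch of that idea (class and function names are placeholders): the command-line thread only touches the model under a lock, and the single OpenGL thread copies a snapshot each frame:

```cpp
#include <mutex>
#include <string>
#include <vector>

// Thread-safe model for the command line: input thread pushes lines,
// render thread takes a snapshot once per frame.
class CommandLineModel {
public:
    void pushLine(const std::string& line)          // called from the input thread
    {
        std::lock_guard<std::mutex> lock(mutex_);
        lines_.push_back(line);
    }

    std::vector<std::string> snapshot() const       // called from the GL thread
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return lines_;                              // copy out so the lock is held only briefly
    }

private:
    mutable std::mutex mutex_;
    std::vector<std::string> lines_;
};

// The single OpenGL thread would then do something like:
//   for (const auto& line : model.snapshot()) drawTextLine(line);
```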
You can't do it with 2 viewports.
Each OpenGL context must be used from the same thread in which it was created.
Two separate window handles, each with their own context, can be setup and used from two separate threads safely, but not 2 viewports in one window.
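To illustrate the "one context per thread" rule, here is a minimal sketch using GLFW (the answer above doesn't name a toolkit; GLFW is just a convenient, portable example):

```cpp
#include <GLFW/glfw3.h>
#include <thread>

// Each thread makes its own window's context current and renders independently.
void renderLoop(GLFWwindow* win, float r, float g, float b)
{
    glfwMakeContextCurrent(win);        // bind this window's context to THIS thread only
    while (!glfwWindowShouldClose(win)) {
        glClearColor(r, g, b, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(win);
    }
}

int main()
{
    if (!glfwInit()) return 1;
    // Windows (and their contexts) must be created on the main thread.
    GLFWwindow* scene   = glfwCreateWindow(640, 480, "scene",   nullptr, nullptr);
    GLFWwindow* console = glfwCreateWindow(640, 200, "console", nullptr, nullptr);

    std::thread t1(renderLoop, scene,   0.1f, 0.1f, 0.3f);
    std::thread t2(renderLoop, console, 0.0f, 0.0f, 0.0f);

    while (!glfwWindowShouldClose(scene) && !glfwWindowShouldClose(console))
        glfwPollEvents();               // event handling stays on the main thread

    glfwSetWindowShouldClose(scene, GLFW_TRUE);    // ask both render loops to finish
    glfwSetWindowShouldClose(console, GLFW_TRUE);
    t1.join(); t2.join();

    glfwDestroyWindow(scene);
    glfwDestroyWindow(console);
    glfwTerminate();
    return 0;
}
```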
A good place to look for ideas on this is OpenSceneGraph. It does a lot of work with threading to speed up handling and to maintain a constant, high framerate with very large scenes.
I've successfully embedded it and used it from 2 separate threads, but OSG has many checks in place to help with this sort of scenario.