Rendering with OpenGL on a web server - c++

I have an application that runs on Nintendo 3DS -- it uses a variant of OpenGL to render 3D animation. The user is able to store these scenes online as data files. That is, only the data needed to render the scene is stored - the image frames are rendered on the device.
Additionally I would like for people to be able to view these scenes online. One way might be to render them in the browser via WebGL, but I'm worried about the amount of time and memory this would require. I would rather have the server render the scenes into movie files which can be played from a web page.
I don't have a lot of experience with server side programming - is it possible for a server program to render frames to an OpenGL context? They would be offscreen framebuffers since there is no screen.
Any suggestions on an approach to doing that? I've used PHP mostly for web programming, but it seems like that is not feasible for this. Ideally I'd like to write a C++ program which ran on the server, that way I could re-use code from the 3DS. Is that possible? Where can I read about doing this?

Server-side rendering is possible, and would give users more consistent results than relying on WebGL behaving the same way across different browsers and platforms (on top of the time/memory concerns you already mentioned). On the other hand, users with capable browsers and hardware gain nothing from it, so weigh what your users want and which platforms they're actually on.
For Windows-based servers, using OpenGL (with offscreen framebuffers) and "no screen" presents a challenge: you need to start with a window to establish a graphics context. (There may be a provision to establish a "windowless" graphics context on Linux.) You will also need to decide how to manage GPU resources for rendering, since the GPU has limits on how many concurrent rendering requests it can handle before slowing down and/or failing to allocate resources (e.g. framebuffer memory).
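To make the "offscreen framebuffers" part concrete: once any context is current (even one created from a small hidden window), you can render into a framebuffer object instead of the window. A minimal sketch, assuming a context is already current and an extension loader such as GLEW is available:

    #include <GL/glew.h>   // any loader that exposes the FBO entry points works

    // Create an offscreen render target; nothing drawn into it ever touches a screen.
    GLuint makeOffscreenTarget(int width, int height)
    {
        GLuint fbo = 0, color = 0, depth = 0;

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        // Color attachment backed by a renderbuffer (no window involved).
        glGenRenderbuffers(1, &color);
        glBindRenderbuffer(GL_RENDERBUFFER, color);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, color);

        // Depth attachment so ordinary 3D rendering works.
        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);

        return (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
                   ? fbo : 0;
    }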
One alternative might be to use the Mesa (software OpenGL) implementation - it won't be as fast, but in theory it scales with added server CPU and memory, which matches how most web servers scale out: Mesa offscreen rendering info
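If you go the Mesa route, its OSMesa API renders into a plain block of CPU memory with no window, display, or GPU at all. A rough sketch, assuming Mesa was built with OSMesa support and you link against it:

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    // Render entirely in software into a caller-supplied RGBA buffer.
    bool renderSoftware(int width, int height, std::vector<unsigned char>& pixels)
    {
        pixels.resize(size_t(width) * height * 4);

        // RGBA color buffer, 24-bit depth, no stencil/accum.
        OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 0, 0, nullptr);
        if (!ctx)
            return false;

        // Make the context current and point it at our CPU-side buffer.
        if (!OSMesaMakeCurrent(ctx, pixels.data(), GL_UNSIGNED_BYTE, width, height)) {
            OSMesaDestroyContext(ctx);
            return false;
        }

        glClearColor(0.f, 0.f, 0.f, 1.f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... issue normal OpenGL drawing calls here; the result lands in `pixels` ...
        glFinish();

        OSMesaDestroyContext(ctx);
        return true;
    }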
It looks like, once the renderer is written, spawning the C++ executable with arguments from PHP is trivial - although you may wish to route long-running renders to a separate rendering server to keep your web server responsive.
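Tying it together: one simple (if unglamorous) way to get movie files out of such a renderer is to pipe the raw frames into an external encoder. The sketch below assumes ffmpeg is installed on the server; the output path and encoding flags are just examples:

    #include <GL/gl.h>
    #include <cstdio>
    #include <vector>

    // Read back each rendered frame and stream it to ffmpeg for H.264 encoding.
    // popen/pclose are POSIX; use _popen/_pclose on Windows.
    void encodeAnimation(int width, int height, int frameCount)
    {
        char cmd[512];
        std::snprintf(cmd, sizeof(cmd),
            "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s %dx%d -r 30 -i - "
            "-vf vflip -c:v libx264 -pix_fmt yuv420p /tmp/scene.mp4",  // example output path
            width, height);

        FILE* pipe = popen(cmd, "w");
        if (!pipe)
            return;

        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed RGB rows
        std::vector<unsigned char> frame(size_t(width) * height * 3);
        for (int i = 0; i < frameCount; ++i) {
            // ... render frame i into the offscreen framebuffer here ...
            glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, frame.data());
            std::fwrite(frame.data(), 1, frame.size(), pipe);   // rows are bottom-up, hence vflip
        }
        pclose(pipe);
    }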

Related

How do I optimize my OpenGL textures for Remote Desktop/ANGLE?

I display a 2D texture in OpenGL using Qt.
Unfortunately I have found out that I need to support running my application via Remote Desktop to a Windows 7 PC. In this case I need to use the OpenGL ES 2.0 API (ANGLE).
Due to low bandwidth my 2D visualization seems to be lagging.
My texture may have a higher resolution than the screen, so it needs to be minified.
When not using Remote Desktop my approach has been to specify a very detailed texture and let the graphics card do the minification.
However now I am thinking that the OpenGL calls are executed in software locally and not on the remote machine? In which case the textures have to be transmitted via TCP/IP?
Does this mean that I should do minification myself before using the textures?
As an example, instead of using a 2048x2048 texture I might bin 2x2 pixels in C++ and upload a 1024x1024 texture.
Alternatively I could use glGenerateMipmap?
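For reference, the glGenerateMipmap route would look roughly like this (a sketch only - headers and loaders may differ under Qt, and whether it helps over RDP at all is addressed in the answer below):

    #include <GLES2/gl2.h>   // when building against ANGLE / OpenGL ES 2.0

    // Upload a square power-of-two RGBA image and generate mipmaps on the GPU,
    // so minification samples prefiltered levels instead of the full-size image.
    GLuint uploadWithMipmaps(const unsigned char* rgba, int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        glGenerateMipmap(GL_TEXTURE_2D);   // available in ES 2.0, so ANGLE supports it
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }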
I feel multiple terms are being confused here: RDP just transfers the entire remote desktop to you, whatever is on it, so no, the OpenGL calls are not "executed in software locally". Hence, unfortunately, it will not help if you reduce the texture size in your app - even if you remove the texture entirely (try it). RDP is not really suitable for real-time animation.
Your app had better be running locally on the user's machine, so think about how to distribute your OpenGL app to users. If you cannot install your app on the user's machine, or give them an installation kit, then maybe turning your app into a browser app is a better option.
WebGL is there for exactly this kind of application, and it is a standard too:
https://www.khronos.org/webgl/

How to stream OpenGL rendered scene from the cloud to remote clients

So I have a desktop app, using OpenGL to render large data sets in 3D. I want to move it to the cloud and use server-side rendering in order to stream the rendered images to remote clients (JS, etc.).
From what I understand, WebRTC is the best approach for that. However, it's complicated and expensive to implement, and mainly aimed at video conferencing applications. Are there any frameworks or open-source projects that are more suitable for 3D graphics streaming? Is Nvidia's GameStreaming a suitable technology to explore, or is it tailored for games? Any other ideas and approaches?
There are many ideas and approaches, and which one works best depends a lot on your particular application, budget, client, and server.
If you render on the server side, the big advantage is that you control the GPU, the available memory, the OS and driver version, etc so cross-platform or OS version problems largely disappear.
But now you're sending every frame pixel by pixel to the user. (And MPEG-4 isn't great when compressing visualization rather than video.)
And you've got a network latency delay on every keystroke, or mouse click, or mouse movement.
And if tens? hundreds? thousands? of people want to use your app simultaneously, you've got to have enough server side CPU/GPU to handle that many users.
So yeah, it's complicated and expensive to implement, no matter what you choose. As well as WebRTC, you could also look at screen sharing software such as VNC. Nvidia game streaming might be a more suitable technology to explore, because there's a lot of similarity between 3D games and 3D visualisation, but don't expect it to be a magic bullet.
Have you looked at WebGL? It's the slightly cut-down, OpenGL ES based version of OpenGL for JavaScript. If you're not making heavy use of advanced OpenGL 4 capabilities, a lot of OpenGL C/C++ code translates without too much difficulty into JavaScript and WebGL. And just about every web browser on the planet runs WebGL, even if (like Apple) the platform manufacturer discourages regular OpenGL.
The big advantage is that all the rendering and interactivity happens on the client, so latency is not a problem and you're not paying for the CPU/GPU if lots of people want to run it at the same time.
Hope this helps.

How can I X-forward OpenGL applications from a server

I have a server with an Nvidia graphics card, and I want to run some OpenGL applications on it and X-forward the display to a client.
How can I achieve this? I have not installed the X Window System yet.
X forwarding means that all rendering commands are encapsulated into the X transport, transferred over to the machine with the display, and executed there. The upside is that the remote end does not require a GPU whatsoever. The downside is that it consumes (well, rather gobbles up) lots of network bandwidth.
OpenGL up to version 2.1 specifies GLX opcodes for the X11 transport, so it is network transparent. And if you make liberal use of display lists and keep the amount of data transferred small (i.e. no client-side vertex arrays, only a few small textures), OpenGL-over-GLX-over-X11-over-TCP works rather fine.
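To illustrate what "liberal use of display lists" means here: the geometry is recorded once on the display side, and each frame only a tiny glCallList command crosses the network (legacy OpenGL <= 2.1 only; the triangle is just a placeholder).

    #include <GL/gl.h>

    // Record static geometry once into a display list stored on the display side.
    GLuint buildSceneList()
    {
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);
            glBegin(GL_TRIANGLES);
            glVertex3f(-1.f, -1.f, 0.f);
            glVertex3f( 1.f, -1.f, 0.f);
            glVertex3f( 0.f,  1.f, 0.f);
            glEnd();
        glEndList();
        return list;
    }

    // Per frame over the forwarded connection, only this small command is sent:
    //     glCallList(sceneList);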
However, these days it's more efficient to render remotely and only transfer the generated image using a high-efficiency compression codec. Plain X11 forwarding can't do that, though. But you can do it using Xpra backed by a "true" X server talking to an actual GPU. The problem is that you'll need that particular X server to occupy the GPU.
A better method is to detect whether the GLX extension is available, and if not, whether there's a GPU around that can be used to render into an XSHM pixmap. That way Xpra on a virtual framebuffer server will work as well. Unfortunately, doing the latter with OpenGL is annoyingly difficult to implement in a way that works transparently across context creation APIs. It can be done (BT;DT), but for this kind of thing I actually prefer Vulkan, because despite Vulkan's verbosity it takes less work to do this reliably with Vulkan than with OpenGL.
Maybe (though it's unlikely) we'll see some X11 extension for compressed transfer of pixmaps, some high-compression XV or similar. That, in combination with pure off-screen GPU rendering (which we already have), would make for a far more efficient system.

Setup OpenGL for multiple monitors

I am beginning OpenGL programming on a Windows 7 computer, and my application is made up of fullscreen windows with a separate window and thread for each monitor. What steps do I have to take to get a continuous scene? I am still confused about many OpenGL concepts and how I should handle this. Is it basically the same as single-monitor rendering except for extra view-matrix and context work, or is it more complicated?
EDIT:
I found a website with information, but it is vague and without example code:
http://www.rchoetzlein.com/theory/2010/multi-monitor-rendering-in-opengl/
My first question would be: why do you need two different OpenGL windows?
Have you considered the solution that the games industry has been using already? Many 3D applications and games that support multi-monitor setups don't actually manage their own separate windows, but let the GPU manage rendering over multiple screens. I used this in a project this year to have an Oculus Rift view and a spectator view on a TV screen. I didn't manage two OpenGL scenes, just two different "cameras".
http://www.amd.com/en-us/innovations/software-technologies/eyefinity
http://www.nvidia.com/object/3d-vision-surround-technology.html
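A minimal sketch of that "one scene, two cameras" idea: the driver presents both monitors as a single wide framebuffer, and the scene is simply drawn twice into left/right viewports. setViewMatrix and drawScene below are placeholders for your own code.

    #include <GL/gl.h>

    void renderBothMonitors(int totalWidth, int height)
    {
        const int half = totalWidth / 2;

        glViewport(0, 0, half, height);       // left monitor
        // setViewMatrix(leftCamera);         // your own camera handling
        // drawScene();

        glViewport(half, 0, half, height);    // right monitor
        // setViewMatrix(rightCamera);
        // drawScene();
    }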
Pros
Easier to code for. You just treat your code as being one scene, no weird scene management needed.
Graceful degradation. If your user only has one screen instead of two your app will still behave just fine sans a few UI details.
Better performance (Anecdotal). In my own project I found better performance over using two different 3D windows.
Cons
Lack of control. You're at the behest of the driver providers. For example, Nvidia Surround requires that GPUs be set up in SLI for whatever reason.
Limited support. Only relatively new graphics cards support this multi-monitor technology.
Works best when screens are the same resolution. Dealing with different aspect ratios, or even different resolutions of the same aspect ratio, can be difficult.
Inconvenient. The user will have to set up their computer in multi-monitor mode when they may prefer a different configuration.

Is it possible to render one half of a scene by OpenGL and other half by DirectX

My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
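For a taste of what that surface sharing looks like, here is a hedged sketch using the WGL_NV_DX_interop(2) extension (the answer above used D3D10; D3D11 is shown here purely for illustration). It assumes a D3D device, a D3D texture, and a current OpenGL context already exist, and omits all error checking:

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>   // WGL_NV_DX_interop typedefs and enums
    #include <d3d11.h>

    // Expose a D3D texture to OpenGL as a regular GL texture object.
    void shareSurface(ID3D11Device* d3dDevice, ID3D11Texture2D* d3dTexture)
    {
        auto wglDXOpenDeviceNV = (PFNWGLDXOPENDEVICENVPROC)
            wglGetProcAddress("wglDXOpenDeviceNV");
        auto wglDXRegisterObjectNV = (PFNWGLDXREGISTEROBJECTNVPROC)
            wglGetProcAddress("wglDXRegisterObjectNV");
        auto wglDXLockObjectsNV = (PFNWGLDXLOCKOBJECTSNVPROC)
            wglGetProcAddress("wglDXLockObjectsNV");
        auto wglDXUnlockObjectsNV = (PFNWGLDXUNLOCKOBJECTSNVPROC)
            wglGetProcAddress("wglDXUnlockObjectsNV");

        HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

        GLuint glTex = 0;
        glGenTextures(1, &glTex);
        HANDLE interopTex = wglDXRegisterObjectNV(interopDevice, d3dTexture, glTex,
                                                  GL_TEXTURE_2D,
                                                  WGL_ACCESS_READ_WRITE_NV);

        // While locked, OpenGL may read from or render into the shared surface;
        // unlock it again before D3D touches it.
        wglDXLockObjectsNV(interopDevice, 1, &interopTex);
        //   ... OpenGL half of the frame, sampling or rendering via glTex ...
        wglDXUnlockObjectsNV(interopDevice, 1, &interopTex);
    }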
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the clips in a video editor. Because the camera has a fixed path, that can be done easily.
Anyway, you could render both (DirectX/OpenGL) scenes into offscreen buffers and then combine them using either API to render the final result. You would read data from the render buffer in one API and transfer it into a renderable buffer used by the other API. The dumbest way to do it is through system memory (which will be VERY slow), but some vendors (Nvidia, in particular) provide extensions for this scenario.
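A rough sketch of that system-memory route, assuming matching sizes and compatible formats on both sides (slow, since every frame makes a GPU-to-CPU-to-GPU round trip):

    #include <GL/gl.h>
    #include <d3d11.h>
    #include <vector>

    // Read the OpenGL result back to the CPU and copy it into a D3D11 texture.
    void copyGlFrameToD3D(ID3D11DeviceContext* d3dCtx, ID3D11Texture2D* dst,
                          int width, int height)
    {
        std::vector<unsigned char> pixels(size_t(width) * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        // Note: GL rows come back bottom-up while D3D expects top-down; a real
        // implementation would flip the rows before the copy.
        d3dCtx->UpdateSubresource(dst, 0, nullptr, pixels.data(),
                                  UINT(width) * 4, 0);
    }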
On Windows you could also place two child windows/panels side by side on the main window (so you'd get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because in order to render 3D graphics you need a window with a handle (HWND). However, both windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.