How to stream OpenGL rendered scene from the cloud to remote clients - opengl

So I have a desktop app, using OpenGL to render large data sets in 3D. I want to move it to the cloud and use server-side rendering in order to stream the rendered images to remote clients (JS, etc.).
From what I understand, WebRTC is the best approach for that. However, it's complicated and expensive to implement, and mainly aimed at video-conferencing applications. Are there any frameworks or open-source projects better suited to 3D graphics streaming? Is Nvidia's GameStreaming a suitable technology to explore, or is it tailored for games? Any other ideas and approaches?

There are many ideas and approaches, and which one works best depends a lot on your particular application, budget, client, and server.
If you render on the server side, the big advantage is that you control the GPU, the available memory, the OS and driver version, and so on, so cross-platform and OS-version problems largely disappear.
But now you're sending every frame pixel by pixel to the user. (And MPEG-4 isn't great when compressing visualization rather than video.)
And you've got a network latency delay on every keystroke, or mouse click, or mouse movement.
And if tens? hundreds? thousands? of people want to use your app simultaneously, you've got to have enough server side CPU/GPU to handle that many users.
So yeah, it's complicated and expensive to implement, no matter what you choose. As well as WebRTC, you could also look at screen sharing software such as VNC. Nvidia game streaming might be a more suitable technology to explore, because there's a lot of similarity between 3D games and 3D visualisation, but don't expect it to be a magic bullet.
Have you looked at WebGL? It's the slightly cut-down, OpenGL ES-based version of OpenGL for JavaScript. If you're not making heavy use of advanced OpenGL 4 capabilities, a lot of OpenGL C/C++ code translates without too much difficulty into JavaScript and WebGL. And just about every web browser on the planet runs WebGL, even if (like Apple) the platform manufacturer discourages regular OpenGL.
The big advantage is that all the rendering and interactivity happens on the client, so latency is not a problem and you're not paying for the CPU/GPU if lots of people want to run it at the same time.
Hope this helps.

Related

Rendering with OpenGL on a web server

I have an application that runs on Nintendo 3DS -- it uses a variant of OpenGL to render 3D animation. The user is able to store these scenes online as data files. That is, only the data needed to render the scene is stored - the image frames are rendered on the device.
Additionally I would like for people to be able to view these scenes online. One way might be to render them in the browser via WebGL, but I'm worried about the amount of time and memory this would require. I would rather have the server render the scenes into movie files which can be played from a web page.
I don't have a lot of experience with server side programming - is it possible for a server program to render frames to an OpenGL context? They would be offscreen framebuffers since there is no screen.
Any suggestions on an approach to doing that? I've used PHP mostly for web programming, but it seems like that is not feasible for this. Ideally I'd like to write a C++ program which ran on the server, that way I could re-use code from the 3DS. Is that possible? Where can I read about doing this?
Server-side rendering is possible, and would give users more consistent results than relying on uniform WebGL behavior across different browsers and platforms (on top of the time/memory concerns you already mentioned). Users whose browsers and platforms are already capable won't see any benefit, though, so consider what your users want and which platforms they're using.
For Windows-based servers, using OpenGL (with offscreen framebuffers) and "no screen" presents a challenge: you still need a window to establish a graphics context. (Linux may provide a way to create a "windowless" context.) You'll also need to work out how to manage GPU resources, since the GPU can only serve a limited number of concurrent rendering requests before slowing down and/or failing to allocate resources (e.g. framebuffer memory).
One alternative might be to use the Mesa (software OpenGL) implementation. It won't be as fast, but in theory it scales with added server CPU and memory, which matches how most web servers scale out: Mesa offscreen rendering info
It looks like, once the renderer is written, spawning the C++ executable with arguments from PHP is trivial, although you may wish to route long-running renders to a separate rendering server to keep your web server responsive.

Setup OpenGL for multiple monitors

I am beginning OpenGL programming on a Windows 7 computer and my application is made up of fullscreen windows where there is a separate window and thread for each monitor. What are the steps I have to take to have a continuous scene? I am still confused about many OpenGL concepts and how I should handle this. Is it basically the same as single monitor render except with view matrix and context extra work, or is it more complicated?
EDIT:
I found a website with information, but it is vague and without example code:
http://www.rchoetzlein.com/theory/2010/multi-monitor-rendering-in-opengl/
My first question would be why do you need two different OpenGL windows?
Have you considered the solution the games industry has already been using? Many 3D applications and games that support multi-monitor setups don't actually manage their own separate windows, but let the GPU manage rendering across multiple screens. I used this in a project this year to show an Oculus Rift view and a spectator view on a TV screen. I didn't manage two OpenGL scenes, just two different "cameras".
http://www.amd.com/en-us/innovations/software-technologies/eyefinity
http://www.nvidia.com/object/3d-vision-surround-technology.html
Pros
Easier to code for. You just treat your code as being one scene, no weird scene management needed.
Graceful degradation. If your user only has one screen instead of two your app will still behave just fine sans a few UI details.
Better performance (anecdotal). In my own project I saw better performance than with two separate 3D windows.
Cons
Lack of control. You're at the behest of the driver providers. For example, Nvidia Surround requires that the GPUs be set up in SLI, for whatever reason.
Limited support. Only relatively new graphics cards support this multi-monitor technology.
Works best when screens have the same resolution. Dealing with different aspect ratios, or even different resolutions at the same aspect ratio, can be difficult.
Inconvenient. The user will have to set up their computer in multi-monitor mode when they may have their own preferred configuration.

Does stage3d use OpenGL? or Direct3D when on Windows

WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
The accepted answer is incorrect, unfortunately. Stage3D uses:
DirectX on Windows systems
OpenGL on OS X systems
OpenGL ES on mobile
A software renderer when no hardware acceleration is available (due to older hardware, or no hardware at all).
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on anything; it may share similar methodology/terminology, but it is another rendering pipeline, which is why Adobe is so pumped about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps, enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provides developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you're using cutting-edge 3D game engines or the intuitive, lightning-fast Starling 2D framework that powers Angry Birds.

Is it possible to render one half of a scene by OpenGL and other half by DirectX

My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the clips in a video editor. Because the camera follows a fixed path, that is easy to do.
Anyway, you could render both scenes (DirectX and OpenGL) into offscreen buffers and then combine them with either API to produce the final result. You would read the data back from the render buffer in one API and upload it into a renderable buffer in the other. The dumbest way to do it is through system memory (which will be VERY slow), but some vendors (Nvidia in particular) may provide extensions for this scenario.
On Windows you could also place two child windows/panels side by side on the main window (giving the same effect as in that YouTube video), and create an OpenGL context for one and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because to render 3D graphics you need a window with a handle (HWND). However, the two windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.

OpenGL render transmission in real time

I'm experimenting with developing a tool for remote OpenGL rendering, in C++. The basic idea is:
The client issues OpenGL commands like it's a normal app
Those commands are actually sent over the network to an external server
The server performs the rendering using some off-screen technique
Once done, the server transmits a single frame over the network to the client
The client renders the frame on screen.
Loop.
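The frame-transmission step in the loop above can be sketched with a tiny framing scheme. Everything here is an illustration only; the header layout and function names are assumptions, not an established protocol, and in the real tool these bytes would travel over a TCP socket:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wire format for one frame: [u32 width][u32 height][RGB bytes].
// Here we only build and parse the buffer to show the framing.
std::vector<uint8_t> pack_frame(uint32_t w, uint32_t h,
                                const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> out(8 + rgb.size());
    std::memcpy(out.data(), &w, 4);
    std::memcpy(out.data() + 4, &h, 4);
    std::memcpy(out.data() + 8, rgb.data(), rgb.size());
    return out;
}

bool unpack_frame(const std::vector<uint8_t>& in, uint32_t& w, uint32_t& h,
                  std::vector<uint8_t>& rgb) {
    if (in.size() < 8) return false;
    std::memcpy(&w, in.data(), 4);
    std::memcpy(&h, in.data() + 4, 4);
    if (in.size() - 8 != size_t(w) * h * 3) return false;  // sanity check
    rgb.assign(in.begin() + 8, in.end());
    return true;
}
```

The length check on unpack matters in practice, since a TCP stream delivers bytes, not messages, and you must reassemble complete frames yourself.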
I know I shouldn't start worrying about optimization before I have a finished product, but I'm pretty sure this is going to be very slow, and the bottleneck will probably be transmitting each frame over the network, even when the computers are on the same LAN.
I'm thinking about using some kind of video streaming library. That way, the frames would be transmitted using proper compression algorithms, making the process faster.
Am I on the right track with this? Is a video streaming library the right tool here? If so, what's a good library for this task (in C or C++, preferably C++)?
Thank you for your help!
You have two solutions.
Solution 1
Run the app remotely
Intercept the OpenGL calls
Forward them over the network
Issue the OpenGL calls locally
-> complicated, especially when dealing with buffers and textures; the real OpenGL code is executed locally (on the viewing machine), which may or may not be what you want. On the other hand, it's transparent to the remote app (no source modification, no rebuild), and after the initial buffer/texture uploads the per-frame network traffic is small compared with streaming rendered frames.
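The interception idea in Solution 1 boils down to serializing each GL call as an opcode plus its arguments and replaying it on the other side. Here is a minimal sketch, assuming a made-up opcode table and using glClearColor's four floats as the example; a real interceptor must cover the entire API, including large buffer and texture uploads, which is where it gets hard:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative opcode table (not a real protocol).
enum : uint8_t { CMD_CLEAR_COLOR = 1 };

// Encoder side: the intercepted glClearColor(r, g, b, a) becomes
// one opcode byte followed by four raw floats.
void encode_clear_color(std::vector<uint8_t>& stream,
                        float r, float g, float b, float a) {
    float args[4] = {r, g, b, a};
    stream.push_back(CMD_CLEAR_COLOR);
    const uint8_t* p = reinterpret_cast<const uint8_t*>(args);
    stream.insert(stream.end(), p, p + sizeof(args));
}

// Decoder side: parse one command and return the offset of the next.
// A real replayer would call glClearColor(out[0], ...) here.
size_t decode_one(const std::vector<uint8_t>& stream, size_t off,
                  float out[4]) {
    assert(stream[off] == CMD_CLEAR_COLOR);
    std::memcpy(out, stream.data() + off + 1, 4 * sizeof(float));
    return off + 1 + 4 * sizeof(float);
}
```

Raw-float encoding assumes both machines share endianness and float representation; a production protocol would pin these down explicitly.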
Solution 2 : what you described, with the pros and cons.
If you go for Solution 2, don't bother about speed for now. You will have enough challenges with openGL as it is, trust me.
Begin with a synchronous mode: render, fetch, send, render, fetch, send.
Then an asynchronous mode: render, begin the fetch, render, end the fetch, begin the send, render, and so on.
It will be hard enough, I think
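The synchronous/asynchronous distinction can be modeled without any GL at all. Below is a toy sketch in which a sender thread drains a frame queue while the "render" loop keeps producing; the class name and the frame representation (plain ints) are illustrative stand-ins, not real networking:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Asynchronous mode in miniature: render pushes finished frames and moves
// on; the worker "sends" (here: records) them concurrently.
class FrameSender {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
public:
    std::vector<int> sent;  // what the worker has "sent", in order
    FrameSender() {
        worker_ = std::thread([this] {
            std::unique_lock<std::mutex> lk(m_);
            for (;;) {
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (q_.empty() && done_) return;
                sent.push_back(q_.front());  // stand-in for a network send
                q_.pop();
            }
        });
    }
    void push(int frame) {  // called from the render loop; never blocks long
        { std::lock_guard<std::mutex> lk(m_); q_.push(frame); }
        cv_.notify_one();
    }
    void finish() {  // flush remaining frames and join the worker
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
};
```

The same shape applies to the GPU side: with two pixel buffers you can read back frame N while frame N+1 is being rendered, which is the "begin the fetch / end the fetch" split described above.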
Depending on the resolution you need to support and the speed of your LAN it may be possible to stream the data uncompressed.
A 24-bit 1280x1024 frame requires 30 Mbit, and with a gigabit ethernet this means a theoretical 33 frames per second uncompressed.
If that is not enough, adding a simple RLE compression yourself is fairly straightforward.
Imagine having to spend money on both machines to give each of them proper graphics processing power. You can avoid that, and simplify client development, by centralizing all the graphics work on a single machine: the client's job is only to send/receive/display data, while the server focuses on the OpenGL rendering and sends the frames back to the client.
The bottleneck you referred to depends on a couple of things on your side: the size of the images and the frame rate you need to send/receive/display them.
These are some of the interesting topics I've read, and hopefully they will shed some light on the subject:
Video streaming using c++
How do I stream video and play it?