I'm experimenting with developing a tool for remote OpenGL rendering, in C++. The basic idea is:
The client issues OpenGL commands like it's a normal app
Those commands are actually sent over the network to an external server
The server performs the rendering using some off-screen technique
Once done, the server transmits a single frame over the network to the client
The client renders the frame on screen.
Loop.
I know I shouldn't start worrying about optimization before I have a finished product, but I'm pretty sure this is going to be very slow, and the bottleneck is probably going to be transmitting each frame over the network, even if the computers are connected to the same LAN.
I'm thinking about using some kind of video streaming library. That way, the frames would be transmitted using proper compression algorithms, making the process faster.
Am I on the right path here? Is it appropriate to use a video streaming library for this? If so, what's a good library for the task (in C or C++, preferably C++)?
Thank you for your help!
You have two solutions.
Solution 1
Run the app remotely
Intercept the OpenGL calls
Forward them over the network
Issue the OpenGL calls locally
-> complicated, especially when dealing with buffers and textures; the real OpenGL code is executed locally, which may not be what you want, but that's up to you. On the other hand, it's transparent for the remote app (no source modification, no rebuild), and there is almost no network communication.
Solution 2: what you described, with its pros and cons.
If you go for Solution 2, don't worry about speed for now. You will have enough challenges with OpenGL as it is, trust me.
Begin with a synchronous mode: render, fetch, send, render, fetch, send.
Then move to an asynchronous mode: render, begin the fetch, render, finish the fetch, begin the send, render, and so on (a sketch of this is below).
It will be hard enough, I think
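To make the asynchronous mode concrete, here is a minimal sketch using two pixel buffer objects (PBOs), so the readback of frame N overlaps with the rendering of frame N+1. It assumes a current OpenGL context and an initialized extension loader; renderFrame() and sendFrame() are placeholders for your own rendering and network code.

```cpp
// Sketch: double-buffered asynchronous readback with PBOs.
// Assumes an OpenGL >= 2.1 context is current and an extension loader
// (GLEW, GLAD, ...) has been initialized.
#include <GL/glew.h>
#include <cstddef>

void renderFrame();                               // your drawing code
void sendFrame(const void* pixels, size_t bytes); // your network code

const int WIDTH = 1280, HEIGHT = 1024;
const size_t FRAME_BYTES = WIDTH * HEIGHT * 3;

GLuint pbo[2];

void initPbos()
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, FRAME_BYTES, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void frameLoop()
{
    int index = 0;
    for (;;) {
        renderFrame();                            // draw frame N

        // Start the asynchronous readback of frame N into pbo[index].
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

        // Map the *other* PBO, which by now holds frame N-1, and send it.
        // (On the very first iteration it is still empty.)
        int prev = 1 - index;
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
        void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels) {
            sendFrame(pixels, FRAME_BYTES);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        index = prev;
    }
}
```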
Depending on the resolution you need to support and the speed of your LAN it may be possible to stream the data uncompressed.
A 24-bit 1280x1024 frame requires about 30 Mbit, so over gigabit Ethernet that gives a theoretical ~33 frames per second uncompressed.
If that is not enough, adding a simple RLE-compression yourself is fairly straightforward.
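For illustration, a naive byte-wise run-length encoder is only a few lines. This is just a sketch (count/value pairs, worst case doubles the size), not a tuned codec:

```cpp
// Sketch: naive byte-wise run-length encoding as (count, value) pairs.
// Worst case it doubles the size, so only use the result if it is smaller.
#include <vector>
#include <cstdint>
#include <cstddef>

std::vector<uint8_t> rleEncode(const uint8_t* data, size_t size)
{
    std::vector<uint8_t> out;
    out.reserve(size);
    size_t i = 0;
    while (i < size) {
        uint8_t value = data[i];
        size_t run = 1;
        while (i + run < size && data[i + run] == value && run < 255)
            ++run;
        out.push_back(static_cast<uint8_t>(run));  // run length (1..255)
        out.push_back(value);                      // repeated byte
        i += run;
    }
    return out;
}
```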
Imagine having to spend money on both machines to give them proper graphics processing power. You could avoid this and simplify client development by centralizing all the graphics-related tasks on a single machine. The client's job would only be to send/receive/display data, and the server could focus on processing the graphics (OpenGL) and sending the data (as frames) back to the client.
The bottleneck you referred to depends on a couple of things on your side: the size of the images and the frame rate you need to send/receive/display them.
These are some of the interesting topics I've read, and hopefully they will shed some light on the subject:
Video streaming using c++
How do I stream video and play it?
So I have a desktop app, using OpenGL to render large data sets in 3D. I want to move it to the cloud and use server-side rendering in order to stream the rendered images to remote clients (JS, etc.).
From what I understand, WebRTC is the best approach for that. However, it's complicated and expensive to implement, and mainly aimed at video conferencing applications. Are there any frameworks or open-source projects that are more suitable for 3D graphics streaming? Is Nvidia's GameStreaming a suitable technology to explore, or is it tailored for games? Any other ideas and approaches?
There are many ideas and approaches, and which one works best depends a lot on your particular application, budget, client, and server.
If you render on the server side, the big advantage is that you control the GPU, the available memory, the OS and driver version, etc so cross-platform or OS version problems largely disappear.
But now you're sending every frame pixel by pixel to the user. (And MPEG-4 isn't great when compressing visualization rather than video.)
And you've got a network latency delay on every keystroke, or mouse click, or mouse movement.
And if tens, hundreds, or thousands of people want to use your app simultaneously, you've got to have enough server-side CPU/GPU capacity to handle that many users.
So yeah, it's complicated and expensive to implement, no matter what you choose. As well as WebRTC, you could also look at screen sharing software such as VNC. Nvidia game streaming might be a more suitable technology to explore, because there's a lot of similarity between 3D games and 3D visualisation, but don't expect it to be a magic bullet.
Have you looked at WebGL? It's a slightly cut-down, OpenGL ES-based version of OpenGL for JavaScript. If you're not making heavy use of advanced OpenGL 4 capabilities, a lot of OpenGL C/C++ code translates without too much difficulty into JavaScript and WebGL. And just about every web browser on the planet runs WebGL, even if (like Apple) the platform manufacturer discourages regular OpenGL.
The big advantage is that all the rendering and interactivity happens on the client, so latency is not a problem and you're not paying for the CPU/GPU if lots of people want to run it at the same time.
Hope this helps.
I have a server with an Nvidia graphics card, and I want to run some OpenGL applications and X-forward the display to the client.
How can I achieve this? I have not installed the X Window System yet.
X forwarding means that all rendering commands are encapsulated in the X transport, transferred to the machine with the display, and executed there. The upside is that the remote end does not require a GPU whatsoever. The downside is that it consumes (well, rather gobbles up) a lot of network bandwidth.
OpenGL up to version 2.1 specifies GLX opcodes for the X11 transport, so it is network transparent. And if you make liberal use of display lists and keep the amount of data transferred small (i.e. no client-side vertex arrays, only a few small textures), OpenGL-over-GLX-over-X11-over-TCP works rather fine.
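To illustrate the display-list point: the geometry crosses the X connection once, when the list is compiled, and every later glCallList() is a single small GLX request. A minimal sketch with the legacy (2.1-era) API:

```cpp
// Sketch: legacy OpenGL display list, relevant for indirect GLX rendering.
// The geometry is transmitted once at glEndList(); each later glCallList()
// replays it without re-sending the vertex data over the network.
#include <GL/gl.h>

GLuint buildTriangleList()
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);        // record, do not execute yet
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
    glEndList();
    return list;
}

// Per frame: glCallList(list);  // cheap, no geometry re-transmission
```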
However, these days it's more efficient to render remotely and only transfer the generated image using a high-efficiency compression codec. Plain X11 forwarding can't do that, though. But you can do it using Xpra backed by a "true" X server talking to an actual GPU. The problem is that this particular X server then has to occupy the GPU.
A better method is to detect whether the GLX extension is available and, if it is not, whether there is a GPU around, and use that to render into an XSHM pixmap. That way Xpra on a virtual framebuffer server will work as well. Unfortunately, doing the latter with OpenGL is annoyingly difficult to implement in a way that works transparently across context creation APIs. It can be done (BT;DT), but for this kind of thing I actually prefer Vulkan: despite Vulkan's verbosity, it takes less work to do reliably with Vulkan than with OpenGL.
Maybe (though it's unlikely) we'll see some X11 extension for compressed transfer of pixmaps, some high-compression XV or similar. That, in combination with pure off-screen GPU rendering (we already have that), would make for a far more efficient system.
I have an application that runs on Nintendo 3DS -- it uses a variant of OpenGL to render 3D animation. The user is able to store these scenes online as data files. That is, only the data needed to render the scene is stored - the image frames are rendered on the device.
Additionally I would like for people to be able to view these scenes online. One way might be to render them in the browser via WebGL, but I'm worried about the amount of time and memory this would require. I would rather have the server render the scenes into movie files which can be played from a web page.
I don't have a lot of experience with server side programming - is it possible for a server program to render frames to an OpenGL context? They would be offscreen framebuffers since there is no screen.
Any suggestions on an approach to doing that? I've used PHP mostly for web programming, but it seems like that is not feasible for this. Ideally I'd like to write a C++ program which ran on the server, that way I could re-use code from the 3DS. Is that possible? Where can I read about doing this?
Server-side rendering is possible, and it would give the user more consistent results than relying on consistent WebGL behavior across different browsers and platforms (in addition to the time/memory performance concerns you already mentioned). Users with capable browsers and platforms won't see any benefit, though, so consider what your users want and the platforms they're using.
For Windows-based servers, using OpenGL (w/offscreen framebuffers) with "no screen" will present a challenge. You need to start with a window to establish a graphics context. (There may be a provision to establish a "windowless" graphics context for Linux.) You also will need to determine how to manage any GPU resources for rendering, as it will have limits on the number of concurrent rendering requests it can support before slowing down and/or failing to allocate resources (e.g. framebuffer memory).
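On Linux, one common way to get such a "windowless" context is EGL with a small pbuffer surface. The sketch below is an assumption-laden outline (it assumes the driver exposes EGL for desktop OpenGL; most error handling is omitted):

```cpp
// Sketch: windowless OpenGL context on Linux via EGL + pbuffer.
// Assumes an EGL-capable driver; error checking mostly omitted for brevity.
#include <EGL/egl.h>

bool createHeadlessContext(int width, int height)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, nullptr, nullptr))
        return false;

    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numCfg = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);
    if (numCfg == 0)
        return false;

    const EGLint pbAttribs[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    eglBindAPI(EGL_OPENGL_API);                     // desktop GL, not GLES
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);

    // From here on you can render into your own FBOs and glReadPixels the result.
    return eglMakeCurrent(dpy, surf, surf, ctx) == EGL_TRUE;
}
```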
One alternative might be to use the Mesa (software OpenGL) implementation - this won't be as fast, but in theory it scales with added server CPU and memory, which matches how most web servers scale out: Mesa offscreen rendering info
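For the Mesa route, OSMesa renders entirely into a buffer in main memory, so no window system or GPU is needed at all. A minimal sketch (built against Mesa's OSMesa library):

```cpp
// Sketch: pure software off-screen rendering with Mesa's OSMesa.
// No window system or GPU involved; the frame ends up in `buffer`.
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main()
{
    const int width = 640, height = 480;
    std::vector<unsigned char> buffer(width * height * 4);   // RGBA8

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesa context creation failed\n");
        return 1;
    }

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();                 // make sure rendering is done before reading buffer

    // `buffer` now holds the rendered frame; encode it (image, video, ...) and serve it.
    OSMesaDestroyContext(ctx);
    return 0;
}
```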
It looks like, once the renderer is written, spawning the C++ executable with arguments from PHP is trivial - although you may wish to route any long-running renderings to a separate rendering server to keep your web server responsive.
Displaying images on a computer monitor involves the use of a graphics API, which dispatches a series of asynchronous calls... and at some point puts the wanted stuff on the computer screen.
But, what if you are interested in knowing the exact CPU time at the point where the required image is fully drawn (and visible to the user)?
I really need to grab a CPU timestamp when everything is displayed to relate this point in time to other measurements I take.
Even without taking the asynchronous behavior of the graphics stack into account, many things can cause the duration of the graphics calls to jitter:
multi-threading;
Sync to V-BLANK (unfortunately required to avoid some tearing);
what else have I forgotten? :P
I'm targeting a solution on Linux, but I'm open to any other OS. I've already studied parts of the XVideo extension for the X.org server and the OpenGL API, but I haven't found an effective solution yet.
I only hope the solution doesn't involve hacking into video drivers / hardware!
Note: I won't be able to use the recent Nvidia G-SYNC technology on the required hardware. Although it would get rid of some of the unpredictable jitter, I don't think it would completely solve this issue.
OpenGL Wiki suggests the following: "If GPU<->CPU synchronization is desired, you should use a high-precision/multimedia timer rather than glFinish after a buffer swap."
Does somebody know how to properly grab such a high-precision/multimedia timer value just after the buffer-swap call has completed in the GPU queue?
Recent OpenGL provides sync/fence objects. You can place sync objects in the OpenGL command stream and later wait for them to get passed. See http://www.opengl.org/wiki/Sync_Object
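A sketch of that idea: place a fence right after the buffer swap, wait for the GPU to pass it, then read a high-resolution CPU clock. Note that the fence signals when the GPU has finished the queued commands, which is close to, but not exactly, the moment the image reaches the screen (scan-out still happens at the next vblank):

```cpp
// Sketch: timestamp the completion of a buffer swap with a fence sync object.
// Assumes an OpenGL >= 3.2 (or ARB_sync) context and some swap function
// (glXSwapBuffers, SDL_GL_SwapWindow, ...) called where indicated.
#include <GL/glew.h>
#include <chrono>
#include <cstdio>

void swapAndTimestamp(/* your window/display handles */)
{
    // ... issue the swap here, e.g. glXSwapBuffers(dpy, win); ...

    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    // Block (with a 100 ms safety timeout) until the GPU has executed
    // everything queued before the fence, including the swap.
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 100 * 1000 * 1000);
    glDeleteSync(fence);

    // Now grab the CPU-side high-resolution timestamp.
    auto t = std::chrono::steady_clock::now();
    std::printf("swap completed at %lld ns\n",
                (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(
                    t.time_since_epoch()).count());
}
```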
I am using AIR to do some augmented reality using fiducial marker tracking. I am using FLARToolkit and it works fine, except that the frame rate drops to ridiculous lows in certain lighting conditions. This is because Flash only uses the CPU for processing, and every frame it is applying filters, adjusting thresholds, and analyzing the pixels to find the marker pattern. Without any hardware acceleration, it can get really slow.
I did some searching and it looks like the fastest and most stable tracking library is Studierstube ( http://handheldar.icg.tugraz.at/stbtracker.php and http://studierstube.icg.tugraz.at/download.php ). Unfortunately, I am not a C++ developer. But it seems that the tracking is insanely fast using this tracker (especially since it isn't all CPU processing like Flash is).
So my plan is to build (or rather have someone build) a small C++ program that leverages this tracker and then sends the marker position data every frame (I only need 30 FPS) to my Flash client application, which displays the video and some augmented reality experiences. I believe this would be done through a socket server or something, right? Is this possible and fairly easy for someone who is a decent C++ developer? I would ask him/her, but I am still searching for such a person.
Maybe this link will be helpful:
http://www.adobe.com/devnet/air/flex/quickstart/articles/interacting_with_native_process.html
As mentioned here, it can be done with NativeProcess...
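To make the NativeProcess idea concrete, the C++ side can simply write one line of marker data per frame to standard output, which the AIR side reads from the process's standardOutput stream. getMarkerPose() below is a stub standing in for whatever the tracking library (e.g. Studierstube) actually returns:

```cpp
// Sketch: native tracker process that streams marker poses to stdout,
// one line per frame, for an AIR NativeProcess to consume.
#include <cstdio>
#include <thread>
#include <chrono>

struct MarkerPose { int id; float x, y, rotation; bool visible; };

// Stub so the sketch compiles stand-alone; replace with the real tracker call.
MarkerPose getMarkerPose() { return {0, 0.5f, 0.5f, 0.0f, true}; }

int main()
{
    using namespace std::chrono;
    const auto frameTime = milliseconds(33);          // ~30 FPS

    for (;;) {
        auto start = steady_clock::now();
        MarkerPose p = getMarkerPose();
        if (p.visible) {
            // Simple line-based protocol: id x y rotation
            std::printf("%d %.3f %.3f %.3f\n", p.id, p.x, p.y, p.rotation);
            std::fflush(stdout);                      // push it to the AIR side now
        }
        std::this_thread::sleep_until(start + frameTime);
    }
}
```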