I have a server with an NVIDIA graphics card, and I want to run some OpenGL applications on it and X-forward the display to a client.
How can I achieve this? I have not installed the X Window System yet.
X forwarding means that all rendering commands are encapsulated into the X transport, transferred over to the machine with the display, and executed there. The upside is that the remote end does not require a GPU whatsoever. The downside is that it consumes (well, rather gobbles up) a lot of network bandwidth.
OpenGL up to version 2.1 specifies GLX opcodes for the X11 transport, so it is network transparent. And if you make liberal use of display lists and keep the amount of data transferred small (i.e. no client-side vertex arrays, only a few small textures), OpenGL-over-GLX-over-X11-over-TCP works rather fine.
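For illustration, here is a minimal sketch of that idea (assuming a GLX context and viewport are already set up): geometry compiled into a display list crosses the wire once, and afterwards each frame only sends the small glCallList() request.

```c
#include <GL/gl.h>

static GLuint scene_list;

/* Run once: the vertex data travels over the GLX transport a single time. */
void build_scene(void)
{
    scene_list = glGenLists(1);
    glNewList(scene_list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
}

/* Run per frame: only a tiny GLX request goes over the network. */
void draw_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glCallList(scene_list);
}
```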
However, these days it's more efficient to render remotely and only transfer the generated image, using a high-efficiency compression codec. Plain X11 forwarding can't do that, though. But you can do it using Xpra backed by a "true" X server talking to an actual GPU. The catch is that this particular X server has to occupy the GPU.
A better method is to detect whether the GLX extension is available, and if not, whether there's a GPU around, and use that to render into an XSHM pixmap. That way Xpra on a virtual framebuffer server will work as well. Unfortunately, doing the latter with OpenGL is annoyingly difficult to implement in a way that works transparently across context creation APIs. It can be done (been there, done that), but for this kind of thing I actually prefer Vulkan: despite Vulkan's verbosity, it takes less work to do this reliably with Vulkan than with OpenGL.
Maybe (though it's unlikely) we'll see some X11 extension for compressed transfer of pixmaps, some high-compression XV or similar. That, in combination with pure off-screen GPU rendering (which we already have), would make for a far more efficient system.
Related
I display a 2D texture in OpenGL using Qt.
Unfortunately I have found out that I need to support running my application via Remote Desktop to a Windows 7 PC. In this case I need to use OpenGL ES 2.0 API (ANGLE).
Due to low bandwidth my 2D visualization seems to be lagging.
My texture may have higher resolution than the screen so that it needs to be minified.
When not using Remote Desktop, my approach has been to specify a very detailed texture and let the graphics card do the minification.
However, now I suspect that the OpenGL calls are executed in software locally and not on the remote machine, in which case the textures have to be transmitted via TCP/IP?
Does this mean that I should do minification myself before using the textures?
As an example instead of using a 2048x2048 texture I may bin 2x2 pixels in C++ and upload a 1024x1024 texture.
Alternatively I could use glGenerateMipmap?
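For reference, the 2x2 binning mentioned above could be sketched like this (the helper name and the tightly packed RGBA8 layout are just assumptions):

```c
#include <stdint.h>

/* Box-filter an RGBA8 image down by 2x in each dimension by averaging
   every 2x2 block. Assumes even width and height and tightly packed rows. */
void bin2x2_rgba8(const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2;
    for (int y = 0; y < src_h / 2; ++y) {
        for (int x = 0; x < dst_w; ++x) {
            for (int c = 0; c < 4; ++c) {
                int a = src[((2 * y)     * src_w + 2 * x)     * 4 + c];
                int b = src[((2 * y)     * src_w + 2 * x + 1) * 4 + c];
                int d = src[((2 * y + 1) * src_w + 2 * x)     * 4 + c];
                int e = src[((2 * y + 1) * src_w + 2 * x + 1) * 4 + c];
                dst[(y * dst_w + x) * 4 + c] = (uint8_t)((a + b + d + e) / 4);
            }
        }
    }
}
```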
I think several things are being confused here: RDP just transfers the entire remote desktop, whatever is on it, so it is not the case that "OpenGL calls are executed in software locally". Hence, unfortunately, it will not help to reduce the texture size in your app, or even to remove the texture entirely (try it). RDP is simply not suitable for real-time animation.
Your app had better run locally on the user's machine, so think about how to distribute your OpenGL app to your users.
If you cannot install your app on the users' machines, or give them an installation kit, then turning your app into a browser app may be a better option.
WebGL exists for exactly this kind of application, and it is a standard too:
https://www.khronos.org/webgl/
I want to find a way to send all the geometry from an OpenGL framebuffer to a remote computer, which would do the rendering. This would allow me to have very complex simulations running on some kind of big supercomputer, rendered on a small mobile or simply a cheap client machine.
Before starting to dig into my code, I thought it would be relatively easy: copy the vertex arrays and send them over the network, using boost::serialization for example, and that's it. But my geometry is encapsulated, which prevents me from accessing it where I want to.
I have been able to render into a framebuffer object instead of rendering directly on screen, though, and I was wondering if there is any way to retrieve data from OpenGL's FBOs?
First, your terminology is off: Framebuffer Objects are encapsulations of off-screen images/surfaces and don't hold geometry.
Second: what you imagine has already been implemented by the VirtualGL project (however, it's stuck at a rather old OpenGL profile and doesn't support modern GPUs).
Also, X11/GLX has always supported indirect OpenGL operation, i.e. a remote machine sends OpenGL commands to the local display server, which is probably exactly what you're thinking of. But this has a major drawback: network bandwidth becomes the bottleneck.
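As for what can be retrieved from an FBO: the rendered image (not the geometry) can be read back with glReadPixels. A minimal sketch, assuming an extension loader such as GLEW exposes the FBO entry points; the helper name is made up:

```c
#include <GL/glew.h>   /* any loader that provides the FBO functions works */
#include <stdlib.h>

/* Read the color attachment of an already set-up FBO into client memory,
   e.g. to compress it and push it over the network. Caller frees. */
unsigned char *read_fbo_rgba(GLuint fbo, int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    return pixels;
}
```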
The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: transfer the generated image(s) to a client Windows machine.
I cannot get my head around some things about off-screen rendering; I have read a lot of literature, but it is still not well understood:
Using GLUT implies setting the DISPLAY variable, which, if I understand correctly, means remote rendering via X11. If I run an X11 server (XWin) on the Windows machine, everything works. If I try to run without that rendering server, I get: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. Either way, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on the Linux server without GLUT/X11?
Framebuffer object: is it suitable for my task, and is a graphics context necessary for it?
What is the most efficient way to solve this problem (rendering requires hardware support)?
Not an important issue, but nevertheless:
Pixel buffer object: I plan to use it to speed up reading the rendered image back from GPU memory. Is it worthwhile within my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. Consider this answer to a near-duplicate question as a starter:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
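For orientation, a minimal OSMesa sketch along the lines of that demo (link with -lOSMesa; no X server or GPU is involved, everything is rendered in software into a plain memory buffer):

```c
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;
    void *buffer = malloc((size_t)width * height * 4);

    /* RGBA context with a 16-bit depth buffer, no stencil/accum. */
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        fprintf(stderr, "OSMesa context creation failed\n");
        return 1;
    }

    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();                    /* "buffer" now holds the rendered RGBA image */

    /* ... encode/send "buffer" to the Windows client here ... */

    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}
```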
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves rendered pixmaps to the client over VNC (whereupon the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create the context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/
Motivation - write a program in C (and assembly, if required) to color a rectangular area of the screen red.
STRICT requirements - GNU/Linux running with the bare minimum of utilities and interfaces in text/console mode. So no X (or equivalents like Wayland/Mir), no non-default libraries or interfaces (outside POSIX, LSB, etc. as provided by the kernel), and no extra assumptions except the presence of the device driver for the monitor.
Effectively, what I am looking for is information on how to write a program which will eventually send a signal through the VGA port and cable to the monitor to color a particular portion of the screen red.
Apologies if this sounds rude, but please no "Why do you want to do this?" or "Why don't you use the ABC library?" answers. I am trying to understand how one would write, for example, an implementation of an X server or of a kernel framebuffer (/dev/fb0) library. It is OK to provide a link to the source of a C library.
no extra assumptions except the presence of the device driver for the monitor.
That means you can use X or Wayland, because those are the graphics driver infrastructure on Linux.
Linux (the kernel) by itself doesn't contain any graphics primitives. It provides some interfaces to talk to the GPU, allocate memory on it and configure the on-screen framebuffer. But apart from raw framebuffer memory access, the Linux kernel has no way to perform drawing operations. For that you need some infrastructure in userspace.
Wayland builds on top of DRI2, which in turn talks to the DRM kernel API. On top of that you need a GPU-dependent state tracker. Mesa has state trackers for a number of GPUs and provides OpenGL and OpenVG frontends.
The NVidia and ATI proprietary, closed-source graphics drivers are designed to work with X only. So to make use of the GPU with those, you must use X. That's just the way it is.
Outside of that, you can manipulate the on-screen framebuffer memory through the fbdev device (/dev/fb0), but that's mere pixel pushing, without any GPU acceleration.
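For the red-rectangle case from the question, the raw fbdev path looks roughly like this - a sketch assuming a 32 bpp mode and access to /dev/fb0, to be run from a text console rather than from under X:

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);

    size_t size = (size_t)fix.line_length * var.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Red rectangle at (100,100), 200x150 pixels, assuming 32 bpp. */
    for (unsigned y = 100; y < 250 && y < var.yres; ++y) {
        uint32_t *row = (uint32_t *)(fb + (size_t)y * fix.line_length);
        for (unsigned x = 100; x < 300 && x < var.xres; ++x)
            row[x] = 0xFFu << var.red.offset;
    }

    munmap(fb, size);
    close(fd);
    return 0;
}
```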
Once upon a time we had svgalib (or was it called vgalib?), which did exactly what you are trying to do. I would recommend that you look at its source code. I don't know if it is still possible to find it anywhere, or if it would actually work with a modern kernel. Whatever you do, be prepared to reboot often.
For whatever it's worth, for any future viewers, I have found a decent tutorial at http://betteros.org/tut/graphics1.php. It goes through Framebuffer, DRI/DRM, and X Windows at "the lowest level possible" (basically ioctls and file read/write).
I have gotten both the Framebuffer and DRI/DRM examples to work on both QEMU Debian (on a MacOS Host) and a Raspberry Pi, with a bit of modification for the latter.
How can I draw a pixel array very fast in C++? I've seen many questions like this on Stack Overflow, and they are all answered with:
use GDI (Windows)
use OpenGL
...
But there must be a way to do it the way OpenGL does! I'm writing a little raytracer and need to draw every pixel many times per second. OpenGL is able to do it, platform independent and fast, so how can I achieve that without OpenGL? And "without OpenGL" does not mean
use SDL (slow)
use this/that library
Please only suggest the platform-native methods, or the library closest to that. If it is possible (I know it is), how can I do this? Platform-independent solutions are preferred.
Drawing graphics on Linux, you either have to use X11 or OpenGL. (And in the near future Wayland may be another option.) On Linux there is no "native" way of doing graphics, because the Linux kernel doesn't care about graphics APIs. It provides interfaces (DRM) on top of which graphics systems are then implemented in user space. If you just want to splat pixels on the screen without caring about windows, you could also mmap the framebuffer device (/dev/fb0) - but you normally don't want that, because nobody wants his screen clobbered by some program he can't move or hide.
Drawing single points is inefficient, no matter which API is being used, due to the protocol overhead.
So X11 it is. The best bet is the MIT-SHM extension, which you use to alter pixels in a shared-memory buffer that is then blitted as a whole by the X11 server. Of course, doing this with the pure X11 Xlib functions is annoyingly cumbersome; this is what SDL effectively wraps up nicely for you.
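For reference, the raw MIT-SHM path looks roughly like this - a sketch with error handling and the event loop omitted, and the 32 bpp pixel layout is an assumption:

```c
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    const int width = 640, height = 480;

    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, width, height, 0, 0, 0);
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);
    XSync(dpy, False);

    /* The XImage's pixel storage lives in a SysV shared memory segment
       that both this client and the X server can access. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, width, height);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);

    /* Render into the buffer (here: fill it with solid red) ... */
    for (int i = 0; i < width * height; ++i)
        ((uint32_t *)img->data)[i] = 0x00ff0000;   /* assumes 32 bpp */

    /* ... and let the server blit the whole thing in one request. */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False);
    XSync(dpy, False);

    sleep(5);   /* a real program would run an event loop and clean up */
    return 0;
}
```

Build with -lX11 -lXext; SDL does essentially this (plus all the fiddly parts) under the hood.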
The other option is OpenGL. OpenGL is not a library! It's a system-level API that gives you almost direct access to the GPU, and it integrates nicely with X11. Yes, the API is provided through a library that's being loaded, but technically that library is just a "wrapper" or "interface" to the actual driver. Drawing single points with OpenGL makes no sense either. But you can batch up several points into a list (using a vertex array) and then process that list. So the idea is to collect all the incoming points between two display refresh intervals and draw them in one single batch.
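A rough sketch of that batching idea, using a legacy client-side vertex array; the buffer size and function names are just placeholders:

```c
#include <GL/gl.h>

#define MAX_POINTS 100000

static GLfloat point_pos[MAX_POINTS][2];
static GLubyte point_col[MAX_POINTS][3];
static int     n_points = 0;

/* Called by the raytracer for every finished pixel. */
void queue_pixel(float x, float y,
                 unsigned char r, unsigned char g, unsigned char b)
{
    if (n_points == MAX_POINTS) return;   /* or flush early */
    point_pos[n_points][0] = x;
    point_pos[n_points][1] = y;
    point_col[n_points][0] = r;
    point_col[n_points][1] = g;
    point_col[n_points][2] = b;
    ++n_points;
}

/* Called once per display refresh: one draw call for the whole batch. */
void flush_pixels(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, point_pos);
    glColorPointer(3, GL_UNSIGNED_BYTE, 0, point_col);
    glDrawArrays(GL_POINTS, 0, n_points);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    n_points = 0;
}
```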
Platform-independent solutions are preferred.
Why are you asking about native APIs then? By definition there can be no platform-independent native API. Either you're native, or you're platform independent.
And in your particular scenario I think SDL would be the best solution, because it offers just the right kind of abstraction and program-side interface for a raytracer. Just FYI: virtual machines like QEMU use SDL.
Or you use OpenGL, which is a truly platform-neutral API that is widely supported.
Drawing graphics on Linux, you either have to use X11 or OpenGL.
This is absolutely false! Counterexample: there are platforms that don't run X11, yet they display pixels (e.g. fonts).
Sidenote. OpenGL usually depends on X11 (it's possible, albeit hard, to run OpenGL without X11).
As @datenwolf says, there are at least two other ways to draw pixels:
The framebuffer device (fbdev), an abstraction to interface with graphics hardware. Very old, designed by Martin Schaller, see the kernel docs. Source code is here. Also see here. Here's the simplest possible framebuffer driver.
The Direct Rendering Manager (DRM), a kernel subsystem that provides an API for userland apps to send commands/data directly to the GPU. (It seems suspiciously similar to what OpenGL does, but I don't know!) Source code is here. Here's a DRM example that initializes a simple display pipeline.
Both of these are part of the kernel, so they're lower-level than X11, which is not part of the kernel. Both can draw arbitrary pixels (e.g. penguins). I'd guess both of these are platform-independent (like OpenGL).
See this for more on how to draw stuff on Linux.