Can an EGL application run in console mode? - opengl

I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or a pbuffer as a render target.
But the function eglGetDisplay worries me: it sounds like I still need an attached display to make it work?
Does EGL work without a display, X Windows, or Wayland?

This is a recurring question. TL;DR: With the current Linux graphics driver model it is impossible to use the GPU with traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI you can do it. (EDIT:) Also, in 2016 Nvidia finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this was not possible for a long time. The assumption back then (when graphics support was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a figment of an idea.
Add to this that, until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed one running. And the first X server developers made the assumption that there would be a person between keyboard and chair.
So what are your options:
Short term, if you're using an Nvidia GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now also support for headless OpenGL contexts through EGL in the Nvidia drivers (see the sketch after these options).
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or producing any graphics output on the display framebuffer at all.
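
To illustrate the headless EGL path from the first option: a minimal, untested sketch, assuming a driver that exposes the EGL_EXT_platform_device and EGL_EXT_device_enumeration extensions (error handling is abbreviated):

```c
/* Untested sketch: headless OpenGL via EGL on a driver exposing
 * EGL_EXT_platform_device + EGL_EXT_device_enumeration (e.g. modern
 * NVIDIA drivers). Link with -lEGL -lGL. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void) {
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay)
        return 1;                      /* driver has no headless EGL support */

    /* Enumerate GPUs instead of asking a windowing system for a display. */
    EGLDeviceEXT devs[8]; EGLint ndevs = 0;
    queryDevices(8, devs, &ndevs);
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devs[0], NULL);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attr[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg; EGLint ncfg;
    eglChooseConfig(dpy, cfg_attr, &cfg, 1, &ncfg);

    static const EGLint pb_attr[] = { EGL_WIDTH, 512, EGL_HEIGHT, 512, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pb_attr);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* ...normal OpenGL rendering into the pbuffer (or an FBO) goes here,
     * followed by glReadPixels to fetch the image... */
    printf("headless EGL context is current\n");
    return 0;
}
```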

As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw on it. You can achieve this with the Mesa library by building it without X support.
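
For the record, a hypothetical sketch of that Mesa route via a DRM render node and GBM; the device path /dev/dri/renderD128 and the reliance on EGL_KHR_surfaceless_context are assumptions:

```c
/* Untested sketch: display-server-less EGL on Mesa via a DRM render node.
 * Assumes /dev/dri/renderD128 and EGL_KHR_surfaceless_context; link with
 * -lgbm -lEGL -lGLESv2. */
#include <fcntl.h>
#include <unistd.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void) {
    int fd = open("/dev/dri/renderD128", O_RDWR);  /* pick your GPU's node */
    struct gbm_device *gbm = gbm_create_device(fd);

    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attr[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE
    };
    EGLConfig cfg; EGLint ncfg;
    eglChooseConfig(dpy, cfg_attr, &cfg, 1, &ncfg);

    eglBindAPI(EGL_OPENGL_ES_API);
    static const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attr);

    /* No window, no pbuffer: make the context current surfaceless and
     * render into a framebuffer object, then glReadPixels. */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    /* ...FBO setup and drawing... */

    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}
```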

Related

How do I optimize my OpenGL textures for Remote Desktop/ANGLE?

I display a 2D texture in OpenGL using Qt.
Unfortunately I have found out that I need to support running my application via Remote Desktop to a Windows 7 PC. In this case I need to use OpenGL ES 2.0 API (ANGLE).
Due to low bandwidth my 2D visualization seems to be lagging.
My texture may have higher resolution than the screen so that it needs to be minified.
When not using Remote Desktop, my approach has been to specify a very detailed texture and let the graphics card do the minification.
However now I am thinking that the OpenGL calls are executed in software locally and not on the remote machine? In which case the textures have to be transmitted via TCP/IP?
Does this mean that I should do minification myself before using the textures?
As an example instead of using a 2048x2048 texture I may bin 2x2 pixels in C++ and upload a 1024x1024 texture.
Alternatively I could use glGenerateMipmap?
I feel multiple terms are being confused here: RDP just transfers the entire remote desktop image to you, whatever is on it, so it is not the case that "OpenGL calls are executed in software locally". Hence, unfortunately, it will not help if you reduce the texture size in your app, even if you remove the texture entirely (try it). RDP is not really suitable for real-time animation.
Your app had better be running locally on the user's machine, so think about how to distribute your OpenGL app to users.
If you cannot install your app on the users' machines, or give them an installation kit, then
maybe turning your app into a browser app is a better option.
WebGL exists for exactly this kind of application, and is a standard too:
https://www.khronos.org/webgl/

Running OpenGL on Windows Server 2012 R2

This should be straightforward, but for some reason I can't make it work.
I hired a SoftLayer Bare Metal Server that comes with an Nvidia Tesla GPU.
I'm remotely executing a program (OpenSCAD) that needs OpenGL > 2.0 in order to properly export a PNG file.
When I invoke OpenSCAD and export a model, I get a 0 kB PNG file as output, a clear symptom that OpenGL > 2.0 support is not present.
In order to make sure that I was running OpenGL > 2.0, I connected to my server via RD and ran GLview. To my surprise I saw that the server supported nothing but OpenGL 1.1.
After a little research I found out that for standard RD sessions the GPU is not used, so it makes sense that I'm only seeing OpenGL 1.1.
The problem is that when I execute OpenSCAD remotely, it seems that the GPU is not used either.
What can I do to successfully make the GPU capabilities of my server work when I invoke OpenSCAD remotely?
PS: I checked with SoftLayer support and they are not taking any responsibility.
Most (currently all) OpenGL implementations that use a GPU assume that there's a display system of some sort using that GPU; in the case of Windows that would be GDI. However, on a headless server Windows usually doesn't start GDI on the GPU but uses some dumb framebuffer.
The Nvidia Tesla GPUs are marketed as compute-only devices, and hence their driver does not support any graphics functionality (note that this is a marketing limitation implemented in software, as the silicon is perfectly capable of doing graphics). In other words: if you can implement your graphics operations using CUDA or OpenCL, then you can use the card to generate pictures. Otherwise (i.e. for OpenGL or Direct3D) it's useless.
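
As a sketch of that compute route (not the author's code): generating an RGBA image with the OpenCL C host API instead of OpenGL. Untested; the kernel and image size are invented for illustration:

```c
/* Untested sketch: produce an image on a compute-only GPU with OpenCL
 * instead of OpenGL. Most error checking omitted. Link with -lOpenCL. */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdlib.h>

static const char *src =
    "__kernel void shade(__global uchar4 *img, uint w) {\n"
    "    uint x = get_global_id(0), y = get_global_id(1);\n"
    "    img[y * w + x] = (uchar4)(x % 256, y % 256, 128, 255);\n"
    "}\n";

int main(void) {
    const size_t w = 256, h = 256;
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "shade", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, w * h * 4, NULL, NULL);
    cl_uint width = (cl_uint)w;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(width), &width);

    size_t gsz[2] = { w, h };
    clEnqueueNDRangeKernel(q, k, 2, NULL, gsz, NULL, 0, NULL, NULL);

    unsigned char *pixels = malloc(w * h * 4);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, w * h * 4, pixels, 0, NULL, NULL);
    /* pixels now holds an RGBA image that can be encoded to PNG, etc. */
    free(pixels);
    return 0;
}
```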
Note that NVidia is marketing their "GRID" products for remote/cloud rendering.
I'm replying because I faced a similar problem in the past, also trying to run an application that needed OpenGL 4 on a Windows server.
Windows Remote Desktop indeed doesn't trigger OpenGL. However, if you use TigerVNC instead and then start your OpenSCAD application, it might recognize your OpenGL drivers. At least this trick did it for me.
(When opening an OpenGL context, I presume a program scans for attached monitors/remote desktops.)
Hope it helps.

Windowless OpenGL Context in Apache2 Module

I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?
While technically it's perfectly possible to do windowless, display-server-less, off-screen, GPU-accelerated rendering with OpenGL, practically it's impossible these days because you need a display environment to actually get access to the GPU. Fortunately the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context creation mode (OSMesa), but it's far from feature-complete.
So right now you'll need some kind of display-server drawable to work with, on which you can bind a context. X11 offers two kinds of GPU-accelerated drawables: windows and PBuffers. You can use FBOs with either (PBuffers are technically windows that cannot be mapped to the root window and have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never show it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials and sketched below. But for OpenGL to work, the X server you use must be active, i.e. hold the console, and be configured to use the GPU. (Theoretically, with newer hybrid-graphics-capable X servers and drivers, it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far.)
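
A rough, untested sketch of that unmapped-window approach (GLX plus a regular X window that is never mapped; the FBO setup is left as a stub):

```c
/* Untested sketch: OpenGL context on an X window that is never mapped
 * (shown). Requires a running X server; compile with -lX11 -lGL. */
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <stdio.h>

int main(void) {
    Display *xdpy = XOpenDisplay(NULL);  /* NULL return if DISPLAY is unset */
    if (!xdpy) { fprintf(stderr, "no X display\n"); return 1; }

    static int visual_attr[] = {
        GLX_RGBA, GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_DEPTH_SIZE, 24, None
    };
    XVisualInfo *vi = glXChooseVisual(xdpy, DefaultScreen(xdpy), visual_attr);

    XSetWindowAttributes swa = {0};
    swa.colormap = XCreateColormap(xdpy, DefaultRootWindow(xdpy),
                                   vi->visual, AllocNone);
    /* The window is created but XMapWindow is never called, so it stays
     * invisible; the GL context on it is still fully accelerated. Because
     * an unmapped window fails the pixel-ownership test, render into an
     * FBO rather than the window's own framebuffer. */
    Window win = XCreateWindow(xdpy, DefaultRootWindow(xdpy), 0, 0, 512, 512,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);

    GLXContext ctx = glXCreateContext(xdpy, vi, NULL, True);
    glXMakeCurrent(xdpy, win, ctx);

    /* ...create an FBO here, render into it, then glReadPixels... */
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

    glXMakeCurrent(xdpy, None, NULL);
    glXDestroyContext(xdpy, ctx);
    XDestroyWindow(xdpy, win);
    XCloseDisplay(xdpy);
    return 0;
}
```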

Remote off-screen rendering (Linux / no GUI)

The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: Transfer generated image(s) to client windows machine
There are some things about off-screen rendering that I cannot understand; I have read a lot of literature, but it is still not clear:
Using GLUT implies setting the DISPLAY variable, which, if I understand correctly, means remote rendering via X11. If I run an X11 server on the Windows machine (XWin server), everything works. If I try to run without a rendering server, I get: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. Either way, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on the Linux server without GLUT/X11?
Is a framebuffer object suitable for my task, and is a graphics context necessary for it?
What is the most efficient way to solve this problem (rendering requires hardware support)?
A less important issue, but nevertheless:
Pixel buffer object. I plan to use it to increase the read performance of GPU memory. Is it worthwhile within my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. Consider this answer to a near-duplicate question as a starter:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
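For orientation, a minimal sketch along the lines of that osdemo example (untested; remember that OSMesa renders in software, as noted above):

```c
/* Untested sketch: software off-screen rendering with OSMesa.
 * Compile with -lOSMesa (Mesa built with OSMesa support). */
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int w = 512, h = 512;

    /* RGBA context with a 16-bit depth buffer, no stencil/accum buffers. */
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    if (!ctx) { fprintf(stderr, "OSMesaCreateContextExt failed\n"); return 1; }

    /* OSMesa renders straight into this client-side buffer. */
    void *buffer = malloc(w * h * 4);
    if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, w, h)) {
        fprintf(stderr, "OSMesaMakeCurrent failed\n");
        return 1;
    }

    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ...scene rendering goes here... */
    glFinish();

    /* `buffer` now contains the RGBA image; encode/transmit it as needed. */
    free(buffer);
    OSMesaDestroyContext(ctx);
    return 0;
}
```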
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves rendered pixmaps to the client over VNC (whereupon the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create the context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid card from the low-performance Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
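
For reference, that is a single Info.plist entry (shown here as the raw XML):

```xml
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```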
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.