How to use DRI instead of GLX? [duplicate]

I want to use OpenGL rendering without X. Googling, I found this: http://dvdhrm.wordpress.com/2012/08/11/kmscon-linux-kmsdrm-based-virtual-console/ which says it is possible: I should use DRM and EGL. EGL can create an OpenGL context but requires a NativeWindow. Will DRM provide me a NativeWindow? Should I use KMS? I know that I must have an open-source video driver. I want a real OpenGL context, not OpenGL ES (on Linux). Does anyone know of a tutorial or example code?

Yes, you need the KMS stack (example). Here is a simple example under Linux; it uses OpenGL ES, but the steps to get it working against the desktop OpenGL API are simple.
In the EGL config attributes, set EGL_RENDERABLE_TYPE to EGL_OPENGL_BIT,
and tell EGL which API to bind to:
eglBindAPI(EGL_OPENGL_API);
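A minimal sketch of those two steps, assuming the display/context boilerplate from the linked example (eglGetDisplay, eglInitialize, the GBM setup) is already in place:
// EGL config attributes requesting a desktop-GL-capable config
static const EGLint config_attribs[] = {
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
    EGL_RED_SIZE,        8,
    EGL_GREEN_SIZE,      8,
    EGL_BLUE_SIZE,       8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,  // desktop GL rather than GLES
    EGL_NONE
};
// bind the desktop OpenGL API before eglCreateContext
eglBindAPI(EGL_OPENGL_API);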
Be sure to have up-to-date kernel drivers and the mesa-dev, libdrm-dev, and libgbm-dev packages. These pieces of code are portable to Android; it's just not so easy to silence the default Android graphics stack.
Note: I had trouble with the 32-bit version, but still don't know why. Those libs are actively developed, so I'm not sure it wasn't a bug.
Note 2: depending on your GLSL version, the float precision qualifier may or may not be supported:
precision mediump float;
Note 3: if you get a permission failure on /dev/dri/card0, grant access with:
sudo chmod 666 /dev/dri/card0
or add the current user to the video group with
sudo adduser $USER video
You may also setgid your executable with the group set to video (maybe the best option).

Related

Can I use programs using OpenGL and GLUT on a Raspberry Pi 3B+?

I have ready-made program code that uses OpenGL and GLUT. Can I compile and run this program under Raspbian on a Raspberry Pi 3B+?
Yes, but with a caveat
You can use OpenGL software on the Raspberry Pi, but note that it does not implement up to the newest standard, 4.6. I think it implements up to OpenGL ES 2.0, but don't quote me on that.
If you want to find out, you can simply enable the GL driver in raspi-config and then run the command
glxinfo | grep "OpenGL version"
to figure out what standard you can use.
This will determine what standard of OpenGL code works and therefore what code will compile.
Only after you verify the version will you know if the code will run.
Even then, there is so little RAM that it may not work.
That is outside the scope of this question though.

(Linux) How can I know inside a C++ program if OpenGL 4 is supported?

I would like to detect inside my C++ program whether OpenGL 4 is supported on the running computer.
I don't know if I searched Google and Stack Overflow with the wrong/bad terms (my English skills...), but surprisingly I didn't find any example... I would not be surprised if you tell me this question is a duplicate...
It would eventually be useful for me to know how to get more useful data from the video card and the drivers it uses on the running computer. I haven't taken the time to look around for how to do that, but if you have a useful link, feel free to share it with me.
Step 1: Create an OpenGL context; first try the "attribs" method, requesting the minimum OpenGL version you want to have. If that succeeds, you're done.
Step 2: If that didn't work and you can gracefully downgrade, create a no-frills context
and call glGetString(GL_VERSION) to get the actual context version supported. Note that on macOS this limits you to 2.1 and earlier.
Step 3: If you want some context, portably and reliably, between 2.1 and your optimal version, try the attribs method in a loop, decrementing your requested version until it succeeds.
Note that there is no way to determine in advance which version is supported in OpenGL. The main reason is that the operating system and the graphics layer may decide on demand which locally available OpenGL version to use, depending on the request and the resources available at the moment (graphics cards can, in theory, be hotplugged).
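As a rough illustration of step 3 (GLFW is just one way to make the attribs request; any context-creation API with version attributes works the same way), probe downward until creation succeeds:
#include <GLFW/glfw3.h>   // pulls in GL/gl.h for glGetString
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    const int versions[][2] = { {4, 6}, {4, 0}, {3, 3}, {3, 2} };
    GLFWwindow* win = nullptr;
    for (const auto& v : versions) {
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, v[0]);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, v[1]);
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // invisible probe window
        win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (win) break;                             // this version is supported
    }
    if (!win) { glfwTerminate(); return 1; }        // nothing >= 3.2 available
    glfwMakeContextCurrent(win);
    std::printf("Got context: %s\n", (const char*)glGetString(GL_VERSION));
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}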

Set up an OpenGL 4 build / unit-test server?

I'm trying to find a solution for setting up an OpenGL build server. My preference would be a virtual or cloud server, but as far as I can see those only go up to 3.0/3.1 using software rendering. I have a server running Windows, but my tests are Linux-specific and I'd have to run them in a VM, which as far as I know also only supports OpenGL 3.1.
So, is it possible to set up an OpenGL 4 build/unit-test server?
The OpenGL specification does not include any pixel-perfect guarantees. This means your tests may fail just by switching to another GPU, or even to another version of the driver.
So you have to be specific: test not the result of rendering, but the result of the math that immediately precedes the submission of the primitives to the API.
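A minimal sketch of that idea: the test below exercises a hypothetical transformPoint() (a stand-in for whatever math your code runs before submitting vertices) and needs no GPU, driver, or GL context at all:
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// hypothetical function under test: scale, then translate
Vec3 transformPoint(const Vec3& p, float scale, const Vec3& offset) {
    return { p.x * scale + offset.x,
             p.y * scale + offset.y,
             p.z * scale + offset.z };
}

int main() {
    const Vec3 out = transformPoint({1.0f, 2.0f, 3.0f}, 2.0f, {0.5f, 0.0f, 0.0f});
    const float eps = 1e-6f;                 // float comparisons need a tolerance
    assert(std::fabs(out.x - 2.5f) < eps);
    assert(std::fabs(out.y - 4.0f) < eps);
    assert(std::fabs(out.z - 6.0f) < eps);
    return 0;                                // runs fine on a headless CI box
}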

Moving from fixed-pipeline to modern OpenGL

I have done some simple OpenGL (the old fixed pipeline, without shaders, etc.) and want to start some serious "modern" OpenGL programming. (It should compile on Windows and Linux.)
I have a few questions.
1) On Windows, gl.h doesn't declare the OpenGL 2.0+ API calls (e.g. glShaderSource()). How can I access these API calls?
I don't want to install graphics-card-specific headers, since I want to compile this application on other machines.
2) On Linux, if I install the Mesa library, can I access the above OpenGL 2.0+ API functions?
There has been a long-held belief among some (due to the slowness of GL version updates in the late 90s/early 00s) that the way to get core OpenGL calls is to just include a header and a library, and that loading function pointers manually is something you do only for extensions, for "graphics-card specific" functions. That isn't the case.
You should always use an extension loading library to get access to OpenGL functions, whether core or extension. GLEW is a pretty good one, and GL3W is decent if you can live with its limitations (3.0 core or better only).
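A minimal sketch with GLEW (GLFW is used here only to get a context; glewInit() must run after a context is made current):
#include <GL/glew.h>      // must come before any other GL header
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);             // GLEW needs a current context
    if (glewInit() != GLEW_OK) return 1;     // loads core and extension pointers
    // GL 2.0+ entry points now work on both Windows and Linux:
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    std::printf("GL %s\n", (const char*)glGetString(GL_VERSION));
    glDeleteShader(shader);
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}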

Forcing OpenGL Core Profile Only

Is there a compiler flag or another way of forcing the OpenGL core profile only? I want to get an error when I use deprecated functions like glRotatef and so on.
EDIT 1: I am using Linux; however, I am also interested in knowing how to do this on Windows.
EDIT 2: I would prefer to get an error at compile time, but a runtime error would be OK as well.
You could compile your code using gl3.h instead of gl.h.
http://www.opengl.org/registry/api/gl3.h
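That is, roughly (the header was conventionally installed so it could be included as <GL3/gl3.h>; it has since been superseded by glcorearb.h):
// #include <GL/gl.h>   // old header, declares deprecated entry points
#include <GL3/gl3.h>    // core-profile-only declarations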
Try wglCreateContextAttribsARB() with WGL_CONTEXT_CORE_PROFILE_BIT_ARB.
Or glXCreateContextAttribsARB with GLX_CONTEXT_CORE_PROFILE_BIT_ARB.
You might find this example useful as a testbed.
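A rough sketch of the glX variant (display and fbconfig are assumed to exist already, and glXCreateContextAttribsARB itself has to be fetched via glXGetProcAddress):
// request a 3.2 core-profile context; deprecated calls like glRotatef are
// then unavailable, typically failing with GL_INVALID_OPERATION at run time
static const int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
GLXContext ctx = glXCreateContextAttribsARB(display, fbconfig, NULL, True, attribs);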
Depends on what creates your OpenGL context.
If you're using GLFW (which I sincerely recommend for standalone OGL window apps), then you can do this before you create the window:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR,3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR,1);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE,GLFW_OPENGL_CORE_PROFILE);
// the last line shouldn't be necessary
// as you request a specific GL context version -
// - at least my ATI will then default to core profile
Note that if you request a pre-3.0 GL context on modern hardware/drivers, you're likely to receive the newest possible context in compatibility mode instead. Check what your GPU returns from glGetString(GL_VERSION) to make sure.
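For what it's worth, the hints above use the GLFW 2 API; in GLFW 3 the same request looks roughly like this (note that core/compatibility profiles only exist from OpenGL 3.2 onward, so request at least 3.2 together with the core-profile hint):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "core", NULL, NULL);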
If you use another API for creation of OpenGL context, check its reference manual for similar functions.
BTW:
I believe it's impossible to get an error at compile time: your compiler can't know what OpenGL context you will receive after your request (if any). The correct way of ensuring that you're not using out-of-version functionality is to test glGetError().
Also, I recommend using the gl3w extension wrapper if you compile for Windows.
I have found another way to do it using the Unofficial OpenGL Software Development Kit:
http://glsdk.sourceforge.net/docs/html/index.html
By using the 'GL Load' component it's possible to load a core profile of OpenGL and remove the compatibility enums and functions for OpenGL versions 3.1 or greater. A short how-to can be found here:
https://www.opengl.org/wiki/OpenGL_Loading_Library#Unofficial_OpenGL_SDK