When I run my test JOGL app, it says that only GL2 is available on the thread, even though my system supports up to OpenGL 4.1 according to the OpenGL Extensions Viewer.
Does anyone see anything obvious why only GL2 would be supported in the thread?
I am using a mid-2015 MacBook Pro with Intel Iris and AMD Radeon R9 graphics.
This is the very first exercise in the book [Computer Graphics Programming in OpenGL with Java].
Java Version: Java 8
JOGL Version: 2.3.2
GlueGen Version: 2.3.2
import java.nio.*;
import javax.swing.*;
import static com.jogamp.opengl.GL4.*;
import com.jogamp.opengl.*;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.common.nio.Buffers;
public class Code extends JFrame implements GLEventListener {
private GLCanvas myCanvas;
public Code() {
setTitle("Chapter 2 - program1");
setSize(600, 400);
setLocation(200, 200);
myCanvas = new GLCanvas();
myCanvas.addGLEventListener(this);
this.add(myCanvas);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setVisible(true);
}
public void display(GLAutoDrawable drawable) {
GL4 gl = (GL4) GLContext.getCurrentGL();
drawable.setGL(new DebugGL4(gl));
float bkg[] = { 1.0f, 0.0f, 0.0f, 1.0f };
FloatBuffer bkgBuffer = Buffers.newDirectFloatBuffer(bkg);
gl.glClearBufferfv(GL_COLOR, 0, bkgBuffer);
}
public static void main(String[] args) {
new Code();
}
public void init(GLAutoDrawable drawable) {
GL4 gl = drawable.getGL().getGL4(); // This is where the code fails
String version = gl.glGetString(GL4.GL_VERSION);
String shaderversion = gl.glGetString(GL4.GL_SHADING_LANGUAGE_VERSION);
System.out.println("GLVERSION: " + version + " shading language: " + shaderversion );
}
public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) { }
public void dispose(GLAutoDrawable drawable) { }
}
Exception:
/Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home/bin/java -Didea.launcher.port=7535 "-Didea.launcher.bin.path=/Applications/IntelliJ IDEA.app/Contents/bin" -classpath /private/var/folders/rd/tltb7sk928x_n429dyctdt8c0000gn/T/classpath1.jar -Dfile.encoding=UTF-8 com.intellij.rt.execution.application.AppMain Code
Exception in thread "AWT-EventQueue-0" com.jogamp.opengl.GLException: Caught GLException: Not a GL4 implementation on thread AWT-EventQueue-0
at com.jogamp.opengl.GLException.newGLException(GLException.java:76)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1327)
at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:1147)
at com.jogamp.opengl.awt.GLCanvas$12.run(GLCanvas.java:1438)
at com.jogamp.opengl.Threading.invoke(Threading.java:223)
at com.jogamp.opengl.awt.GLCanvas.display(GLCanvas.java:505)
at com.jogamp.opengl.awt.GLCanvas.paint(GLCanvas.java:559)
at sun.awt.RepaintArea.paintComponent(RepaintArea.java:264)
at sun.lwawt.LWRepaintArea.paintComponent(LWRepaintArea.java:59)
at sun.awt.RepaintArea.paint(RepaintArea.java:240)
at sun.lwawt.LWComponentPeer.handleJavaPaintEvent(LWComponentPeer.java:1314)
at sun.lwawt.LWComponentPeer.handleEvent(LWComponentPeer.java:1198)
at java.awt.Component.dispatchEventImpl(Component.java:4965)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.awt.EventQueue$4.run(EventQueue.java:729)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: com.jogamp.opengl.GLException: Not a GL4 implementation
at jogamp.opengl.gl4.GL4bcImpl.getGL4(GL4bcImpl.java:40464)
at Code.init(Code.java:38)
at jogamp.opengl.GLDrawableHelper.init(GLDrawableHelper.java:644)
at jogamp.opengl.GLDrawableHelper.init(GLDrawableHelper.java:667)
at com.jogamp.opengl.awt.GLCanvas$10.run(GLCanvas.java:1407)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1291)
... 30 more
Exception in thread "AWT-EventQueue-0" com.jogamp.opengl.GLException: Caught GLException: Thread[AWT-EventQueue-0,6,main] glGetError() returned the following error codes after a call to glActiveTexture(<int> 0x84C0): GL_INVALID_OPERATION ( 1282 0x502), on thread AWT-EventQueue-0
at com.jogamp.opengl.GLException.newGLException(GLException.java:76)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1327)
at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:1147)
at com.jogamp.opengl.awt.GLCanvas$12.run(GLCanvas.java:1438)
at com.jogamp.opengl.Threading.invoke(Threading.java:223)
at com.jogamp.opengl.awt.GLCanvas.display(GLCanvas.java:505)
at com.jogamp.opengl.awt.GLCanvas.paint(GLCanvas.java:559)
at com.jogamp.opengl.awt.GLCanvas.update(GLCanvas.java:866)
at sun.awt.RepaintArea.updateComponent(RepaintArea.java:255)
at sun.lwawt.LWRepaintArea.updateComponent(LWRepaintArea.java:47)
at sun.awt.RepaintArea.paint(RepaintArea.java:232)
at sun.lwawt.LWComponentPeer.handleJavaPaintEvent(LWComponentPeer.java:1314)
at sun.lwawt.LWComponentPeer.handleEvent(LWComponentPeer.java:1198)
at java.awt.Component.dispatchEventImpl(Component.java:4965)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.awt.EventQueue$4.run(EventQueue.java:729)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: com.jogamp.opengl.GLException: Thread[AWT-EventQueue-0,6,main] glGetError() returned the following error codes after a call to glActiveTexture(<int> 0x84C0): GL_INVALID_OPERATION ( 1282 0x502),
at com.jogamp.opengl.DebugGL4bc.writeGLError(DebugGL4bc.java:31803)
at com.jogamp.opengl.DebugGL4bc.glActiveTexture(DebugGL4bc.java:232)
at jogamp.opengl.GLFBODrawableImpl.swapFBOImpl(GLFBODrawableImpl.java:471)
at jogamp.opengl.GLFBODrawableImpl.swapBuffersImpl(GLFBODrawableImpl.java:426)
at jogamp.opengl.GLDrawableImpl.swapBuffers(GLDrawableImpl.java:88)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1295)
... 31 more
Process finished with exit code 0
It turns out that OS X falls back to OpenGL 2.1 by default, so you need to request the core profile yourself. Here is what glxinfo reports:
$ glxinfo | grep OpenGL
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: AMD Radeon R9 M370X OpenGL Engine
OpenGL version string: 2.1 ATI-1.42.15
OpenGL shading language version string: 1.20
I was able to get a core profile (OpenGL 4.1) context by passing a GLCapabilities object into the GLCanvas constructor.
Here is the new, fixed constructor:
public Code() {
setTitle("Chapter 2 - program1");
setSize(600, 400);
setLocation(200, 200);
// This was the fix
GLProfile glp = GLProfile.getMaxProgrammableCore(true);
GLCapabilities caps = new GLCapabilities(glp);
myCanvas = new GLCanvas(caps);
myCanvas.addGLEventListener(this);
this.add(myCanvas);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setVisible(true);
}
For running the book's examples on a Mac, I have placed instructions on this website: http://athena.ecs.csus.edu/~gordonvs/errataMac.html
In summary, you need to:
make sure you've installed the latest Java SE
place the relevant JOGL libraries into System/Library/Java/Extensions (the particular ones required are listed on the website above)
add the code described above by Julien (thanks!)
change the version numbers in the shaders to 410 (or whatever your Mac supports)
in the examples that use textures, replace the binding layout qualifiers in the shaders with appropriate calls to glUniform1i() in the Java application (for compatibility with version 4.1)
If more idiosyncrasies are identified, I'll add them to the instructions in the website.
Related
Currently using Ubuntu 20.04 LTS, g++ 11.1.0, C++20, and SDL2.
Linker flags: -lvulkan -ldl -lSDL2main -lSDL2
Code:
#include "VkBootstrap.h"
#include <SDL2/SDL.h>
#include <SDL2/SDL_vulkan.h>
#include <vulkan/vulkan.hpp>
#include <iostream>
int main()
{
VkExtent2D windowExtent {1600, 900};
// Initialize SDL
if (SDL_Init(SDL_INIT_VIDEO))
{
std::cerr << "Unable to initialize SDL: " << SDL_GetError();
std::abort();
}
// Create a SDL Window
SDL_Window* window {SDL_CreateWindow(
"Vulkan",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
windowExtent.width,
windowExtent.height,
SDL_WINDOW_VULKAN | SDL_WINDOW_RESIZABLE
)};
// Check if window was created successfully
if (!window)
{
std::cerr << "Failed to create SDL Window: " << SDL_GetError();
std::abort();
}
vkb::InstanceBuilder instanceBuilder {};
// Initialize the Vulkan instance, with basic debug features
auto builderResult {instanceBuilder
.set_app_name("Vulkan Game")
.request_validation_layers(true)
.use_default_debug_messenger()
.require_api_version(1, 2, 0)
.build()};
vkb::Instance vkbInstance {builderResult.value()};
// Store the Instance
VkInstance instance {vkbInstance.instance};
// Store the Debug Messenger
VkDebugUtilsMessengerEXT debugMessenger {vkbInstance.debug_messenger};
VkSurfaceKHR surface {};
// Get a Vulkan Rendering Surface of the SDL Window
if (!SDL_Vulkan_CreateSurface(window, instance, &surface))
{
std::cerr << "Unable to Create Vulkan Rendering Surface.\n";
std::abort();
}
vkb::PhysicalDeviceSelector gpuSelector {vkbInstance};
// Initialize the Physical Device with a GPU that can render to the window
vkb::PhysicalDevice vkbPhysicalDevice {gpuSelector
.set_minimum_version(1, 2)
.set_surface(surface)
.select()
.value()};
// Store the Vulkan Physical Device
VkPhysicalDevice physicalDevice {vkbPhysicalDevice.physical_device};
// Surface Deletion
vkDestroySurfaceKHR(instance, surface, nullptr);
// Debug Messenger Deletion
vkb::destroy_debug_utils_messenger(instance, debugMessenger);
// Instance Deletion
vkDestroyInstance(instance, nullptr);
// Window Deletion
SDL_DestroyWindow(window);
}
Driver:
Following the tutorial at https://vkguide.dev/ and using the VkBootstrap library, when I try to select the physical device it produces memory leaks like the ones shown below:
=================================================================
==44934==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 576 byte(s) in 4 object(s) allocated from:
#0 0x7f8e9c905e17 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7f8e97a13a1d (/lib/x86_64-linux-gnu/libdrm.so.2+0x4a1d)
#2 0x31647261632e (<unknown module>)
Direct leak of 128 byte(s) in 4 object(s) allocated from:
#0 0x7f8e9c905c47 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x7f8e9b35a0c5 (/lib/x86_64-linux-gnu/libxcb.so.1+0xe0c5)
SUMMARY: AddressSanitizer: 704 byte(s) leaked in 8 allocation(s).
I ran a diagnostic on the available GPUs by modifying the VkBootstrap.cpp file to output data, and got this:
GPU: Intel(R) HD Graphics 530 (SKL GT2)
Api Version: 4202641
Device ID: 6418
Device Type: 1
Driver Version: 88080387
GPU: llvmpipe (LLVM 12.0.0, 256 bits)
Api Version: 4194306
Device ID: 0
Device Type: 4
Driver Version: 1
Selected GPU: Intel(R) HD Graphics 530 (SKL GT2)
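For reference, the diagnostic boils down to enumerating the instance's physical devices and printing their properties; a simplified standalone sketch (not the actual VkBootstrap.cpp edit, and listGpus is just a made-up helper name) looks like this:
#include <vulkan/vulkan.h>
#include <iostream>
#include <vector>
// Enumerate every physical device the instance can see and print the same
// fields as in the diagnostic output above.
void listGpus(VkInstance instance)
{
    uint32_t count {0};
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());
    for (VkPhysicalDevice device : devices)
    {
        VkPhysicalDeviceProperties props {};
        vkGetPhysicalDeviceProperties(device, &props);
        std::cout << "GPU: " << props.deviceName << '\n'
                  << "Api Version: " << props.apiVersion << '\n'
                  << "Device ID: " << props.deviceID << '\n'
                  << "Device Type: " << props.deviceType << '\n'
                  << "Driver Version: " << props.driverVersion << '\n';
    }
}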
Although I have both an Nvidia and an Intel graphics card, why isn't the Nvidia card being shown as a viable GPU?
Also, why are these memory leaks occurring? How can I resolve them?
I have an OpenGL application which I compiled in the past but now can't on the same machine. The problem seems to be that the fragment shader does not compile properly.
I'm using:
Glew 2.1.0
Glfw 3.2.1
Also, all the necessary context is created at the beginning of the program. Here's what my program creation function looks like:
std::string vSource, fSource;
try
{
vSource = getSource(vertexShader, "vert");
fSource = getSource(fragmentShader, "frag");
}
catch (std::runtime_error& e)
{
std::cout << e.what() << std::endl;
}
GLuint vsID, fsID;
try
{
vsID = compileShader(vSource.c_str(), GL_VERTEX_SHADER); //Source char* was checked and looking good
fsID = compileShader(fSource.c_str(), GL_FRAGMENT_SHADER);
}
catch (std::runtime_error& e)
{
std::cout << e.what() << std::endl; //incorrect glsl version 450 thrown here
}
GLuint programID;
try
{
programID = createProgram(vsID, fsID); //Debugging fails here
}
catch (std::runtime_error& e)
{
std::cout << e.what() << std::endl;
}
glDeleteShader(vsID);
glDeleteShader(fsID);
return programID;
My main:
/* ---------------------------- */
/* OPENGL CONTEXT SET WITH GLEW */
/* ---------------------------- */
static bool contextFlag = initializer::createContext(vmath::uvec2(1280, 720), "mWs", window);
std::thread* checkerThread = new std::thread(initializer::checkContext, contextFlag);
/* --------------------------------- */
/* STATIC STATE SINGLETON DEFINITION */
/* --------------------------------- */
Playing Playing::playingState; //Failing comes from here which tries to create a program
/* ---- */
/* MAIN */
/* ---- */
int main(int argc, char** argv)
{
checkerThread->join();
delete checkerThread;
Application* app = new Application();
...
return 0;
}
Here is an example of the fragmentShader file:
#version 450 core
out vec4 fColor;
void main()
{
fColor = vec4(0.5, 0.4, 0.8, 1.0);
}
And these are the errors I catch:
[Engine] Glew initialized! Using version: 2.1.0
[CheckerThread] Glew state flagged as correct! Proceeding to mainthread!
Error compiling shader: ERROR: 0:1: '' : incorrect GLSL version: 450
ERROR: 0:7: 'fColor' : undeclared identifier
ERROR: 0:7: 'assign' : cannot convert from 'const 4-component vector of float' to 'float'
My specs are the following:
Intel HD 4000
Nvidia GeForce 840M
I should note that I have compiled shaders on this same machine before; I can't anymore after a disk format, even though every driver is up to date.
As stated in the comments, the problem was which graphics card the IDE was running on. Since Windows defaults to the integrated Intel HD 4000, setting the NVIDIA card as the OS's preferred default fixed the problem.
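If you would rather force the dedicated GPU from the application itself instead of relying on the driver control panel setting, hybrid-graphics drivers on Windows also honour a pair of exported symbols; this is a sketch of that alternative, not part of the original fix:
// Windows-only: exported hints that ask hybrid-graphics drivers to run the
// application on the dedicated GPU. They must be exported by the .exe itself.
#include <windows.h>
extern "C"
{
    // NVIDIA Optimus: a nonzero value requests the high-performance GPU.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    // AMD switchable graphics: the equivalent hint for Radeon hybrid setups.
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}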
Both my local computer and my EC2 server are on Ubuntu 14.04. Suppose I am testing CUDA OpenGL interop code like the example below.
Test.cu
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cuda_gl_interop.h>
__global__ static void CUDAKernelTEST(float *data){
const int x = blockIdx.x * blockDim.x + threadIdx.x;
const int y = blockIdx.y * blockDim.y + threadIdx.y;
const int mx = gridDim.x * blockDim.x;
data[y * mx + x] = 0.5;
}
GLFWwindow *glfw_window_;
void Setup(){
if (!glfwInit()) exit(EXIT_FAILURE);
glfwWindowHint(GLFW_VISIBLE, GL_FALSE);
glfw_window_ = glfwCreateWindow(10, 10, "", NULL, NULL);
if (!glfw_window_) glfwTerminate();
glfwMakeContextCurrent(glfw_window_);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) exit(EXIT_FAILURE);
}
void TearDown(){
glfwDestroyWindow(glfw_window_);
glfwTerminate();
}
int main(){
Setup();
GLuint id;
glGenBuffers(1, &id);
glBindBuffer(GL_ARRAY_BUFFER, id);
glBufferData(GL_ARRAY_BUFFER, 3 * 24 * sizeof(GLfloat), 0, GL_STATIC_DRAW);
cudaGraphicsResource *vbo_res;
cudaGraphicsGLRegisterBuffer(&vbo_res, id, cudaGraphicsMapFlagsWriteDiscard);
cudaGraphicsMapResources(1, &vbo_res, 0);
float *test;
size_t size;
cudaGraphicsResourceGetMappedPointer(
reinterpret_cast<void **>(&test), &size, vbo_res);
dim3 blks(1, 1);
dim3 threads(72, 1);
CUDAKernelTEST<<<blks, threads>>>(test);
cudaDeviceSynchronize();
cudaGraphicsUnmapResources(1, &vbo_res, 0);
// do some more with OpenGL
std::cout << "you passed the test" << std::endl;
TearDown();
return 0;
}
The current approach is to create a hidden window and a context. The code compiles and runs fine on my local machine. However, glfwInit() returns GL_FALSE when run on EC2. If I log the messages sent to the error callback, it shows "X11: The DISPLAY environment variable is missing", which makes it look as if a display monitor needs to be connected for it to work.
I tried replacing the Setup and TearDown sections, swapping GLFW for SDL or GLX, and they return similar errors, seemingly also requiring an attached display monitor.
I also tried running the code with Xvfb and Xdummy, which are supposed to fake a monitor, but I got error messages: from Xvfb, "Xlib: extension "GLX" missing on display ":99"", and from Xdummy, "Fatal server error: (EE) no screens found(EE)".
I can't be the first one attempting to unit test OpenGL-related code on EC2, but I can't find any solutions after googling around. Please advise, thank you so much.
The DISPLAY variable has nothing to do with connected monitors. This environment variable tells X11 client programs which X11 server to talk to. On Linux and Unix systems the X11 server is the de facto standard graphics system and window multiplexer, and it is also the host of the GPU driver.
Since your program expects to talk to an X11 server, you must provide it with a server that has the necessary capabilities. In your case that means an Xorg server with support for the GLX protocol (so that OpenGL can be used) and, because you're using CUDA, one that hosts the NVidia driver. The only X11 server that can do that is the full-blown Xorg server with the nvidia driver loaded; Xvfb and Xdummy can do neither.
So if you really want to talk X11, you'll have to set up an Xorg server with the nvidia driver. Never mind that there are no displays connected; you can coax the driver into headless operation just fine (it may take some convincing, though).
However, there is now a better way: NVidia's latest driver release includes support for creating a fully headless, off-screen OpenGL context on the GPU, with full support for CUDA-OpenGL interop: http://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/
It boils down to creating the OpenGL context with EGL instead of X11/GLX, using a display device configured for headless operation by selecting the PBuffer framebuffer attribute. The essential code outline looks like this (taken directly from the NVidia code example):
#include <EGL/egl.h>
static const EGLint configAttribs[] = {
EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, // make this off-screen
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_DEPTH_SIZE, 8,
EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
EGL_NONE
};
static const int pbufferWidth = 9;
static const int pbufferHeight = 9;
static const EGLint pbufferAttribs[] = {
EGL_WIDTH, pbufferWidth,
EGL_HEIGHT, pbufferHeight,
EGL_NONE,
};
int main(int argc, char *argv[])
{
// 1. Initialize EGL
EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
EGLint major, minor;
eglInitialize(eglDpy, &major, &minor);
// 2. Select an appropriate configuration
EGLint numConfigs;
EGLConfig eglCfg;
eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);
// 3. Create a surface
EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg,
pbufferAttribs);
// 4. Bind the API
eglBindAPI(EGL_OPENGL_API);
// 5. Create a context and make it current
EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT,
NULL);
eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);
// from now on use your OpenGL context
// 6. Terminate EGL when finished
eglTerminate(eglDpy);
return 0;
}
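Once eglMakeCurrent() succeeds, the context behaves like any other OpenGL context. A quick sanity check (not part of the NVidia sample) is to print the implementation strings before doing real work:
// Right after eglMakeCurrent(): ask the driver which implementation is
// actually backing the headless context.
#include <GL/gl.h>
#include <cstdio>
static void printContextInfo()
{
    std::printf("GL_VENDOR:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
}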
@datenwolf: unfortunately, the NVidia example you provide above won't run without an X11 server running. AFAIK, libEGL-nvidia (on either Linux or BSD) is linked against libX11:
$ ldd libEGL-NVIDIA.so.1
/usr/X11R6/lib/libEGL-NVIDIA.so.1:
libthr.so.3 => /lib/libthr.so.3 (0x801302000)
librt.so.1 => /usr/lib/librt.so.1 (0x80152a000)
libm.so.5 => /lib/libm.so.5 (0x80172f000)
libc.so.7 => /lib/libc.so.7 (0x800824000)
libnvidia-glsi.so.1 => /usr/X11R6/lib/libnvidia-glsi.so.1 (0x80195a000)
libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x801bdf000)
libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x801f1f000)
libxcb.so.1 => /usr/X11R6/lib/libxcb.so.1 (0x802130000)
libXau.so.6 => /usr/X11R6/lib/libXau.so.6 (0x802356000)
libXdmcp.so.6 => /usr/X11R6/lib/libXdmcp.so.6 (0x802559000)
and there's no way to change this (nvidia provides its drivers already compiled).
So, if you compile the NVidia example like this (with either the ES or GL API):
$ gcc egltest.c -o egltest -lEGL
you will get this (the same whether you use GLESx or GL):
egltest:
libEGL.so.1 => /usr/X11R6/lib/libEGL-NVIDIA.so.1 (0x800823000)
libc.so.7 => /lib/libc.so.7 (0x800b25000)
libthr.so.3 => /lib/libthr.so.3 (0x800edd000)
librt.so.1 => /usr/lib/librt.so.1 (0x801105000)
libm.so.5 => /lib/libm.so.5 (0x80130a000)
libnvidia-glsi.so.1 => /usr/X11R6/lib/libnvidia-glsi.so.1 (0x801535000)
libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x8017ba000)
libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x801afa000)
libxcb.so.1 => /usr/X11R6/lib/libxcb.so.1 (0x801d0b000)
libXau.so.6 => /usr/X11R6/lib/libXau.so.6 (0x801f31000)
libXdmcp.so.6 => /usr/X11R6/lib/libXdmcp.so.6 (0x802134000)
Perhaps it would be more accurate to name NVidia's EGL library EGLX, because it uses X11 and cannot run without X.
Caveat: in your example, NVidia EGL can bind to the GL API (see the EGL_OPENGL_BIT attribute) only from the v355 drivers onward. With earlier versions you can bind to GLES only (i.e. use EGL_OPENGL_ESx_BIT instead of EGL_OPENGL_BIT).
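For those older drivers, the change relative to the example above is roughly this (untested sketch):
// Pre-v355 drivers: request an OpenGL ES renderable config instead of a
// desktop GL one, and bind the ES API.
#include <EGL/egl.h>
static const EGLint configAttribsES[] = {
    EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,   // instead of EGL_OPENGL_BIT
    EGL_NONE
};
// ...and later, instead of eglBindAPI(EGL_OPENGL_API):
// eglBindAPI(EGL_OPENGL_ES_API);
// (the context attributes then also need EGL_CONTEXT_CLIENT_VERSION, 2)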
The only distro I know of that can run a native window/drawable straight on the Linux console - meaning without any X server or Wayland running - is Raspbian for the RPi B, which ships the 'dispmanx' library providing easy access to the GPU/framebuffer through EGL (GLES2 API only).
B.R.
V.S.
I want to use Qt 4.8.6 to render OpenGL content with a QGLWidget. The machine I'm working on is a MacBook Pro with OS X 10.9.4.
The QGLWidget is created by passing a QGLFormat object requesting a version 3.2 core profile. The problem I am encountering is that the OpenGL version reported by the QGLContext remains 1.0, no matter what QGLFormat I specify.
After researching the topic I found the Qt OpenGL Core Profile Tutorial. However, the example source code reports the same OpenGL version 1.0 as before. Curiously, the call
qDebug() << "Widget OpenGl: " << format().majorVersion() << "." << format().minorVersion();
qDebug() << "Context valid: " << context()->isValid();
qDebug() << "Really used OpenGl: " << context()->format().majorVersion() << "." << context()->format().minorVersion();
qDebug() << "OpenGl information: VENDOR: " << (const char*)glGetString(GL_VENDOR);
qDebug() << " RENDERDER: " << (const char*)glGetString(GL_RENDERER);
qDebug() << " VERSION: " << (const char*)glGetString(GL_VERSION);
qDebug() << " GLSL VERSION: " << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
reported a version string of 2.1
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 2.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 1.20
Using the Cocoa code suggested in this OS X OpenGL context discussion from 2011, the output of the version numbers changed to
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 4.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 4.10
While the driver is now reporting the expected OpenGL version number, I am still only able to get a 1.0 QGLWidget context. The QGLFormat object that is passed to the QGLWidget constructor is set up using
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);
I am somewhat at a loss as to why I am still only getting a version 1.0 context. Even without the Cocoa-generated OpenGL context it should be possible to increase the context version to 2.1, but it remains fixed at 1.0 regardless of the QGLFormat passed to the constructor.
Any pointers as to why the QGLWidget context remains at version 1.0 are very much appreciated.
Update 1
Further experimentation showed that the code returns the requested OpenGL version on a Ubuntu 13.04 Linux. The issue seems to be specific to OS X.
Update 2
I built a minimal (non-)working example:
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>
#include <QtCore/QDebug>
int main(int argc, char **argv) {
QApplication app(argc, argv);
QGLFormat fmt = QGLFormat::defaultFormat();
fmt.setVersion(3,2);
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setSampleBuffers(true);
QGLWidget c(fmt);
c.show();
qDebug() << c.context()->requestedFormat();
qDebug() << c.context()->format();
return app.exec();
}
which can be built on Ubuntu using
g++ main.cpp -I/usr/include/qt4 -lQtGui -lQtCore -lQtOpenGL -lGL -o test
or under OS X
g++ main.cpp -framework OpenGL -framework QtGui -framework QtCore -framework QtOpenGL -o test
It prints two lines of QGLFormat debug output. The first is the requested format and the second line is the actual context format. Both are supposed to show a major.minor version number of 3.2. It seems to be working under Ubuntu Linux, but fails when using OS X.
Update 3
Fun times. It might be a bug in Qt 4.8.6, since the issue does not occur when compiling the example against Qt 5.3.1. A bug report has been filed.
Can someone else verify this behaviour?
Yes. That's platform-specific. Please find the solution here.
Override QGLContext::chooseMacVisual to specify platform-specific initialization.
CustomGLContext.hpp:
#ifdef Q_WS_MAC
void* select_3_2_mac_visual(GDHandle handle);
#endif // Q_WS_MAC
class CustomGLContext : public QGLContext {
...
#ifdef Q_WS_MAC
void* chooseMacVisual(GDHandle handle) override {
return select_3_2_mac_visual(handle); // call cocoa code
}
#endif // Q_WS_MAC
};
gl_mac_specific.mm:
void* select_3_2_mac_visual(GDHandle handle)
{
static const int Max = 40;
NSOpenGLPixelFormatAttribute attribs[Max];
int cnt = 0;
attribs[cnt++] = NSOpenGLPFAOpenGLProfile;
attribs[cnt++] = NSOpenGLProfileVersion3_2Core;
attribs[cnt++] = NSOpenGLPFADoubleBuffer;
attribs[cnt++] = NSOpenGLPFADepthSize;
attribs[cnt++] = (NSOpenGLPixelFormatAttribute)16;
attribs[cnt] = 0;
Q_ASSERT(cnt < Max);
return [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
}
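A rough usage sketch, assuming CustomGLContext has a constructor that forwards the QGLFormat to QGLContext (elided by the '...' above): hand the custom context to the QGLWidget constructor that accepts a QGLContext*.
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include "CustomGLContext.hpp"
// Sketch: request 3.2 core and bind the custom context to the widget.
QGLWidget* createCoreProfileWidget()
{
    QGLFormat fmt;
    fmt.setVersion(3, 2);
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setSampleBuffers(true);
    // CustomGLContext is assumed to pass fmt on to QGLContext's constructor,
    // so chooseMacVisual gets called with the 3.2 core request.
    return new QGLWidget(new CustomGLContext(fmt));
}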
I am finding that QGLShaderProgram is consistently failing to compile any shader and providing no error log. Here are the symptoms:
QGLShaderProgram reports that it failed to compile but produces an empty error log. If I try to bind the shader, an exception is thrown.
I can compile a shader using glCompileShader without problems. However, the first time I try to compile this way after QGLShaderProgram has failed, it fails with this error log:
ERROR: error(#270) Internal error: Wrong symbol table level
ERROR: 0:2: error(#232) Function declarations cannot occur inside of functions:
main
ERROR: error(#273) 2 compilation errors. No code generated
Following that one failure, the next attempt to compile using glCompileShader works fine.
The problem has arisen only since upgrading from Qt 4.8 to 5.2. Nothing else has changed on this machine.
I have tested on two PCs, one with an ATI Radeon HD 5700, the other with an AMD FirePro V7900. The problem only appears on the Radeon PC.
Here is my test code demonstrating the problem:
main.cpp
#include <QApplication>
#include "Test.h"
int main(int argc, char* argv[])
{
QApplication* app = new QApplication(argc, argv);
Drawer* drawer = new Drawer;
return app->exec();
}
Test.h
#pragma once
#include <qobject>
#include <QTimer>
#include <QWindow>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
class Drawer : public QWindow, protected QOpenGLFunctions
{
Q_OBJECT;
public:
Drawer();
QTimer* mTimer;
QOpenGLContext* mContext;
int frame;
public Q_SLOTS:
void draw();
};
Test.cpp
#include "Test.h"
#include <QGLShaderProgram>
#include <iostream>
#include <ostream>
using namespace std;
Drawer::Drawer()
: mTimer(new QTimer)
, mContext(new QOpenGLContext)
, frame(0)
{
mContext->create();
setSurfaceType(OpenGLSurface);
mTimer->setInterval(40);
connect(mTimer, SIGNAL(timeout()), this, SLOT(draw()));
mTimer->start();
show();
}
const char* vertex = "#version 110 \n void main() { gl_Position = gl_Vertex; }";
const char* fragment = "#version 110 \n void main() { gl_FragColor = vec4(0.0,0.0,0.0,0.0); }";
void Drawer::draw()
{
mContext->makeCurrent(this);
if (frame==0) {
initializeOpenGLFunctions();
}
// Compile using QGLShaderProgram. This always fails
if (frame < 5)
{
QGLShaderProgram* prog = new QGLShaderProgram;
bool f = prog->addShaderFromSourceCode(QGLShader::Fragment, fragment);
cout << "fragment "<<f<<endl;
bool v = prog->addShaderFromSourceCode(QGLShader::Vertex, vertex);
cout << "vertex "<<v<<endl;
bool link = prog->link();
cout << "link "<<link<<endl;
}
// Manual compile using OpenGL direct. This works except for the first time it
// follows the above block
{
GLuint prog = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(prog, 1, &fragment, 0);
glCompileShader(prog);
GLint success = 0;
glGetShaderiv(prog, GL_COMPILE_STATUS, &success);
GLint logSize = 0;
glGetShaderiv(prog, GL_INFO_LOG_LENGTH, &logSize);
GLchar* log = new char[8192];
glGetShaderInfoLog(prog, 8192, 0, log);
cout << "manual compile " << success << endl << log << endl;
delete[] log;
}
glClearColor(1,1,0,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
mContext->swapBuffers(this);
frame++;
}
Elsewhere, I have tested using QGLWidget, and on a project that uses GLEW instead of QOpenGLFunctions, with exactly the same results.
The version of Qt I'm linking against was built with the following configuration:
configure -developer-build -opensource -nomake examples -nomake tests -mp -opengl desktop -icu -confirm-license
Any suggestions? Or shall I just send this in as a bug report?
Update
In response to peppe's comments:
1) What does QOpenGLDebugLogger say?
The only thing I can get from QOpenGLDebugLogger is
QWindowsGLContext::getProcAddress: Unable to resolve 'glGetPointerv'
This is printed when I initialize it (and not as a debug event firing, but just to console). It happens even though mContext->hasExtension(QByteArrayLiteral("GL_KHR_debug")) returns true and I'm initializing it within the first frame's draw() function.
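For reference, the logger setup follows the standard pattern, roughly like this sketch (not the exact code from the project):
// Inside the first draw() call, once the context is current:
#include <QOpenGLDebugLogger>
#include <QOpenGLDebugMessage>
#include <QtCore/QDebug>
QOpenGLDebugLogger* logger = new QOpenGLDebugLogger(this);
if (logger->initialize())   // requires GL_KHR_debug on the current context
{
    connect(logger, &QOpenGLDebugLogger::messageLogged,
            [](const QOpenGLDebugMessage& msg) { qDebug() << msg; });
    logger->startLogging(QOpenGLDebugLogger::SynchronousLogging);
}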
2) Can you print the compile log of the QOGLShaders even if they compile successfully?
I cannot successfully compile QOpenGLShader or QGLShader at any point so I'm not able to test this. However, when compiling successfully using plain GL functions, the log returns blank.
3) Which GL version did you get from the context? (Check with QSurfaceFormat).
I've tried with versions 3.0, 3.2, 4.2, all with the same result.
4) Please set the same QSurfaceFormat on both the context and the window before creating them
5) Remember to create() the window
I've implemented both of these now and the result is the same.
I've just tested on a third PC and that has no issues. So it is this specific computer, which, incidentally, happens to be a Mac Pro running Windows in Boot Camp. It has had absolutely no trouble in any other context running the latest ATI drivers, but I can only conclude that there is a bug somewhere between the ATI drivers, this computer's graphics chip, and QOpenGLShaderProgram.
I think I'm unlikely to find a solution, so giving up. Thank you for all your input!