Currently using Ubuntu 20.04 LTS, g++ 11.1.0, C++20, SDL2
Linker flags: -lvulkan -ldl -lSDL2main -lSDL2
Code:
#include "VkBootstrap.h"
#include <SDL2/SDL.h>
#include <SDL2/SDL_vulkan.h>
#include <vulkan/vulkan.hpp>
#include <iostream>
int main()
{
    VkExtent2D windowExtent {1600, 900};

    // Initialize SDL
    if (SDL_Init(SDL_INIT_VIDEO))
    {
        std::cerr << "Unable to initialize SDL: " << SDL_GetError();
        std::abort();
    }

    // Create an SDL window
    SDL_Window* window {SDL_CreateWindow(
        "Vulkan",
        SDL_WINDOWPOS_UNDEFINED,
        SDL_WINDOWPOS_UNDEFINED,
        windowExtent.width,
        windowExtent.height,
        SDL_WINDOW_VULKAN | SDL_WINDOW_RESIZABLE
    )};

    // Check if the window was created successfully
    if (!window)
    {
        std::cerr << "Failed to create SDL Window: " << SDL_GetError();
        std::abort();
    }

    vkb::InstanceBuilder instanceBuilder {};

    // Initialize the Vulkan instance, with basic debug features
    auto builderResult {instanceBuilder
        .set_app_name("Vulkan Game")
        .request_validation_layers(true)
        .use_default_debug_messenger()
        .require_api_version(1, 2, 0)
        .build()};

    vkb::Instance vkbInstance {builderResult.value()};

    // Store the Instance
    VkInstance instance {vkbInstance.instance};

    // Store the Debug Messenger
    VkDebugUtilsMessengerEXT debugMessenger {vkbInstance.debug_messenger};

    VkSurfaceKHR surface {};

    // Get a Vulkan rendering surface for the SDL window
    if (!SDL_Vulkan_CreateSurface(window, instance, &surface))
    {
        std::cerr << "Unable to Create Vulkan Rendering Surface.\n";
        std::abort();
    }

    vkb::PhysicalDeviceSelector gpuSelector {vkbInstance};

    // Initialize the physical device with a GPU that can render to the window
    vkb::PhysicalDevice vkbPhysicalDevice {gpuSelector
        .set_minimum_version(1, 2)
        .set_surface(surface)
        .select()
        .value()};

    // Store the Vulkan physical device
    VkPhysicalDevice physicalDevice {vkbPhysicalDevice.physical_device};

    // Surface deletion
    vkDestroySurfaceKHR(instance, surface, nullptr);

    // Debug messenger deletion
    vkb::destroy_debug_utils_messenger(instance, debugMessenger);

    // Instance deletion
    vkDestroyInstance(instance, nullptr);

    // Window deletion
    SDL_DestroyWindow(window);
}
Following the tutorial at https://vkguide.dev/ and using the VkBootstrap library, when I try to select the physical device, LeakSanitizer reports memory leaks like the ones shown below:
=================================================================
==44934==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 576 byte(s) in 4 object(s) allocated from:
#0 0x7f8e9c905e17 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7f8e97a13a1d (/lib/x86_64-linux-gnu/libdrm.so.2+0x4a1d)
#2 0x31647261632e (<unknown module>)
Direct leak of 128 byte(s) in 4 object(s) allocated from:
#0 0x7f8e9c905c47 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x7f8e9b35a0c5 (/lib/x86_64-linux-gnu/libxcb.so.1+0xe0c5)
SUMMARY: AddressSanitizer: 704 byte(s) leaked in 8 allocation(s).
I ran a diagnostic on the available GPUs by modifying the VkBootstrap.cpp file to output data, and got this:
GPU: Intel(R) HD Graphics 530 (SKL GT2)
Api Version: 4202641
Device ID: 6418
Device Type: 1
Driver Version: 88080387
GPU: llvmpipe (LLVM 12.0.0, 256 bits)
Api Version: 4194306
Device ID: 0
Device Type: 4
Driver Version: 1
Selected GPU: Intel(R) HD Graphics 530 (SKL GT2)
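For reference, the same per-GPU information can be printed without modifying VkBootstrap.cpp by querying the Vulkan API directly; a minimal sketch (the printGpus helper is illustrative), assuming the instance created above:
#include <vulkan/vulkan.h>
#include <iostream>
#include <vector>
// Prints name, API version and device type for every physical device the instance can see.
void printGpus(VkInstance instance)
{
    uint32_t count {0};
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());
    for (VkPhysicalDevice gpu : gpus)
    {
        VkPhysicalDeviceProperties props {};
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::cout << "GPU: " << props.deviceName << '\n'
                  << "Api Version: " << VK_VERSION_MAJOR(props.apiVersion) << '.'
                  << VK_VERSION_MINOR(props.apiVersion) << '.'
                  << VK_VERSION_PATCH(props.apiVersion) << '\n'
                  << "Device Type: " << props.deviceType << '\n';
    }
}
Only GPUs whose Vulkan ICD is installed and loadable show up in this enumeration, which is worth keeping in mind when a card is missing from the list.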
I have both an Nvidia and an Intel graphics card, so why isn't the Nvidia card showing up as a viable GPU?
Also, why are these memory leaks occurring, and how can I resolve them?
Related
I developed an app to push a live stream with ffmpeg. When I checked the app with leaks --atExit -- <the app> (I'm on macOS), I found a memory leak involving AVFormatContext.
The minimized code is provided below:
#include <iostream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
}
void foo() {
    avdevice_register_all();
    AVFormatContext *avInputFormatContext = avformat_alloc_context();
    AVInputFormat *avInputFormat = av_find_input_format("avfoundation");
    std::cout << "open input" << std::endl;
    int ret = avformat_open_input(&avInputFormatContext, "Capture screen 0", avInputFormat, nullptr);
    if (ret < 0) { std::cout << "open input failed: " << ret << std::endl; return; }
    avformat_close_input(&avInputFormatContext);
}

int main() {
    foo();
    return 0;
}
The output is
Process: ffmpegtest [87726]
Path: /Users/USER/*/ffmpegtest
Load Address: 0x10a752000
Identifier: ffmpegtest
Version: ???
Code Type: X86-64
Platform: macOS
Parent Process: leaks [87725]
Date/Time: 2021-01-20 15:44:57.533 +0800
Launch Time: 2021-01-20 15:44:55.760 +0800
OS Version: macOS 11.1 (20C69)
Report Version: 7
Analysis Tool: /Applications/Xcode.app/Contents/Developer/usr/bin/leaks
Analysis Tool Version: Xcode 12.3 (12C33)
Physical footprint: 9.9M
Physical footprint (peak): 10.6M
----
leaks Report Version: 4.0
Process 87726: 14143 nodes malloced for 2638 KB
Process 87726: 1 leak for 32 total leaked bytes.
1 (32 bytes) ROOT LEAK: 0x7f8c61e1b040 [32] length: 16 "Capture screen 0"
Did I miss something?
I'm trying to write an OpenCL wrapper in C++.
Yesterday I was working on my Windows 10 machine (NVIDIA GTX 970 Ti, latest NVIDIA GeForce drivers, I believe) and my code worked flawlessly.
Today I'm trying it out on my laptop (Arch Linux, AMD Radeon R7 M265, Mesa 17.3.3) and I get a segfault when trying to create a command queue.
Here's the GDB backtrace:
#0 0x00007f361119db80 in ?? () from /usr/lib/libMesaOpenCL.so.1
#1 0x00007f36125dacb1 in clCreateCommandQueueWithProperties () from /usr/lib/libOpenCL.so.1
#2 0x0000557b2877dfec in OpenCL::createCommandQueue (ctx=..., dev=..., outOfOrderExec=false, profiling=false) at /home/***/OpenCL/Util.cpp:296
#3 0x0000557b2876f0cf in main (argc=1, argv=0x7ffd04fcdac8) at /home/***/main.cpp:27
#4 0x00007f361194cf4a in __libc_start_main () from /usr/lib/libc.so.6
#5 0x0000557b2876ecfa in _start ()
(I've censored part of the paths)
Here's the code that's producing the error:
CommandQueue createCommandQueue(Context ctx, Device dev, bool outOfOrderExec, bool profiling) noexcept
{
    cl_command_queue_properties props[3] = {CL_QUEUE_PROPERTIES, 0, 0};
    if (outOfOrderExec)
    {
        props[1] |= CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE;
    }
    if (profiling)
    {
        props[1] |= CL_QUEUE_PROFILING_ENABLE;
    }
    int error = CL_SUCCESS;
    cl_command_queue queue = clCreateCommandQueueWithProperties(ctx.get(), dev.get(), props, &error);
    if (error != CL_SUCCESS)
    {
        std::cerr << "Error while creating command queue: " << OpenCL::getErrorString(error) << std::endl;
    }
    CommandQueue commQueue = CommandQueue(queue);
    Session::get().registerQueue(commQueue);
    return commQueue;
}
The line with clCreateCommandQueueWithProperties is where the segfault happens.
Context is a wrapper class for a cl_context; Context::get() returns the underlying cl_context:
class Context
{
private:
    ...
    cl_context context;
public:
    ...
    cl_context get() const noexcept;
    ...
};
Device is a wrapper for a cl_device_id; Device::get() likewise returns the underlying cl_device_id:
class Device
{
private:
    ...
    cl_device_type type;
    cl_device_id id;
public:
    ...
    cl_device_id get() const noexcept;
    cl_device_type getType() const noexcept;
    ...
};
Here's the main function:
int main(int argc, char* argv[])
{
    OpenCL::Session::get().init();
    for (const std::string& deviceAddress : OpenCL::Session::get().getAddresses())
    {
        std::cout << "[" << deviceAddress << "]: " << OpenCL::Session::get().getDevice(deviceAddress);
    }
    OpenCL::Context ctx = OpenCL::getContext();
    std::cout << "OpenCL version: " << ctx.getVersionString() << std::endl;
    OpenCL::Kernel kernel = OpenCL::createKernel(OpenCL::createProgram("src/Kernels/Hello.cl", ctx), "SAXPY");
    OpenCL::CommandQueue queue = OpenCL::createCommandQueue(ctx, OpenCL::Session::get().getDevice(ctx.getAssociatedDevices()[0]));
    unsigned int testDataSize = 1 << 13;
    std::vector<float> a = std::vector<float>(testDataSize);
    std::vector<float> b = std::vector<float>(testDataSize);
    for (int i = 0; i < testDataSize; i++)
    {
        a[i] = static_cast<float>(i);
        b[i] = 0.0;
    }
    OpenCL::Buffer aBuffer = OpenCL::allocateBuffer(ctx, a.data(), sizeof(float), a.size());
    OpenCL::Buffer bBuffer = OpenCL::allocateBuffer(ctx, b.data(), sizeof(float), b.size());
    kernel.setArgument(0, aBuffer);
    kernel.setArgument(1, bBuffer);
    kernel.setArgument(2, 2.0f);
    OpenCL::Event saxpy_event = queue.enqueue(kernel, {testDataSize});
    OpenCL::Event read_event = queue.read(bBuffer, b.data(), bBuffer.size());
    std::cout << "SAXPY kernel took " << saxpy_event.getRunTime() << "ns to complete." << std::endl;
    std::cout << "Read took " << read_event.getRunTime() << "ns to complete." << std::endl;
    OpenCL::Session::get().cleanup();
    return 0;
}
(Profiling won't work because I've disabled it, thinking it was the cause of the problem; re-enabling it doesn't fix the issue, however.)
I'm using CLion, so here's a screenshot of my debugging window:
Finally here's the console output of the program:
/home/***/cmake-build-debug/Main
[gpu0:0]: AMD - AMD OLAND (DRM 2.50.0 / 4.14.15-1-ARCH, LLVM 5.0.1): 6 compute units # 825MHz
OpenCL version: OpenCL 1.1 Mesa 17.3.3
Signal: SIGSEGV (Segmentation fault)
The context and device objects all seem to have been created without any issues so I really have no idea what's causing the segfault.
Is it possible that I've found a bug in the Mesa driver, or am I missing something obvious?
Edit: This person seems to have had a similar problem; unfortunately, theirs was just a C-style forgot-to-allocate-memory problem.
2nd Edit: I may have found a possible cause: CMake is finding, using, and linking against OpenCL 2.0, while my GPU only supports OpenCL 1.1. I'll look into this.
I haven't found a way to roll back to OpenCL 1.1 on Arch Linux, but clinfo seems to be working fine and so is blender (which depends on OpenCL), so I don't think this is the problem.
Here's the output from clinfo:
Number of platforms 1
Platform Name Clover
Platform Vendor Mesa
Platform Version OpenCL 1.1 Mesa 17.3.3
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd
Platform Extensions function suffix MESA
Platform Name Clover
Number of devices 1
Device Name AMD OLAND (DRM 2.50.0 / 4.14.15-1-ARCH, LLVM 5.0.1)
Device Vendor AMD
Device Vendor ID 0x1002
Device Version OpenCL 1.1 Mesa 17.3.3
Driver Version 17.3.3
Device OpenCL C Version OpenCL C 1.1
Device Type GPU
Device Available Yes
Device Profile FULL_PROFILE
Max compute units 6
Max clock frequency 825MHz
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Compiler Available Yes
Preferred work group size multiple 64
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 2 / 2
half 8 / 8 (cl_khr_fp16)
float 4 / 4
double 2 / 2 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals No
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 2147483648 (2GiB)
Error Correction support No
Max memory allocation 1503238553 (1.4GiB)
Unified memory for Host and Device No
Minimum alignment for any data type 128 bytes
Alignment of base address 32768 bits (4096 bytes)
Global Memory cache type None
Image support No
Local memory type Local
Local memory size 32768 (32KiB)
Max constant buffer size 1503238553 (1.4GiB)
Max number of constant args 16
Max size of kernel argument 1024
Queue properties
Out-of-order execution No
Profiling Yes
Profiling timer resolution 0ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Device Extensions cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp64 cl_khr_fp16
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) Clover
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [MESA]
clCreateContext(NULL, ...) [default] Success [MESA]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) Success (1)
Platform Name Clover
Device Name AMD OLAND (DRM 2.50.0 / 4.14.15-1-ARCH, LLVM 5.0.1)
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name Clover
Device Name AMD OLAND (DRM 2.50.0 / 4.14.15-1-ARCH, LLVM 5.0.1)
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name Clover
Device Name AMD OLAND (DRM 2.50.0 / 4.14.15-1-ARCH, LLVM 5.0.1)
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.12
ICD loader Profile OpenCL 2.2
3rd Edit: I've just run the code on my NVIDIA machine and it works without issue; this is what the console shows:
[gpu0:0]: NVIDIA Corporation - GeForce GTX 970: 13 compute units # 1253MHz
OpenCL version: OpenCL 1.2 CUDA 9.1.75
SAXPY kernel took 2368149686ns to complete.
Read took 2368158390ns to complete.
I've also fixed the 2 things Andreas mentioned
clCreateCommandQueueWithProperties was added in OpenCL 2.0. You should not use it with platforms and devices below version 2.0 (such as the 1.1 and 1.2 shown in your logs).
clCreateCommandQueue is deprecated as of OpenCL 2.0, but deprecated only means it may be removed in a future version; it is still the call to use on pre-2.0 platforms.
In other words, you can create the queue either with or without properties, depending on the version the platform reports.
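A minimal sketch of that idea (the createQueueForDevice helper and the version-string check are illustrative, not part of the asker's wrapper):
#define CL_TARGET_OPENCL_VERSION 200      // expose the 2.0 API in the headers
#define CL_USE_DEPRECATED_OPENCL_1_2_APIS // silence the deprecation warning for clCreateCommandQueue
#include <CL/cl.h>
#include <cstring>
// Picks the queue-creation entry point based on the device's reported OpenCL version.
cl_command_queue createQueueForDevice(cl_context ctx, cl_device_id dev, cl_int* err)
{
    char version[64] = {};
    clGetDeviceInfo(dev, CL_DEVICE_VERSION, sizeof(version), version, nullptr);
    // CL_DEVICE_VERSION has the form "OpenCL <major>.<minor> <vendor-specific info>"
    const bool atLeast20 = std::strncmp(version, "OpenCL 2.", 9) == 0 ||
                           std::strncmp(version, "OpenCL 3.", 9) == 0;
    if (atLeast20)
    {
        const cl_queue_properties props[] = {CL_QUEUE_PROPERTIES, 0, 0};
        return clCreateCommandQueueWithProperties(ctx, dev, props, err);
    }
    // Pre-2.0 platforms (such as Clover's OpenCL 1.1): use the older entry point.
    return clCreateCommandQueue(ctx, dev, 0, err);
}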
I get this error when I try to execute my program:
libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
My code (I took it from the "OpenGL Development Cookbook"):
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>
const int WIDTH = 640;
const int HEIGHT = 480;
void OnInit()
{
    glClearColor(1, 0, 0, 0);
    std::cout << "Initialization successful" << std::endl;
}

void OnShutdown()
{
    std::cout << "Shutdown successful" << std::endl;
}

void OnResize(int nw, int nh)
{
}

void OnRender()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 3);
    glutInitContextFlags(GLUT_CORE_PROFILE | GLUT_DEBUG);
    glutInitContextProfile(GLUT_FORWARD_COMPATIBLE);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("OpenGL");
    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if (GLEW_OK != err) { std::cerr << "Error: " << glewGetErrorString(err) << std::endl; }
    else { if (GLEW_VERSION_3_3) { std::cout << "Driver supports OpenGL 3.3\n Details: " << std::endl; } }
    std::cout << "\tUsing glew: " << glewGetString(GLEW_VERSION) << std::endl;
    std::cout << "\tVendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cout << "\tRenderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cout << "\tGLSL: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
    OnInit();
    glutCloseFunc(OnShutdown);
    glutDisplayFunc(OnRender);
    glutReshapeFunc(OnResize);
    glutMainLoop();
    return 0;
}
I verified that my driver supports the OpenGL version I am using with the glxinfo | grep "OpenGL" command:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.5.9
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.5.9
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
I am using Ubuntu 14.04.3.
I'm not sure, but I think I get this error because I am using Intel graphics and not Nvidia.
It's hard to tell from a distance, but the errors you have there look like a damaged OpenGL client library installation. glxinfo queries the GLX driver loaded into the Xorg server, which is somewhat independent from the installed libGL (as long as only indirect rendering calls are made). The errors indicate that the installed libGL either doesn't match the DRI drivers or the DRI libraries are damaged.
Either way, the best course of action is to do a clean reinstall of everything related to OpenGL on your system. I.e. do a forced reinstall of xorg-server, xf86-video-…, mesa, libdri… and so on.
I faced a very similar error:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
Removing the following line solved it:
glutInitContextVersion(3, 3);
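For reference, a minimal sketch of how a 3.3 core context is normally requested with freeglut (note that in the question's code the arguments to glutInitContextFlags and glutInitContextProfile appear to be swapped):
#include <GL/freeglut.h>
static void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 3);                               // request 3.3 ...
    glutInitContextProfile(GLUT_CORE_PROFILE);                  // ... with the core profile
    glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG); // flags go here, not the profile
    glutInitWindowSize(640, 480);
    glutCreateWindow("OpenGL");
    glutDisplayFunc(display); // freeglut aborts if no display callback is registered
    glutMainLoop();
    return 0;
}
Whether the driver actually grants the 3.3 context still depends on the GLX/driver stack, which is what the GLXBadFBConfig error in the question points at.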
Both my local computer and my EC2 server are on Ubuntu 14.04. Suppose I am testing CUDA-OpenGL interop code like the example below.
Test.cu
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cuda_gl_interop.h>
__global__ static void CUDAKernelTEST(float *data){
    const int x = blockIdx.x * blockDim.x + threadIdx.x;
    const int y = blockIdx.y * blockDim.y + threadIdx.y;
    const int mx = gridDim.x * blockDim.x;
    data[y * mx + x] = 0.5;
}

GLFWwindow *glfw_window_;

void Setup(){
    if (!glfwInit()) exit(EXIT_FAILURE);
    glfwWindowHint(GLFW_VISIBLE, GL_FALSE);
    glfw_window_ = glfwCreateWindow(10, 10, "", NULL, NULL);
    if (!glfw_window_) glfwTerminate();
    glfwMakeContextCurrent(glfw_window_);
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) exit(EXIT_FAILURE);
}

void TearDown(){
    glfwDestroyWindow(glfw_window_);
    glfwTerminate();
}

int main(){
    Setup();

    GLuint id;
    glGenBuffers(1, &id);
    glBindBuffer(GL_ARRAY_BUFFER, id);
    glBufferData(GL_ARRAY_BUFFER, 3 * 24 * sizeof(GLfloat), 0, GL_STATIC_DRAW);

    cudaGraphicsResource *vbo_res;
    cudaGraphicsGLRegisterBuffer(&vbo_res, id, cudaGraphicsMapFlagsWriteDiscard);
    cudaGraphicsMapResources(1, &vbo_res, 0);

    float *test;
    size_t size;
    cudaGraphicsResourceGetMappedPointer(
        reinterpret_cast<void **>(&test), &size, vbo_res);

    dim3 blks(1, 1);
    dim3 threads(72, 1);
    CUDAKernelTEST<<<blks, threads>>>(test);
    cudaDeviceSynchronize();

    cudaGraphicsUnmapResources(1, &vbo_res, 0);

    // do some more with OpenGL

    std::cout << "you passed the test" << std::endl;
    TearDown();
    return 0;
}
The current approach is to create a hidden window and context. The code compiles and runs fine on my local machine. However, glfwInit() returns GL_FALSE when run on EC2. If I log the messages sent to the error callback, it shows "X11: The DISPLAY environment variable is missing", which looks like it needs a display monitor to be connected in order to work.
I tried replacing the Setup and TearDown sections, swapping GLFW for SDL or GLX, and I get similar errors, seemingly also requiring an attached display monitor.
I also tried running the code with Xvfb and Xdummy, which are supposed to fake a monitor, but I got the error message "Xlib: extension "GLX" missing on display ":99"" from Xvfb, and "Fatal server error: (EE) no screens found(EE)" from Xdummy.
I can't be the first one attempting to unit test OpenGL-related code on EC2, but I can't find any solutions after googling around. Please advise, thank you so much.
The DISPLAY variable has nothing to do with connected monitors. This environment variable tells X11 client programs which X11 server to talk to. In Linux and Unix systems the X11 server is the de-facto standard graphics system and window multiplexer. It is also the host to the GPU driver.
With your program expecting to talk to an X11 server, you must provide it with a server that has the necessary capabilities. In your case that means an Xorg server with support for the GLX protocol (so that OpenGL can be used) and, because you're using CUDA, it should host the NVidia driver. The only X11 server that can do that is the full-blown Xorg server with the nvidia driver loaded. Xvfb and Xdummy can do neither.
So if you really want to talk X11, then you'll have to set up an Xorg server with the nvidia driver. Never mind that there are no displays connected; you can coax the driver into headless operation just fine (it may take some convincing, though).
However, since recently there's a better way: NVidia's latest driver release includes support for creating a fully headless, off-screen OpenGL context on the GPU with full support for CUDA-OpenGL interop: http://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/
It boils down to creating the OpenGL context with EGL instead of X11/GLX, using a display device configured for headless operation by selecting the PBuffer framebuffer attribute. The essential code outline looks like this (taken directly from the NVidia code example):
#include <EGL/egl.h>
static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, // make this off-screen
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth = 9;
static const int pbufferHeight = 9;

static const EGLint pbufferAttribs[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    // 1. Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    // 2. Select an appropriate configuration
    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // 3. Create a surface
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg, pbufferAttribs);

    // 4. Bind the API
    eglBindAPI(EGL_OPENGL_API);

    // 5. Create a context and make it current
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    // from now on use your OpenGL context

    // 6. Terminate EGL when finished
    eglTerminate(eglDpy);
    return 0;
}
@datenwolf: unfortunately, the NVidia example you provide above won't run without an X11 server running. AFAIK, libEGL-nvidia (on either Linux or BSD) is linked against libX11:
$ ldd libEGL-NVIDIA.so.1
/usr/X11R6/lib/libEGL-NVIDIA.so.1:
libthr.so.3 => /lib/libthr.so.3 (0x801302000)
librt.so.1 => /usr/lib/librt.so.1 (0x80152a000)
libm.so.5 => /lib/libm.so.5 (0x80172f000)
libc.so.7 => /lib/libc.so.7 (0x800824000)
libnvidia-glsi.so.1 => /usr/X11R6/lib/libnvidia-glsi.so.1 (0x80195a000)
libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x801bdf000)
libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x801f1f000)
libxcb.so.1 => /usr/X11R6/lib/libxcb.so.1 (0x802130000)
libXau.so.6 => /usr/X11R6/lib/libXau.so.6 (0x802356000)
libXdmcp.so.6 => /usr/X11R6/lib/libXdmcp.so.6 (0x802559000)
and there's no way to change this (NVidia provides its drivers already compiled).
So, if you compile the NVidia example like this (with either the ES or GL API):
$ gcc egltest.c -o egltest -lEGL
you will get this (using GLESx or GL as well):
egltest:
libEGL.so.1 => /usr/X11R6/lib/libEGL-NVIDIA.so.1 (0x800823000)
libc.so.7 => /lib/libc.so.7 (0x800b25000)
libthr.so.3 => /lib/libthr.so.3 (0x800edd000)
librt.so.1 => /usr/lib/librt.so.1 (0x801105000)
libm.so.5 => /lib/libm.so.5 (0x80130a000)
libnvidia-glsi.so.1 => /usr/X11R6/lib/libnvidia-glsi.so.1 (0x801535000)
libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x8017ba000)
libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x801afa000)
libxcb.so.1 => /usr/X11R6/lib/libxcb.so.1 (0x801d0b000)
libXau.so.6 => /usr/X11R6/lib/libXau.so.6 (0x801f31000)
libXdmcp.so.6 => /usr/X11R6/lib/libXdmcp.so.6 (0x802134000)
Perhaps it would be more accurate to name NVidia's EGL library EGLX, because it uses X11 and cannot run without X.
Caveat: in your example, NVidia's EGL can bind to the GL API (see the EGL_OPENGL_BIT attribute) only from the v355 drivers onwards; with earlier versions you can bind to GLES only (i.e. use EGL_OPENGL_ESx_BIT instead of EGL_OPENGL_BIT).
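A rough sketch of that caveat (the helper name is illustrative, not from the original comment): on pre-v355 drivers the GLES-only path swaps the renderable type and the bound API, roughly like this:
#include <EGL/egl.h>
// Request a GLES2-capable config and bind the ES API instead of desktop GL.
EGLContext createHeadlessGles2Context(EGLDisplay dpy)
{
    static const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, // instead of EGL_OPENGL_BIT
        EGL_NONE
    };
    static const EGLint contextAttribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numConfigs = 0;
    eglChooseConfig(dpy, configAttribs, &cfg, 1, &numConfigs);
    eglBindAPI(EGL_OPENGL_ES_API);
    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, contextAttribs);
}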
The only distro I know of that can run a native window/drawable straight on the Linux console, meaning without any X server or Wayland running, is Raspbian for the RPi-B, which ships the 'dispmanx' library providing an easy way to access the GPU/framebuffer through EGL (GLES2 API only supported).
B.R.
V.S.
I want to use Qt 4.8.6 to render OpenGL content with a QGLWidget. The machine I'm working on is a MacBook Pro with OS X 10.9.4.
The QGLWidget is created by passing a QGLFormat object requesting a 3.2 core profile format. The problem I am encountering is that the OpenGL version reported by the QGLContext remains 1.0, no matter what QGLFormat I specify.
After researching the topic I found the Qt OpenGL Core Profile Tutorial. However, the example source code reports the same OpenGL version 1.0 as before. Curiously, the call
qDebug() << "Widget OpenGl: " << format().majorVersion() << "." << format().minorVersion();
qDebug() << "Context valid: " << context()->isValid();
qDebug() << "Really used OpenGl: " << context()->format().majorVersion() << "." << context()->format().minorVersion();
qDebug() << "OpenGl information: VENDOR: " << (const char*)glGetString(GL_VENDOR);
qDebug() << " RENDERDER: " << (const char*)glGetString(GL_RENDERER);
qDebug() << " VERSION: " << (const char*)glGetString(GL_VERSION);
qDebug() << " GLSL VERSION: " << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
reported a version string of 2.1
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 2.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 1.20
Using the Cocoa code suggested in this OS X OpenGL context discussion from 2011, the output of the version numbers changed to
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 4.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 4.10
While the driver is now reporting the expected OpenGL version number, I am still only able to get a 1.0 QGLWidget context. The QGLFormat object that is passed to the QGLWidget constructor is set up using
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);
I am somewhat at a loss as to why I am still only getting a version 1.0 context. Even without the Cocoa-generated OpenGL context it should be possible to increase the context version to 2.1, but it remains fixed at 1.0 regardless of the QGLFormat passed to the constructor.
Any pointers as to why the QGLWidget context remains at version 1.0 are very much appreciated.
Update 1
Further experimentation showed that the code returns the requested OpenGL version on Ubuntu 13.04. The issue seems to be specific to OS X.
Update 2
I built a minimal (non-)working example:
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>
#include <QtCore/QDebug>
int main(int argc, char **argv) {
    QApplication app(argc, argv);

    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setVersion(3, 2);
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setSampleBuffers(true);

    QGLWidget c(fmt);
    c.show();
    qDebug() << c.context()->requestedFormat();
    qDebug() << c.context()->format();

    return app.exec();
}
which can be built on Ubuntu using
g++ main.cpp -I/usr/include/qt4 -lQtGui -lQtCore -lQtOpenGL -lGL -o test
or under OS X
g++ main.cpp -framework OpenGL -framework QtGui -framework QtCore -framework QtOpenGL -o test
It prints two lines of QGLFormat debug output. The first is the requested format and the second line is the actual context format. Both are supposed to show a major.minor version number of 3.2. It seems to be working under Ubuntu Linux, but fails when using OS X.
Update 3
Fun times. It might be a bug in Qt 4.8.6, since the issue does not occur when compiling the example against Qt 5.3.1. A bug report has been filed.
Can someone else verify this behaviour?
Yes. That's platform-specific. Please find the solution here.
Override QGLContext::chooseMacVisual to specify the platform-specific initialization.
CustomGLContext.hpp:
#ifdef Q_WS_MAC
void* select_3_2_mac_visual(GDHandle handle);
#endif // Q_WS_MAC

class CustomGLContext : public QGLContext {
    ...
#ifdef Q_WS_MAC
    void* chooseMacVisual(GDHandle handle) override {
        return select_3_2_mac_visual(handle); // call Cocoa code
    }
#endif // Q_WS_MAC
};
gl_mac_specific.mm:
void* select_3_2_mac_visual(GDHandle handle)
{
    static const int Max = 40;
    NSOpenGLPixelFormatAttribute attribs[Max];
    int cnt = 0;

    attribs[cnt++] = NSOpenGLPFAOpenGLProfile;
    attribs[cnt++] = NSOpenGLProfileVersion3_2Core;
    attribs[cnt++] = NSOpenGLPFADoubleBuffer;
    attribs[cnt++] = NSOpenGLPFADepthSize;
    attribs[cnt++] = (NSOpenGLPixelFormatAttribute)16;
    attribs[cnt] = 0;
    Q_ASSERT(cnt < Max);

    return [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
}