OpenGL glBufferStorage crashes - C++

Whenever I call glBufferStorage(...), the subsequent glBindBuffer(...) always crashes. Example:
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 1);
glBufferStorage(GL_SHADER_STORAGE_BUFFER, sizeof(unsigned int) * 100, NULL, GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT | GL_MAP_READ_BIT );
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 2); // <- CRASH HERE!
If I remove the glBufferStorage(...) call, the subsequent glBindBuffer calls don't crash!
This code was working normally on my desktop with a GTX 650 Ti and Phenom II X6, with OpenGL installed via NuGet in VS2015 (the nupengl.core package). Then I copied the entire project folder to my notebook (GeForce 740M / i7), removed the OpenGL NuGet package, and reinstalled it.
How can I proceed to investigate what is wrong? Is this a logic error or maybe a GPU driver error?

I figured it out.
As stated, I moved my project from my desktop to my laptop. My laptop has newer OpenGL support than the desktop, but it was using the integrated graphics (Intel HD Graphics) instead of the dedicated GeForce 740M GPU.
As a result, my OpenGL program was executing on a device that does not support some newer OpenGL features, such as the GL_SHADER_STORAGE_BUFFER target, and that's why it crashed.
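For anyone investigating a similar crash, a quick sanity check is to ask the context which renderer and version it actually ended up on before touching the newer entry points. The following is only a minimal sketch, assuming GLEW is the loader, it has been initialized, and a context is current:

#include <cstdio>
#include <GL/glew.h>

// Call once right after context creation to see what you really got.
void PrintContextInfo()
{
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER)); // e.g. Intel HD Graphics vs. GeForce 740M
    printf("GL_VERSION : %s\n", (const char*)glGetString(GL_VERSION));

    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);

    // GL_SHADER_STORAGE_BUFFER requires GL 4.3+, glBufferStorage requires GL 4.4+
    // or ARB_buffer_storage; bail out (or fall back) if either is missing.
    if (major * 10 + minor < 44 && !GLEW_ARB_buffer_storage)
        fprintf(stderr, "glBufferStorage is not available on this context\n");
    if (major * 10 + minor < 43 && !GLEW_ARB_shader_storage_buffer_object)
        fprintf(stderr, "GL_SHADER_STORAGE_BUFFER is not available on this context\n");
}

Calling a missing entry point through a null function pointer is exactly the kind of crash described above, so failing early with a clear message is preferable.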

Related

OpenGL version 2.1 is not supported by graphics driver and maybe need to update

I am using mayavi to do some visualization tasks on my remote server with GPUs. When my code runs mlab.show(), the following error occurs:
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (8 8 8 0)
...
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 1 0)
ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797
vtkXOpenGLRenderWindow (0x559c336fd4e0): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
I am using Ubuntu 16.04, and here is some info about my remote server.
(base) zz#SYS-4028GR-TR:~$ glxinfo | grep OpenGL
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.3 Mesa 4.0.4
OpenGL extensions:
(base) zz#SYS-4028GR-TR:~$ glxinfo | grep render
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
OpenGL renderer string: Mesa GLX Indirect
Does anyone have any ideas about this situation? I tried to find a way to update Mesa on Ubuntu but failed. If there is any way to deal with this kind of problem, it would be very helpful.
I am using mayavi to do some visualization tasks on my remote server with GPUs.
"Remote Server", that's your problem right there. If you log in via SSH forwarding the X11 connection, all OpenGL commands are serialized as GLX commands and tunneled through the X11 connection over the network to your computer to be executed on your local graphics system.
If you have a GPU on the remote system, your best option these days is to use Xpra, configured so that it launches its backing X server on the GPU and not with a virtual framebuffer device.
What this comes down to is installing the regular Xorg server and modifying /etc/X11/Xwrapper.config to allow the X server to be started by a regular user. You can then start the X server with Xpra as its first client using the command line
startx /usr/bin/Xpra start :100 --use-display --daemon=no -- :100
If you don't want to hard-code the display number, then create an executable file /usr/local/bin/xpra_display
#!/bin/sh
exec xpra start $DISPLAY --use-display --daemon=no
which you can then launch with
startx /usr/local/bin/xpra_display
without further arguments.

-1001 error in OpenCL with Nvidia card in Ubuntu Linux

I am trying to run this OpenCL Example in Ubuntu 10.04.
My graphics card is an NVIDIA GeForce GTX 480. I have installed the latest NVIDIA driver and CUDA toolkit manually.
The program compiles without any errors. Thus linking with libOpenCL works. The application also runs but the output is very strange (mostly zeros and some random numbers). Debugging shows that
clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
returns -1001.
Google and Stack Overflow told me that the reason may be a missing nvidia.icd in /etc/OpenCL/vendors. It was not there, so I added /etc/OpenCL/vendors/nvidia.icd with the following line:
libnvidia-opencl.so.1
I have also tried some variants (absolute paths, etc.), but nothing solved the problem. Right now I have no idea what else I can try. Any suggestions?
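As a general diagnostic for this kind of failure, it can help to enumerate everything the ICD loader can actually see instead of asking for exactly one platform. The sketch below only prints what it finds; note that -1001 is CL_PLATFORM_NOT_FOUND_KHR (declared in cl_ext.h), which the ICD loader returns when no vendor ICD could be loaded, and -1 is CL_DEVICE_NOT_FOUND:

#include <cstdio>
#include <CL/cl.h>

int main()
{
    cl_uint numPlatforms = 0;
    cl_int err = clGetPlatformIDs(0, NULL, &numPlatforms);
    if (err != CL_SUCCESS || numPlatforms == 0) {
        // -1001 here means the ICD loader found no usable vendor ICD at all.
        fprintf(stderr, "clGetPlatformIDs failed: %d (platforms: %u)\n", err, numPlatforms);
        return 1;
    }

    cl_platform_id platforms[16];
    cl_uint count = numPlatforms < 16 ? numPlatforms : 16;
    clGetPlatformIDs(count, platforms, NULL);

    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0}, vendor[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);

        cl_uint numGpus = 0;
        cl_int gpuErr = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 0, NULL, &numGpus);
        // gpuErr == -1 (CL_DEVICE_NOT_FOUND) means the platform exists but exposes no GPU.
        printf("Platform %u: %s (%s), GPU devices: %u (err %d)\n", i, name, vendor, numGpus, gpuErr);
    }
    return 0;
}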
EDIT: I have installed the Intel OpenCL SDK, copied its ICD into /etc/OpenCL/vendors, and the application works fine for
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_DEFAULT, 1,
&device_id, &ret_num_devices);
For
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_GPU, 1,
&device_id, &ret_num_devices);
I get the error -1.
EDIT:
I have noticed one thing in the console when executing the application. After executing the line
cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
the application gives me the output
modprobe: ERROR: ../libkmod/libkmod-module.c:809 kmod_module_insert_module() could not find module by name='nvidia_331_uvm'
modprobe: ERROR: could not insert 'nvidia_331_uvm': Function not implemented
There seems to be a conflict with an older driver version since I am using 340.
cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 340.32 Tue Aug 5 20:58:26 PDT 2014
Maybe I should try to remove Ubuntu's own NVIDIA drivers and manually reinstall the latest ones one more time?
EDIT:
The old driver was the problem. Somehow it wasn't removed properly, so I removed it one more time with
apt-get remove nvidia-331 nvidia-opencl-icd-331 nvidia-libopencl1-331
and now it works. I hope this helps someone who has similar problems.
The above-mentioned problems occurred due to a driver conflict. If you have a similar problem, read the edits above for the solution.

How to enable OpenGL 3.3 using Mesa 10.1 on Ubuntu

I am trying to get an OpenGL-based rendering engine that relies on OpenGL 3.3 and GLSL 3.3 to run on Ubuntu 13.10 using an AMD Radeon 6950. I want to use the open source drivers (radeon), which rely on Mesa for their OpenGL implementation. Ubuntu 13.10 only provides Mesa 9.2 (implementing OpenGL 3.1) "out of the box". It is however possible to install Mesa 10.1 (implementing OpenGL 3.3) from this PPA as explained in this thread:
StackOverflow: OpenGL & GLSL 3.3 on an HD Graphics 4000 under Ubuntu 12.04
I used the exact same steps as explained there:
1.) Add the PPA Repository
$ sudo add-apt-repository ppa:oibaf/graphics-drivers
2.) Update sources
$ sudo apt-get update
3.) Dist-upgrade (rebuilds many packages)
$ sudo apt-get dist-upgrade
4.) Then I rebooted.
Mesa 10.1 was successfully installed. However, glxinfo, while it now reports that Mesa 10.1 is in use, still reports only OpenGL 3.0 (compat profile) and OpenGL 3.1 (core profile):
$ glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD CAYMAN
OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.1.0-devel (git-7f57408 saucy-oibaf-ppa+curaga)
OpenGL core profile shading language version string: 1.40
OpenGL core profile context flags: (none)
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.1.0-devel (git-7f57408 saucy-oibaf-ppa+curaga)
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
Why is that? How can I enable OpenGL 3.3? As can be seen by comparison in the StackOverflow thread that I mentioned, it is possible to have glxinfo report OpenGL 3.3. I am aware that glxinfo may report the wrong version numbers as per the Mesa 10.1 Release Notes, however the rendering engine I'm trying to run fails because of this.
I use the following code to spawn a window:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0);
if (GL_TRUE != glfwOpenWindow(
        _windowDimensions.x, _windowDimensions.y,
        0, 0, 0, 0, 32, 0, GLFW_WINDOW))
{
    THROW("GLFW error: failed to create window.");
}
When I try to run the rendering engine using this setup, the above exception gets thrown as OpenGL 3.3 is not supported. I can set GLFW_OPENGL_VERSION_MINOR to 0 and then the window opens fine, but an exception will be thrown later as GLSL 3.3 shaders are required.
Also note that the rendering engine runs fine when I use the proprietary fglrx drivers (and then glxinfo reports OpenGL version 4.2), so the application itself really is not the problem, but the supported OpenGL is.
So what am I doing wrong? Why doesn't Mesa 10.1 support OpenGL 3.3 for me? My graphics card certainly supports it.
Here's some additional information that may be useful.
$ apt-cache policy libgl1-mesa-glx
libgl1-mesa-glx:
Installed: 10.1~git1402041945.7f5740+curaga~gd~s
Candidate: 10.1~git1402041945.7f5740+curaga~gd~s
Version table:
*** 10.1~git1402041945.7f5740+curaga~gd~s 0
500 http://ppa.launchpad.net/oibaf/graphics-drivers/ubuntu/ saucy/main amd64 Packages
100 /var/lib/dpkg/status
9.2.1-1ubuntu3 0
500 http://archive.ubuntu.com/ubuntu/ saucy/main amd64 Packages
$ lspci -vv
...snip...
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] (prog-if 00 [VGA controller])
Subsystem: Hightech Information System Ltd. Device 2307
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 53
Region 0: Memory at c0000000 (64-bit, prefetchable) [size=256M]
Region 2: Memory at fe620000 (64-bit, non-prefetchable) [size=128K]
Region 4: I/O ports at e000 [size=256]
Expansion ROM at fe600000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: radeon
...snip...
$ lsmod | egrep 'radeon|fglrx'
radeon 1402995 3
i2c_algo_bit 13413 1 radeon
ttm 84169 1 radeon
drm_kms_helper 52710 1 radeon
drm 297056 5 ttm,drm_kms_helper,radeon
$ modinfo radeon
filename: /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
license: GPL and additional rights
description: ATI Radeon
author: Gareth Hughes, Keith Whitwell, others.
...snip...
firmware: radeon/CAYMAN_smc.bin
firmware: radeon/CAYMAN_rlc.bin
firmware: radeon/CAYMAN_mc.bin
firmware: radeon/CAYMAN_me.bin
firmware: radeon/CAYMAN_pfp.bin
...snip...
srcversion: D174B1E4686391B33437915
alias: pci:v00001002d000099A4sv*sd*bc*sc*i*
alias: pci:v00001002d000099A2sv*sd*bc*sc*i*
...snip...
depends: drm,drm_kms_helper,ttm,i2c-algo-bit
intree: Y
vermagic: 3.11.0-15-generic SMP mod_unload modversions
parm: no_wb:Disable AGP writeback for scratch registers (int)
parm: modeset:Disable/Enable modesetting (int)
parm: dynclks:Disable/Enable dynamic clocks (int)
parm: r4xx_atom:Enable ATOMBIOS modesetting for R4xx (int)
parm: vramlimit:Restrict VRAM for testing (int)
parm: agpmode:AGP Mode (-1 == PCI) (int)
parm: gartsize:Size of PCIE/IGP gart to setup in megabytes (32, 64, etc) (int)
parm: benchmark:Run benchmark (int)
parm: test:Run tests (int)
parm: connector_table:Force connector table (int)
parm: tv:TV enable (0 = disable) (int)
parm: audio:Audio enable (1 = enable) (int)
parm: disp_priority:Display Priority (0 = auto, 1 = normal, 2 = high) (int)
parm: hw_i2c:hw i2c engine enable (0 = disable) (int)
parm: pcie_gen2:PCIE Gen2 mode (-1 = auto, 0 = disable, 1 = enable) (int)
parm: msi:MSI support (1 = enable, 0 = disable, -1 = auto) (int)
parm: lockup_timeout:GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable) (int)
parm: fastfb:Direct FB access for IGP chips (0 = disable, 1 = enable) (int)
parm: dpm:DPM support (1 = enable, 0 = disable, -1 = auto) (int)
parm: aspm:ASPM support (1 = enable, 0 = disable, -1 = auto) (int)
$ dpkg -S /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
linux-image-extra-3.11.0-15-generic: /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
$ apt-cache policy linux-image-extra-3.11.0-15-generic
linux-image-extra-3.11.0-15-generic:
Installed: 3.11.0-15.25
Candidate: 3.11.0-15.25
Version table:
*** 3.11.0-15.25 0
500 http://archive.ubuntu.com/ubuntu/ saucy-updates/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu/ saucy-security/main amd64 Packages
100 /var/lib/dpkg/status
What they do not tell you, but indirectly imply ("Some drivers don't support all the features required in OpenGL 3.3."), is that in the last official release of Mesa (10.0), GL 3.3 only works on Intel hardware. This is one of the joys of Intel's close involvement with the Mesa project. If you want reliable GL 3.3 support in any form on AMD hardware, you should use fglrx (the proprietary AMD driver) for the time being.
The development release of Mesa 10.1 may implement GL 3.3 on radeon drivers, but you need to request a 3.3 core profile. You are not doing this currently.
This:
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0);
Actually needs to be this:
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Also, there is no such thing as a GL 3.0 compatibility profile or 3.1 core profile. Profiles were not introduced into OpenGL until 3.2. There is a concept of GL_ARB_compatibility in GL 3.1, but that is not the same thing as a profile; glxinfo is giving misleading information.
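Once the core profile is requested, it is also worth verifying at runtime what the driver actually handed back; the following is a minimal sketch, assuming GLEW (or another loader) is set up and a context is current:

#include <cstdio>
#include <GL/glew.h>

// Call with a current context to see which version/profile the driver gave you.
void PrintContextVersion()
{
    GLint major = 0, minor = 0, profileMask = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &profileMask); // valid on GL 3.2+ contexts

    printf("Context: %d.%d, core profile: %s\n", major, minor,
           (profileMask & GL_CONTEXT_CORE_PROFILE_BIT) ? "yes" : "no");
    printf("GLSL   : %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
}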
I answered the thread the OP mentions regarding "OpenGL & GLSL 3.3 on an HD Graphics 4000 under Ubuntu 12.04", but I thought I would give the same answer here too, considering info seems so scarce. This works for those using freeglut and GLEW:
So I've seen a lot of threads surrounding this, and I thought here would be a good place to respond. I'm running Ubuntu 15.04 with Intel Ivy Bridge. After using the "Intel Graphics Installer for Linux" application, glxinfo gives the following info regarding OpenGL:
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.6.0
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.0 Mesa 10.6.0
OpenGL shading language version string: 1.30
Now from this you can see that the core profile and GLSL versions are 3.3, but the compatibility OpenGL version is only 3.0. Thus, if you want your code to run with 3.3, you need to request both an OpenGL core profile and the core GLSL version. The following steps should work if you're using freeglut and GLEW (a combined sketch follows the list):
- the GLSL #version directive should specify that you want the core profile:
#version 330 core
- specify that you want OpenGL 3.3:
glutInitContextVersion (3, 3);
- and finally, set glewExperimental to GL_TRUE before glewInit():
glewExperimental = GL_TRUE;
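Putting those steps together, a minimal freeglut + GLEW setup might look like the sketch below (window size and title are arbitrary placeholders; a real program would register callbacks and enter glutMainLoop()):

#include <cstdio>
#include <GL/glew.h>
#include <GL/freeglut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);              // ask for OpenGL 3.3...
    glutInitContextProfile(GLUT_CORE_PROFILE); // ...and a core profile
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GL 3.3 core test");      // the context exists from here on

    glewExperimental = GL_TRUE;                // needed for core profile contexts
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "glewInit failed\n");
        return 1;
    }

    printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));
    printf("GLSL      : %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}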
Hope this helps some people get started :)

How do I use Shader Model 5.0 compiled shaders in Visual Studio 11?

I am trying to have the .hlsl files in my project precompiled offline (at build time) for my C++ DirectX 11 project. Visual Studio compiles the .hlsl files into .cso files. I am now having trouble reading the SM 5.0 .cso files into my program to create a shader.
Here is the code I have to create a Pixel Shader from a precompiled .cso file:
ID3D11PixelShader* PS;
ID3DBlob* PS_Buffer;
D3DReadFileToBlob(L"PixelShader.cso", &PS_Buffer);
d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS);
d3d11DevCon->PSSetShader(PS, 0, 0);
The pixel shader is simply supposed to return blue pixels. It has been compiled against Shader Model 5.0 by changing its compile settings in Visual Studio 11. However, upon running the program, the triangle I am trying to render doesn't show up, despite there being no build or runtime error messages.
When compiling at runtime using:
D3DCompileFromFile(L"PixelSHader.hlsl", 0, 0, "main", "ps_5_0", 0, 0, &PS_Buffer, 0);
instead of D3DReadFileToBlob(..), there is still no triangle rendered.
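One thing worth ruling out here is a silent failure: D3DReadFileToBlob, D3DCompileFromFile, and CreatePixelShader all report problems only through their HRESULTs (plus an error blob from the compiler), and the snippet above never checks them. The following is only an illustrative sketch of that checking, reusing d3d11Device from above:

// Needs <d3dcompiler.h> and d3dcompiler.lib for D3DCompileFromFile.
ID3DBlob* PS_Buffer = nullptr;
ID3DBlob* errorBlob = nullptr;

HRESULT hr = D3DCompileFromFile(L"PixelShader.hlsl", nullptr, nullptr,
                                "main", "ps_5_0", 0, 0, &PS_Buffer, &errorBlob);
if (FAILED(hr)) {
    if (errorBlob)                       // compiler diagnostics, if any
        OutputDebugStringA((const char*)errorBlob->GetBufferPointer());
    // handle the error...
}

ID3D11PixelShader* PS = nullptr;
hr = d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(),
                                    PS_Buffer->GetBufferSize(), nullptr, &PS);
if (FAILED(hr)) {
    // On a device below feature level 11_0, creating a ps_5_0 shader fails here.
}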
However, when using Shader Model 4.0, whether compiled at build time or at runtime, a blue triangle does show, signifying that it works.
Here is the code in my PixelShader.hlsl file. By any chance, could it be too outdated for Shader Model 5.0 and be the root of my problem?
float4 main() : SV_TARGET
{
return float4(0.0f, 0.0f, 1.0f, 1.0f);
}
I am using the Windows SDK libraries and not the June 2010 DirectX SDK.
My graphics card is an NVIDIA 8800 GTS.
Shader Model 5 is supported by the GeForce 400 series and newer for NVIDIA cards, and the Radeon HD 5000 series and newer for AMD cards. Your GeForce 8800 only supports Shader Model 4, which is why the shader doesn't run. To get around this, you can use the reference device (by using D3D_DRIVER_TYPE_REFERENCE), but this will be quite slow. Alternatively, you can use the WARP device if you have the Windows 8 preview (which adds WARP support for D3D_FEATURE_LEVEL_11_0); it will perform better than the reference device but still nowhere near running on the hardware device.
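To illustrate that fallback, here is a sketch (not drop-in code) that tries the hardware device first and then WARP and the reference rasterizer, requesting feature level 11_0 so that ps_5_0 shaders can be created:

#include <windows.h>
#include <d3d11.h>

// Try hardware first, then WARP, then reference; succeed only at feature level 11_0.
HRESULT CreateDeviceWithFallback(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_DRIVER_TYPE driverTypes[] = {
        D3D_DRIVER_TYPE_HARDWARE,
        D3D_DRIVER_TYPE_WARP,      // 11_0-capable WARP requires the Windows 8 preview
        D3D_DRIVER_TYPE_REFERENCE  // always available, but very slow
    };
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_0 }; // needed for ps_5_0
    D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_9_1;

    HRESULT hr = E_FAIL;
    for (UINT i = 0; i < ARRAYSIZE(driverTypes); ++i) {
        hr = D3D11CreateDevice(nullptr, driverTypes[i], nullptr, 0,
                               wanted, ARRAYSIZE(wanted), D3D11_SDK_VERSION,
                               device, &got, context);
        if (SUCCEEDED(hr))
            break; // this driver type supports feature level 11_0
    }
    return hr;
}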

OpenGL version trouble with glew.h

I am developing an OpenGL application and need to use the GLEW library. I am using Visual Studio C++ 2008 Express.
I compiled a program using gl.h, glu.h, and glut.h just fine, and it does what it's supposed to do. But after including glew.h it still compiles fine; however, when I try:
glewInit();
if (glewIsSupported("GL_VERSION_2_0"))
    printf("Ready for OpenGL 2.0\n");
else {
    printf("OpenGL 2.0 not supported\n");
}
It keeps printing:
"OpenGL 2.0 not supported".
I tried to change it to glewIsSupported("GL_VERSION_1_3") or even glewIsSupported("GL_VERSION_1_0") and it still returns false meaning that it does not support OpenGL version whatever.
I have a Radeon HD 5750, so it should support OpenGL 3.1 and some of the features of 3.2. I know that all the device drivers are installed properly, since I was able to run all the programs in the Radeon SDK provided by ATI.
I also installed OpenGL Extensions Viewer 3.15 and it says OpenGL Version 3.1, ATI Driver 6.14.10.9116. I tried all of GLEW_VERSION_1_1, GLEW_VERSION_1_2, GLEW_VERSION_1_3, GLEW_VERSION_2_0, GLEW_VERSION_2_1, GLEW_VERSION_3_0, and all of these return false.
Any other suggestions? I even tried GLEW_ARB_vertex_shader && GLEW_ARB_fragment_shader, and these return false as well.
glewIsSupported is meant to check and see if the specific features are supported. You want something more like...
if (GLEW_VERSION_1_3)
{
    /* Yay! OpenGL 1.3 is supported! */
}
There may be some missing initialization. I encountered the same issue, and here is how I solved it: you need to create the window first (the code above uses GLUT, so call glutCreateWindow()) so that an OpenGL context exists before glewInit(). Call that function first and try again.
Firstly, you should check whether GLEW has initialized properly:
if (glewInit() != GLEW_OK)
{
    // something is wrong
}
Secondly, you need to create the context before calling glewInit().
Thirdly, you can also try:
glewExperimental = GL_TRUE;
before calling glewInit().
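For completeness, a small sketch of that initialization order with explicit error reporting, assuming the window/context already exists (e.g. via glutCreateWindow()) and the GLEW headers are included:

// After the window/context exists:
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK)
    fprintf(stderr, "glewInit failed: %s\n", (const char*)glewGetErrorString(err));

printf("GL_VERSION  : %s\n", (const char*)glGetString(GL_VERSION));
printf("GLEW version: %s\n", (const char*)glewGetString(GLEW_VERSION));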
I encountered the same problem when running a program through Windows RDP. Then I noticed that my video card may not work properly over RDP, so I tried TeamViewer instead; after that, both glewinfo.exe and my program started to work normally.
The OP's problem may have been solved long ago; this is just for others' information.