I am testing some OpenGL code, using OpenGL 4.3 on an Nvidia GTX 960M GPU, under Windows 10 64-bit with driver version 364.72.
I am trying to debug with Nvidia Nsight (version 4.7). The app runs fine until I open Nsight graphics debugging (GL debug output is enabled and reports zero errors). But when Nsight is launched, I get this error:
Source: OpenGL, type: Error, id: 1282, severity: High, Message:
GL_INVALID_OPERATION error generated. Object is owned by another
context and may not be bound here.
Then the Nsight debug window freezes and the app crashes on close.
Before that, I also had CUDA code there, with some OpenGL resources mapped to a CUDA context, so I assumed that maybe some CUDA resource still held an OpenGL texture or buffer. I deleted everything and rewrote the GL part from scratch, without a single line of CUDA. The error still persists.
The rendering is just a simple quad with texture mapping. Can this be a bug in Nsight?
I am using mayavi to do some visualization tasks on my remote server with GPUs. When my code runs mlab.show(), the following error occurs:
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (8 8 8 0)
...
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 1 0)
ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797
vtkXOpenGLRenderWindow (0x559c336fd4e0): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
I am using Ubuntu 16.04, and here is some info about my remote server:
(base) zz@SYS-4028GR-TR:~$ glxinfo | grep OpenGL
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.3 Mesa 4.0.4
OpenGL extensions:
(base) zz@SYS-4028GR-TR:~$ glxinfo | grep render
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
OpenGL renderer string: Mesa GLX Indirect
Does anyone have ideas about this situation? I tried to find a way to update Mesa on Ubuntu but failed. Any way to deal with this kind of problem would be very helpful.
I am using mayavi to do some visualization task on my remote server with GPUs.
"Remote server", that's your problem right there. If you log in via SSH with X11 forwarding, all OpenGL commands are serialized as GLX commands and tunneled through the X11 connection over the network to your computer, to be executed on your local graphics system.
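A quick way to tell, before digging deeper, is to look at DISPLAY: sshd's X11 forwarding uses display numbers starting at 10 (its default X11DisplayOffset), while a local session is typically :0 or :1. A minimal heuristic sketch, assuming a default sshd configuration:

```shell
# Heuristic only: SSH X11 forwarding sets DISPLAY to something like
# "localhost:10.0"; a local X session is usually ":0" or ":1".
is_forwarded_x11() {
  case "$1" in
    localhost:1[0-9]*|127.0.0.1:1[0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_forwarded_x11 "${DISPLAY-}"; then
  echo "X11 looks forwarded over SSH: indirect GLX likely"
else
  echo "DISPLAY looks local: ${DISPLAY-unset}"
fi
```

This only inspects the environment; glxinfo's "direct rendering: No" line, as shown in the question, is the authoritative check.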
If you have a GPU on the remote system, your best option these days is to use Xpra, configured so that it launches its backing X server on the GPU and not on a virtual framebuffer device.
What this comes down to is installing the regular Xorg server and modifying /etc/X11/Xwrapper.config so that the X server may be started by a regular user. You can then start the X server, with Xpra as its first client, using the command line
startx /usr/bin/xpra start :100 --use-display --daemon=no -- :100
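The Xwrapper change mentioned above would look roughly like this on Debian-style Xorg packaging, where the file is /etc/X11/Xwrapper.config (a sketch; check your distribution's defaults):

```
# /etc/X11/Xwrapper.config
allowed_users=anybody
needs_root_rights=yes
```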
If you don't want a fixed display number, create an executable file /usr/local/bin/xpra_display:
#!/bin/sh
exec xpra start "$DISPLAY" --use-display --daemon=no
which you can then launch without further arguments:
startx /usr/local/bin/xpra_display
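With the server side running, you would then connect from your local machine using xpra's client. A sketch (hypothetical user and host names; :100 matches the display number used above):

```shell
# Attach to the remote xpra session over SSH; applications render on the
# remote GPU and xpra streams the resulting pixels to the local machine.
xpra attach ssh:user@remote-host:100
```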
I am trying to launch an ITK/VTK project with Qt. The project runs on Windows 10, but not on Ubuntu.
I get the following error when launching the project:
X Error: BadColor (invalid Colormap parameter) 12
Major opcode: 1 (X_CreateWindow)
Resource id: 0x4a00001
How can I correct this error?
Have you tried running other OpenGL applications or games? There could be a problem with the 3D graphics drivers on your Ubuntu system. Also, without a source code fragment for the window creation, I doubt anyone can help you much further.
Upon adding SDL_ttf (2.0.10), Dr. Memory refuses to work anymore. The console went from printing out the messages to outputting nothing, and the following was sent to stdout:
~~Dr.M~~ WARNING: unable to locate results file: can't open D:\DrMemory
\drmemory\logs/resfile.6188 (code=2). Dr. Memory failed to start the
target application, perhaps due to interference from invasive security
software. Try disabling other software or running in a virtual machine.
Is there any way around this with some command line flag for Dr. Memory, or will I have to forego using it?
Note: it works perfectly fine with other SDL code until I add the TTF library and a TTF_Font *font somewhere. The code itself works fine; there are no loading errors or anything else wrong with it, and it is at a very primitive level, fresh and new. I just cannot get Dr. Memory to work as soon as any TTF element is added to the source code.
It works with a 32-bit build, but not a 64-bit build, so I switched to 32 bits.
This is not a full answer as to why, but if anyone finds Dr. Memory breaking for them, try a 32-bit build.
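If you would rather keep the 64-bit build, one thing worth trying: the warning complains about the results-file path, so pointing Dr. Memory's log directory at a short, writable path via its -logdir flag can sometimes get past such startup failures (the paths and executable name here are only examples):

```shell
# Sketch: redirect Dr. Memory's logs to a short writable path, then run
# the target application after the "--" separator.
drmemory.exe -logdir C:\dm_logs -- my_sdl_app.exe
```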
I am trying to run an OpenCV with DirectX example, d3dsample.cpp, from [here]. But it crashes on
initializeContextFromD3D11Device
The error is "Access violation executing ..." (it looks like it is not possible to initialize the context from D3D to OpenCL). I have no idea how to solve this problem.
I found a possible duplicate on the OpenCV question board [here], but there has been no progress over the last year.
If someone has succeeded in building this sample code, please give a suggestion.
PS: I am using OpenCV 3.1 on Visual Studio 2013 (vc120), with an Nvidia GTX 980 graphics card.
Update:
I tried to debug with both d3d10_interop and d3d11_interop.
The d3d10_interop gave me an exception:
C:\OpenCV\3.1\sources\modules\core\src\directx.cpp:449: error:
(-222) OpenCL: Can't create context for DirectX interop in function
cv::directx::ocl::initializeContextFromD3D10Device\n" ...} ...}
I am developing an OpenGL application and need to use the GLEW library. I am using Visual Studio C++ 2008 Express.
I compiled a program using gl.h, glu.h, and glut.h just fine, and it does what it is supposed to do. But after including glew.h it still compiles fine; however, when I try:
glewInit();
if (glewIsSupported("GL_VERSION_2_0"))
    printf("Ready for OpenGL 2.0\n");
else
    printf("OpenGL 2.0 not supported\n");
It keeps printing:
"OpenGL 2.0 not supported".
I tried changing it to glewIsSupported("GL_VERSION_1_3") or even glewIsSupported("GL_VERSION_1_0"), and it still returns false, meaning that GLEW thinks no OpenGL version whatsoever is supported.
I have a Radeon HD 5750, so it should support OpenGL 3.1 and some of the features of 3.2. I know that all the device drivers are installed properly, since I was able to run all the programs in the Radeon SDK provided by ATI.
I also installed OpenGL Extensions Viewer 3.15, and it reports OpenGL version 3.1, ATI driver 6.14.10.9116. I tried all of GLEW_VERSION_1_1, GLEW_VERSION_1_2, GLEW_VERSION_1_3, GLEW_VERSION_2_0, GLEW_VERSION_2_1, and GLEW_VERSION_3_0, and all of them return false.
Any other suggestions? I even tried GLEW_ARB_vertex_shader && GLEW_ARB_fragment_shader, and these return false as well.
glewIsSupported is meant to check whether specific features are supported. You want something more like...
if (GLEW_VERSION_1_3)
{
    /* Yay! OpenGL 1.3 is supported! */
}
There may be some necessary initialization missing. I encountered the same problem, and here is how I solved it: you need to call glutCreateWindow() first, so that a GL context exists. Add that call before glewInit() and try again.
Firstly, you should check whether GLEW has initialized properly:
if (glewInit() != GLEW_OK)
{ /* something is wrong */ }
Secondly, you need to create the context before calling glewInit().
Thirdly, you can also try setting
glewExperimental = GL_TRUE;
before calling glewInit().
I encountered the same problem when running a program through Windows RDP. I then noticed that my video card may not work properly under RDP, so I tried TeamViewer instead, and both glewinfo.exe and my program then started to work normally.
The OP's problem may have been solved long ago; this is just for others' information.