I started using glslDevil to debug the shaders in my OpenGL engine. When I start the program from inside glslDevil, all I get in the GLTrace window is calls to:
wglGetPixelFormatAttribivARB()
which is printed something like 1002 times, after which the debugged app freezes, and that's it. No debug info or anything else. Maybe glslDevil doesn't support newer OpenGL versions?
I am using OpenGL 4.2 in compatibility mode (but with a fully programmable pipeline), running Win7 64-bit. The software being tested is 32-bit.
wglGetPixelFormatAttribivARB is a function of the Windows OpenGL subsystem (WGL) that retrieves the attributes of a particular pixel format configuration. There can be a lot of different configurations available (several hundred to a few thousand), and those 1002 calls are most likely a loop that enumerates all of them to find the best match.
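Purely for illustration (this is not glslDevil's actual code), here is a rough sketch of what such an enumeration loop typically looks like, assuming a current dummy context so wglGetProcAddress works and the WGL_ARB_pixel_format tokens from wglext.h:

#include <windows.h>
#include <GL/wglext.h>   // WGL_* tokens and the function pointer typedef

void EnumeratePixelFormats(HDC hdc)
{
    // Requires a current (dummy) context, otherwise wglGetProcAddress returns NULL.
    PFNWGLGETPIXELFORMATATTRIBIVARBPROC pfnGetAttribiv =
        (PFNWGLGETPIXELFORMATATTRIBIVARBPROC)wglGetProcAddress("wglGetPixelFormatAttribivARB");
    if (!pfnGetAttribiv)
        return;

    // Ask how many pixel formats the driver exposes.
    const int countAttrib = WGL_NUMBER_PIXEL_FORMATS_ARB;
    int numFormats = 0;
    pfnGetAttribiv(hdc, 1, 0, 1, &countAttrib, &numFormats);

    // Query a few attributes of every format; with hundreds or thousands of formats
    // this is exactly the kind of loop that shows up as ~1000 traced calls.
    const int attribs[] = { WGL_SUPPORT_OPENGL_ARB, WGL_DRAW_TO_WINDOW_ARB,
                            WGL_DOUBLE_BUFFER_ARB, WGL_COLOR_BITS_ARB, WGL_DEPTH_BITS_ARB };
    for (int i = 1; i <= numFormats; ++i)
    {
        int values[5] = {};
        pfnGetAttribiv(hdc, i, 0, 5, attribs, values);
        // ...compare 'values' against the desired configuration and remember the best match...
    }
}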
Other than giving you that information I can't help you much, though.
I am working on a small OpenGL project using the GLFW library. Everything was fine until one day I couldn't get it to render anything except the background. So I loaded up an older version which I am sure worked just fine, and the same thing happened. I then went back in time and every version I tried did the same thing.
You can change the glClearColor() and that color is used, but that's the only thing you see. So I tried to strip the project down until I ended up with a very basic program (a hard-coded colored cube), which still doesn't render anything.
I think the possible causes may have something to do with driver updates or with me downloading newer versions of the libraries, but I don't think that could have such a massive effect, given that the code worked just fine previously.
I am running 64-bit Windows 7 on a GIGABYTE B75M-D3H motherboard with an Intel Core i5-3350P CPU and Radeon HD 7750 graphics (currently Catalyst 14.12, though there have been some driver updates since the problem first appeared). I use mingw-w64 as my compiler of choice, with posix threads and sjlj exceptions. I tried it on a different machine (a Dell Inspiron with Windows 8.1) and got the same results.
I am using GLFW, GLEW and GLM.
The original project: https://github.com/GenaBitu/OpenStrategia
A stripped-down version: https://gist.github.com/GenaBitu/852dc4c4db6d72c945d1 (Quite messy, still not working)
Can a driver update cause this? Am I stupid and forgot something which suddenly broke the whole program?
Turns out I was using the core OpenGL profile, which requires you to use Vertex Array Objects, and I didn't. Up until ~February the driver didn't mind, but after a certain driver update it refused to render the object (which I believe is the correct behaviour).
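For anyone hitting the same wall, this is roughly the missing piece: in a core profile context a VAO must be bound before you set up attribute pointers or draw. A minimal sketch (GLEW/GLFW assumed, as in the project; 'vbo' and 'vertexCount' stand for your own buffer and vertex count, shader setup omitted):

// After creating the core-profile context; with core profiles you may also
// need glewExperimental = GL_TRUE before glewInit().
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);          // core profile: vertex attribute state lives in a VAO

// Only now set up the vertex buffer and attribute pointers.
glBindBuffer(GL_ARRAY_BUFFER, vbo);               // 'vbo' created elsewhere
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

// At draw time the VAO must be bound as well.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);       // 'vertexCount' defined elsewhere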
Now, I realize there is already a "solution" to this problem, but that solution doesn't work for me.
My setup is very close to the one in this post: Can't debug CUDA: CUDA dynamic parallelism debugging is not supported in preemption mode. I'm also cognizant of this link: https://devtalk.nvidia.com/default/topic/536202/debugging-dynamic-parallelism-and-preemption-mode/
I'm on VS2012, Win 7 64-bit, driver version 331.65, 2 GTX Titans (Device 0 driving the display, Device 1 headless) and Nsight 3.2. I've followed the instructions in that post and turned off the forcing of SW preemption for Desktop & Headless GPUs. I've run deviceQuery and both my Titans show up. Additionally, I've got my monitors plugged into the top Titan on the mobo, which I'm pretty sure is Device 0, so I've specified cudaSetDevice(1); in my code. I've disabled Windows Aero and have no idea what else to do to prevent this from happening.
I am toying with putting yet another GPU in my system, a GTX 580, to drive the display, but I don't feel that should be necessary. I've tried changing the cudaSetDevice argument to 0 (same error) and to 2 (can't find a CUDA device). Can anyone help me out here? I've got some beastly debugging to do.
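In case it helps someone else reading this, a quick sketch along the lines of deviceQuery that I'd use to confirm from my own code which index really is the headless Titan (CUDA runtime API; on most setups kernelExecTimeoutEnabled is 1 on the GPU driving a display and 0 on a headless one):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // The device with the kernel execution timeout enabled is usually the one
        // hooked up to the display; the headless Titan should report 0 here.
        printf("Device %d: %s, kernelExecTimeoutEnabled = %d\n",
               i, prop.name, prop.kernelExecTimeoutEnabled);
    }
    return 0;
}

Whichever index reports no timeout is the one to pass to cudaSetDevice before launching the dynamic-parallelism kernels.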
Following the instructions in the link I mentioned, I found that CUDA debugging did eventually work, and work well. I don't really know what changed since I posted this question, but follow the previous solution's guidance and it should work.
I've been trying to research why certain compatibility features differ based on operating system, so I can program a patch. I'm using the Windows 95 compatibility setting in the registry to run a game (Windows 95 being the OS the game was originally made for) on each system. In Windows XP the game runs perfectly: none of the scenes lag, and the sound works just as well as the video. I'm unsure how it runs in Windows Vista, but in Windows 7 & 8 the compatibility feature breaks the game. I used a VM to run XP, but that doesn't affect the game's playability; real XP users have tested it. Whenever I play the game using the Win95 compatibility setting in 7 & 8, everything lags. The music doesn't slow down during gameplay, but the graphics do. During cutscenes things literally break: everything pixelates, the white noise and static increase in volume, and the video lags every two seconds.
I then tested it in Ubuntu Linux via WINE, and it runs better than it does in XP; I just had to use the ALSA sound driver. What changed? And is it programmatically fixable? I'm using an amalgamation of C++, Batch and Java.
If it is necessary, the video game is entitled "The Neverhood."
Thanks.
The compatibility feature available in the shell only scratches the surface of the "Application Compatibility" subject in Windows.
There is a tool called the Microsoft Application Compatibility Toolkit (ACT), which has existed since Windows XP, I believe, that has much more to offer, so maybe that can help.
For example, here are some compatibility settings for Graphics Control Issues.
I currently play "The Neverhood" on Win7 x64 without any visual problems. You are right: when I played it on Win7 for the first time (4 years ago) it was a headache, and it was a little tricky to find the correct compatibility flags for each Windows version. In the end I wrote this reg code for Win7 and it has worked for me for 4 years, so I'm sure it will work for you too:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Folder\\nhc.exe"="# WIN95 256COLOR 640X480 DISABLEDWM"
Where "C:\\Folder\\nhc.exe" is of course the path to your Neverhood executable. (Notice the double backslashes.)
Those flags mean: run in Windows 95 compatibility mode, change the display to 256 colors, change the display resolution to 640x480, and disable desktop composition (DWM).
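Since you mentioned wanting to program a patch in C++, the same layer value can also be written from code. A rough sketch using the Win32 registry API (same key and flags as the .reg file above; the exe path is a placeholder and error handling is kept minimal):

#include <windows.h>
#include <cstring>

// Writes the compatibility layer for the current user.
bool SetNeverhoodCompatFlags()
{
    HKEY key = NULL;
    const char* subKey =
        "Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers";
    if (RegCreateKeyExA(HKEY_CURRENT_USER, subKey, 0, NULL, 0,
                        KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
        return false;

    const char* exePath = "C:\\Folder\\nhc.exe";                 // placeholder: real install path goes here
    const char* flags   = "# WIN95 256COLOR 640X480 DISABLEDWM"; // same flags as the .reg file
    LONG result = RegSetValueExA(key, exePath, 0, REG_SZ,
                                 (const BYTE*)flags, (DWORD)strlen(flags) + 1);
    RegCloseKey(key);
    return result == ERROR_SUCCESS;
}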
I hope this helps you.
This may not answer the question directly, but if you want to improve the performance of The Neverhood, set the compatibility to run in Windows 95 mode, then switch all the other options ON except the bottom three. This helps make the game as fast and smooth as possible.
I have just recently installed Windows 8, and I tried to compile and build a simple C++ game project in VS 2010, but when I did, it ran at 5 fps. On Windows 7 it runs at a solid 60 fps. Nothing has changed in the code; there is just horrible slowdown.
I have updated my video drivers, but there is still horrible lag. I thought the problem had to do with compatibility issues between Windows 8 and OpenGL, but I can't find anything to confirm this. I was wondering if anyone else has had this problem, and if you have solved it.
I would recommend you test your graphics card / drivers first. All sorts of driver issues could arise when you upgrade operating systems. One of the best tests would be to download Cinebench and see how it performs. Cinebench will evaluate your OpenGL performance. If you get poor results, then you know it's a hardware / driver issue and not an issue with your application.
If the Cinebench results are good, then you can move on to the recommendations made by @Robert Rouhani (comments).
http://www.maxon.net/products/cinebench/overview.html
What sort of video card do you have in the Win8 machine?
If it's a laptop, you might be battling against nVidia Optimus (or an equivalent technology?). Basically, programs have to tell the OS in advance that they want to use the discrete video card, or they get defaulted to the low-power GPU embedded in the CPU (note: over-simplification).
If this is the case, there are some options in the nVidia control panel that let you create a profile telling the OS to run your app with the discrete GPU rather than the embedded one.
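As an alternative to a control-panel profile, many programs export a symbol from the executable that the driver checks at load time to decide which GPU to use. A small sketch (NvOptimusEnablement is NVIDIA's documented convention; the AMD export shown alongside is the commonly cited equivalent, so treat the exact names as something to verify against the vendor docs):

#include <windows.h>

// Exporting these from the .exe asks Optimus / Enduro style drivers to run the
// program on the discrete GPU instead of the integrated one.
extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}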
I have created 16 Direct3D devices, each approximately 320x200 pixels in size. I invoke IDirect3DDevice9::Present for each device in a separate thread every 40 ms. On laptops with Windows XP and integrated Intel GMA945 graphics, some of the devices are not updated if a system tooltip or the Start menu is shown. IDirect3DDevice9::Present doesn't return any error codes at that moment, and in the program everything looks fine, but the user can see that the animation on several of the devices freezes. What could be the reason for that?
This works fine on Windows 7 with the same hardware and on Windows XP with different hardware, so the problem occurs only with this combination. I have to support it since my customers use this combination of hardware and OS. MSDN says nothing about being limited to one D3D device (at least I can't find it), so the problem should be elsewhere.
What I'm hoping to find is some combination of flags that could solve my problem. At the moment I use the following:
D3DPRESENT_PARAMETERS param = {};
param.Windowed = TRUE;                                       // windowed swap chain
param.SwapEffect = D3DSWAPEFFECT_DISCARD;
param.hDeviceWindow = GetSafeHwnd();
param.BackBufferCount = 1;
param.BackBufferFormat = D3DFMT_UNKNOWN;                     // use the current display format
param.BackBufferWidth = m_szDevice.Width;
param.BackBufferHeight = m_szDevice.Height;
param.Flags = D3DPRESENTFLAG_VIDEO;
param.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;  // don't wait for vsync
param.MultiSampleType = D3DMULTISAMPLE_NONE;                 // no multisampling
param.MultiSampleQuality = 0;
Don't do that. The device is supposed to map basically 1-to-1 to a GPU. Create one device, and use it to draw to 16 different windows, in whichever way works for you. (Multiple swap chains is the usual approach, afaik)
Creating 16 devices and trying to get them to render in parallel is just asking for trouble.
D3D is designed around the assumption that only one device will be doing serious rendering at any time.
In theory, the difference should only be a matter of performance, but in your case, trying to run 16 devices in parallel on a crappy Intel GPU, it wouldn't surprise me if it causes rendering errors such as you're seeing.
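To make the single-device suggestion concrete, here is a rough sketch of the one-device / many-swap-chains approach (D3D9; it assumes the device was created normally and that the 16 child-window handles already exist, so take the function shapes as illustrative):

#include <d3d9.h>

// One additional swap chain per small window, all on the same device.
void CreateViewSwapChains(IDirect3DDevice9* device, HWND hwnds[16],
                          IDirect3DSwapChain9* chains[16])
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;
    pp.BackBufferWidth = 320;
    pp.BackBufferHeight = 200;
    for (int i = 0; i < 16; ++i)
    {
        pp.hDeviceWindow = hwnds[i];
        device->CreateAdditionalSwapChain(&pp, &chains[i]);
    }
}

// Per frame: render into each chain's back buffer, then present that chain to its window.
void PresentAllViews(IDirect3DDevice9* device, HWND hwnds[16],
                     IDirect3DSwapChain9* chains[16])
{
    for (int i = 0; i < 16; ++i)
    {
        IDirect3DSurface9* backBuffer = NULL;
        chains[i]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
        device->SetRenderTarget(0, backBuffer);
        // ...BeginScene / draw this view / EndScene...
        chains[i]->Present(NULL, NULL, hwnds[i], NULL, 0);
        backBuffer->Release();
    }
}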
I've distributed DirectX software for a couple of years and along the way learnt that Intel graphics chipsets have incredibly crap drivers. Once I even saw a driver revision that couldn't render a quad properly. So when you have a problem with an Intel chipset, if you're on the latest driver version, you pretty much have to accept your solution is going to be "start shotgun hacking things until it works".
Sorry to give you a lame answer, but Intel chipsets are not well engineered at all. They're solely there to get something - anything - on the screen, probably for office worker type use. Beyond "does it do aero glass" Intel probably don't give a hoot what it does or how well it works. An alternative "solution" is to distribute your application anyway, state that Intel chipsets are not supported due to glitches in the hardware/driver support, and contact Intel and see if you can get a fix from them.
And people say OpenGL has bad drivers...
First of all, when you say it "doesn't return any error codes at that moment", are you running the D3D9 debug runtime at the maximum debug output level?
Second, every time you create a new device and it gains focus, the surfaces of all the existing devices are lost. Are you calling Reset on all of them after creation?
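For reference, the usual lost-device handling per device looks roughly like this (a sketch only; 'presentParams' are the parameters the device was created with, and the two helpers are hypothetical stand-ins for releasing/recreating everything allocated in D3DPOOL_DEFAULT):

#include <d3d9.h>

void ReleaseDefaultPoolResources();   // hypothetical helper
void RecreateDefaultPoolResources();  // hypothetical helper

// Call once per frame before rendering with a given device.
void RenderFrame(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& presentParams)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
        return;                                  // lost and not yet resettable: skip this frame
    if (hr == D3DERR_DEVICENOTRESET)
    {
        ReleaseDefaultPoolResources();           // free D3DPOOL_DEFAULT resources first
        if (FAILED(device->Reset(&presentParams)))
            return;
        RecreateDefaultPoolResources();          // recreate them after a successful Reset
    }
    // ...BeginScene / draw / EndScene / Present as usual...
}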
Other than that, it's like the other answers state: don't create many devices from a single application. Device creation might start throwing errors after 9 or 10 devices; you are really pushing it with 16. Use a single device with multiple swap chains instead; see for instance this DirectX 8 tutorial.
Intel graphics chips, particularly the integrated GMA ones, have never been known for their capabilities. They can report caps they don't have and later fail, with or without error codes (I've had bug reports of this: supposedly supported shader models later failed to compile). It is possible that you're running into a similar problem with their chips/drivers. Does it work on other hardware or with different drivers?
I assume, since you have multiple devices, that they are windowed? Have you checked the window handles, or tried explicitly passing the handle/viewport when presenting? Do any of the devices get reset?
It is possible the display driver is not properly repainting the window after the tooltip or Start menu is shown (more likely if it's a window under the tooltip/menu). Have you checked the window for focus, made sure it gets painted, etc.?