I just picked up a new Lenovo ThinkPad that comes with Intel HD Graphics 3000. I'm finding that my old freeglut apps, which use GLUT_MULTISAMPLE, are running at 2 or 3 fps instead of the expected 60 fps. Even the freeglut example 'shapes' runs this slowly.
If I disable GLUT_MULTISAMPLE from shapes.c (or my app) things run quickly again.
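For reference, this is roughly how multisampling gets requested on the freeglut side (a minimal sketch, not my actual app; the sample count, window size and callback are placeholders):

#include <GL/freeglut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    // freeglut-specific (2.8+, I believe): number of samples to request.
    glutSetOption(GLUT_MULTISAMPLE, 4);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("multisample test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}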
I tried multisampling with GLFW (using GLFW_FSAA - or whatever that hint is called), and I think it's working fine. That was with a different app (glgears). GLFW is triggering Norton Internet Security, which thinks it's malware and keeps removing the .exes... but that's another problem; my interest is with freeglut.
I wonder if the algorithm freeglut uses to choose a pixel format is tripping up on this card, whereas GLFW is choosing the right one.
Has anyone else come across something like this? Any ideas?
That GLFW triggers Norton is a bug in Norton's virus definitions. If it's still the case with the latest definitions, send them your GLFW dll/app so they can fix it. The same happens with Avira, and they are working on it (they have already confirmed it's a false positive).
As for the HD 3000, that's quite a weak GPU. What resolution is your app running at, and how many samples are you using? Maybe the amount of framebuffer memory gets too high for the little guy?
I'm having a problem with a DirectX 11 game I'm developing on laptops with two video cards. The typical case I'm running into (and the one on my own laptop) is a weak Intel card plus a powerful NVIDIA card. Obviously I want the NVIDIA one, and I already enumerate the adapters and figure out the correct one to create the device interface for.
The problem is that the NVIDIA adapter doesn't have an output: calling EnumOutputs on its IDXGIAdapter interface finds none. That makes sense, because the laptop has only one screen and it's attached to the Intel adapter (you can find it by calling EnumOutputs on the Intel IDXGIAdapter interface).
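For what it's worth, the enumeration I'm doing is essentially this (a reduced sketch, not my actual engine code): it walks every adapter and counts its outputs, which is how I see the NVIDIA adapter come back with zero.

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

void ListAdapterOutputs()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);

        // Count the outputs attached to this adapter.
        UINT outputs = 0;
        IDXGIOutput* output = nullptr;
        while (adapter->EnumOutputs(outputs, &output) != DXGI_ERROR_NOT_FOUND)
        {
            output->Release();
            ++outputs;
        }
        wprintf(L"Adapter %u: %s, outputs: %u\n", i, desc.Description, outputs);
        adapter->Release();
    }
    factory->Release();
}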
But this seemingly makes it impossible to create a fullscreen swap chain for that device: IDXGIFactory::CreateSwapChain fails when given the NVIDIA device and fullscreen settings, even when I'm certain the other mode parameters are valid.
Other games seem to have found a way around this. From my Steam list, for example, Half-Life 2 appears to run in true fullscreen mode. Stardew Valley, on the other hand, runs in borderless windowed mode, which I could do as well, but that has its own issues.
I'm aware that it's possible to change the laptop's settings so the NVIDIA card is the dominant one. But I need this to work on customers' laptops, where I can't expect them to deal with all that.
One potential solution might be to create a device for both adapters, create the swap chain on the Intel one, and share the rendered frame between them as a shared resource (https://learn.microsoft.com/en-us/windows/desktop/api/d3d11/nf-d3d11-id3d11device-opensharedresource). I'm not even sure that's possible, though; the docs are vague.
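In case it clarifies the idea, here is the rough shape of that shared-resource approach as I understand it from the docs (completely untested; the device, context and swap-chain variables are placeholders for whatever the app has already created, error handling is omitted, and steps 1-3 would run once at setup while step 4 runs per frame):

#include <d3d11.h>
#include <dxgi.h>

// Untested sketch: render on the NVIDIA device into a shareable texture,
// open that texture on the Intel device (which owns the swap chain), and
// copy it into the back buffer for presentation.
void SketchSharedResourcePath(ID3D11Device* nvidiaDevice,
                              ID3D11Device* intelDevice,
                              ID3D11DeviceContext* intelContext,
                              IDXGISwapChain* intelSwapChain,
                              UINT width, UINT height)
{
    // 1) On the NVIDIA device: create a shareable render target.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED;

    ID3D11Texture2D* nvTexture = nullptr;
    nvidiaDevice->CreateTexture2D(&desc, nullptr, &nvTexture);

    // 2) Get a handle the other device can open.
    IDXGIResource* dxgiResource = nullptr;
    nvTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiResource);
    HANDLE sharedHandle = nullptr;
    dxgiResource->GetSharedHandle(&sharedHandle);
    dxgiResource->Release();

    // 3) On the Intel device (the one with the fullscreen swap chain):
    ID3D11Texture2D* sharedTexture = nullptr;
    intelDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D),
                                    (void**)&sharedTexture);

    // 4) Each frame, after the NVIDIA device finishes rendering into nvTexture:
    ID3D11Texture2D* backBuffer = nullptr;
    intelSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    intelContext->CopyResource(backBuffer, sharedTexture);
    backBuffer->Release();
    intelSwapChain->Present(1, 0);
}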
Before I go down a difficult and potentially dead-end path, though, I'm wondering if anyone knows the solution.
I'm having some unexpected performance issues with my HaxeFlixel game when building a Windows (cpp) target with the following settings:
<window if="cpp" width="480" height="270" fps="60" background="#000000" hardware="false" vsync="true" />
I notice that when I resize the window to bigger resolutions, or go fullscreen at 1920x1080, the game becomes slow and laggy. However, according to the flixel debug console, the frame rate is the same at all resolutions.
Even more interesting, my Flash export runs much more fluidly, while I expected the cpp target to run faster.
It's a 2D platform game with about 6 tilemaps (the biggest tilemap is 1600x1440) and 32x32 or 16x16 sprites. I did not expect to have performance issues on any modern system, so I'm concerned that I'm doing something wrong, like missing an obvious setting.
Is this normal? Are there any key rendering performance factors I should check? Please feel free to ask me for any details you think would help.
Using HaxeFlixel 3.3.12.
I think this may be a common problem among all the C++ targets. I experienced this with the Linux native target for my game as well. My solution was to disable anti-aliasing via
<window antialiasing="0" />
Of course, this works best with pixel art and not 3D or HD stuff. And then there's still the potential problem of performance dipping at higher resolutions (retina displays and whatnot). But this might be sufficient as a stopgap solution.
I've been testing my app with different configurations, and I finally found that turning off the vsync option makes the biggest impact. There is some vertical jittering, but the game finally runs fast, and the Windows target is faster than Flash.
It turns out that my current laptop has an Intel HD GPU, and its vsync support seems to be broken. I remember that my previous PC, equipped with a low-end AMD GPU, didn't have this issue.
I will consider adding an in-game option to toggle vsync, so that non-Intel users can still benefit from it.
Other things that seem to have helped are:
Switching off antialiasing, as Jon O suggested
Turning hardware on
For reference, my current settings are:
<window if="cpp" width="960" height="540" fps="60" background="#000000" hardware="true" vsync="false" antialiasing="0" />
As you probably know, the DK2 supports a new mode called Direct Mode that reduces latency and hence improves the VR experience. When I run the DK2 samples that come with the currently latest 0.8.0 (beta) SDK, the DirectX 11 version of the OculusTinyRoom sample runs fine.
My problem: the OpenGL version (using the 3.3 profile) uses a function called ovrHmd_CreateSwapTextureSetGL() that returns a texture set with zero textures (it calls glGenBuffer as a fallback), and the return value is -1006 (ovrServiceError).
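For clarity, the failing path is roughly this (a hedged sketch, not the SDK sample verbatim; the exact signature and error helpers moved around between SDK 0.6 and 0.8, so treat them as assumptions, and hmd/bufferW/bufferH come from earlier setup):

ovrSwapTextureSet* textureSet = nullptr;
ovrResult result = ovrHmd_CreateSwapTextureSetGL(hmd, GL_RGBA8,
                                                 bufferW, bufferH, &textureSet);
if (OVR_FAILURE(result))   // this is where I get -1006 (the ovrServiceError)
{
    // The texture set comes back empty; ask the runtime for details.
    ovrErrorInfo errorInfo;
    ovr_GetLastErrorInfo(&errorInfo);
    printf("CreateSwapTextureSetGL failed: %d (%s)\n", result, errorInfo.ErrorString);
}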
I've seen many reports about problematic OpenGL support on the Oculus Developer Forum. In earlier versions of the SDK, OpenGL support was neglected from 0.2.4 onward and seems to have been resolved from version 0.5 and up (all in Client Rendering Mode). Nothing is said about the newer Direct Mode, except that for some people it failed to work at all if they had a second screen attached, even in DirectX 11. That is not the case for me.
I've also seen people suggest uninstalling NVIDIA's 3D Vision drivers, because they may conflict with the Oculus Rift drivers. They report dramatic framerate improvements, although I only see about a 10% improvement myself. Apparently NVIDIA's GameWorks VR hurts driver performance just by being installed. Unfortunately, uninstalling the drivers does not fix the problem.
The latest driver update (361.34) suggests improved Oculus and OpenGL support in GameWorks VR, as well as Direct Mode support (even for SLI setups, which seems to give pretty impressive results). But that's an NVIDIA-only solution; AMD has LiquidVR as an alternative. I'd still like to use the Oculus SDK stack, though.
I am using both a GeForce 480 and a Titan X.
I went back to the second-screen issue that some people seem to have had. Since DX11 worked for me, I had figured my problem wasn't related.
During my research I found a few interesting forum posts on Reddit suggesting that part of the problem might stem from using multiple monitors. It seems this has since been fixed for DX11, but not for OpenGL.
So I can confirm that turning off any secondary screens connected to secondary cards fixes the problem. For OpenGL, you have to connect ALL your output devices to the SAME card.
I did some more testing:
What worked:
Primary screen AND Oculus both connected to the Titan X (the 480 not connected).
Connecting both screens and the Oculus to the Titan X also worked(!)
What did not work:
Connecting the primary to the Titan and the Oculus to the 480 does not work.
Connecting the primary to the 480 and the Oculus to the Titan also does not work.
So it seems to be a driver issue with the graphics device enumeration.
Note: this was AFTER I removed the NVIDIA 3D Vision drivers and updated to build 361.43, so it might also still have been related to having them installed. If someone can confirm this, it would be nice to know.
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader on it that colors the pixels based on the absolute value of their model space coordinates. The junk faces appear to spin with the cube as though they were attached to it and often junk triangles or quads flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual is that the behavior is highly inconsistent: starting the exact same application repeatedly, without changing anything else on the system, produces different results, sometimes bugged and sometimes not. The arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD Radeon HD 7970 GPU (it is a couple of years old, and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed creating the versions I instructed it to; see the sketch after this list).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
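To rule out the context request itself, here is a minimal sketch (assuming SDL2 and GLEW, not my actual wrapper code) that explicitly asks for a 3.3 core profile and prints back what the driver actually created:

#include <SDL.h>
#include <GL/glew.h>
#include <cstdio>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    SDL_Window* window = SDL_CreateWindow("version check",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480,
        SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);

    glewExperimental = GL_TRUE;
    glewInit();

    // If these don't match the request, the context creation itself is suspect.
    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}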
This looks like a broken graphics card to me. Most likely it is a problem with the memory (either the memory itself, or a soldering problem). Artifacts like the ones you see can happen if, for some reason, the address for a memory operation does not fully settle, or isn't set at all, before the read starts; that can happen due to a bad connection between the GPU and the memory (failed solder joints) or because the memory itself has failed.
Solution: buy a new graphics card. You may try resoldering it using a reflow process; there are tutorials on how to do this DIY, but a proper reflow oven gives better results.
I'm having a slight issue with the GLFW library and VSync. I'm testing a very basic GLFW program on both my integrated processor and my "high performance NVIDIA processor".
When running the program on the integrated processor with the VSync call glfwSwapInterval(1), I get around 16 ms/frame (~60 FPS), as expected. However, when running the same program on the NVIDIA processor with the same VSync call, the frame time rises to around 30 ms/frame (~30 FPS). I also tried the program without the glfwSwapInterval call: it behaved as expected on the integrated processor (less than 1 ms/frame), but on the NVIDIA processor I was getting around 24 ms/frame, which definitely isn't correct. When running the program with the call glfwSwapInterval(0), both processors run as expected at less than 1 ms/frame.
At first I figured maybe this might be a GLFW issue, but I'm not quite sure anymore. I checked the settings for the NVIDIA processor, and they state that the VSync option is controlled by the application, as it should be.
Again, this is a basic GLFW program with no draw calls whatsoever. Any insight into what could be causing the issue would be much appreciated; I can provide more information if needed.
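For reference, the program is essentially this shape (a minimal sketch assuming GLFW 3, not my exact source): it only clears, swaps with vsync enabled, and prints the measured frame time.

#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "vsync test", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);            // 1 = vsync on, 0 = off

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT);   // no real draw calls
        glfwSwapBuffers(window);
        glfwPollEvents();

        double now = glfwGetTime();
        printf("%.2f ms/frame\n", (now - last) * 1000.0);
        last = now;
    }
    glfwTerminate();
    return 0;
}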