Can't go fullscreen under DirectX12 - c++

I have an app that uses DirectX 12, and I wanted to support fullscreen mode in it.
However, each time I called IDXGISwapChain::SetFullscreenState(), I got this error:
DXGI ERROR: IDXGISwapChain::GetContainingOutput: The swapchain's adapter
does not control the output on which the swapchain's window resides.
The error code returned by IDXGISwapChain::SetFullscreenState() was
0x887A0004 (DXGI_ERROR_UNSUPPORTED).
My computer has two GPUs:
Intel(R) HD Graphics 630 and
NVIDIA GeForce GTX 1060.
The latter was the adapter used to create the D3D12 device when the error occurred; with the former, there was no error.
The code flow used to create an IDXGISwapChain3 was:
//variables used in the code example
ID3D12Device *pDevice;
IDXGIFactory4 *pDXGIFactory;
IDXGIAdapter *pAdapter;
ID3D12CommandQueue *pCommandQueue;
IDXGISwapChain1 *pTempSwapchain;
IDXGISwapChain3 *pSwapchain;
//the code flow (arguments abbreviated)
CreateDXGIFactory2(..., IID_PPV_ARGS(&pDXGIFactory));
pDXGIFactory->EnumAdapters(..., &pAdapter);
D3D12CreateDevice(pAdapter, ...);
pDevice->CreateCommandQueue(..., IID_PPV_ARGS(&pCommandQueue));
pDXGIFactory->CreateSwapChainForHwnd(pCommandQueue, ..., &pTempSwapchain);
pTempSwapchain->QueryInterface(IID_PPV_ARGS(&pSwapchain));
IDXGISwapChain::SetFullscreenState() should succeed here, but it fails.

Note that the problem is a specific combination of a quirk of the DXGI debug layer and the way "hybrid graphics" solutions are implemented, which depends on which device you are using at the time.
The best option here is simply to suppress the error:
#if defined(_DEBUG)
    // Enable the debug layer (requires the Graphics Tools "optional feature").
    //
    // NOTE: Enabling the debug layer after device creation will invalidate the active device.
    {
        ComPtr<ID3D12Debug> debugController;
        if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(debugController.GetAddressOf()))))
        {
            debugController->EnableDebugLayer();
        }
        else
        {
            OutputDebugStringA("WARNING: Direct3D Debug Device is not available\n");
        }

        ComPtr<IDXGIInfoQueue> dxgiInfoQueue;
        if (SUCCEEDED(DXGIGetDebugInterface1(0, IID_PPV_ARGS(dxgiInfoQueue.GetAddressOf()))))
        {
            dxgiInfoQueue->SetBreakOnSeverity(DXGI_DEBUG_ALL, DXGI_INFO_QUEUE_MESSAGE_SEVERITY_ERROR, true);
            dxgiInfoQueue->SetBreakOnSeverity(DXGI_DEBUG_ALL, DXGI_INFO_QUEUE_MESSAGE_SEVERITY_CORRUPTION, true);

            DXGI_INFO_QUEUE_MESSAGE_ID hide[] =
            {
                80 /* IDXGISwapChain::GetContainingOutput: The swapchain's adapter does not control the output on which the swapchain's window resides. */,
            };
            DXGI_INFO_QUEUE_FILTER filter = {};
            filter.DenyList.NumIDs = _countof(hide);
            filter.DenyList.pIDList = hide;
            dxgiInfoQueue->AddStorageFilterEntries(DXGI_DEBUG_DXGI, &filter);
        }
    }
#endif
I use this in all my templates on GitHub.

I have found a solution: use the IDXGIFactory6::EnumAdapterByGpuPreference() method instead of IDXGIFactory::EnumAdapters(), and the error disappears.
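For reference, a minimal sketch of what that enumeration can look like (my illustration, not the asker's code; it assumes dxgi1_6.h, Microsoft::WRL::ComPtr, and Windows 10 version 1803 or later):
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<IDXGIFactory6> factory;
if (SUCCEEDED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
{
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         SUCCEEDED(factory->EnumAdapterByGpuPreference(
             i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
             IID_PPV_ARGS(adapter.ReleaseAndGetAddressOf())));
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the Microsoft Basic Render Driver

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
            break; // create the swapchain on this adapter/device pair
    }
}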

Related

SDL 2 Cannot Open Controller but Joystick Recognized

I am writing a C++ program using SDL 2 for the platform layer and OpenGL for graphics and rendering. I have a fully working prototype with keyboard and mouse input. Now I am trying to use SDL's game controller API to connect a gamepad (to replace or supplement keyboard controls). Unfortunately the controller does not seem to be recognized, despite the fact that it works perfectly with other software. It's a Sony DualShock 4 (for the PlayStation 4 system). My system is Mac OS 10.9.5, and I am using SDL 2.0.5 with the official community controller database for SDL 2.0.5, which contains PS4 controller mappings:
030000004c050000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X,
4c05000000000000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X
I also added a new mapping using one of the official tools. That also loads successfully according to the relevant function call.
The following is my code, and it's about as close to a minimal example as I can get:
// in main
// window and graphics context initialization here

// initialize SDL
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER | SDL_INIT_HAPTIC) < 0) {
    fprintf(stderr, "%s\n", "SDL could not initialize");
    return EXIT_FAILURE;
}

// load controller mappings; I tested this and 35 mappings load successfully, which is expected
SDL_GameControllerAddMappingsFromFile("./mapping/gamecontrollerdb_205.txt");

// the controller handle
SDL_GameController* controller = nullptr;

// max_joysticks is 1, which means the device is at least connected
int max_joysticks = SDL_NumJoysticks();
if (max_joysticks < 1) {
    return EXIT_FAILURE;
}

// this branch is taken and we return: the joystick exists,
// but it isn't recognized as a game controller
if (!SDL_IsGameController(0)) {
    return EXIT_FAILURE;
}

// I never get past this
controller = SDL_GameControllerOpen(0);
fprintf(stdout, "CONTROLLER: %s\n", SDL_GameControllerName(controller));
Has anyone encountered this problem? I've done some preliminary searching as I mentioned, but it seems that usually either the number of joysticks is 0, or everything is recognized.
Also, SDL_CONTROLLERDEVICEADDED isn't firing when I connect the controller.
The controller is connected via USB before I start the program. Also, this is one of the newer controllers, and I'm not sure whether the mappings work with that revision. I assume they do, considering that there are two distinct entries.
Thank you.
EDIT:
I double-checked: the PS4 controller works fine as a joystick, but it isn't recognized as a controller, which means the mapping is incorrect or non-existent. This may be because my controller is "version 2" of the DualShock 4, and I'm not sure whether a 2.0.5-compatible mapping was ever added.
The controller was recognized as a joystick but not as a controller, meaning that none of the available mappings I could find (in 2.0.5 controller-mapping format) corresponded to the controller. Updating from SDL 2.0.5 to 2.0.8 also updated the available mappings, and now the controller is recognized as a game controller.
Note: normally it is a terrible idea to upgrade tools mid-project, but in this case it was safe to do.
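As a side note on the SDL_CONTROLLERDEVICEADDED observation above: here is a minimal sketch (my own illustration, not the poster's code; it assumes SDL 2.0.8+) of opening controllers through that event, which SDL also delivers at startup for devices already attached:
#include <SDL.h>
#include <stdio.h>

int main(int argc, char** argv) {
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0)
        return 1;

    SDL_GameController* controller = NULL;
    SDL_Event e;
    int running = 1;
    while (running && SDL_WaitEvent(&e)) {
        switch (e.type) {
        case SDL_CONTROLLERDEVICEADDED:
            // For this event, e.cdevice.which is a device index.
            if (!controller && SDL_IsGameController(e.cdevice.which)) {
                controller = SDL_GameControllerOpen(e.cdevice.which);
                printf("Opened controller: %s\n", SDL_GameControllerName(controller));
            }
            break;
        case SDL_CONTROLLERDEVICEREMOVED:
            // For this event, e.cdevice.which is an instance id instead.
            if (controller && e.cdevice.which == SDL_JoystickInstanceID(
                    SDL_GameControllerGetJoystick(controller))) {
                SDL_GameControllerClose(controller);
                controller = NULL;
            }
            break;
        case SDL_QUIT:
            running = 0;
            break;
        }
    }
    if (controller) SDL_GameControllerClose(controller);
    SDL_Quit();
    return 0;
}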

OpenGL supports only the default graphics adapter

For some reason I can no longer run the OculusRoomTiny sample program; I keep getting this pop-up: "OpenGL supports only the default graphics adapter."
It's being triggered by this code shown below in main.cpp:
if (Compare(luid, GetDefaultAdapterLuid())) // If the LUID the Rift is on is not the default adapter LUID...
{
    VALIDATE(false, "OpenGL supports only the default graphics adapter.");
}
and
static ovrGraphicsLuid GetDefaultAdapterLuid()
{
    ovrGraphicsLuid luid = ovrGraphicsLuid();

#if defined(_WIN32)
    IDXGIFactory* factory = nullptr;
    if (SUCCEEDED(CreateDXGIFactory(IID_PPV_ARGS(&factory))))
    {
        IDXGIAdapter* adapter = nullptr;
        if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
        {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);
            memcpy(&luid, &desc.AdapterLuid, sizeof(luid));
            adapter->Release();
        }
        factory->Release();
    }
#endif

    return luid;
}
I've never had this issue before. I haven't changed any code and have reinstalled the SDK, but I still get the same problem. Did something happen to my headset? Why isn't the LUID the same? I'm using the DK2 and SDK 1.9.0.
When I comment out the VALIDATE statement, the program runs, but the Oculus just gets stuck on the "please wait" screen forever.
Thanks for your help in advance!
I had the same problem.
I noticed that the app was trying to use my onboard Intel graphics card.
I solved the problem by changing the NVIDIA settings in Windows to make it the default graphics card.
Hope that helps.
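A related per-application approach on hybrid (Optimus/PowerXpress) laptops, offered as a suggestion rather than as part of the answer above: exporting these well-known symbols from the executable asks the drivers to prefer the discrete GPU for this process, which may make it the default adapter without changing global settings:
// Exported symbols recognized by the NVIDIA and AMD Windows drivers;
// they request the high-performance GPU for this executable only.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}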

What is the correct way to get OpenCL to play nice with OpenGL in Qt5?

I have found several unofficial sources for how to get OpenCL to play nice with OpenGL and Qt5, each with different levels of complexity:
https://github.com/smistad/Qt-OpenGL-OpenCL-Interoperability
https://github.com/petoknm/QtOpenCLGLInterop
http://www.krazer.com/?p=109
Having these examples is nice; however, they don't answer the following question: what exact steps are the minimum required to have a Qt5 widgets program display the result of a calculation made in an OpenCL kernel and then transferred directly to the attached OpenGL context initiated by Qt5?
Sub-questions include:
What is the correct way to expose the OpenGL context in Qt5 to OpenCL?
How do I initiate my Qt5 app in the first place to make sure that the OpenGL context is correctly set up for use with OpenCL?
How should OpenCL be initiated to be compatible with the OpenGL context in Qt5?
What quirks must I look out for to have this working across the platforms that Qt5 supports?
Is there such a thing as an "official" way to do this, or is Digia working on one?
Note, I am primarily interested in using OpenGL as a widget as opposed to a window/full-screen.
I have used Qt5 and OpenCL together on Mac, Linux and Windows with the following strategy:
Create a QGLWidget and GL context. (This example creates two GL contexts: one for Qt/visualization in the QGLWidget and one for OpenCL, called mainGLContext, which is useful when doing multi-threading. The two contexts will be able to share data.)
QGLWidget* widget = new QGLWidget;
QGLContext* mainGLContext = new QGLContext(QGLFormat::defaultFormat(), widget);
mainGLContext->create();
Create an OpenCL context using the OpenGL context. This is platform-specific: on Linux you use GLX, on Windows WGL, and on Mac CGL share groups. Below is the function I use to create the context properties for interoperability. The display variable is used on Linux and Windows; you can get it with glXGetCurrentDisplay() and wglGetCurrentDC() respectively.
cl_context_properties* createInteropContextProperties(
        const cl::Platform &platform,
        cl_context_properties OpenGLContext,
        cl_context_properties display) {
#if defined(__APPLE__) || defined(__MACOSX)
    // Mac
    CGLSetCurrentContext((CGLContextObj)OpenGLContext);
    CGLShareGroupObj shareGroup = CGLGetShareGroup((CGLContextObj)OpenGLContext);
    if(shareGroup == NULL)
        throw Exception("Not able to get sharegroup");
    cl_context_properties * cps = new cl_context_properties[3];
    cps[0] = CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE;
    cps[1] = (cl_context_properties)shareGroup;
    cps[2] = 0;
#else
#ifdef _WIN32
    // Windows
    cl_context_properties * cps = new cl_context_properties[7];
    cps[0] = CL_GL_CONTEXT_KHR;
    cps[1] = OpenGLContext;
    cps[2] = CL_WGL_HDC_KHR;
    cps[3] = display;
    cps[4] = CL_CONTEXT_PLATFORM;
    cps[5] = (cl_context_properties) (platform)();
    cps[6] = 0;
#else
    // Linux
    cl_context_properties * cps = new cl_context_properties[7];
    cps[0] = CL_GL_CONTEXT_KHR;
    cps[1] = OpenGLContext;
    cps[2] = CL_GLX_DISPLAY_KHR;
    cps[3] = display;
    cps[4] = CL_CONTEXT_PLATFORM;
    cps[5] = (cl_context_properties) (platform)();
    cps[6] = 0;
#endif
#endif
    return cps;
}
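A hedged usage sketch for the function above (my addition, not part of the original answer; it assumes the cl.hpp C++ wrapper, a Linux/GLX build, and that the GL context is current on the calling thread):
widget->makeCurrent(); // ensure the Qt GL context is current on this thread

std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);

// On Windows you would pass wglGetCurrentContext() / wglGetCurrentDC() instead.
cl_context_properties* cps = createInteropContextProperties(
    platforms[0],
    (cl_context_properties)glXGetCurrentContext(),
    (cl_context_properties)glXGetCurrentDisplay());

cl::Context clContext(CL_DEVICE_TYPE_GPU, cps);
delete[] cps;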
Often you want to do multi-threading: one thread does the Qt event handling while another does the OpenCL processing. Remember to make the GL context "current" in each thread. Use the makeCurrent and moveToThread functions on the QGLContext object for this. You can find details on how I have done this here: https://github.com/smistad/FAST/blob/master/source/FAST/Visualization/Window.cpp
I don't know of a Qt OpenCL wrapper to create the OpenCL context.
After working on this some more I felt compelled to add some more info. Erik Smistad's answer is correct and will remain accepted; however, it is only one of several ways to do this.
Based on this article there are at least three ways to do OpenGL/OpenCL interop (a sketch of the first appears after the list):
1. Sharing an OpenGL texture directly with OpenCL. Pros: fastest route, since everything is zero-copy. Cons: severely limits the supported data formats.
2. Sharing an OpenGL PBO with OpenCL and copying from it into a texture. Second quickest, but forces a memory copy.
3. Mapping the output buffer in OpenCL to host memory and uploading the texture from there. Slowest when running OpenCL on the GPU, fastest when running OpenCL on the CPU. It forces data to be copied to host memory and back, but is the most flexible with respect to data formats.
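A minimal sketch of option 1 (my illustration, not from the article; it assumes an interop-enabled cl::Context and cl::CommandQueue named clContext and queue, the OpenCL 1.2 cl.hpp wrapper, and an existing RGBA8 texture):
GLuint tex = /* an existing GL_TEXTURE_2D with GL_RGBA8 storage */;

// Wrap the GL texture as an OpenCL image (zero-copy; this maps to
// clCreateFromGLTexture from OpenCL 1.2).
cl::ImageGL sharedImage(clContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, tex);
std::vector<cl::Memory> globjects = { sharedImage };

glFinish();                                // GL must be done with the texture
queue.enqueueAcquireGLObjects(&globjects); // hand ownership to OpenCL
// ... enqueue a kernel that writes to sharedImage ...
queue.enqueueReleaseGLObjects(&globjects); // hand it back to OpenGL
queue.finish();                            // CL must finish before GL samples it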

GLX/GLEW order of initialization catch-22: GLXEW_ARB_create_context, glXCreateContextAttribsARB, glXCreateContext

Currently I'm working on an application that uses GLEW and GLX (on X11).
The logic works as follows...
glewInit(); /* <- needed so 'GLXEW_ARB_create_context' is set! */

if (GLXEW_ARB_create_context) {
    /* OpenGL >= 3.0 */
    .. get fb_config ..
    context = glXCreateContextAttribsARB(...);
}
else {
    /* legacy context */
    context = glXCreateContext(...);
}
The problem I'm running into is that GLXEW_ARB_create_context is initialized by GLEW, but initializing GLEW calls glGetString(), which crashes if it's called before a context exists (i.e. before glXCreateContextAttribsARB / glXCreateContext).
Note that this only happens with Mesa's software rasterizer (libGL.so compiled with swrast), so it's possibly a problem with Mesa too.
Correction: this works on Mesa-SWRast and NVIDIA's proprietary OpenGL drivers, but segfaults with Intel's OpenGL.
Though it's possible this is a bug in the Intel drivers; I need to check how other projects handle this.
The cause, in the case of Intel, is that glXGetCurrentDisplay() returns NULL before GLX is initialized (another catch-22).
So for now, as far as I can tell, it's best to avoid GLEW before the GLX context is created, and instead use GLX directly, e.g.:
if (glXQueryExtension(display, NULL, NULL)) {
    const char *glx_ext = glXGetClientString(display, GLX_EXTENSIONS);
    if (check_string_for_extension(glx_ext, "GLX_SOME_EXTENSION")) {
        printf("We have the extension!\n");
    }
}
Old answer...
Found the solution (seems obvious in retrospect!):
1. First call glxewInit().
2. Check GLXEW_ARB_create_context.
3. Create the context with glXCreateContextAttribsARB or glXCreateContext.
4. Call glewInit().
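For completeness, a common workaround for the original catch-22 (my suggestion, not from either answer above): create a throwaway legacy context first so GLEW has something to query, then create the real context and discard the dummy. A sketch, assuming display, visual_info, window and fb_config already exist; error handling omitted:
/* A throwaway legacy context gives GLEW something to query. */
GLXContext dummy = glXCreateContext(display, visual_info, NULL, True);
glXMakeCurrent(display, window, dummy);

glewInit(); /* safe now: a context is current, so glGetString() works */

GLXContext context = dummy;
if (GLXEW_ARB_create_context) {
    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
    };
    context = glXCreateContextAttribsARB(display, fb_config, NULL, True, attribs);
}
if (context != dummy) {
    glXMakeCurrent(display, None, NULL);
    glXDestroyContext(display, dummy);
    glXMakeCurrent(display, window, context);
}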

Qt for iOS locks up when app is launched face up. (qiosscreen.mm assertion)

I'm migrating a Qt for iOS project to Qt 5.5. In iOS 5.1.1 at least, the app fails to start if you launch it with the device face up. An assertion displays an error in qiosscreen.mm at line 344. Here's the Qt source code function that is failing:
Qt::ScreenOrientation QIOSScreen::orientation() const
{
    // Auxiliary screens are always the same orientation as their primary orientation
    if (m_uiScreen != [UIScreen mainScreen])
        return Qt::PrimaryOrientation;

    UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;

    // At startup, iOS will report an unknown orientation for the device, even
    // if we've asked it to begin generating device orientation notifications.
    // In this case we fall back to the status bar orientation, which reflects
    // the orientation the application was started up in (which may not match
    // the physical orientation of the device, but typically does unless the
    // application has been locked to a subset of the available orientations).
    if (deviceOrientation == UIDeviceOrientationUnknown)
        deviceOrientation = UIDeviceOrientation([UIApplication sharedApplication].statusBarOrientation);

    // If the device reports face up or face down orientations, we can't map
    // them to Qt orientations, so we pretend we're in the same orientation
    // as before.
    if (deviceOrientation == UIDeviceOrientationFaceUp || deviceOrientation == UIDeviceOrientationFaceDown) {
        Q_ASSERT(screen());
        return screen()->orientation();
    }

    return toQtScreenOrientation(deviceOrientation);
}
It displays an assertion at
Q_ASSERT(screen());
screen() must be returning 0, so screen()->orientation() is attempting to dereference a null pointer. That screen() function is defined in the parent class, QPlatformScreen:
QScreen *QPlatformScreen::screen() const
{
    Q_D(const QPlatformScreen);
    return d->screen;
}
The constructor for that class initializes d->screen to 0:
QPlatformScreen::QPlatformScreen()
    : d_ptr(new QPlatformScreenPrivate)
{
    Q_D(QPlatformScreen);
    d->screen = 0;
}
From the comments, I infer that d->screen is set at some point when the orientation is portrait or landscape, and the code then falls back to that value when the orientation becomes face up/down. Since the app is starting face up, there is no prior good value to fall back to.
Has anyone else encountered this, or found a solution? By the way, I am not building from Qt source and do not want to, so changing Qt's code is not a solution for me if I can possibly avoid it.
I found that this only seems to occur when I launch the app from Xcode or Qt Creator. If I launch the app on the device the way it normally runs, the bug is avoided.