Windows.Gaming.Input.Gamepads doesn't work? - c++

I've got Windows 10 Creators Update and am writing a UWP C++ app. I've tried both C++/WinRT and C++/WRL, and it seems that Windows.Gaming.Input.Gamepads just refuses to see my wired Xbox 360 controller and my Logitech Xbox 360-style gaming controller. Both are old, but they do show up in Device Manager as Xbox 360 controllers, and both work with other games. I've also tried XInput, but XInput could only see the Logitech, not the Microsoft controller. I would have thought the newer API would pick them up. But here's the crazy part: both controllers work fine in Minecraft, which is a Windows Store application, and by the Store rules I thought only XInput and Windows.Gaming.Input were supported there. So the real question is: what API could they be using that recognizes both controllers, and why wouldn't Microsoft's Windows.Gaming.Input work with a Microsoft controller?
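For reference, here is a minimal C++/WinRT sketch of how gamepad enumeration is usually wired up. One common pitfall is that the static `Gamepad::Gamepads()` list can be empty right at startup, so the `GamepadAdded` event should be registered as well. `InitGamepads` is a hypothetical helper name, not part of the API.

```cpp
// Minimal C++/WinRT sketch (assumes a UWP project with the WinRT headers
// available). Register for GamepadAdded before relying on the static list,
// since Gamepads() may not be populated yet when the app starts.
#include <winrt/Windows.Gaming.Input.h>

using namespace winrt::Windows::Gaming::Input;

void InitGamepads()
{
    // Fires when a controller is connected; store the pad in your own list.
    Gamepad::GamepadAdded([](auto&&, Gamepad const& pad)
    {
        GamepadReading reading = pad.GetCurrentReading();
        // ... keep 'pad' around and poll GetCurrentReading() each frame ...
    });

    // Snapshot of currently known controllers; may still be empty if the
    // added events have not been delivered yet.
    auto pads = Gamepad::Gamepads();
}
```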

Related

Integrating ogre3d with hololens

I would like to integrate Ogre3D with DirectX and C++ on the HoloLens.
Is it possible to do so?
What are the steps to convert the rendering engine, so that what is rendered to the frame buffer goes to the HoloLens buffer?
As RCYR mentioned, to run on the HoloLens you are currently required to use UWP.
Running Ogre in a UWP app
There is a wiki entry which shows how to get an OGRE app running in UWP. First, try to build a simple UWP app without any calls to the HoloLens API at all. Note that you can run ordinary 2D UWP apps, which are not made solely for HoloLens, on the device in a windowed view (see the Mixed Reality documentation for more details about 2D views vs. immersive views).
You can check your UWP app by using the HoloLens Emulator, which is integrated with Visual Studio.
If you just wanted to create a windowed app running on the HoloLens, you are done at this point.
Setting up an immersive view
But more likely you want to create an immersive view to display holograms. There are really helpful samples available in the UWP samples repository. I would recommend looking at the HolographicSpatialMapping sample.
Basically the sample shows how to:
create a HolographicSpace (the core class for immersive views)
initialize the Direct3D device (this can be done by Ogre, as long as the adapter supports the HoloLens)
register for camera added/removed events and create resources (buffers and views) for each camera
use the SpatialCoordinateSystem and locate the viewer
get a HolographicFrame, render some content, and present
There are a lot of basic functions in the sample's CameraResources and DeviceResources classes that you can simply copy and paste.
For development you should use the HoloLens Emulator (mentioned above) and the Visual Studio Graphics Debugger, which is fully supported with the HoloLens Emulator, so you can easily debug what is going on in Direct3D.
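The per-frame flow from the list above can be sketched roughly as follows (C++/WinRT names; this assumes the HolographicSpace was already created from the CoreWindow and the Direct3D device registered with it, as the sample does — `RenderOneFrame` is a hypothetical function name):

```cpp
// Rough sketch of the immersive-view render loop used by the UWP samples.
#include <winrt/Windows.Graphics.Holographic.h>

using namespace winrt::Windows::Graphics::Holographic;

void RenderOneFrame(HolographicSpace const& space)
{
    // 1. Ask the system for the next frame; it carries the camera poses.
    HolographicFrame frame = space.CreateNextFrame();
    HolographicFramePrediction prediction = frame.CurrentPrediction();

    for (HolographicCameraPose const& pose : prediction.CameraPoses())
    {
        // 2. Bind the buffers/views created for this camera in the
        //    camera-added handler, then draw the (Ogre) scene here.
    }

    // 3. Present against the same prediction the content was rendered with.
    frame.PresentUsingCurrentPrediction();
}
```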

Windows 10 UI in WinAPI

I'm attempting to build a UI similar to the ones coming out of Microsoft these days, particularly those targeting the Windows 10 operating system (a la Office 2016).
Currently I use WinAPI, but all of the controls provided by Windows.h and CommCtrl.h appear to be legacy/old-style UI elements. I'm particularly looking for the title bar/menu/status bar elements (the main client area will consist of a GDI/Direct2D context, so nothing special is necessary there).
I found some information pointing to XAML, but I don't think that's what I want. WPF seems to be a more likely candidate, but I'm not sure if that's the case either.
I would like this to be 100% native (WinAPI/C/C++), but if there is absolutely no other option I can use C# for the UI and stub in the native code.
You use XAML and either C++, C# or JavaScript to write a Windows Store (previously Metro) app. If you use C++, the app is 100% native, but if you use C# or JavaScript, of course the required virtual machine is used.
The API that your code calls is WinRT, which looks like Silverlight. In addition, your app can also call some, but not all, Win32 APIs, similar to how .NET apps can call Win32 (e.g., by using P/Invoke). However, even if you use C++ and your app is thus 100% native, it is still sandboxed like a browser, meaning it cannot do things like access the entire disk or write to HKLM in the registry. This is for security: a Windows Store app needs to be safe, and thus more limited, like a mobile app you buy from the Apple App Store. This means that you can't call, for example, CreateFile, whose documentation says:
Minimum supported client
Windows XP [desktop apps only]
When MS says 'desktop apps' as above, it means Win32 apps, which excludes Windows Store apps. This is confusing, because on Windows 8/8.1 these Windows Store apps are full screen, but on Windows 10 they are resizable and overlapping, appearing next to and mixed in with traditional Win32 apps like Explorer and Task Manager. So even though they appear on the same desktop as desktop apps, they are not desktop apps.
I believe if a Windows Store app also targets Windows Phone 10, Windows IoT, etc. then it is called a Windows Universal app.

How to make GUI with kinect SDK Application SkeletonBasics-D2D?

I have made a project using the SkeletonBasics-D2D sample of the Kinect for Xbox 360 in C++, for gesture recognition. I have also used OpenCV in this project. Now I want to make a GUI for this project for better presentation, but I am not able to do this using a Windows Forms application. I am new to Visual Studio 2010 and Kinect. Kindly help me out of this problem.
Is there a reason you're not using WPF instead of a Windows Forms application? The Kinect for Windows SDK samples use WPF due to its stronger capabilities for displaying and dealing with visual elements. The learning curve from Windows Forms to WPF isn't that steep, in my opinion, and there are plenty of blogs that can help you get started or answer most questions you'll have starting off:
Here's a list of WPF blogs: http://blogs.msdn.com/b/jaimer/archive/2009/03/12/wpf-bloggers.aspx
My favorite is the 2000 things blog.
In WPF Kinect applications, there's typically an Image element on the XAML side whose source is set to a WriteableBitmap property in the code-behind. On each color-stream-ready event (or any stream, for that matter) you can write the new set of pixels to the WriteableBitmap, and the WPF Image element is instantly updated. I haven't tried using Windows Forms, but I think it's a lot more work for a less clean result.

Windows Phone 8 touch positions in Direct3D

I can't seem to find any documentation on how to handle touches in a purely native C++ Direct3D application on Windows Phone 8. Has anybody managed to get touch input into their game? Everything I read online is either related to XAML/Silverlight or desktop Metro apps. I have been told by many people at Microsoft that this feature is supported, so I know that it can be done.
I think at some level your C++ Direct3D app on Windows Phone 8 (or Windows 8) will be using CoreWindow.
CoreWindow has pointer events:
http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.core.corewindow.pointerpressed.aspx?cs-save-lang=1&cs-lang=cpp#code-snippet-1
These are available from C++ as well.
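In the C++/CX syntax of that era, subscribing from a Direct3D-only app looks roughly like the sketch below. The `App` class and `OnPointerPressed` handler names are placeholders standing in for your `IFrameworkView` implementation:

```cpp
// Hedged sketch: hooking CoreWindow pointer events in a pure-native
// Direct3D app (C++/CX). SetWindow is where the app receives its CoreWindow.
void App::SetWindow(Windows::UI::Core::CoreWindow^ window)
{
    window->PointerPressed +=
        ref new Windows::Foundation::TypedEventHandler<
            Windows::UI::Core::CoreWindow^,
            Windows::UI::Core::PointerEventArgs^>(this, &App::OnPointerPressed);
}

void App::OnPointerPressed(Windows::UI::Core::CoreWindow^ sender,
                           Windows::UI::Core::PointerEventArgs^ args)
{
    // Touch/mouse/pen position in DIPs, relative to the window.
    Windows::Foundation::Point p = args->CurrentPoint->Position;
    // ... route p.X / p.Y into the game's input handling ...
}
```

The same pattern applies to PointerMoved and PointerReleased for drags and taps.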

Windows Mobile 6.5 Change the camera focus

I have a project to scan QR codes or barcodes with the camera on Windows Mobile (phone: X01T).
Programming in C++ and using DirectShow.
I tried to change the focus with the IAMCameraControl interface, but it returns an error like "...request is not supported".
Is there any other way?
Thanks
Most (if not all) Windows Mobile phones I've used so far have custom camera drivers, which means the OEMs decide which functionality to implement/support. IAMCameraControl is most likely not among them.
However, you might want to look for OEM-specific SDKs. For instance, Samsung provides custom APIs that let you change parameters such as camera focus or ISO. Maybe such APIs exist for your device.
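For completeness, this is how the IAMCameraControl focus call is normally made on desktop Windows; on a Windows Mobile OEM driver the same calls would fail as the question describes. `TrySetFocus` is a hypothetical helper, and `pCaptureFilter` is assumed to be the capture source filter already in the graph:

```cpp
// Hedged DirectShow sketch: query and set the focus property. An OEM driver
// that does not support it typically fails QueryInterface, GetRange, or Set.
#include <dshow.h>

HRESULT TrySetFocus(IBaseFilter* pCaptureFilter, long focusValue)
{
    IAMCameraControl* pCamCtrl = nullptr;
    HRESULT hr = pCaptureFilter->QueryInterface(IID_IAMCameraControl,
                                                (void**)&pCamCtrl);
    if (FAILED(hr))
        return hr;  // driver does not expose the interface at all

    long minV, maxV, step, defV, caps;
    hr = pCamCtrl->GetRange(CameraControl_Focus,
                            &minV, &maxV, &step, &defV, &caps);
    if (SUCCEEDED(hr))
        hr = pCamCtrl->Set(CameraControl_Focus, focusValue,
                           CameraControl_Flags_Manual);

    pCamCtrl->Release();
    return hr;  // a failure here matches the "request is not supported" error
}
```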