Integrating Ogre3D with HoloLens - C++

I would like to integrate Ogre3D with DirectX and C++ on the HoloLens.
Is it possible to do so?
What are the steps to convert the rendering engine, i.e. to get what is rendered to the frame buffer into the HoloLens buffer?

As RCYR mentioned, to run on the HoloLens you are currently required to use UWP.
Running Ogre in a UWP app
There is a wiki entry which shows how to get an OGRE app running in UWP. First, try to build a simple UWP app without any calls to the HoloLens API at all. Note that ordinary 2D UWP apps, not only those made for the HoloLens, can run on the device in a windowed view (see the Mixed Reality documentation for more details about 2D views vs. immersive views).
You can check your UWP app by using the HoloLens Emulator, which is integrated with Visual Studio.
If you just wanted to create a windowed app running on the HoloLens, you are done at this point.
Setting up an immersive view
But more likely you want to create an immersive view to display holograms. There are really helpful samples available in the UWP samples repository; I would recommend looking at the HolographicSpatialMapping sample.
Basically the sample shows how to:
create a HolographicSpace (core class for immersive views)
initialize the Direct3D device (this can be done by Ogre, as long as the adapter supports the HoloLens)
register for camera added/removed events and create resources (buffers and views) for each camera
use the SpatialCoordinateSystem and locate the viewer
get a HolographicFrame, render some content and present
There are a lot of basic functions in the sample's CameraResources and DeviceResources classes that you can just copy and paste.
For development you should use the HoloLens Emulator (mentioned above) and the Visual Studio Graphics Debugger, which is fully supported with the HoloLens Emulator, so you can easily debug what is going on in Direct3D.
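To make those steps concrete, here is a minimal C++/WinRT sketch of the core HolographicSpace calls. It is only a sketch, not a complete app: you still need an IFrameworkView, a Direct3D device created on the adapter the HolographicSpace reports, and per-camera render targets, exactly as the sample's DeviceResources and CameraResources classes show. The function name RunHolographicLoop is illustrative, not part of any API.

// Minimal C++/WinRT sketch of the core immersive-view calls.
// NOT a complete app: an IFrameworkView, D3D11 device setup and
// per-camera back buffers (see the sample) are omitted.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Graphics.Holographic.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Core.h>

using namespace winrt::Windows::Graphics::Holographic;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Core;

void RunHolographicLoop(CoreWindow const& window)
{
    // 1. Create the HolographicSpace for the app's CoreWindow.
    HolographicSpace space = HolographicSpace::CreateForCoreWindow(window);

    // 2. React to cameras being added/removed so you can create and
    //    release per-camera resources (CameraResources in the sample).
    space.CameraAdded([](HolographicSpace const&,
                         HolographicSpaceCameraAddedEventArgs const& args)
    {
        auto deferral = args.GetDeferral();
        // ... create size-dependent resources for args.Camera() ...
        deferral.Complete();
    });

    // 3. Locate the viewer: a stationary frame of reference yields a
    //    SpatialCoordinateSystem to position holograms in.
    SpatialLocator locator = SpatialLocator::GetDefault();
    SpatialStationaryFrameOfReference reference =
        locator.CreateStationaryFrameOfReferenceAtCurrentLocation();
    SpatialCoordinateSystem coords = reference.CoordinateSystem();
    // ... hand coords to your renderer for positioning holograms ...

    // 4. Per frame: get a HolographicFrame, render to each camera's
    //    back buffer using the predicted pose, then present.
    while (true)  // in a real app this lives in the CoreWindow loop
    {
        HolographicFrame frame = space.CreateNextFrame();
        HolographicFramePrediction prediction = frame.CurrentPrediction();
        for (HolographicCameraPose const& pose : prediction.CameraPoses())
        {
            // ... bind the camera's render target and draw (e.g. via Ogre) ...
        }
        frame.PresentUsingCurrentPrediction();
    }
}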

Related

C++ Project Types for 3D Modelling and Game Control Program

I am a freshman programmer. I am trying to develop control software for a VR controller. It will use data from an array of rotary potentiometers to move a wireframe. The modeling software will apply textures and set the collision mask. These two programs are meant to be as fast as possible while running, to integrate directly into games, and, if possible, to exist as one program. I am using Visual Studio 2019 and was wondering which project type I should use.
Edit 1: I should clarify that this will be a game development tool. One part of it will only be used in the game, and the other will be used to develop controls and create character skins and textures. The target platform is Windows.
I suggest you try DirectX for 3D game development. You could create a Windows desktop application or a UWP application.
To build DirectX desktop games, choose the “Game development with C++” workload under the “Mobile & Gaming” category.
To build a DirectX desktop app, you can start with the Win32 Project template in the New Project dialog, or download a Win32 game template, or download a sample from DirectX11 samples or DirectX12 samples as a starting point.
You could also build a DirectX game for UWP.
For more details, I suggest you refer to this link: https://devblogs.microsoft.com/cppblog/directx-game-development-with-c-in-visual-studio/
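If you prefer to start from an empty Win32 project rather than a template, the first DirectX step is creating the device and context. A minimal sketch follows; the helper name CreateDeviceAndContext is illustrative, while D3D11CreateDevice itself is the standard D3D11 entry point:

// Minimal sketch: create a D3D11 device on the default adapter,
// roughly what the Visual Studio DirectX templates do for you.
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

bool CreateDeviceAndContext(ComPtr<ID3D11Device>& device,
                            ComPtr<ID3D11DeviceContext>& context)
{
    D3D_FEATURE_LEVEL obtained{};
    // nullptr adapter = default adapter; hardware driver type.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
        0,                      // creation flags (e.g. D3D11_CREATE_DEVICE_DEBUG)
        nullptr, 0,             // default feature levels
        D3D11_SDK_VERSION,
        device.GetAddressOf(), &obtained, context.GetAddressOf());
    return SUCCEEDED(hr);
}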
You should use an Empty Project if you want to run your program in the Windows command prompt. If you want to run it like any other Windows software with a graphical interface, then you need to use Windows Desktop Application.
In my opinion, you should use an Empty Project for this program.

Options for Camera Enumeration and HAL for UWP App (WPF/C++ CLI DLL)

I am creating a video processing application. It is written using a mixture of WPF and C++/CLI (a DLL). I currently connect to a machine vision camera and use a few functions from the camera's native driver, e.g. I grab image data and set the hardware region-of-interest (ROI).
I am currently using Windows 10. The application has been converted to UWP with the Desktop Bridge.
What I would like is some sort of hardware abstraction layer to connect to a range of cameras and to access image data and ROI functions (if available).
I was wondering if someone experienced in this could take me through the options (if they exist) and the main considerations.
When I web-search I get lost in the results (for example, is Windows Media Foundation a possibility, and if not, why not?). Many of the results are pretty old.
So really I would like someone to give me a few pointers so I can feel sure I am on the right track.
It is impossible to use DirectShow cameras from UWP; see the MSDN article Win32 and COM for Universal Windows Platform (UWP) apps (multimedia). You can use DirectShow cameras by calling them directly as COM objects, but that works only on desktop Windows with full COM support. The Universal Windows Platform (UWP) is a platform for programming on desktop and mobile, which are Windows variants with different architectures; UWP is an abstraction layer for simple deployment on those different platforms, and that leads to limited functionality.
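For what it's worth, a hedged sketch of the UWP-sanctioned alternative (not covered by the answer above): inside UWP, cameras are enumerated through Windows.Devices.Enumeration, and the IDs it returns can be fed to Windows.Media.Capture, which sits on top of Media Foundation. The function name ListCamerasAsync is illustrative:

// Sketch: enumerate video-capture devices the UWP way (C++/WinRT).
#include <cstdio>
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Devices.Enumeration.h>

using namespace winrt;
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::Devices::Enumeration;

IAsyncAction ListCamerasAsync()
{
    // Find every device in the VideoCapture class.
    DeviceInformationCollection cameras =
        co_await DeviceInformation::FindAllAsync(DeviceClass::VideoCapture);

    for (DeviceInformation const& cam : cameras)
    {
        // cam.Id() can later be passed to MediaCapture initialization
        // settings to open this specific camera.
        wprintf(L"%s (%s)\n", cam.Name().c_str(), cam.Id().c_str());
    }
}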

DirectX11 Desktop duplication not working with NVIDIA

I'm trying to use the DirectX desktop duplication API.
I tried running the examples from
http://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk
and from
https://code.msdn.microsoft.com/windowsdesktop/Desktop-Duplication-Sample-da4c696a
Both of these are examples of screen capture using DXGI.
I have an NVIDIA GeForce GTX 1060 with Windows 10 Pro on the machine. It has an Intel Core i7-6700HQ processor.
These examples work perfectly fine when NVIDIA Control Panel > 3D Settings is set to auto-select the processor.
However, if I set it manually to the NVIDIA graphics card, the samples stop working.
The error occurs at the following line:
//IDXGIOutput1* DxgiOutput1
hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);
The error in hr (HRESULT) is DXGI_ERROR_UNSUPPORTED (0x887A0004).
I'm new to DirectX and I don't know the issue here; is DirectX desktop duplication not supported on NVIDIA?
If that's the case, then is there a way to select a particular processor at the start of the program, so that it can run with any settings?
Edit
After looking around, I asked the developer (Evgeny Pereguda) of the second sample project on CodeProject.
Here's a link to the discussion
https://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk?msg=5319978#xx5319978xx
I'm posting the screenshot of the discussion on CodeProject in case the original link goes down.
I also found an answer on Stack Overflow which unequivocally stated that this cannot be done with the Desktop Duplication API, referring to a support ticket on Microsoft's support site: https://support.microsoft.com/en-us/help/3019314/error-generated-when-desktop-duplication-api-capable-application-is-ru
A quote from the ticket:
This issue occurs because the DDA does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
However, there are some applications which efficiently duplicate the desktop on Windows in both modes (integrated and discrete graphics) on my machine (https://www.youtube.com/watch?v=bjE6qXd6Itw).
I have looked into the installation folder of Virtual Desktop on my machine and can see the following DLLs of interest:
SharpDX.D3DCompiler.dll
SharpDX.Direct2D1.dll
SharpDX.Direct3D10.dll
SharpDX.Direct3D11.dll
SharpDX.Direct3D9.dll
SharpDX.dll
SharpDX.DXGI.dll
SharpDX.Mathematics.dll
It's probably an indication that this application is using DXGI to duplicate the desktop, or maybe the application is capable of selecting a specific processor before it starts.
Anyway, the question remains: is there any other efficient method of duplicating the desktop in both modes?
The likely cause is a known internal limitation of the Desktop Duplication API, described in Error generated when Desktop Duplication API-capable application is run against discrete GPU:
... when the application tries to duplicate the desktop image against the discrete GPU on a Microsoft Hybrid system, the application may not run correctly, or it may generate one of the following errors:
Failed to create windows swapchain with 0x80070005
CDesktopCaptureDWM: IDXGIOutput1::DuplicateOutput failed: 0x887a0004
The article does not suggest any workaround other than using a different GPU (without more specific detail as to whether that is achievable programmatically):
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
Microsoft introduced a registry value that can be set programmatically to control which GPU an application runs on. Full answer here.
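As a hedged sketch of that mechanism (assuming the per-application GPU preference key that Microsoft added in Windows 10 1803 under HKEY_CURRENT_USER\Software\Microsoft\DirectX\UserGpuPreferences; the helper name PreferIntegratedGpu is illustrative):

// Sketch: ask Windows to run a given executable on the power-saving
// (integrated) GPU, which is what the Desktop Duplication API needs
// on a hybrid system. GpuPreference=1 is power saving, 2 is high
// performance (discrete).
#include <windows.h>
#pragma comment(lib, "advapi32.lib")

bool PreferIntegratedGpu(const wchar_t* exePath)
{
    HKEY key{};
    if (RegCreateKeyExW(HKEY_CURRENT_USER,
            L"Software\\Microsoft\\DirectX\\UserGpuPreferences",
            0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr)
        != ERROR_SUCCESS)
        return false;

    // The value name is the full path of the executable.
    const wchar_t value[] = L"GpuPreference=1;";
    LONG rc = RegSetValueExW(key, exePath, 0, REG_SZ,
                             reinterpret_cast<const BYTE*>(value),
                             sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}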

How to make GUI with kinect SDK Application SkeletonBasics-D2D?

I have made a project using the SkeletonBasics-D2D sample of the Kinect for Xbox 360 in C++, on gesture recognition. I have also used OpenCV in this project. Now I want to make a GUI for this project, for better presentation, but I am not able to do this using a Windows Forms application. I am new to Visual Studio 2010 and Kinect; kindly help me out with this problem.
Is there a reason you're not using WPF instead of a Windows Forms application? The Kinect for Windows SDK samples use WPF due to its stronger capabilities for displaying and dealing with visual elements. The learning curve going from Windows Forms to WPF isn't that huge, imo, and there are plenty of blogs that can help you get started or answer most questions you'll have starting off:
Here's a list of WPF blogs: http://blogs.msdn.com/b/jaimer/archive/2009/03/12/wpf-bloggers.aspx
My favorite is the 2000 things blog.
In WPF Kinect applications, there's typically an Image element on the XAML side whose Source is set to a WriteableBitmap property in the back-end code. On each color-stream-ready event (or any stream, for that matter) you can write the new set of pixels to the WriteableBitmap, and the WPF Image element is instantly updated. I haven't tried using Windows Forms, but I think it's a lot more work for a less clean product.
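Since the question mixes C++ and WPF, here is a hedged C++/CLI sketch of that pattern (compile with /clr and reference PresentationCore and WindowsBase; the helper name UpdateColorImage is illustrative):

// Sketch: push a new BGRA frame into a WriteableBitmap that an
// <Image x:Name="ColorImage"/> element in XAML uses as its Source.
using namespace System;
using namespace System::Windows;
using namespace System::Windows::Media::Imaging;

void UpdateColorImage(WriteableBitmap^ bitmap,
                      IntPtr pixelData, int width, int height)
{
    // 4 bytes per pixel (BGRA). WritePixels copies the frame into
    // the bitmap and WPF refreshes the bound Image automatically.
    bitmap->WritePixels(Int32Rect(0, 0, width, height),
                        pixelData, width * height * 4, width * 4);
}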

Writing a 3D rendering browser plugin

I understand that it's possible to write a plugin for a browser which lets you render to the browser window, so you can effectively run a normal app within the browser - NOT using JS or client technology, but a plugin which basically wraps your application; in our case C++ which does 3D rendering using DirectX or OpenGL.
I know that we'd have to have versions for both IE and other browsers, but how does this work? In Windows-speak, do we get an HWND through the plugin architecture, or is it more complex?
Do you have to write a version of the plugin compiled for each platform (Win/Mac/Linux)? Since a plugin is a binary, I assume this is the case, so you have one version for IE and then multiple versions for FF, Chrome, and Safari (which share the same plugin setup, IIRC).
With FF, is this an example of a plugin or an extension, specifically?
An example of what I mean is QuakeLive - proper 3D rendering within the browser. We're actually using Ogre (cross-platform C++), but that uses Direct3D/OpenGL, so it's the same thing.
Things like QuakeLive can be done quite simply with Google's Native Client (NaCl) SDK. It abstracts away the whole plugin architecture so that you can focus on writing your software, and it provides support for nearly all plugin-capable browsers on Windows, Mac OS X, and Linux, portably. The user installs the NaCl plugin (which is included in some versions of Chrome and Chromium), and your software runs inside NaCl, seamlessly on all supported platforms, from a single binary.
Note that you can use OpenGL portably from within NaCl, but not DirectX. Future versions will also support ARM and x86_64 with technology from the LLVM project.
FireBreath is a great cross-platform, cross-browser library for developing C++ browser plugins.
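To answer the HWND question directly: in the classic NPAPI model, which FireBreath wraps for the non-IE browsers, the browser hands your plugin an NPWindow whose window member is the HWND of the plugin area on Windows. A hedged sketch, assuming the NPAPI headers from the Gecko SDK:

// Sketch: the NPAPI callback where a windowed plugin receives its
// native window handle; render into it with Direct3D or OpenGL.
#include <windows.h>
#include <npapi.h>  // from the Gecko/NPAPI SDK

NPError NPP_SetWindow(NPP instance, NPWindow* window)
{
    if (window && window->window)
    {
        // On Windows, NPWindow::window is the HWND of the plugin area.
        HWND hwnd = static_cast<HWND>(window->window);
        (void)hwnd;  // ... create your D3D swap chain / GL context here ...
    }
    return NPERR_NO_ERROR;
}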
Flash Player 11 provides true 3D support via the Stage3D API over DirectX, OpenGL, or whatever is available on the device:
http://techzoom.org/adobe-flash-player-11-air-3-beta-stage3d-and-64bit-support-on-linux-mac-and-windows/
It's in beta now, so users need to install it manually, but once Adobe releases it, the majority of browsers will provide true 3D support instantly. The latest Away3D beta already supports the Stage3D API.
I need to get some of this done soon, so if anyone here is an expert on this, please look me up.
Steve Bell
Archiform 3D animation studio