Azure Kinect and HoloLens - C++

Is the Azure Kinect Sensor SDK supported on the LattePanda (Windows), the Raspberry Pi 4 (Windows on ARM), or the Intel Compute Stick?
Is the depth camera in the Microsoft HoloLens the same as the new Azure Kinect camera?
Thank you in advance.

The HoloLens has exactly the same depth sensor that the Azure Kinect has, but it is attached to a moving person. By having multiple Azure Kinects looking at the same environment, you can get much more precise depth sensing. Especially if you are covering a bigger area, it is a great idea to have multiple stationary devices capturing the same data.
The Azure Kinect also has a great API, the Body Tracking (skeleton) SDK. You can use it to create applications for educational purposes, as well as for sports, industrial training, and so on, or to detect injuries before they happen from repetitive actions.
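For reference, reading depth frames from one or more Azure Kinects in C++ goes through the Sensor SDK's k4a API. The sketch below is a minimal, untested outline that enumerates the attached devices and grabs a single depth frame from the first one; error handling is reduced to early returns.

    #include <k4a/k4a.h>
    #include <cstdio>

    int main()
    {
        // How many Azure Kinects are attached to this machine?
        uint32_t device_count = k4a_device_get_installed_count();
        printf("Found %u Azure Kinect device(s)\n", device_count);
        if (device_count == 0)
            return 1;

        // Open the first device (open others by passing index 1, 2, ...).
        k4a_device_t device = NULL;
        if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
            return 1;

        // Depth-only configuration at 30 fps.
        k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
        config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
        config.camera_fps = K4A_FRAMES_PER_SECOND_30;

        if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
        {
            k4a_device_close(device);
            return 1;
        }

        // Block up to one second for a capture, then pull the depth image out of it.
        k4a_capture_t capture = NULL;
        if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
        {
            k4a_image_t depth = k4a_capture_get_depth_image(capture);
            if (depth != NULL)
            {
                printf("Depth frame: %d x %d\n",
                       k4a_image_get_width_pixels(depth),
                       k4a_image_get_height_pixels(depth));
                k4a_image_release(depth);
            }
            k4a_capture_release(capture);
        }

        k4a_device_stop_cameras(device);
        k4a_device_close(device);
        return 0;
    }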
Regarding the platform supportability question: please refer to this thread.
Is there any news about support for the Pi 4?
It doesn't meet our GPU requirements.

Related

Will Oculus Rift work with Quadro M1000M for non-gaming purposes?

On the Oculus Rift website it is stated that the minimum system requirement for the Oculus Rift is an NVIDIA GTX 970 / AMD R9 290 equivalent or greater. I am aware that the Quadro M1000M does not meet this requirement.
My intention is to use the Oculus Rift for developing educational applications (visualization of molecular structures) which in terms of computational demand does not even come close to modern games.
For the above-mentioned kind of purpose, would the Oculus Rift run fine on less powerful GPUs (i.e. the Quadro M1000M) or is the driver developed in such a way that it simply "blocks" cards that do not meet the required specifications?
Further information:
I intend to develop my application on Linux using GLFW in combination with LibOVR, as described in this guide: http://www.glfw.org/docs/3.1/rift.html.
Edit:
It was pointed out that the SDK does not support Linux. So, as an alternative, I could also use Windows / Unity.
Any personal experiences on the topic are highly appreciated!
Consumer Oculus Rift hardware has not been reverse engineered to the point where you can use it without the official software, which currently only supports Windows-based desktop systems with one of a specific set of supported GPUs. It will not function on any mobile GPU, nor on any non-Windows OS. Plugging the HMD into the display port on a system where the Oculus service isn't running will not result in anything appearing on the headset.
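If it helps, the runtime check the official software performs can be probed from C++ with the Oculus PC SDK (LibOVR). The following is an untested sketch, assuming SDK 1.x on Windows, that simply reports whether the runtime and a headset are available.

    #include <OVR_CAPI.h>
    #include <cstdio>

    int main()
    {
        // Fails if the Oculus runtime/service is not installed or not running.
        if (OVR_FAILURE(ovr_Initialize(nullptr)))
        {
            printf("Oculus runtime not available\n");
            return 1;
        }

        // Fails if no supported headset is connected.
        ovrSession session;
        ovrGraphicsLuid luid;
        if (OVR_FAILURE(ovr_Create(&session, &luid)))
        {
            printf("No HMD detected\n");
            ovr_Shutdown();
            return 1;
        }

        ovrHmdDesc desc = ovr_GetHmdDesc(session);
        printf("Found %s, panel %d x %d\n",
               desc.ProductName, desc.Resolution.w, desc.Resolution.h);

        ovr_Destroy(session);
        ovr_Shutdown();
        return 0;
    }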
The Oculus DK2 and DK1 can both be made to function on alternative operating systems and with virtually any graphics card, since when connected they are detected by the OS as just another monitor.
Basically your only path is to either use older HMD hardware, wait for Oculus to support other platforms, or wait for someone to reverse engineer the interaction with the production HMD hardware.
To answer my own question (I hope that's ok): I bought an Oculus Rift CV1. It turns out it runs smoothly on my HP ZBook G3, which has a Quadro M1000M card in it. Admittedly, the Oculus desktop application gives a warning that my machine does not meet the required specifications, and if I render a scene with lots of complicated graphics and turn my head, the visuals tend to 'stutter' a bit.
I tested a couple of very simple scenes in Unity 5 and these run without any kind of problems. I would say that the above mentioned hardware is perfectly suitable for any kind of educational purposes I had in mind, just nothing extremely fancy.
As #SurvivalMachine mentioned in the comments, Optimus can be a bit problematic, but this is resolved by turning hybrid graphics off in the BIOS (which I heard is possible for the HP ZBook series, but not for all laptops). Furthermore, the laptop needs to be connected to a power outlet (i.e. not running on its battery) for the graphics card to work properly with the Oculus Rift.

How to use external HD video camera as input for a Visual Studio, OpenCV project?

I am doing a project on image processing and multiple-person counting and was wondering: how exactly can I plug in my ION AIR PRO PLUS video recording device (similar to a GoPro) and use it as my 'webcam'? Basically, I want to plug it in, access it as a live feed using Microsoft Visual Studio 2010 and OpenCV, and then do real-time tracking of people walking.
What I am struggling with is accessing the external camera from my program. Anyone know how to do this?
The video camera has no Wi-Fi, only an HDMI output, an RGB cable output, and a USB port.
Attach the USB cable and instantiate cv::VideoCapture(0). On Linux, local cameras get numeric indices; I think it should be the same on Windows.
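As a rough illustration of that suggestion (a sketch, assuming the camera shows up as a standard webcam and an OpenCV build with highgui):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Index 0 is the first camera the OS reports; try 1, 2, ... if a
        // built-in webcam already claims index 0.
        cv::VideoCapture cap(0);
        if (!cap.isOpened())
            return 1;

        cv::Mat frame;
        for (;;)
        {
            if (!cap.read(frame) || frame.empty())
                break;

            // Run your people-tracking code on 'frame' here.
            cv::imshow("live feed", frame);
            if (cv::waitKey(1) == 27) // Esc quits
                break;
        }
        return 0;
    }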
Peter, in the past I have worked on camera products on Windows XP and 7. On Windows, USB cameras can be accessed using DirectShow.
You can implement a DirectShow filter for your people-tracking algorithm and insert it into the DirectShow pipeline just after your camera source.
Here is a link to an application stack that may suit your use case (to give you an idea):
http://www.e-consystems.com/blog/camera/?p=1302
Recent Windows operating systems that run WinRT use a newer framework called Media Foundation. As it is very new, we found some limitations when we tried to build similar applications with it.
To quickly see a preview from your camera, search for "GraphEdit" and install it on a Windows 7 PC. It is a fairly simple tool: you can drag and drop your camera, and if DirectShow is supported, render its output pin and press play to see a preview.
I do not think this is a standard webcam; it appears to work only as a mass storage device. One thing you can try is removing the microSD card and connecting it to the computer. This works on some cameras.
Assuming this does not work, one option would be to purchase an HDMI capture card. The YK762H PCI-E card costs around $40 and will let you use the camera with OpenCV, the videoInput library, or DirectShow.
The other option is to use the Wi-Fi live preview. You would have to figure out the commands sent to the camera. This has already been done for the SJCam Wi-Fi cameras, the GoPro models, and Sony cameras such as the QX-10 and HDR-AS100V.

How to connect two Kinect v2 sensors to one computer

I'm updating an application which uses three Kinect v1 sensors with SDK 1.8 connected to the same computer.
I am now updating the application to Kinect v2 to improve the performance of my system. The latest version of the Microsoft SDK (2.0) does not support connecting multiple sensors.
The only solution I tried that works is to use three different PCs, one per Kinect v2, and exchange data over an Ethernet connection.
The problem with this solution is that it is too expensive: the minimum specs of the Kinect v2 require expensive PCs, while I was considering using this setup with small, cheap computers like the Raspberry Pi 2.
My questions are:
Do you know any hack to connect multiple Kinect v2 sensors to the same computer?
Do you know any low-cost, Raspberry-Pi-like solution that meets the minimum Kinect v2 requirements? (http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx)
If you only need the video and depth data, you could investigate using https://github.com/OpenKinect/libfreenect2
I would expect the maximum framerate to be a bit lower than what you get on an Intel i5 system with USB 3.0.
The rest of the high requirements is mostly there for skeleton tracking, which won't be available anyway, as it is not present in libfreenect2.
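For orientation, opening a device with libfreenect2 from C++ looks roughly like the sketch below. It is an untested outline modeled on the library's Protonect example; additional sensors on the same machine would be opened by serial number, USB bandwidth permitting.

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <cstdio>
    #include <string>

    int main()
    {
        libfreenect2::Freenect2 freenect2;
        if (freenect2.enumerateDevices() == 0)
        {
            printf("No Kinect v2 connected\n");
            return 1;
        }

        // Open the first device; freenect2.openDevice(serial) selects a
        // specific one, which is how you would address several sensors.
        std::string serial = freenect2.getDefaultDeviceSerialNumber();
        libfreenect2::Freenect2Device *dev = freenect2.openDevice(serial);
        if (!dev)
            return 1;

        // One listener receives color, IR and depth frames.
        libfreenect2::SyncMultiFrameListener listener(
            libfreenect2::Frame::Color | libfreenect2::Frame::Ir | libfreenect2::Frame::Depth);
        dev->setColorFrameListener(&listener);
        dev->setIrAndDepthFrameListener(&listener);
        dev->start();

        libfreenect2::FrameMap frames;
        if (listener.waitForNewFrame(frames, 10 * 1000)) // 10-second timeout
        {
            libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
            printf("Depth frame: %zu x %zu\n", depth->width, depth->height);
            listener.release(frames);
        }

        dev->stop();
        dev->close();
        return 0;
    }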

Does Stage3D use OpenGL, or Direct3D when on Windows?

WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions, or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
The accepted answer is unfortunately incorrect. Stage3D uses:
DirectX on Windows systems
OpenGL on OS X systems
OpenGL ES on mobile
a software renderer when no hardware acceleration is available (due to older hardware, or no GPU at all)
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on anything; it may share similar methodology and terminology, but it is its own rendering pipeline, which is why Adobe is so pumped about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to the heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provide developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you’re using cutting-edge 3D game engines or the intuitive, lightning fast Starling 2D framework that powers Angry Birds.

Hardware to use for 3D Depth Perception

I am planning to give a prebuilt robot 3D vision by integrating a 3D depth sensor such as a Kinect or an Asus Xtion Pro. These are the only two I have been able to find, yet I would imagine that many more are being built or already exist.
Does anyone have recommendations for hardware I can use, or for which of these two is better for an open-source project with integration into ROS (Robot Operating System)?
I would vote for the Kinect for Windows over the Asus Xtion Pro based on the hardware (the Kinect has better range), but depending on your project there's a chance neither will work out well for you. I'm not familiar with ROS (Robot Operating System), but the Kinect will only run on Windows 7, sort of on Windows 8, and supposedly on Windows Server 2008. The Asus Xtion Pro seems to have SDKs available for Linux distros, so if your robot is running something similar it might work.
Depending on exactly what you need to do, you might want to go with a much simpler depth sensor. For example, buy a handful of these and you'll still spend a lot less than you would for a Kinect. They might also be easier to integrate with your robot: hook them up to a microcontroller, plug the microcontroller into your robot through USB, and life should be easy. Or just plug them straight into your robot. I have no idea how such things work.
Edit: I've spent too much time working with the Kinect SDK; I forgot there are third-party SDKs available that might be able to run on whatever operating system you've got going. Still, it really depends. The Kinect has a better minimum depth, which seems important to me, but a worse FOV than the Xtion. If you just need the basics (is there a wall in front of me?), definitely go with the mini IR sensors, which are available all over the web and probably at stores near you.
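To give a sense of the microcontroller route, here is a hypothetical Arduino-style sketch (untested) that polls an analog IR distance sensor and streams the raw reading over USB serial; converting the reading to centimeters depends on the particular sensor's datasheet curve.

    // Hypothetical wiring: analog IR distance sensor signal pin -> A0.
    const int kIrPin = A0;

    void setup()
    {
        Serial.begin(9600); // stream readings to the robot over USB serial
    }

    void loop()
    {
        // 10-bit ADC reading, 0..1023; larger usually means a closer obstacle,
        // but the exact value-to-distance curve comes from the sensor datasheet.
        int raw = analogRead(kIrPin);
        Serial.println(raw);
        delay(50); // roughly 20 readings per second
    }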
Kinect + Linux + ROS + PCL (http://pointclouds.org/) is a very powerful (and relatively cheap) combination. I am not sure what you are trying to do with the system, but there are enough libraries available with this combination to do a lot. Your hardware will be constrained by what you can put Linux on and what will be fast enough to run some point-cloud processing. Although there are ports of Linux and ROS for embedded devices like the Gumstix, I would go for something closer to a standard PC, like a mini-ITX board. You will have fewer headaches in the long run.
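As a starting point for that combination, PCL's OpenNI grabber can stream point clouds straight from a Kinect. The following is an untested sketch, assuming an older PCL build (pre-1.11, where callbacks are boost::function) compiled with OpenNI support, closely following PCL's own grabber tutorial.

    #include <pcl/io/openni_grabber.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <boost/function.hpp>
    #include <chrono>
    #include <thread>
    #include <cstdio>

    // Called once per frame with an organized point cloud streamed from the Kinect.
    void cloud_cb(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
    {
        printf("Got a cloud with %zu points\n", cloud->size());
        // Hand the cloud to your obstacle-detection / navigation code here.
    }

    int main()
    {
        pcl::OpenNIGrabber grabber;

        boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &)> f = &cloud_cb;
        grabber.registerCallback(f);

        grabber.start();
        std::this_thread::sleep_for(std::chrono::seconds(30)); // stream for 30 s
        grabber.stop();
        return 0;
    }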