How to connect two Kinect v2 sensors to one computer - computer-vision

I'm updating an application that uses three Kinect v1 sensors (SDK 1.8) connected to the same computer.
I am now moving the application to Kinect v2 to improve the performance of my system, but the latest Microsoft SDK (2.0) does not support multi-sensor connections.
The only solution I have tried that works is to use three different PCs, one per Kinect v2, and exchange data over an Ethernet connection.
The problem with this solution is that it is too expensive.
The minimum specs of the Kinect v2 require an expensive PC, whereas I was hoping to use small, cheap computers such as the Raspberry Pi 2.
My questions are:
Do you know any hack or workaround that allows multiple Kinect v2 sensors to be connected to the same computer?
Do you know any low-cost, Raspberry Pi-like board that meets the Kinect v2 minimum requirements? (http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx)

If you only need the video and depth data, you could investigate https://github.com/OpenKinect/libfreenect2.
I could imagine the maximum framerate being a bit lower than what you get on an Intel i5 system with USB 3.0.
The rest of the high requirements is mostly there for skeleton tracking, which won't be available anyway, since libfreenect2 does not provide it.
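If you go that route, a minimal sketch based on libfreenect2's bundled Protonect example might look roughly like the following; the frame count, timeout and processing are placeholders, and error handling is omitted:

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <iostream>
    #include <string>

    int main() {
        libfreenect2::Freenect2 freenect2;
        if (freenect2.enumerateDevices() == 0) {
            std::cerr << "no Kinect v2 found" << std::endl;
            return -1;
        }
        // Open the default device; with several sensors you would open each one by serial number.
        std::string serial = freenect2.getDefaultDeviceSerialNumber();
        libfreenect2::Freenect2Device *dev = freenect2.openDevice(serial);

        libfreenect2::SyncMultiFrameListener listener(
            libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
        dev->setColorFrameListener(&listener);
        dev->setIrAndDepthFrameListener(&listener);
        dev->start();

        libfreenect2::FrameMap frames;
        for (int i = 0; i < 100; ++i) {                       // placeholder frame count
            if (!listener.waitForNewFrame(frames, 10 * 1000)) // 10 second timeout
                break;
            libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
            // depth->data is a 512x424 float buffer with distances in millimetres
            listener.release(frames);
        }
        dev->stop();
        dev->close();
        return 0;
    }

Keep in mind that each Kinect v2 still needs a lot of USB 3.0 bandwidth, so running several sensors on one machine generally means separate USB 3.0 controllers.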

Related

Azure Kinect and HoloLens

Is the Azure-Kinect-Sensor-SDK supported on LattePanda (Windows OS), Raspberry Pi 4 (Windows ARM OS) and Intel Compute Stick?
Is the Microsoft HoloLens depth sensor the same as the new Azure Kinect camera?
Thank you in advance.
The HoloLens has exactly the same depth sensor that the Azure Kinect has, but it is attached to a moving person. By having multiple Azure Kinects looking at the same environment, you can get much more precise depth sensing. Especially if you are covering a bigger area, it's a great idea to have multiple stationary devices capturing the same data.
Also, the Azure Kinect has a great skeleton-tracking API (the Body Tracking SDK); you can use it to create applications for educational purposes, for sports, for industrial training, etc., or to detect injuries from repetitive actions before they happen.
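For reference, a minimal sketch of enumerating and reading depth from several Azure Kinects on one machine with the Sensor SDK (k4a) might look like this; the depth mode, timeout and multi-device synchronisation details are illustrative only:

    #include <k4a/k4a.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t count = k4a_device_get_installed_count();
        printf("found %u Azure Kinect device(s)\n", count);

        for (uint32_t i = 0; i < count; ++i)
        {
            k4a_device_t device = NULL;
            if (k4a_device_open(i, &device) != K4A_RESULT_SUCCEEDED)
                continue;

            k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
            config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
            config.camera_fps = K4A_FRAMES_PER_SECOND_30;
            // For true multi-device capture you would also set config.wired_sync_mode
            // (one master, the rest subordinates) and connect the sync cables.

            if (k4a_device_start_cameras(device, &config) == K4A_RESULT_SUCCEEDED)
            {
                k4a_capture_t capture = NULL;
                if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
                {
                    k4a_image_t depth = k4a_capture_get_depth_image(capture);
                    if (depth)
                    {
                        printf("device %u depth image: %dx%d\n", i,
                               k4a_image_get_width_pixels(depth),
                               k4a_image_get_height_pixels(depth));
                        k4a_image_release(depth);
                    }
                    k4a_capture_release(capture);
                }
                k4a_device_stop_cameras(device);
            }
            k4a_device_close(device);
        }
        return 0;
    }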
As for the platform supportability question: please refer to this thread.
No news about support for the Pi 4?
It doesn't meet our GPU requirements.

Effective motion detection with OpenCV on a stream received from an IP camera

I have two questions that I have been struggling to find answers to on the net for more than a week.
I'm writing a Windows service in Visual C++ 2017 that connects to Axis IP cameras on our network and queries MJPEG streams using regular sockets. It successfully parses the streams and decodes the JPEG images. Decoding is done with OpenCV:
frame = cv::imdecode(data, cv::IMREAD_GRAYSCALE);
Q1. Although OpenCV claims to use a high-performance JPEG library (build-libjpeg-turbo, ver 1.5.3-62), decoding is surprisingly slower than .NET's System.Drawing.Image.FromStream(ms). Do you have any recommendation for really fast JPEG decompression?
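For what it's worth, one baseline to compare against is calling the bundled libjpeg-turbo directly through its TurboJPEG C API instead of going through cv::imdecode. A rough sketch for a grayscale decode (error handling omitted; reusing the handle and output buffer across frames is up to you):

    #include <turbojpeg.h>
    #include <opencv2/opencv.hpp>

    // data points to the raw JPEG bytes parsed out of the MJPEG stream, size is their length.
    cv::Mat DecodeGray(unsigned char *data, unsigned long size, tjhandle handle)
    {
        int width = 0, height = 0, subsamp = 0;
        tjDecompressHeader2(handle, data, size, &width, &height, &subsamp);
        cv::Mat gray(height, width, CV_8UC1);
        tjDecompress2(handle, data, size, gray.data,
                      width, 0 /*pitch*/, height, TJPF_GRAY, TJFLAG_FASTDCT);
        return gray;
    }

    // Usage (the decompressor handle can be created once and reused):
    //   tjhandle h = tjInitDecompress();
    //   cv::Mat frame = DecodeGray(jpegPtr, jpegLen, h);
    //   tjDestroy(h);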
Q2. All I need to do with the received JPEGs is to check "regions of interest" for motion. These are production lines in a factory. The factory runs 24 hours a day, six days a week, so there will be changing lighting conditions; sometimes there will be no light at all, so the JPEGs will contain plenty of noise. Which OpenCV operations and algorithms would you recommend applying to the frames to decide whether there is motion in an ROI? Of course you can chain plenty of operations on the matrices one after another, but I need the shortest and most effective pipeline to keep resource usage low, since it will be doing this for many cameras and ROIs at the same time.
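A typical minimal version of such a per-ROI check is plain background subtraction; a sketch along these lines (the history, variance threshold and 1% foreground ratio are arbitrary starting values to tune):

    #include <opencv2/opencv.hpp>

    // One subtractor per camera/ROI: 500-frame history, variance threshold 16, no shadow detection.
    cv::Ptr<cv::BackgroundSubtractorMOG2> bg =
        cv::createBackgroundSubtractorMOG2(500, 16.0, false);

    bool HasMotion(const cv::Mat &gray, const cv::Rect &roi)
    {
        cv::Mat fgMask;
        bg->apply(gray(roi), fgMask);          // per-pixel foreground mask for the ROI only
        cv::medianBlur(fgMask, fgMask, 5);     // suppress isolated noise pixels (JPEG artefacts, low light)
        return cv::countNonZero(fgMask) > 0.01 * roi.area();
    }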
My system has an NVIDIA video card (so I can use CUDA), an Intel i7-7700 and 16 GB of RAM.
Thank you!
This is not exactly an answer to your question, but it may even be a better approach.
Axis IP cameras have long had an on-board motion detection engine that can be configured both via the camera web UI (on old camera models/firmware versions this may require Internet Explorer with an embedded ActiveX control) and via the VAPIX Axis HTTP camera API.
The same VAPIX HTTP API also has commands to retrieve the motion level and threshold for each configured motion area/window on the camera.
If you don't have a recent model that supports VAPIX version 3, you may still rely on VAPIX version 2 and try issuing an HTTP GET request such as:
http://<camera address>/axis-cgi/motion/motiondata.cgi?group=0,1
to get an HTTP multipart stream of the motion level and threshold data (in this case for motion areas 0 and 1).
For more detailed information, you can download the relevant VAPIX PDF documentation from the Axis website (this may require an account and login).
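If you want to experiment with that endpoint, a rough sketch of the request using libcurl (the camera address and credentials are placeholders, the multipart parsing is left as a stub, and VAPIX typically expects HTTP digest authentication):

    #include <curl/curl.h>
    #include <iostream>

    // Called by libcurl for each chunk of the multipart motion-data stream.
    static size_t OnData(char *ptr, size_t size, size_t nmemb, void *)
    {
        std::cout.write(ptr, size * nmemb);   // parse the group/level/threshold fields here instead
        return size * nmemb;
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://192.168.0.90/axis-cgi/motion/motiondata.cgi?group=0,1");
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, OnData);
        CURLcode res = curl_easy_perform(curl);   // blocks while the camera streams data
        if (res != CURLE_OK)
            std::cerr << curl_easy_strerror(res) << std::endl;
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }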

Hardware accelerated scaling MFT in Windows 7

I am looking for hardware accelerated (GPU-based) video scaling. I found extensive discussion in the following threads:
How to use hardware video scalers? and
Hardware Accelerated Image Scaling in windows using C++
I would like to stick with MFT-based scaling because I am also using the H.264 Encoder MFT in my application.
There are two options for an MFT-based solution:
1. Video Resizer DSP
2. Video Processor MFT
But both of these methods rely on MF_SA_D3D_AWARE. As mentioned below:
A video MFT has the attribute MF_SA_D3D_AWARE, which can be used to query whether it supports DirectX 3D hardware acceleration, and this can be enabled by sending it the MFT_MESSAGE_SET_D3D_MANAGER message.
And MF_SA_D3D_AWARE is only supported from Windows 8 onwards.
Is there any MFT for scaling that uses hardware acceleration on Windows 7?
I haven't yet investigated whether the other two options (MFCreateVideoRenderer and IDirectXVideoProcessor::VideoProcessBlt) mentioned in "How to use hardware video scalers?" are supported on Windows 7, but the MFT option is my priority.
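For reference, the query and the message from the quoted passage look like this in code (a sketch; pD3DManager is assumed to be an existing IDirect3DDeviceManager9 and pMFT an already created resizer/processor MFT):

    #include <mfapi.h>
    #include <mftransform.h>
    #include <dxva2api.h>

    HRESULT EnableD3DAcceleration(IMFTransform *pMFT, IDirect3DDeviceManager9 *pD3DManager)
    {
        IMFAttributes *pAttrs = nullptr;
        HRESULT hr = pMFT->GetAttributes(&pAttrs);
        if (FAILED(hr)) return hr;                    // not every MFT exposes attributes

        UINT32 aware = 0;
        pAttrs->GetUINT32(MF_SA_D3D_AWARE, &aware);   // nonzero if the MFT supports DXVA
        pAttrs->Release();
        if (!aware) return E_NOTIMPL;

        return pMFT->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER,
                                    reinterpret_cast<ULONG_PTR>(pD3DManager));
    }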
Under Windows 7, I would recommend you use IDXVAHD_VideoProcessor.
There is a sample here: DXVA-HD Sample
But I think that if you use a simple Direct3D 9 device with a Direct3D 9 texture, the scaling result will be the same. There is no reason a dedicated scaling process would only apply to video file processing; I think it is the same for both (games and video files).
The only thing I noticed is that you can set up the constriction mode (DXVAHD_BLT_STATE_CONSTRICTION_DATA), which applies to downscaling, not really to upscaling.
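Just to illustrate the structure of the DXVA-HD route, loosely modelled on the DXVA-HD sample (the content description values are illustrative, error handling is minimal, and the input/output surfaces are assumed to exist already, e.g. created via IDXVAHD_Device::CreateVideoSurface):

    #include <d3d9.h>
    #include <dxvahd.h>
    #include <vector>
    #pragma comment(lib, "dxva2.lib")

    // Scale pInput (inW x inH) onto pOutput (outW x outH).
    HRESULT ScaleWithDxvaHd(IDirect3DDevice9Ex *pD3DDevice,
                            IDirect3DSurface9 *pInput, IDirect3DSurface9 *pOutput,
                            UINT inW, UINT inH, UINT outW, UINT outH)
    {
        DXVAHD_CONTENT_DESC desc = {};
        desc.InputFrameFormat = DXVAHD_FRAME_FORMAT_PROGRESSIVE;
        desc.InputFrameRate   = { 30, 1 };
        desc.InputWidth       = inW;
        desc.InputHeight      = inH;
        desc.OutputFrameRate  = { 30, 1 };
        desc.OutputWidth      = outW;
        desc.OutputHeight     = outH;

        IDXVAHD_Device *pHdDevice = nullptr;
        HRESULT hr = DXVAHD_CreateDevice(pD3DDevice, &desc,
                                         DXVAHD_DEVICE_USAGE_PLAYBACK_NORMAL,
                                         nullptr, &pHdDevice);
        if (FAILED(hr)) return hr;

        // Enumerate the processors the driver exposes and take the first one.
        DXVAHD_VPDEVCAPS devCaps = {};
        pHdDevice->GetVideoProcessorDeviceCaps(&devCaps);
        std::vector<DXVAHD_VPCAPS> vpCaps(devCaps.VideoProcessorCount);
        pHdDevice->GetVideoProcessorCaps(devCaps.VideoProcessorCount, vpCaps.data());

        IDXVAHD_VideoProcessor *pVP = nullptr;
        hr = pHdDevice->CreateVideoProcessor(&vpCaps[0].VPGuid, &pVP);
        if (SUCCEEDED(hr))
        {
            // The source and destination rectangles define the scaling.
            DXVAHD_STREAM_STATE_SOURCE_RECT_DATA src = { TRUE, { 0, 0, (LONG)inW, (LONG)inH } };
            pVP->SetVideoProcessStreamState(0, DXVAHD_STREAM_STATE_SOURCE_RECT, sizeof(src), &src);
            DXVAHD_STREAM_STATE_DESTINATION_RECT_DATA dst = { TRUE, { 0, 0, (LONG)outW, (LONG)outH } };
            pVP->SetVideoProcessStreamState(0, DXVAHD_STREAM_STATE_DESTINATION_RECT, sizeof(dst), &dst);

            DXVAHD_STREAM_DATA stream = {};
            stream.Enable = TRUE;
            stream.pInputSurface = pInput;
            hr = pVP->VideoProcessBltHD(pOutput, 0 /*frame*/, 1 /*stream count*/, &stream);
            pVP->Release();
        }
        pHdDevice->Release();
        return hr;
    }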

How to use an external HD video camera as input for a Visual Studio/OpenCV project?

I am doing a project on image processing and multiple-person counting, and was wondering: how exactly can I plug in my ION AIR PRO PLUS video recording device (similar to a GoPro) and use it as my 'webcam'? Basically, I want to plug it in, access a live feed from Microsoft Visual Studio 2010 and OpenCV, and then do real-time tracking of people walking.
What I am struggling with is accessing the external camera from my program. Anyone know how to do this?
The video camera has no WiFi, only an HDMI output, an RGB cable output and a USB port.
Attach the USB cable and instantiate cv::VideoCapture(0). On Linux, local cameras have numeric indices; I think it should be the same on Windows.
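A minimal check, assuming the camera really does enumerate as a webcam, would be something like:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap(0);              // 0 = first camera the OS enumerates
        if (!cap.isOpened()) {
            std::cerr << "camera not opened - it may not expose a webcam interface" << std::endl;
            return -1;
        }
        cv::Mat frame;
        while (cap.read(frame)) {
            cv::imshow("preview", frame);
            if (cv::waitKey(1) == 27) break;  // Esc quits
        }
        return 0;
    }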
Peter, in the past I have worked on camera products on Windows XP and 7. On Windows, USB cameras can be accessed using DirectShow.
You can implement a DirectShow filter for your people-tracking algorithm and fit it into the DirectShow pipeline right after your camera source filter.
Here is a link to an application stack that may suit your use case(to give you an idea):
http://www.e-consystems.com/blog/camera/?p=1302
Recent Windows operating systems that run WinRT use a newer framework called Media Foundation. As it is very new, we found some limitations when we tried to build similar applications.
To quickly see a preview from your camera, search for "GraphEdit" and install it on a Windows 7 PC. It is a fairly simple tool: drag and drop your camera (if DirectShow is supported), then render its output pin and play it for a preview.
I do not think this is a standard webcam; it appears to work only as a mass storage device. One thing you can try is removing the micro SD card and connecting it to the computer. This works on some cameras.
Assuming this does not work, one option would be to purchase an HDMI capture card. The YK762H PCI-E card costs around $40, and will allow you to use the camera with OpenCV, the videoInput library, or DirectShow.
The other option is to use the WiFi live preview. You would have to figure out the commands sent to the camera. This has already been done with the SJCam wifi cameras, the GoPro models, and Sony cameras such as the QX-10 and HDR-AS100V.

Hardware to use for 3D Depth Perception

I am planning on giving a prebuilt robot 3D vision by integrating a 3D depth sensor such as a Kinect or Asus Xtion Pro. These are the only two that I have been able to find, although I would imagine that many more are being built or already exist.
Does anyone have any recommendations for hardware I can use, or for which of these two is better for an open source project with ROS (Robot Operating System) integration?
I would vote for the Kinect for Windows over the Asus Xtion Pro based on the hardware (the Kinect has better range), but depending on your project there's a chance neither will work out well for you. I'm not familiar with the Robot Operating System, but the Kinect (with the official SDK) will only run on Windows 7, sort of on Windows 8, and supposedly on Windows Server 2008. The Asus Xtion Pro seems to have SDKs available for Linux distros, so if your robot is running something similar it might work.
Depending on exactly what you need to be able to do, you might want to go with a much simpler depth sensor. For example, buy a handful of these and you'll still spend a lot less than you would for a Kinect. They might also be easier to integrate with your robot; hook them up to a microcontroller, plug the microcontroller into your robot through USB, and life should be easy. Or just plug them straight into your robot. I have no idea how such things work.
Edit: I've spent too much time working with the Kinect SDK; I forgot there are third-party SDKs available that might be able to run on whatever operating system you've got going. Still, it really depends. The Kinect has a better minimum depth, which seems important to me, but a worse FOV than the Xtion. If you just need the basics (is there a wall in front of me?), definitely go with the mini IR sensors, which are available all over the web and probably at stores near you.
Kinect + Linux + ROS + PCL (http://pointclouds.org/) is a very powerful (and relatively cheap) combination. I am not sure what you are trying to do with the system, but there are enough libraries available with this combination to do a lot. Your hardware will be constrained by what you can put Linux on and what will be fast enough to run some point cloud processing. Although there are ports of Linux and ROS for embedded devices like the Gumstix, I would go for something closer to a standard PC, like a mini-ITX board. You will have fewer headaches in the long run.
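As a rough idea of how little glue code that combination needs, a minimal ROS node that consumes the Kinect point cloud through PCL could look like this (the topic name assumes the openni/openni2 driver's default output):

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr &msg)
    {
        pcl::PointCloud<pcl::PointXYZ> cloud;
        pcl::fromROSMsg(*msg, cloud);
        ROS_INFO("received cloud with %zu points", cloud.points.size());
        // run filtering, segmentation, obstacle detection, etc. here
    }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "kinect_cloud_listener");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
        ros::spin();
        return 0;
    }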