OpenNI2 freezes when no camera is connected - openni

I'm trying to build a framework in Unity that allows other developers to add motion cameras like the Kinect or the Asus Xtion to their projects. For that I use OpenNI2 and a modified version of the Zigfu Development Kit to make ONI work with Unity.
My problem is that when I try to enable Zig, which in turn seems to load the ONI driver, the program gets stuck in an infinite loop if no camera is connected to the computer. After some extensive debugging, the source of this loop appears to be the ONI driver itself, which waits for a stream from any motion camera because of this line:
OniStatus rc = waitForStreams(&stream, 1, &streamIndex, ONI_TIMEOUT_FOREVER);
What I'm looking for is a simple way to prevent ONI from freezing my program. I already tried wrapping the code that makes Zig start the ONI driver inside a thread, but that didn't work. I could also just add a warning to my framework so that other devs at least know what might be wrong when their program freezes, but that's probably not the best choice.
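To illustrate the kind of guard I'm after, here is a rough sketch against the plain OpenNI2 C++ API (this is not the Zigfu code itself; the function names and the 2-second timeout are just placeholders): enumerate devices before letting Zig initialize, and use a finite timeout instead of ONI_TIMEOUT_FOREVER.

#include <OpenNI.h>
#include <cstdio>

// Returns true only if at least one OpenNI-compatible camera is plugged in.
bool anyMotionCameraConnected()
{
    if (openni::OpenNI::initialize() != openni::STATUS_OK)
    {
        std::printf("OpenNI init failed: %s\n", openni::OpenNI::getExtendedError());
        return false;
    }
    openni::Array<openni::DeviceInfo> devices;
    openni::OpenNI::enumerateDevices(&devices);   // does not block
    return devices.getSize() > 0;
}

// When reading frames, a bounded wait keeps a missing camera from hanging the app.
bool readOneFrame(openni::VideoStream& stream, openni::VideoFrameRef& frame)
{
    openni::VideoStream* pStream = &stream;
    int readyIndex = -1;
    if (openni::OpenNI::waitForAnyStream(&pStream, 1, &readyIndex, 2000) != openni::STATUS_OK)
        return false;                             // timed out: no data, but no freeze either
    return stream.readFrame(&frame) == openni::STATUS_OK;
}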

Related

Toggling NvAPI_Stereo_Deactivate/NvAPI_Stereo_Activate crashes the Unity application

I'm currently working on an external plugin in Unity3D which uses NVAPI & 3D Vision. In NVAPI there are two API calls to turn active stereo on/off:
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
So whenever I try to toggle stereo on/off, it crashes at a random time with the following exception:
Unity Player [version: Unity 2017.1.0f3 (472613c02cf7)]
nvwgf2umx.dll caused an Access Violation (0xc0000005) in module nvwgf2umx.dll at 0033:6f9981d8.
The crash can happen on the third try or sometimes on a later one. My current assumption is that it has something to do with some value accessed by the DLL. The problem is that, since it's NVIDIA internal, I have no access to it.
I have already tried other simple measures, such as turning VSync off and setting the quality settings to maximum in Manage 3D Settings, but they all fail.
I did come across a similar issue in the NVIDIA dev forums, but there seems to be no answer to it. Any suggestions or help regarding this would be greatly appreciated.
Also, here is the link to the error log.
I have managed to fix the above issue in a roundabout way. Instead of using
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
functions to turn 3D Vision on & off, I use NvAPI_Stereo_SetActiveEye: when stereo should be off I set the mono eye and pass the render texture from the mono camera, and while in active mode I pass the textures to the left eye & right eye respectively. Toggling seems to work properly, although I have also noticed that calling NvAPI_Stereo_IsActivated in a loop seems to cause the same access violation, so only use the NvAPI_Stereo_SetActiveEye function to set the eye and don't mess around with the other NVAPI native functions. One downside of this approach is that the 3D emitter stays on until the application exits (for my project this is acceptable). Hope this helps anyone coming across this problem in the future. Do update the answer if anyone has a better solution. That would be nice.
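Roughly, the native-plugin side of this workaround looks like the sketch below (not my exact code; InitStereo, RenderEye and the device pointer are illustrative, only the NvAPI_* calls are real):

#include <windows.h>
#include <nvapi.h>

StereoHandle g_stereoHandle = nullptr;

// Create the stereo handle once from the D3D device the engine hands to the plugin.
bool InitStereo(IUnknown* d3dDevice)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;
    return NvAPI_Stereo_CreateHandleFromIUnknown(d3dDevice, &g_stereoHandle) == NVAPI_OK;
}

// Called per eye per frame instead of toggling NvAPI_Stereo_Activate / _Deactivate.
void RenderEye(bool stereoEnabled, bool leftEye)
{
    NV_STEREO_ACTIVE_EYE eye = NVAPI_STEREO_EYE_MONO;          // "stereo off": draw mono
    if (stereoEnabled)
        eye = leftEye ? NVAPI_STEREO_EYE_LEFT : NVAPI_STEREO_EYE_RIGHT;
    NvAPI_Stereo_SetActiveEye(g_stereoHandle, eye);
    // ... blit the corresponding render texture here ...
}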

Recommended hardware for opencv multiple cameras

I apologize if I'm simply failing at google-fu, but I am unable to find the answer to my problem.
I have been working on a project that uses two cameras (and eventually four) to capture and analyze data using OpenCV. The project updates every frame and does things like movement tracking and object recognition. However, I am currently using my custom-built desktop with a hex-core i7-5820K and a GTX 980 Ti, and I can't determine what hardware I need to build a dedicated machine for this project. If someone could recommend a processor or the number of logical cores needed for something like this, that would be much appreciated!
Thank you!
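To give a rough sense of how the workload scales with the number of cameras, here is a minimal sketch (not from the original post) with one capture-and-process thread per camera; the grayscale conversion stands in for the real tracking work, which is why the camera count maps fairly directly onto the logical cores you want to budget:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

std::atomic<bool> running{true};

// One thread per camera: grab a frame and run the per-frame analysis on it.
void cameraLoop(int index)
{
    cv::VideoCapture cap(index);
    if (!cap.isOpened())
        return;
    cv::Mat frame, gray;
    while (running && cap.read(frame))
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // placeholder for tracking/recognition
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 2; ++i)                          // two cameras now, four later
        workers.emplace_back(cameraLoop, i);
    std::this_thread::sleep_for(std::chrono::seconds(10));
    running = false;
    for (auto& t : workers)
        t.join();
    return 0;
}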

How to use external HD video camera as input for a Visual Studio, OpenCV project?

I am doing a project on image processing and counting multiple people, and was wondering: how exactly can I plug in my ION AIR PRO PLUS video recording device (similar to a GoPro) and use it as my 'webcam'? Basically, I want to plug it in, access it as a live feed using Microsoft Visual Studio 2010 and OpenCV, and then do real-time tracking of people walking.
What I am struggling with is accessing the external camera from my program. Anyone know how to do this?
The video camera has no WiFi, only an HDMI output, an RGB cable output and a USB port.
Attach the USB cable and instantiate cv::VideoCapture(0). In Linux, local cameras have numeric indices; I think it should be the same on Windows.
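For example, a minimal test program along those lines (assuming the camera actually enumerates as a standard webcam; try indices 1, 2, ... if a built-in webcam already occupies 0):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                 // bump the index if another camera grabs 0
    if (!cap.isOpened())
    {
        std::printf("Could not open the camera\n");
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("preview", frame);
        if (cv::waitKey(30) == 27)           // Esc quits
            break;
    }
    return 0;
}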
Peter, in the past I have worked on camera products on Windows XP & 7. On Windows, USB cameras can be accessed using DirectShow.
You can implement a DirectShow filter for your people-tracking algorithm and insert it into the DirectShow pipeline right after your camera source filter.
Here is a link to an application stack that may suit your use case (to give you an idea):
http://www.e-consystems.com/blog/camera/?p=1302
Recent Windows operating systems that run WinRT use a newer framework called Media Foundation. As it is very new, we found some limitations when we tried to build similar applications.
To quickly see a preview from your camera, please google for "GraphEdit" and install it on a Windows 7 PC. It's a fairly simple tool. You can drag and drop your camera; if DirectShow is supported, render its output pin and play it for a preview.
I do not think this is a standard webcam; it appears to work only as a mass storage device. One thing you can try is removing the microSD card and connecting it to the computer. This works on some cameras.
Assuming this does not work, one option would be to purchase an HDMI capture card. The YK762H PCI-E card costs around $40, and will allow you to use the camera with OpenCV, the videoInput library, or DirectShow.
The other option is to use the WiFi live preview. You would have to figure out the commands sent to the camera. This has already been done with the SJCam WiFi cameras, the GoPro models, and Sony cameras such as the QX-10 and HDR-AS100V.

c++ OpenCV capture easycap usb cam

I am trying to get a video stream from an analog camera connected to a USB EasyCAP, in OpenCV C++.
Using MATLAB, I can get the stream with the same approach as for the laptop webcam (by changing the index from 1 to 2).
With OpenCV, I can get the stream from the laptop webcam with index 0.
But when I try to capture from the camera connected to the EasyCAP (using index 1), the laptop crashes with a blue screen.
Has anyone done this before?
Thanks
I work with the same device and I also get some BSODs with it.
Do you plug it in with the USB extension cable provided? If yes, try not using it.
If your problem is still happening, it's probably because, like me, you are using a low-quality Chinese fake EasyCAP. I bought a real one and I haven't had problems since.
If you want to keep your device, you can use it with VideoCapture in Python; it works very well and there are no more BSODs.
Try using Linux. I tested my code with a fake EasyCAP on Windows and got many BSODs; then I built and executed the same code on Linux and it worked.
Linux is driver friendly.

freeglut GLUT_MULTISAMPLE very slow on Intel HD Graphics 3000

I just picked up a new Lenovo Thinkpad that comes with Intel HD Graphics 3000. I'm finding that my old freeglut apps, which use GLUT_MULTISAMPLE, are running at 2 or 3 fps as opposed to the expected 60fps. Even the freeglut example 'shapes' runs this slow.
If I disable GLUT_MULTISAMPLE from shapes.c (or my app) things run quickly again.
I tried multisampling with GLFW (using GLFW_FSAA, or whatever that hint is called), and I think it's working fine. This was with a different app (glgears). GLFW is triggering Norton Internet Security, which thinks it's malware and keeps removing the .exes... but that's another problem; my interest is with freeglut.
I wonder if the algorithm that freeglut uses to choose a pixel format is tripping up on this card, whereas GLFW is choosing the right one.
Has anyone else come across something like this? Any ideas?
That GLFW triggers Norton is a bug in Norton's virus definitions. If it's still the case with the latest definitions, send them your GLFW DLL/app so they can fix it. The same happens with Avira and they are working on it (they have already confirmed that it's a false positive).
As for the HD 3000, that's quite a weak GPU. What resolution is your app running at, and how many samples are you using? Maybe the amount of framebuffer memory gets too high for the little guy?
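If it is simply running out of steam, you can also try requesting fewer samples explicitly. A minimal test, assuming a freeglut version where glutSetOption accepts GLUT_MULTISAMPLE (2.8+, if I remember correctly):

#include <GL/freeglut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutWireTeapot(0.5);                     // any simple geometry will do
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutSetOption(GLUT_MULTISAMPLE, 2);      // ask for 2 samples instead of the default
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutInitWindowSize(640, 480);
    glutCreateWindow("multisample test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

If 2 samples runs smoothly and 8 does not, that points at fill rate and framebuffer memory rather than a bad pixel-format choice.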