Recommended hardware for OpenCV with multiple cameras - C++

I apologize if I am simply failing at google-fu, but I am unable to find the answer to my problem.
I have been working on a project that uses two cameras (and eventually four) to capture and analyze data using OpenCV. The project updates each frame and does things like movement tracking and object recognition. I am currently developing on my custom-built desktop with a hex-core i7-5820K and a GTX 980 Ti, but I can't determine what hardware I need to build a dedicated machine for this project. If someone could recommend a processor or a number of logical cores needed for something like this, that would be much appreciated!
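For context, the capture side is roughly one thread per camera, along the lines of the sketch below (the camera indices and the per-frame work are placeholders; the real analysis is much heavier):

```cpp
#include <opencv2/opencv.hpp>
#include <thread>
#include <vector>

// One capture-and-process loop per camera. The index and the per-frame
// work are placeholders for the real project.
void cameraLoop(int index) {
    cv::VideoCapture cap(index);
    if (!cap.isOpened()) return;
    cv::Mat frame;
    while (cap.read(frame)) {
        // per-frame analysis goes here
        // (movement tracking, object recognition, ...)
    }
}

int main() {
    const int numCameras = 2;   // eventually 4
    std::vector<std::thread> workers;
    for (int i = 0; i < numCameras; ++i)
        workers.emplace_back(cameraLoop, i);
    for (std::thread& t : workers)
        t.join();
}
```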
Thank you!

Related

How to distinguish OpenCV cameras?

I am writing a C++ class for managing multiple cameras and reading frames from them; let's say it is a wrapper for OpenCV. Currently I find cameras by trying to create devices in the 0-10 range, and if there is output I know that I've found a working camera. I can always save the internal IDs of those cameras to distinguish them, but what if another camera is plugged in? It may break the order of the IDs. So is there any way to distinguish OpenCV cameras, for example by getting their hardware IDs?
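For illustration, my current probing loop is essentially the following sketch (the 0-10 range is an arbitrary choice):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Probe device indices 0..10 and keep those that open successfully.
std::vector<int> findCameras() {
    std::vector<int> ids;
    for (int i = 0; i <= 10; ++i) {
        cv::VideoCapture cap(i);
        if (cap.isOpened())
            ids.push_back(i);   // the index is all OpenCV exposes; it can
                                // change when another camera is plugged in
    }
    return ids;
}
```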
I know this doesn't help you much, but the short answer is "No, OpenCV doesn't currently provide that capability."
According to the documentation, hardware IDs are not among the properties you can retrieve with the get method or any other.
Having said that, if you're intent on using OpenCV, I would still test the behavior of OpenCV 2.4.10 on various platforms and with various middleware and see how it behaves. If you get consistent behavior, you can run with it, but be somewhat prepared for it to break in the future. What works in your favor is that OpenCV uses various middleware in the backend, such as V4L, Qt, etc., and these are well maintained and more or less consistent.
In retrospect, I would stay away from OpenCV's video interface altogether right now for commercial software, unless you're okay with the situation I described. Beware that the OpenCV 3.0 videoio library is unstable at this point and has open bug reports.

Does OpenGL display image faster than OpenCV?

I am using OpenCV to show an image on a projector, but it seems that cv::imshow is not fast enough, or maybe the data transfer from CPU to GPU and then to the projector is slow, so I wonder if there is a faster way to display than OpenCV?
I considered OpenGL: since OpenGL uses the GPU directly, its commands may be faster than those issued from the CPU, which is what OpenCV uses. Correct me if I am wrong.
OpenCV already supports OpenGL for image output by itself. No need to write this yourself!
See the documentation:
http://docs.opencv.org/modules/highgui/doc/user_interface.html#imshow
http://docs.opencv.org/modules/highgui/doc/user_interface.html#namedwindow
Create the window first with namedWindow, where you can pass the WINDOW_OPENGL flag.
Then you can even use OpenGL buffers or GPU matrices as input to imshow (the data never leaves the GPU). But it will also use OpenGL to show regular matrix data.
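Putting the two steps together, a minimal sketch looks like this (the file name is a placeholder, and it assumes an OpenCV build with OpenGL enabled; see the note below):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/opengl.hpp>   // cv::ogl::Buffer

int main() {
    // An OpenGL-backed window; requires WITH_OPENGL=ON at build time.
    cv::namedWindow("view", cv::WINDOW_OPENGL);

    cv::Mat img = cv::imread("frame.png");   // placeholder image

    // A regular Mat also works, but uploading once into an OpenGL buffer
    // keeps the data on the GPU across repeated imshow calls.
    cv::ogl::Buffer buf(img, cv::ogl::Buffer::PIXEL_UNPACK_BUFFER);
    cv::imshow("view", buf);
    cv::waitKey(0);
}
```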
Please note:
To enable OpenGL support, configure OpenCV using CMake with WITH_OPENGL=ON. Currently OpenGL is supported only with the WIN32, GTK and Qt backends on Windows and Linux (MacOS and Android are not supported). For the GTK backend, the gtkglext-1.0 library is required.
Note that this refers to OpenCV 2.4.8, and this functionality changed quite recently. I know there was OpenGL support in earlier versions in conjunction with the Qt backend, but I don't remember when it was introduced.
About the performance: outputting images via OpenGL is quite a popular optimization in the CV community, especially when outputting video sequences.
OpenGL is optimised for rendering images, so it's likely faster. It really depends on whether the OpenCV implementation uses any GPU acceleration and whether the bottleneck is on the rendering side of things.
Have you tried GPU accelerated OpenCV? - http://opencv.org/platforms/cuda.html
How big is the image you are displaying? How long does it take to display the image using cv::imshow now?
I know it's an old question, but I happened to have exactly the same problem, and from my observations I've concluded that the root of the problem is the projector's own latency, especially with an older model.
How did I conclude this?
I displayed the same video sequence with cv::imshow() on the laptop monitor and on the projector, then waved my hand. It was obvious that the projector introduces significant latency.
To double-check, I opened a webcam feed, waved my hand in front of it, and observed the difference between the monitor and the projector. The webcam does no processing and no OpenCV operations, so in my understanding the only thing that could explain the latency is the projector itself.

Hardware to use for 3D Depth Perception

I am planning to give a prebuilt robot 3D vision by integrating a 3D depth sensor such as a Kinect or an Asus Xtion Pro. These are the only two that I have been able to find, yet I imagine that many more are being built or already exist.
Does anyone have recommendations for hardware that I can use, or for which of these two is better for an open-source project with integration into ROS (Robot Operating System)?
I would vote for the Kinect for Windows over the Asus Xtion Pro based on the hardware (the Kinect has better range), but depending on your project there's a chance neither will work out well for you. I'm not familiar with the Robot Operating System, but the Kinect will only run on Windows 7, sort of on Windows 8, and supposedly on Windows Server 2008. The Asus Xtion Pro seems to have SDKs available for Linux distros, so if your robot is running something similar it might work.
Depending on exactly what you need to do, you might want to go with a much simpler depth sensor. For example, buy a handful of mini IR distance sensors and you'll still spend a lot less than you would for a Kinect. They might also be easier to integrate with your robot: hook them up to a microcontroller, plug the microcontroller into your robot through USB, and life should be easy. Or just plug them straight into your robot; I have no idea how such things work.
edit: I've spent too much time working with the Kinect SDK and forgot there are third-party SDKs available that might run on whatever operating system you've got. Still, it really depends. The Kinect has a better minimum depth, which seems important to me, but a worse FOV than the Xtion. If you just need the basics (is there a wall in front of me?), definitely go with the mini IR sensors, which are available all over the web and probably at stores near you.
Kinect + Linux + ROS + PCL (http://pointclouds.org/) is a very powerful (and relatively cheap) combination. I am not sure what you are trying to do with the system, but there are enough libraries available with this combination to do a lot. Your hardware will be constrained by what you can put Linux on and what is fast enough to run some point cloud processing. Although there are ports of Linux and ROS for embedded devices like Gumstix, I would go for something closer to a standard PC, like a mini-ITX board. You will have fewer headaches in the long run.
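To give a flavor of what PCL code looks like, here is a minimal sketch (the file name is a placeholder; in a ROS setup the cloud would normally arrive from the Kinect driver rather than from disk):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Load a saved point cloud and downsample it with a voxel grid.
int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("cloud.pcd", *cloud) < 0)
        return 1;   // "cloud.pcd" is a placeholder file name

    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);   // 1 cm voxels

    pcl::PointCloud<pcl::PointXYZ> filtered;
    grid.filter(filtered);   // 'filtered' is ready for further processing
    return 0;
}
```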

How to make RGBDemo work with non-Kinect stereo cameras?

I was trying to get RGBDemo (mostly the reconstructor) working with two Logitech stereo cameras, but I could not figure out how to do it.
I noticed that there is an OpenCV grabber in the nestk library and its header file is included in reconstructor.cpp. Yet when I try "rgbd-viewer --camera-id 0", it keeps looking for a Kinect.
My questions:
1. Does RGBDemo only work with the Kinect so far?
2. If RGBDemo can work with non-Kinect stereo cameras, how do I do that?
3. If I need to write my own implementation for non-Kinect stereo cameras, any suggestions on how to start?
Thanks in advance.
If you want to do it with non-Kinect cameras, you don't even need stereo. There are algorithms now that can determine whether two images' viewpoints are sufficiently different for them to be used as if they were taken by a stereo camera. In fact, they use images from different cameras found on the internet and reconstruct 3D models of famous places. I can write you a tutorial on how to get it working; I've been meaning to do so. The software is called Bundler. Along with Bundler, people often also use CMVS and PMVS: CMVS preprocesses the images for PMVS, and PMVS generates dense point clouds.
BUT! I highly recommend that you don't go this route. There is so much less information in 2D images that reconstructing the 3D model becomes very hard, so it ends up making a lot of mistakes or not working. Although Bundler and PMVS are awesome compared to previous software, the stuff you can do with a Kinect is on a whole other level.
Using a Kinect will only cost you $80 off eBay or $99 off Amazon, plus another $5 for the power adapter from Amazon, so I'd highly recommend this route. The Kinect provides much more information for the algorithm to work with than 2D images do, making it much more effective, reliable and fast. In fact, it can take hours to process images with Bundler and PMVS, whereas with the Kinect I made a model of my desk in just a few seconds! It truly rocks!
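That said, if you do decide to write your own two-camera implementation (your question 3), a common starting point in OpenCV is block-matching disparity on a calibrated, rectified pair. This is only a sketch with placeholder file names, using the OpenCV 3-style API, and it assumes the pair has already been rectified (cv::stereoCalibrate / cv::stereoRectify):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Placeholder names; these must be a rectified grayscale stereo pair.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Block-matching stereo: 64 disparity levels, 15x15 matching window.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    bm->compute(left, right, disparity);   // 16-bit fixed-point disparities

    // Scale to 8-bit just for display.
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16));
    cv::imshow("disparity", disp8);
    cv::waitKey(0);
    return 0;
}
```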

3D scene file format & viewer

I am looking for a cross-platform solution for saving and viewing 3D scenes (visualizations of engineering simulation models and results), but there (still) doesn't seem to be much out there.
I looked into this almost 10 years ago and settled on VRML at the time (and started the project that eventually turned into OpenVRML). Unfortunately, VRML/X3D has not become anywhere near ubiquitous in the past decade.
Ideally a solution would offer a C++ library that could be plugged into a 3D rendering pipeline at some level to capture the 3D scene to a file, and a freely redistributable viewer that allows view manipulation, part hiding, annotation, dimensioning, etc. At least Linux, Mac, and Windows should be supported.
3D PDFs would seem to meet most of the viewer requirements, but the Adobe SDK is apparently only available on Windows.
Any suggestions?
The closest thing that I'm aware of is Collada.
Many 3D engines can read it, and most 3D design tools can read and write it.
I believe the Ogre engine has pretty good support.
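For getting Collada data back into your own C++ code, one option (my suggestion, not something tied to Ogre) is the open-source Assimp library; a minimal loading sketch, with a placeholder file name:

```cpp
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

// Read a Collada (.dae) scene and report what it contains.
int main() {
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(
        "model.dae",                                  // placeholder name
        aiProcess_Triangulate | aiProcess_GenNormals);
    if (!scene) {
        std::printf("load failed: %s\n", importer.GetErrorString());
        return 1;
    }
    std::printf("meshes: %u, materials: %u\n",
                scene->mNumMeshes, scene->mNumMaterials);
    return 0;
}
```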
If you are using OpenGL, GLIntercept will save all OpenGL calls (with the data they were called with) to an XML file. It's only half the solution, though; it shouldn't be hard to parse the file and recreate the scene yourself.
Take a look at Ogre3d.org. It's just an engine, so you must program with it, but OGRE is probably the best (free/open) platform for 3D development right now.