I am working with OpenCV these days and I am capable of doing 99% of the stuff explained in the official OpenCV tutorials. I also managed to do motion tracking manually with background subtraction, which some users claimed was impossible.
However, right now I am working on object detection, where I need to track a hand and find out whether it has moved to the left or to the right. Can this be done with the following steps (the same ones I used for motion detection; a rough code sketch follows the list)?
Get two instances of the camera video (real time)
Blur them to reduce noise
Threshold to find the hand (or skip this if the blur is enough)
Find the absolute difference between the two images
Get the PSR
Find the pixel position of the motion
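For reference, here is a minimal sketch of those steps in OpenCV C++. This is my own rough illustration, not your code: it assumes a single moving hand dominates the scene, and the blur size, difference threshold and motion-area threshold are guesses that would need tuning. It takes the centroid of the thresholded difference image and compares its x position between frames to decide left or right.

// Minimal sketch of the steps above (assumed thresholds, single moving hand).
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prevGray;
    double prevX = -1;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(21, 21), 0);          // blur to reduce noise

        if (!prevGray.empty()) {
            cv::Mat diff, mask;
            cv::absdiff(prevGray, gray, diff);                      // absolute difference
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);  // threshold the motion

            cv::Moments m = cv::moments(mask, true);                // centroid of moving pixels
            if (m.m00 > 500) {                                      // enough motion present
                double x = m.m10 / m.m00;                           // pixel position of motion
                if (prevX >= 0) {
                    if (x < prevX - 2)      std::cout << "hand moved left\n";
                    else if (x > prevX + 2) std::cout << "hand moved right\n";
                }
                prevX = x;
            }
            cv::imshow("motion", mask);
        }
        prevGray = gray.clone();
        if (cv::waitKey(1) == 27) break;                            // Esc to quit
    }
    return 0;
}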
However, it seems this is not exactly the same as motion detection, because I have read about the Kalman filter, block matching, etc., which I did not use for motion detection. I did find this tutorial:
http://homepages.cae.wisc.edu/~ece734/project/s06/lintangwuReport.pdf
But I really need your advice. Is there any tutorial that teaches me how to do this? I am interested in learning the core theory with an OpenCV (C++) explanation.
Since I am not good at maths (I am working on it; I didn't go to university, they found me and invited me to join the final year for free because of my programming skills, so I missed the math), material that is full of heavy maths will not work for me.
Please help. Thank you.
Related
How do I recognize rain in the camera view using OpenCV in C++?
Or if somebody sticks a sticker on the camera, how do I recognize it with OpenCV in C++?
Or if somebody throws color at the camera, how can I detect it with OpenCV in C++?
Detect these in the camera view:
Rain
Sticker
Color
Here is an example video of a sticker:
Camera Vision-Sticker
In the case of a sticker, you're just looking for a large dark area that doesn't change over time.
In the case of color, analyze image color statistics: if somebody sprays some paint on the camera (is that what you mean by "throwing color"?), some color is going to be dominant over all the others.
You can also try to handle both cases by subtracting frames and detecting image areas that don't change over time that way.
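As a rough illustration of that frame-subtraction idea, something along these lines could flag a large dark region that stays unchanged over many frames. This is only a sketch: the grey-level, change and coverage thresholds below are arbitrary assumptions, not tuned values.

// Rough sketch: flag a large dark region that stays static as a possible sticker.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prev, acc;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (!prev.empty()) {
            cv::Mat diff, staticMask, darkMask, suspect;
            cv::absdiff(prev, gray, diff);
            cv::threshold(diff, staticMask, 10, 255, cv::THRESH_BINARY_INV); // pixels that did NOT change
            cv::threshold(gray, darkMask, 60, 255, cv::THRESH_BINARY_INV);   // dark pixels
            cv::bitwise_and(staticMask, darkMask, suspect);                  // dark AND static

            if (acc.empty()) acc = cv::Mat::zeros(gray.size(), CV_32F);
            cv::accumulateWeighted(suspect, acc, 0.05);                      // evidence over time

            double coverage = cv::countNonZero(acc > 200) / double(acc.total());
            if (coverage > 0.05)                                             // assumed: >5% of the image
                std::cout << "possible sticker / obstruction detected\n";
        }
        prev = gray.clone();
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}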
You may want to use machine learning to find the threshold values (e.g. area size, shape properties such as the width/length ratio, contiguity, etc.) used to decide whether to consider something a sticker/color or something else.
As for the rain, I guess there's no simple answer that can be given in a few sentences. There are some articles available on the web, though. That said, I would guess it would be simpler and cheaper to detect rain by installing an external rain sensor (like the ones that activate wipers in a car) rather than by developing your own computer vision algorithm for that purpose.
This sounds like an interesting project, where a camera can automatically detect obstruction (paint, sticker, rain). It will most likely be necessary for the camera to first be mounted without obstructions so that the expected image can be learned. If the usage scenario allows that, it won't be very hard. Both a sticker and paint result in strong permanent deviations from the expected image, while rain will result in noisy images.
OpenCV with C++ or Python can help solve this kind of problem, because complicated computer vision algorithms are already implemented there. It takes some time to get started with, but after that OpenCV is not hard.
I'm new to Kalman tracking, so I've got no idea how to start. I have a program to detect faces; after a face has been detected, I want to send the center (x, y) of the face to the Kalman filter to draw a line showing the direction of movement. How do I start? Thanks in advance.
You will need to understand the math to formulate the problem; the link offered by William is a good place to experiment with the code. If you want to follow the math, there are a few good places to check:
http://home.hit.no/~hansha/documents/control/theory/kalmanfilter.pdf
http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20Basis%20of%20the%20Kalman%20Filter.pdf
http://old.shahed.ac.ir/references/kalman_filter_notes.pdf
and of course
http://en.wikipedia.org/wiki/Kalman_filter
has some excellent references to go through. Also...
Check out the Udacity course:
https://www.udacity.com/course/cs373
This has a section on programming Kalman filters using Python.
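To give a concrete starting point in OpenCV itself, here is a minimal C++ sketch using cv::KalmanFilter with a constant-velocity state [x, y, vx, vy] and the face centre as the two-element measurement. The Haar cascade file name and the noise covariances are assumptions; plug in your own face detector and tune the values.

// Minimal sketch: constant-velocity Kalman filter on the face centre.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::CascadeClassifier face("haarcascade_frontalface_default.xml"); // assumed path
    if (!cap.isOpened() || face.empty()) return 1;

    // State: [x, y, vx, vy], measurement: [x, y]
    cv::KalmanFilter kf(4, 2, 0);
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);
    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

    cv::Mat frame, gray;
    cv::Point prevPt(-1, -1);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        cv::Mat prediction = kf.predict();               // prediction even when detection fails
        cv::Point pt(cvRound(prediction.at<float>(0)), cvRound(prediction.at<float>(1)));

        if (!faces.empty()) {
            cv::Point2f c(faces[0].x + faces[0].width * 0.5f,
                          faces[0].y + faces[0].height * 0.5f);
            cv::Mat meas = (cv::Mat_<float>(2, 1) << c.x, c.y);
            cv::Mat est = kf.correct(meas);              // fuse detection with prediction
            pt = cv::Point(cvRound(est.at<float>(0)), cvRound(est.at<float>(1)));
        }

        if (prevPt.x >= 0)
            cv::line(frame, prevPt, pt, cv::Scalar(0, 255, 0), 2); // direction of movement
        prevPt = pt;

        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}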
I'm trying to build a simple traffic motion monitor to estimate the average speed of moving vehicles, and I'm looking for guidance on how to do so using an open source package like OpenCV, or others you might recommend for this purpose. Are there any resources that are particularly good for this problem?
The setup I'm hoping for is to install a webcam on a high-rise building next to the road in question and point the camera down onto the moving traffic. The camera altitude would be anywhere between 20 ft and 100 ft, and the building would be anywhere between 20 ft and 500 ft away from the road.
Thanks for your input!
Generally speaking, you need a way to detect cars so you can get their 2D coordinates in the video frame. You might want to use a tracker to speed up the process and take advantage of the predictable motion of the vehicles. You also need a way to calibrate the camera, so you can translate the 2D coordinates in the image into depth information and approximate the speed.
So as a first step, look at detectors such as the deformable parts model (DPM) and tracking-by-detection methods. You'll probably need to port some code from Matlab (and if you do, please make it available :-) ). If that's too slow, maybe do some segmentation of foreground blobs and track their colour histograms or HOG descriptors, using a particle filter or a Kalman filter to predict motion; a rough sketch of that simpler blob-based approach follows.
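The sketch below is only an illustration of the simpler route (not a DPM detector): it segments moving vehicles with OpenCV's MOG2 background subtractor, associates blob centroids between frames by nearest neighbour, and converts pixel displacement into speed with a metres-per-pixel factor. The video file name, the scale factor and all thresholds are assumptions; the scale factor in particular has to come from calibrating your actual camera geometry.

// Rough sketch: background subtraction + naive centroid tracking + assumed scale.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap("traffic.mp4");                    // assumed input file
    if (!cap.isOpened()) return 1;

    const double fps = cap.get(cv::CAP_PROP_FPS);
    const double metersPerPixel = 0.05;                      // assumed; must come from calibration

    auto bg = cv::createBackgroundSubtractorMOG2(500, 16, true);

    cv::Mat frame, fgMask;
    std::vector<cv::Point2f> prevCentroids;

    while (cap.read(frame)) {
        bg->apply(frame, fgMask);
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY); // drop shadow pixels
        cv::medianBlur(fgMask, fgMask, 5);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(fgMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point2f> centroids;
        for (const auto& c : contours) {
            if (cv::contourArea(c) < 500) continue;           // ignore small blobs
            cv::Rect r = cv::boundingRect(c);
            centroids.emplace_back(r.x + r.width * 0.5f, r.y + r.height * 0.5f);
        }

        // Naive nearest-neighbour match against the previous frame's blobs.
        for (const auto& c : centroids) {
            double best = 1e9;
            for (const auto& p : prevCentroids) {
                double d = std::hypot(double(c.x - p.x), double(c.y - p.y));
                if (d < best) best = d;
            }
            if (best < 50)                                     // plausible per-frame displacement
                std::cout << "vehicle speed ~ "
                          << best * metersPerPixel * fps * 3.6 << " km/h\n";
        }
        prevCentroids = centroids;
    }
    return 0;
}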
I was trying to get RGBDemo (mostly the reconstructor) working with two Logitech cameras as a stereo pair, but I could not figure out how to do it.
I noticed that there is an OpenCV grabber in the nestk library and that its header file is included in reconstructor.cpp. Yet, when I try "rgbd-viewer --camera-id 0", it keeps looking for a Kinect.
My questions:
1. Does RGBDemo only work with the Kinect so far?
2. If RGBDemo can work with non-Kinect stereo cameras, how do I do that?
3. If I need to write my own implementation for non-Kinect stereo cameras, any suggestions on how to start?
Thanks in advance.
If you want to do it with non-Kinect cameras, you don't even need stereo. There are algorithms now that can determine whether two images' viewpoints are sufficiently different for them to be used as if they were taken by a stereo camera. In fact, they use images from different cameras found on the internet and reconstruct 3D models of famous places. I can write you a tutorial on how to get it working; I've been meaning to do so. The software is called Bundler. Along with Bundler, people often also use CMVS and PMVS: CMVS preprocesses the images for PMVS, and PMVS generates dense point clouds.
BUT! I highly recommend that you don't go this route. There is so much less information in 2D images that reconstructing the 3D model is very hard, so it ends up making a lot of mistakes or simply not working. Although Bundler and PMVS are awesome compared to previous software, the stuff you can do with the Kinect is on a whole other level.
Using a Kinect will only cost you $80 for the Kinect off eBay (or $99 off Amazon) and another $5 for the power adapter off Amazon, so I'd highly recommend this route. The Kinect provides much more information for the algorithm to work with than 2D images do, making it much more effective, reliable and fast. In fact, it can take hours to process images with Bundler and PMVS, whereas with the Kinect I made a model of my desk in just a few seconds! It truly rocks!
Is there any open source code which will take a video taken indoors (from a smart phone, for example, of a home or office building's hallways) and superimpose the path traveled onto a 2D picture? This can be a hand-drawn picture or a photo of a floor layout.
First I thought of doing this using the accelerometer and compass sensors, but perhaps one can get better accuracy with a visual odometry approach. I only need 0.5 to 1 meter accuracy. The phone will also collect important information indoors (no GPS) to superimpose on the path traveled (this is the real application of the project, and we know how to do this part). The post-processing of the video can be done later on a standalone computer, so speed and CPU power are not an issue.
Challenges:
The user will simply hand-carry the smart phone, so the person taking the video is moving (walking), not fixed
Limit the video rate to keep the file size small (5 frames/sec? is that OK?). Typically perhaps a full hour of video is needed
Will using inputs from the phone sensors help the visual approach?
Any help or guidance is appreciated. Thanks!
I have worked in the area for quite some time. There are three points which I'd care to make.
Vision only is hard
Vision-based navigation using just a cellphone camera is very difficult. Most of the literature with great results reports roughly 1% of distance traveled as state of the art, but usually using stereo cameras. Stereo helps a great deal, particularly in indoor environments, for coping with scale drift. I've worked on a system which achieves 0.5% of distance traveled with stereo but only roughly 5% of distance traveled with monocular. While I can't share code, much of our system was inspired by this Sibley and Mei paper.
The stereo code in our case ran at a full 60 fps on a desktop. Provided you can push data fast enough, it'll be fine. With your error envelope, you can only navigate for 100 m or so. Is that enough?
Multi-sensor is the way to go, though the other sensors are worse than vision by themselves.
I've heard of some good work with accelerometers mounted on the foot to do ZUPT (zero-velocity updates) when the foot is briefly motionless on the ground while taking a step, in order to zero out drift. This approach has the clear drawback of needing to mount the device on your foot, which makes a vision approach largely useless.
A compass is interesting but will be thrown off by the tons of metal inside an office building. Translating a few feet around a large metal cabinet might cause a directional jump of 50+ degrees.
Ultimately, a combination of sensors is likely to be the best if you can make that work.
Can you solve a simpler problem?
How much control do you have over your environment? Can you slap down fiducial markers? Can you do WiFi triangulation? Does it need to be an initial exploration? If you can go through the environment beforehand and produce visual bubbles (akin to Google Street View) to match against, you'll be much more accurate.
I'm not aware of any software that does this directly (though it might exist), but stuff similar to what you want to do has been done. A few pointers:
Google for "vision-based robot localization": the problem you state is very similar to the problem robots with a camera face when they enter a new environment. In this field the approach is usually to have the robot map its environment and then use the model for later reference, but the techniques are similar to what you'll need.
Optical flow will roughly tell you in what direction the camera is moving, but it won't tell you the speed, because you have no objective reference. That is because you don't know whether the things you see moving in the video feed are 1 cm away and very small or 1 mile away and very big (a small optical-flow sketch follows these pointers).
If you know the camera matrix of the camera recording the images, you could try partial 3D scene reconstruction techniques to take a stab at the speed. Note that you can do the 3D scene reconstruction without the camera matrix (this is the "uncalibrated" part you see in the titles of many of the Google results); the camera matrix lets you add real-world object sizes (and hence distances) to your reconstruction.
The number of images per second you need depends on the speed of the camera. More is better, but my guess is that 5 per second should be sufficient at walking speeds.
Using extra sensors will help. The robot localization articles probably discuss this as well.
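As a small illustration of the optical-flow point above, the following sketch computes dense Farneback flow between consecutive frames and averages it over the image to get the dominant direction of apparent motion; as noted, it gives direction only, not speed in real-world units. The input file name is an assumption.

// Small sketch: mean dense optical flow between consecutive frames.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("walkthrough.mp4");             // assumed input video
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prev, flow;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (!prev.empty()) {
            cv::calcOpticalFlowFarneback(prev, gray, flow,
                                         0.5, 3, 15, 3, 5, 1.2, 0);
            cv::Scalar mean = cv::mean(flow);             // average (dx, dy) over all pixels
            std::cout << "mean flow: dx=" << mean[0]
                      << " dy=" << mean[1] << "\n";       // direction only, no absolute scale
        }
        prev = gray.clone();
    }
    return 0;
}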