Detecting motion in OpenCV C++ (moving camera)

I'm doing a project for university and I'm working with OpenCV (which is really awesome).
Now my problem is this:
I have a video (.avi) and I have detected all the information I want about the blobs that suddenly appear in the RGB range between red and yellow. I then build a matrix that stores all the pixel values, and finally I create a red-scale image that represents the median pixel values.
The real problem is that the video is not static: the camera moves (not much, but it moves).
Can I calculate the x and y coordinates of the camera motion, so that I can shift the values of the matrix?

Who cares about your English, as long as we understand your problem? :) What you could really do is give KLT motion detection, which is implemented in OpenCV, a shot. Here is a link to KLT, also known as optical flow. If you can filter the motion vectors down to those belonging to the blobs, you can certainly get hold of the object you want to track. Even better, give KLT the object's initial coordinates/area to track. Have you checked the OpenCV blobs library to get hold of the blobs? Here is the link.
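For illustration, a rough sketch of that suggestion in OpenCV C++: track Shi-Tomasi corners with pyramidal Lucas-Kanade (KLT) and take the median per-frame displacement as the camera shift. The file name and parameter values are assumptions; the median is used so the moving blobs do not dominate the estimate.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Sketch: estimate the per-frame camera translation with KLT and
    // accumulate it, so the blob matrix can be shifted to compensate.
    int main()
    {
        cv::VideoCapture cap("input.avi");          // hypothetical file name
        cv::Mat prev, prevGray, frame, gray;
        if (!cap.read(prev)) return -1;
        cv::cvtColor(prev, prevGray, cv::COLOR_BGR2GRAY);

        cv::Point2f totalShift(0.f, 0.f);
        while (cap.read(frame))
        {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            // Find good features in the previous frame and track them
            std::vector<cv::Point2f> p0, p1;
            cv::goodFeaturesToTrack(prevGray, p0, 200, 0.01, 10);
            if (!p0.empty())
            {
                std::vector<uchar> status;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, p0, p1, status, err);

                // Median displacement is robust to the moving blobs
                std::vector<float> dx, dy;
                for (size_t i = 0; i < p0.size(); ++i)
                    if (status[i])
                    {
                        dx.push_back(p1[i].x - p0[i].x);
                        dy.push_back(p1[i].y - p0[i].y);
                    }
                if (!dx.empty())
                {
                    std::nth_element(dx.begin(), dx.begin() + dx.size() / 2, dx.end());
                    std::nth_element(dy.begin(), dy.begin() + dy.size() / 2, dy.end());
                    totalShift.x += dx[dx.size() / 2];
                    totalShift.y += dy[dy.size() / 2];
                }
            }
            // totalShift is the accumulated camera motion since frame 0;
            // shift the accumulation matrix by -totalShift to compensate.
            gray.copyTo(prevGray);
        }
        return 0;
    }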

Related

Finding regions of higher numbers in a matrix

I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Does either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is to go through each pixel and calculate its rate of change relative to its surrounding pixels, hoping that pixels with high rates of change (steep slopes) would mark the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds an elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image.
There's a related post with code examples in: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
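For illustration, a minimal sketch of the dilate-and-subtract idea, assuming the DEM is already loaded into a CV_32F Mat (the kernel size and tolerance are assumptions to tune):

    #include <opencv2/opencv.hpp>

    // Sketch: find local maxima in a DEM stored as a 32-bit float Mat.
    // Pixels that equal the local maximum of their neighborhood survive.
    cv::Mat localMaxima(const cv::Mat& dem)
    {
        // Dilation replaces each pixel with the max of its neighborhood
        cv::Mat dilated;
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 15));
        cv::dilate(dem, dilated, kernel);

        // A pixel is a local maximum where dem == dilated; subtracting
        // and thresholding near zero gives a binary mask of the peaks.
        cv::Mat diff = dilated - dem;
        cv::Mat peaks;
        cv::compare(diff, 1e-3, peaks, cv::CMP_LT);  // tolerance is an assumption
        return peaks;                                 // CV_8U mask of local maxima
    }

Because the comparison is relative to the neighborhood rather than an absolute height, this works regardless of whether the ground sits at 10 m or 20 m, as asked.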

Take an image of a tube that always spins around in OpenCV C++

First of all, sorry for my bad English,
I have an object like the following picture, and the object always spins around a horizontal axis. Can anybody recommend how I can take a photo showing the full label of the tube while the tube is spinning? I can grab an image from my camera via OpenCV C++, but when the tube is spinning I can't take a good photo (my image is blurry, not clear).
My tube faces the camera straight on. Its rotation speed is about 500 RPM.
Hope to get your help soon,
Thank you very much!
This is my object:
Some sample images:
Here is my image when I use the camera of an iPhone 5 with flash:
Motion blur
This can be improved by lowering the exposure time, but you need to increase the lighting to compensate. Most modern compact cameras cannot set the exposure time directly (so the companies can sell the expensive professional cameras instead), even though it would be just a few lines of GUI code; but if you increase the light, the automatic exposure should shorten on its own.
In industry this problem is solved by special TDI cameras, such as the
HAMAMATSU TDI Line Scan Cameras.
TDI means Time Delay Integration: the camera's CCD pixels pass their charge to the next pixel, synchronized with the motion. The effect is as if you moved the camera synchronously with your object's surface. The blur is still present but much, much smaller (only a fraction of the real exposure time).
In computer vision and DIP you can de-blur the image by deconvolution if you know the movement properties (which you do). It inverts the blur kernel with the use of the FFT and an optimization process to find the inverse filter.
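For illustration, a minimal sketch of such a deconvolution, assuming the blur is a straight horizontal motion of known length and using a Wiener filter in the frequency domain. The file names, blur length, and noise-to-signal ratio are assumptions, and the image dimensions are assumed even for the quadrant swap:

    #include <opencv2/opencv.hpp>
    using namespace cv;

    // Swap quadrants so the PSF center moves to the origin (assumes even dims)
    static void fftshift(Mat& m)
    {
        int cx = m.cols / 2, cy = m.rows / 2;
        Mat q0(m, Rect(0, 0, cx, cy)), q1(m, Rect(cx, 0, cx, cy));
        Mat q2(m, Rect(0, cy, cx, cy)), q3(m, Rect(cx, cy, cx, cy)), tmp;
        q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3);
        q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2);
    }

    int main()
    {
        Mat img = imread("blurred.png", IMREAD_GRAYSCALE); // hypothetical input
        img.convertTo(img, CV_32F);

        // Linear motion-blur PSF: a horizontal line of `len` pixels.
        // len follows from surface speed * exposure time (assumed known).
        int len = 20;
        Mat psf = Mat::zeros(img.size(), CV_32F);
        Point c(img.cols / 2, img.rows / 2);
        line(psf, Point(c.x - len / 2, c.y), Point(c.x + len / 2, c.y), Scalar(1), 1);
        psf /= sum(psf)[0];                     // normalize kernel energy to 1
        fftshift(psf);                          // center the PSF at the origin

        // Forward DFTs of image and PSF as complex spectra
        Mat planes[2], I, H;
        planes[0] = img.clone(); planes[1] = Mat::zeros(img.size(), CV_32F);
        merge(planes, 2, I); dft(I, I);
        planes[0] = psf.clone(); planes[1] = Mat::zeros(img.size(), CV_32F);
        merge(planes, 2, H); dft(H, H);

        // Wiener filter: F = G * conj(H) / (|H|^2 + NSR)
        double nsr = 0.01;                      // assumed noise-to-signal ratio
        Mat num; mulSpectrums(I, H, num, 0, true);   // G * conj(H)
        split(H, planes);
        Mat denom = planes[0].mul(planes[0]) + planes[1].mul(planes[1]) + nsr;
        split(num, planes);
        planes[0] /= denom; planes[1] /= denom;
        Mat F; merge(planes, 2, F);

        Mat restored;
        idft(F, restored, DFT_SCALE | DFT_REAL_OUTPUT);
        normalize(restored, restored, 0, 255, NORM_MINMAX);
        restored.convertTo(restored, CV_8U);
        imwrite("deblurred.png", restored);
        return 0;
    }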
Out of focus blur
This is due to the fact that your surface is curved while the camera chip is flat, so the outer pixels are at a different distance from the chip than the center pixels. Without special optics you can handle this with line cameras. Of course I do not expect you to have one, so you can use your camera for this too.
Just mount your camera so that one of the camera axes is parallel to your object's rotation axis (surface), for example the x axis. Then sample images with a constant time step and use only the center line/slice of each image (the height of the line/slice depends on your exposure time and the object's speed; the slices should overlap a bit). Then just combine these lines/slices from all the sampled images to form the focused image.
[Edit1] home made TDI setup
Mount the camera so its view axis is perpendicular to the surface.
Take burst shots or video with a constant frame rate.
The shorter the exposure time (the higher the frame rate), the sharper the whole image will be (less optical blur) and the bigger the area dy left usable by motion blur. And the higher the rotation RPM, the smaller dy will be. So find the best option for your camera, RPM, and lighting conditions (adding a strong light usually helps if you do not have reflective surfaces on the tube).
For correct output you need to compromise on each parameter so that:
the exposure time is as short as it can be
the focused areas overlap between shots (if not, you can sample more revolutions, similar to old FDD sector reading...)
extract focused part of shots
You need just the focused middle part of each shot, so empirically take a few shots with your setup and choose the dy size. Then use that as a constant later. Extract the middle part (slice) from the shots; in my example image it is the red area.
combine slices
Just copy the slices together (or average the overlapping parts). They should overlap a bit so that you do not get holes in the final image. As you can see, my final example image uses smaller slices than acquired, to make this more obvious.
Your camera image can be off by a few pixels due to vibrations, so if that is a problem in the final image you can use SIFT/SURF + RANSAC auto-stitching for higher-precision output.
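For illustration, a minimal sketch of the slice extraction and combination steps in OpenCV C++, assuming a constant-frame-rate video, a rotation axis parallel to the image x axis, and an empirically chosen slice height dy (file names and values are placeholders):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch: "home made TDI" - grab the focused middle slice of each
    // frame of the spinning tube and stack the slices into one image.
    int main()
    {
        cv::VideoCapture cap("tube.avi");   // hypothetical constant-frame-rate video
        const int dy = 8;                   // slice height, chosen empirically

        std::vector<cv::Mat> slices;
        cv::Mat frame;
        while (cap.read(frame))
        {
            // Take the middle horizontal band, where the surface is in focus
            cv::Rect band(0, frame.rows / 2 - dy / 2, frame.cols, dy);
            slices.push_back(frame(band).clone());
        }
        if (slices.empty()) return -1;

        // Stack the slices; in reality they should overlap slightly, so
        // averaging or stitching the overlap would give a cleaner result.
        cv::Mat unwrapped;
        cv::vconcat(slices, unwrapped);
        cv::imwrite("tube_unwrapped.png", unwrapped);
        return 0;
    }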

Dealing with noisy movement positions

I have positions of an object moving around an image. I think I'm detecting it as well as I can: most of the time I detect the center of the object. However, I still get the odd detection away from the center, caused by the frame rate not being fast enough and a frame containing two positions of the object.
As I can't control the frame rate, how can I minimise the effect of the noise in these jittery positions?
As this is a common issue in computer vision, are there any filters in OpenCV to deal with noisy position data?
I asked for the comment by Berak to be made into an answer, but my request was "declined". So, yes, the answer I found most useful was to use the Kalman filter, which is implemented in OpenCV.
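For illustration, a minimal sketch of smoothing a noisy 2D position with OpenCV's cv::KalmanFilter, using a constant-velocity model (the noise covariances and the stand-in detections are assumptions to replace per application):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Sketch: constant-velocity Kalman filter for a jittery 2D position.
    // State: [x, y, vx, vy]; measurement: [x, y].
    int main()
    {
        cv::KalmanFilter kf(4, 2, 0);
        kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        cv::setIdentity(kf.measurementMatrix);
        cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));     // tune
        cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1)); // tune
        cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

        // Stand-in for the per-frame detections (replace with real ones)
        std::vector<cv::Point2f> detections = { {100, 100}, {103, 99}, {101, 102} };

        cv::Mat measurement(2, 1, CV_32F);
        for (const cv::Point2f& d : detections)
        {
            kf.predict();
            measurement.at<float>(0) = d.x;
            measurement.at<float>(1) = d.y;
            cv::Mat est = kf.correct(measurement);
            cv::Point2f smooth(est.at<float>(0), est.at<float>(1));
            std::cout << smooth << std::endl;   // smoothed position to use
        }
        return 0;
    }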

Detecting camera motion with opencv

I am working on drone stabilization close to walls using a camera. For this to work I need to extract the motion the camera makes relative to the wall. So far I have used an extended OpenCV example which uses goodFeaturesToTrack to find feature points in every frame. These feature points are then tracked into the next frame using calcOpticalFlowPyrLK, which implements the Lucas-Kanade method. I then subtract the point locations to calculate the displacement, and adding all displacements together gives me the total displacement from the first frame (in between I did some averaging and filtering).
The results I get do not look like the motion of the camera at all: the motion goes in every direction. Does anybody have any idea what's going wrong? Am I using the wrong algorithm for a problem like this?
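For reference, a sketch of the per-frame displacement step described above, with the plain averaging replaced by a RANSAC-fitted transform so that outlier tracks do not dominate the estimate (cv::estimateAffinePartial2D needs OpenCV 3.2 or later; the parameter values are assumptions):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch: per-frame camera displacement from tracked features,
    // made robust against outlier tracks with RANSAC.
    cv::Point2f frameDisplacement(const cv::Mat& prevGray, const cv::Mat& gray)
    {
        std::vector<cv::Point2f> p0, p1;
        cv::goodFeaturesToTrack(prevGray, p0, 300, 0.01, 8);
        if (p0.size() < 4) return cv::Point2f(0.f, 0.f);

        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, p0, p1, status, err);

        std::vector<cv::Point2f> src, dst;
        for (size_t i = 0; i < p0.size(); ++i)
            if (status[i]) { src.push_back(p0[i]); dst.push_back(p1[i]); }
        if (src.size() < 4) return cv::Point2f(0.f, 0.f);

        // Rigid-ish fit (rotation + translation + scale) with outlier rejection
        std::vector<uchar> inliers;
        cv::Mat A = cv::estimateAffinePartial2D(src, dst, inliers, cv::RANSAC);
        if (A.empty()) return cv::Point2f(0.f, 0.f);

        // Translation part of the 2x3 affine matrix
        return cv::Point2f((float)A.at<double>(0, 2), (float)A.at<double>(1, 2));
    }

If the wall is roughly planar and the drone also rotates, a homography (cv::findHomography with RANSAC) may model the motion better than a similarity transform.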

Opencv C++ finding movement in a thresholded image

I am using OpenCV with C++ and I am trying to find a moving ball under different lighting conditions. So far I am able to filter an image by thresholding it in HSV color space. The problem with this is that it also picks up other objects that have a similar color. It is very tedious to figure out the exact HSV range every time there is a ball with a different color/background.
Is there a filter I can apply to the thresholded binary image to detect only the objects that are moving? That way I would only find the ball and not the other objects, since they are usually stationary.
Thank you,
Varun
The simplest approach would be frame differencing / background learning on an image sequence:
frame differencing: subtract two successive frames; the result is the moving part (you will probably only get the edges of moving objects)
background learning: e.g. build an average over 50 frames; this becomes your learned background. Then subtract the current frame from it; again, the difference is the moving part.
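For illustration, both ideas in one minimal sketch (the learning rate and thresholds are assumptions to tune):

    #include <opencv2/opencv.hpp>

    // Sketch: frame differencing plus a running-average background model.
    int main()
    {
        cv::VideoCapture cap(0);            // or a video file
        cv::Mat frame, gray, prevGray, background, diff, mask;

        while (cap.read(frame))
        {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            if (background.empty())
            {
                gray.convertTo(background, CV_32F);
                gray.copyTo(prevGray);
                continue;
            }

            // (a) frame differencing: moving edges between successive frames
            cv::absdiff(gray, prevGray, diff);
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);   // tune

            // (b) background learning: slowly average frames into a model,
            // then subtract the current frame from it
            cv::accumulateWeighted(gray, background, 0.02);          // tune rate
            cv::Mat bg8u, bgDiff, bgMask;
            background.convertTo(bg8u, CV_8U);
            cv::absdiff(gray, bg8u, bgDiff);
            cv::threshold(bgDiff, bgMask, 25, 255, cv::THRESH_BINARY);

            // Combine with the existing HSV color mask to keep only the
            // moving ball, e.g. cv::bitwise_and(colorMask, bgMask, ballMask);
            gray.copyTo(prevGray);
        }
        return 0;
    }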