Remove moving objects to get the background model from multiple images - c++

I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images, so I need to build a simple background model image by processing 5 to 10 captured images that share the same background.
Can someone help me, please?

Is your eventual goal to find foreground? Can you show some images?
If the animals move fast enough they will create a lot of intensity changes, while background pixels will remain closely correlated across most of the frames. I won't write real code, but here is OpenCV-style pseudo-code. The main idea is to average only the correlated pixels:
Mat Iseq[N];                 // your sequence of N frames (e.g. N = 10)
Mat result, Iacc, Icnt;      // Iacc and Icnt are float images, initialized to zero
loop through your sequence, i = 0; i < N-1; i++
    matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
    mask = 1 & (result > 0.9);                 // correlated part, which is probably background
    Iacc += Iseq[i] & mask + Iseq[i+1] & mask; // accumulate background evidence
    Icnt += 2*mask;                            // keep per-pixel count
end of loop;
Mat Ibackground = Iacc.mul(1.0/Icnt);          // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply a blur to enhance correlation. You can also clean every mask of small connected components, for example by erosion.
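For the mask clean-up, a one-line erosion is usually enough (this snippet is my addition to the pseudo-code above; `mask` is the per-pair mask from the loop):
// shrink the mask so isolated "correlated" pixels and thin structures disappear
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
cv::erode(mask, mask, kernel);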

If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process will work for colour images too -- just treat each colour component separately). If a particular pixel appears as background in more than half the frames, then when you take the intensities of that pixel across all frames and sort them, a background-coloured pixel must appear at the half-way (median) position. (In the worst case, all background-coloured pixels get pushed to the very front or the very back in this order, but even then there are enough of them to cover the half-way point.)
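A minimal OpenCV/C++ sketch of this per-pixel median (my own illustration, not code from the question; it assumes all frames are 8-bit, the same size and already loaded):
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// Per-pixel median over a stack of same-sized 8-bit frames (grayscale or color;
// color channels are treated independently, as described above).
cv::Mat medianBackground(const std::vector<cv::Mat>& frames)
{
    CV_Assert(!frames.empty());
    cv::Mat background = frames[0].clone();
    const int channels = background.channels();
    std::vector<uchar> values(frames.size());

    for (int y = 0; y < background.rows; y++)
        for (int x = 0; x < background.cols; x++)
            for (int c = 0; c < channels; c++)
            {
                for (size_t i = 0; i < frames.size(); i++)
                    values[i] = frames[i].ptr<uchar>(y)[x * channels + c];
                std::nth_element(values.begin(), values.begin() + values.size() / 2, values.end());
                background.ptr<uchar>(y)[x * channels + c] = values[values.size() / 2];
            }
    return background;
}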

If you only have 5 images it is going to be hard to identify the background, and most sophisticated techniques probably won't work. For general background identification methods, see Link

Related

Can I balance an extremely bright picture in python? This picture is a result of thousands of pictures stitched together to form a panorama

My aim is to stitch 1-2 thousand images together. I find the key points in all the images, then I find the matches between them. Next, I find the homography between the two images. I also take into account the current homography and all the previous homographies. Finally, I warp the images based on the combined homography. (My code is written in Python 2.7.)
The issue I am facing is that when I overlay the warped images, they become extremely bright. The reason is that most of the area between two consecutive images is common/overlapping. So, when I overlay them, the intensities of the common areas increase by a factor of 2, and as more and more images are overlaid the values keep growing until eventually I get a matrix where all the pixels have the value 255.
Can I do something to adjust the brightness after every image I overlay?
I am combining/overlaying the images via the OpenCV function cv.addWeighted():
dst = cv.addWeighted( src1, alpha, src2, beta, gamma)
here, I am taking alpha and beta = 1
dst = cv.addWeighted( image1, 1, image2, 1, 0)
I also tried decreasing the values of alpha and beta, but then another problem appears: after around 100 images have been overlaid, the first ones start to vanish, probably because their intensity approaches zero after being multiplied by 0.5 at every iteration. The function looked as follows (here I also set gamma to 5):
dst = cv.addWeighted( image1, 0.5, image2, 0.5, 5)
Can someone please help with how I can solve the problem of images getting extremely bright (when alpha = beta = 1) or images vanishing after a certain point (when alpha and beta are both around 0.5)?
This is the code where I am overlaying the images:
for i in range(0, len(allWarpedImages)):
    for j in range(1, len(allWarpedImages[i])):
        allWarpedImages[i][0] = cv2.addWeighted(allWarpedImages[i][0], 1, allWarpedImages[i][j], 1, 0)
    images.append(allWarpedImages[i][0])

cv2.imwrite('/root/Desktop/thesis' + 'final.png', images[0])
When you stitch two images, the pixel values of the overlapping part should not just add up. Ideally, two matching pixels have the same value (a spot in the first image should have the same value in the second image), so you simply keep one of them.
In reality, two matching pixels may have slightly different values, so you can simply average them. Better still, adjust their exposure levels to match each other before stitching.
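As an illustration of averaging the overlap (my sketch, not the poster's code; the question uses Python, but the same functions exist in cv2), keep a running per-pixel sum and a count of how many images cover each pixel, then divide at the end. It assumes every warped image already lives on the same final canvas and is black outside its valid region:
#include <opencv2/opencv.hpp>
#include <vector>

// Average overlapping pixels instead of adding them up.
cv::Mat averageBlend(const std::vector<cv::Mat>& warped)   // warped: 8-bit BGR images, all the same size
{
    CV_Assert(!warped.empty());
    cv::Mat sum   = cv::Mat::zeros(warped[0].size(), CV_32FC3);  // running sum of pixel values
    cv::Mat count = cv::Mat::zeros(warped[0].size(), CV_32FC3);  // how many images cover each pixel

    for (const cv::Mat& img : warped)
    {
        cv::Mat imgF, mask, maskF;
        img.convertTo(imgF, CV_32FC3);
        cv::cvtColor(img, mask, cv::COLOR_BGR2GRAY);
        mask = mask > 0;                                  // valid (non-black) pixels of this image
        cv::cvtColor(mask, maskF, cv::COLOR_GRAY2BGR);
        maskF.convertTo(maskF, CV_32FC3, 1.0 / 255.0);    // 0/1 weights per channel

        sum   += imgF.mul(maskF);
        count += maskF;
    }
    count = cv::max(count, 1.0f);                         // avoid division by zero where nothing was drawn
    cv::Mat result;
    cv::divide(sum, count, result);
    result.convertTo(result, CV_8UC3);
    return result;
}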
For many images to be stitched together, you will need to adjust all of their exposure levels to match. Equalizing exposure is a rather big topic; read about "histogram equalization" if you are not familiar with it yet.
Also, it is very possible that there is high contrast across that many images, so you may need to make your stitched image an HDR (high dynamic range) image, to prevent pixel value overflow/underflow.

cv::detail::MultiBandBlender strange white streaks at the end of the photo

I'm working with OpenCV 3.4.8 with C++11 and I'm trying to blend images together.
In this example I have 2 images (their masks are shown in the screenshot below). I have georeferencing, so I can easily calculate the corners of these images in the final image.
The data outside the masks are black.
My code looks something like this:
std::vector<cv::UMat> inputImages;
std::vector<cv::UMat> masks;
std::vector<cv::Point> corners;
std::vector<cv::Size> imgSizes;
/*
here is the code where I load the images, create their masks
(like in the screen above) and calculate corners.
*/
cv::Ptr<cv::detail::SeamFinder> seamFinder = new cv::detail::DpSeamFinder();
seamFinder->find(inputImages, corners, masks);
cv::Ptr<cv::detail::Blender> blender = new cv::detail::MultiBandBlender(false);
blender->prepare(corners, imgSizes);
for(size_t i = 0; i < inputImages.size(); i++)
{
blender->feed(inputImages[i], masks[i], corners[i]);
}
cv::UMat blendedImg, outMask;
blender->blend(blendedImg, outMask);
SeamFinder gives me a result like in the screenshot above. The found seam lines look good and I'm very satisfied with them. But another problem occurs in the next step: the MultiBandBlender produces strange white streaks where the seam line runs along the edge of the data.
This is an example:
When I don't use the blender, but just use the masks to cut the original images and add them (cv::add()) together with an additional alpha channel (made from the masks), I get very good results without any holes or strange colors, but I need a smoother transition :/
Can anyone help me? When I create the MultiBandBlender with a smaller num_bands the white streaks are smaller, and with num_bands = 0 the result looks like just adding the images.
I looked at the feed() and blend() methods in MultiBandBlender and I think this is connected with the Gaussian/Laplacian pyramids and the final reconstruction of the image from the Laplacian pyramid in the blend() method.
EDIT1:
When the Gaussian and Laplacian pyramids are created, copyMakeBorder() is used, which prevents the MultiBandBlender from making these white streaks when the images are fully filled with data. So in my case I think I need to create a blender almost the same as MultiBandBlender, but change the copyMakeBorder() call in the feed() method to something that will "extend" my image inside the mask, like @AlexanderKondratskiy suggested.
Now I don't know how to achieve a correct "extend" similar to BORDER_REFLECT or BORDER_REFLECT_101.
I suspect your input images contain white pixels outside those masks. The white banding occurs around the areas where the seam follows the mask exactly. For the Laplacian pyramid, for example, pixels outside the mask do influence the final result, as each layer of a pyramid is essentially a blurring kernel applied to the image.
If you have some kind of good data outside the mask, keep it. If you do not, I suggest "extending" your image beyond the mask to maintain a smooth transition.
Edit:
Here are two things you could try, unless someone with more experience with OpenCV comes along.
To prove/disprove my hypothesis, fill the black region with just the average or median color within the mask. This should make the transition to the outside region less sharp, and hopefully reduce the artefacts. If that does not happen, my answer is wrong.
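A minimal sketch of this first test (my illustration; `image` and `mask` stand for one input image and its 8-bit mask):
// mean color over the pixels inside the mask
cv::Scalar meanInside = cv::mean(image, mask);
// paint everything outside the mask with that mean color
cv::Mat filled = image.clone();
filled.setTo(meanInside, ~mask);   // ~mask selects the pixels outside the mask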
In terms of what is probably a good generalization of "BORDER_REFLECT" when the edge is arbitrary, you could try something like this:
Find the centroid c of the mask polygon
For each pixel p outside the mask, think of the line between it and c
Calculate the point p' along this line that is the same distance inside the mask area as p is from the mask edge (i.e. you're reflecting across the mask edge)
Linearly interpolate the color from the neighbors of p' (as its position may not fall exactly at the center of a pixel). That's the color of pixel p

How to project IR image on a 2D plane using OpenCV and PCL

I have a Kinect and I'm using OpenCV and the Point Cloud Library. I would like to project the IR image onto a 2D plane for forklift pallet detection. How would I do that?
I'm trying to detect the pallet in the forklift here is an image:
Where is the RGB data? You can use it to help with the detection. You do not need to project the image onto any plane to detect a pallet. There are basically two approaches used for detection:
non-deterministic, based on neural networks, fuzzy logic, machine learning, etc.
This approach needs a training dataset to recognize the object. Much experience is needed for proper training-set and classifier architecture/topology selection. But other than that you do not need to program it... usually some readily available lib/tool is used; just configure it and pass the data.
deterministic, based on distances or correlation coefficients
I would start with detecting specific features like:
pallet has specific size
pallet has sharp edges and a specific geometric shape in the depth data
pallet has specific range of colors (yellowish wood +/- lighting and dirt)
Wood has specific texture patterns
So compute a coefficient for each feature indicating how close the object is to a real pallet, and then just threshold the combined distance of all coefficients (possibly weighted, as some features are more robust than others).
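A toy illustration of such a weighted combination (all names, weights and the threshold here are made up, just to show the idea):
// each score is assumed to be in [0,1], where 1 means "very pallet-like"
double palletScore(double sizeScore, double shapeScore, double colorScore)
{
    const double wSize = 0.3, wShape = 0.5, wColor = 0.2;  // weights reflect assumed robustness
    return wSize * sizeScore + wShape * shapeScore + wColor * colorScore;
}
// accept the object as a pallet when the combined score passes a chosen threshold,
// e.g. bool isPallet = palletScore(sizeScore, shapeScore, colorScore) > 0.6;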
I do not use approach #1, so I would go for #2. So combine the RGB and depth data (they have to be aligned exactly). Then segment the image (based on depth and color). After that, for each found object, classify whether it is a pallet...
[Edit1]
Your colored image does not correspond to the depth data. The aligned gray-scale image has poor quality and the depth-data image is also very poor. Is the depth data processed somehow (losing precision)? If you look at your data from different sides:
You can see how poor it is, so I doubt you can use the depth data for detection at all...
PS. I used my Align already captured rgb and depth images answer for the visualization.
The only thing left is the colored image, so detect areas with matching color only. Then detect the features and classify. The color of your pallet in the image is almost white. Here HSV reduced the colors to the basic 16 colors (too lazy to segment):
You should obtain the range of colors of the pallets possible in your setup to ease the detection. Then check those objects for features like size, shape, area, circumference...
[Edit2]
So I would start with image preprocessing:
convert to HSV
threshold only pixels close to the pallet color
I chose (H=40, S=18, V>100) as the pallet color. My HSV ranges are <0,255> per channel, so the Hue angle difference can only be <-180deg,+180deg> max, which corresponds to <-128,+128> in my ranges.
remove too thin areas
Just scan all horizontal and vertical lines, count consecutive set pixels, and if the run is too small recolor it to black...
This is the result:
On the left is the original image (downsized so it fits on this page), in the middle is the color threshold result, and last is the filtering out of small areas. You can play with the thresholds and pallet color to change the behavior to suit your needs.
Here is the C++ code:
int tr_d=10; // min size of pallet [pixels]
int h,s,v,x,y,xx;
color c;
pic1=pic0;
pic1.pf=_pf_rgba;
pic2.resize(pic1.xs*3,pic1.ys); xx=0;
pic2.bmp->Canvas->Draw(xx,0,pic0.bmp); xx+=pic1.xs;
// [color selection]
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
// get color from image
c=pic0.p[y][x];
rgb2hsv(c);
// distance to white-yellowish color in HSV (H=40,S=18,V>100)
h=c.db[picture::_h]-40;
s=c.db[picture::_s]-18;
v=c.db[picture::_v];
// hue is cyclic angular so use only shorter angle
if (h<-128) h+=256;
if (h>+128) h-=256;
// abs value
if (h< 0) h=-h;
if (s< 0) s=-s;
// treshold close colors
c.dd=0;
if (h<25)
if (s<25)
if (v>100)
c.dd=0x00FFFFFF;
pic1.p[y][x]=c;
}
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
// [remove too thin areas]
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;)
{
for ( ;x<pic1.xs;x++) if ( pic1.p[y][x].dd) break; // find set pixel
for (h=x;x<pic1.xs;x++) if (!pic1.p[y][x].dd) break; // find unset pixel
if (x-h<tr_d) for (;h<x;h++) pic1.p[y][h].dd=0; // if too small size recolor to zero
}
for (x=0;x<pic1.xs;x++)
for (y=0;y<pic1.ys;)
{
for ( ;y<pic1.ys;y++) if ( pic1.p[y][x].dd) break; // find set pixel
for (h=y;y<pic1.ys;y++) if (!pic1.p[y][x].dd) break; // find unset pixel
if (y-h<tr_d) for (;h<y;h++) pic1.p[h][x].dd=0; // if too small size recolor to zero
}
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
See how to extract the borders of an image (OCT/retinal scan image) for the description of the picture and color classes, or look at any of my DIP/CV-tagged answers. The code is well commented and straightforward, but I just need to add:
You can ignore the pic2 stuff; it is just there to produce the image posted above so I do not need to manually print-screen and merge the sub-results in Paint... To improve robustness you should add dynamic range enhancement (so the thresholds work under the same conditions for any input image). Also you should compare against more than just a single color (in case more wood types of pallets are present).
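For readers without the picture class, a rough OpenCV translation of the color threshold above might look like this (my approximation, not the original code; note that OpenCV stores Hue in 0..179 while the code above uses 0..255, so H = 40 above maps to roughly H = 28 here and the tolerances are scaled accordingly):
#include <opencv2/opencv.hpp>

cv::Mat palletColorMask(const cv::Mat& bgr)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(28 - 17, 0, 100),    // lower bounds for H, S, V
                     cv::Scalar(28 + 17, 43, 255),   // upper bounds for H, S, V
                     mask);
    // a morphological open stands in for the "remove too thin areas" pass
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(9, 9));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
    return mask;
}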
Now you should segment or label the areas:
loop through entire image
find first pixel set with the pallet color
flood fill the area with some distinct ID color different from set pallet color
I use black 0x00000000 for empty space and white 0x00FFFFFF as the pallet pixel color, so use ID = {1,2,3,4,5,...}. Also remember the number of filled pixels (that is your area) so you do not need to compute it again. You can also compute the bounding box directly while filling.
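If you prefer a ready-made OpenCV call instead of manual flood filling (my suggestion, not part of the original code), cv::connectedComponentsWithStats labels the blobs and returns the area and bounding box per label in one go (`mask` is the binary result of the color threshold):
cv::Mat labels, stats, centroids;
int nLabels = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
for (int id = 1; id < nLabels; id++)                      // label 0 is the background
{
    int area = stats.at<int>(id, cv::CC_STAT_AREA);       // filled pixel count
    cv::Rect box(stats.at<int>(id, cv::CC_STAT_LEFT),
                 stats.at<int>(id, cv::CC_STAT_TOP),
                 stats.at<int>(id, cv::CC_STAT_WIDTH),
                 stats.at<int>(id, cv::CC_STAT_HEIGHT));
    // ... classify the blob using area, box, etc.
}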
compute and compare features
You need to experiment with more than one image to find out which properties are good for detection. I would go for the circumference-length-to-area ratio and/or the bounding box size... The circumference can be extracted by simply selecting all pixels with the proper ID color that neighbor a black pixel.
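A hedged sketch of the circumference-to-area test with plain OpenCV contours (again my illustration; `mask` is the binary pallet-color image):
std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE); // clone: older OpenCV modifies the input
for (const auto& contour : contours)
{
    double area = cv::contourArea(contour);
    if (area <= 0.0) continue;
    double circumference = cv::arcLength(contour, true);   // true = closed contour
    double ratio = circumference * circumference / area;   // ~16 for a square, larger for thin/ragged shapes
    cv::Rect box = cv::boundingRect(contour);
    // ... compare ratio and box against values measured on real pallets
}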
See also similar Fracture detection in hand using image proccessing
Good luck and have fun ...

Match object between different video frames

I am trying to use OpenCV to detect the shift between consecutive video frames when the camera is unstable and moving in real time, as shown in the picture. To compensate for the effect of shaking or changes in angle I want to match some objects in the image, for example the clock, and from the center of the same object in consecutive frames I can detect the shift value and compensate for its effect. I don't know how to do this in real time, or how many accurate ways there are to do it.
Thank you in advance and I hope my question is clear.
This is a fairly standard operation, as it is actively used in MPEG-4 compression. It is called "motion estimation" and you don't do it on objects (too hard, requires image segmentation). In OpenCV it is covered by the Video Stabilization module.
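If you only need to undo a global translation between consecutive frames, a very small alternative (my suggestion, not part of the original answer) is phase correlation; `prev` and `curr` stand for two consecutive grayscale frames of the same size:
cv::Mat prevF, currF;
prev.convertTo(prevF, CV_32F);                          // phaseCorrelate needs single-channel float input
curr.convertTo(currF, CV_32F);
cv::Point2d shift = cv::phaseCorrelate(prevF, currF);   // estimated (dx, dy) between the frames
// translate the current frame by -shift to line it up with the previous one
// (flip the sign if the correction goes the wrong way on your data)
cv::Mat T = (cv::Mat_<double>(2, 3) << 1, 0, -shift.x, 0, 1, -shift.y);
cv::Mat stabilized;
cv::warpAffine(curr, stabilized, T, curr.size());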
If you want to try writing the code yourself, then one method is to first crop the frame to produce a sub-image slightly smaller than your actual image along each dimension. This will give you some room to move.
Next you want to be able to find and track shapes in OpenCV - an example of code is here - http://opencv-srf.blogspot.co.uk/2011/09/object-detection-tracking-using-contours.html - Play around until you get a few geometric primitive shapes coming up on each frame.
Next you want to build some vectors between the centres of each shape - these are what will determine the movement of the camera - if in the next frame most of the vectors are displaced but parallel that is a good indicator that the camera has moved.
The last step is to calculate the displacement, which should be a matter of measuring the distance between the detected parallel vectors. If this is smaller than your sub-image cropping then you can crop the original image to negate the displacement.
The pseudo code for each iteration would be something like -
//Variables
image wholeFrame1, wholeFrame2, subImage, shapesFrame1, shapesFrame2
vectorArray vectorsFrame1, vectorsFrame2; parallelVectorList
vector cameraDisplacement = [0,0]
//Display image
subImage = cropImage(wholeFrame1, cameraDisplacement)
display(subImage);
//Find shapes to track
shapesFrame1 = findShapes(wholeFrame1)
shapesFrame2 = findShapes(wholeFrame2)
//Store a list of parallel vectors
parallelVectorList = detectParallelVectors(shapesFrame1, shapesFrame2)
//Find the mean displacement of each pair of parallel vectors
cameraDisplacement = meanDisplacement(parallelVectorList)
//Crop the next image accounting for camera displacement
subImage = cropImage(wholeFrame1, cameraDisplacement)
There are better ways of doing it but this would be easy enough for someone doing their first attempt at image stabilisation with experience of OpenCV.

Scanning and Detecting Object Color in Image

I'm developing software that detects a boxer's punching motion. At the moment I use color-based segmentation with the inRange function, set to detect between minimum and maximum blue values. The problem is that the range is quite wide and my camera at times picks up noise and segments objects of no interest. To improve the software I thought of scanning an image of a boxing glove and establishing the exact blue color value before further processing.
It would make sense to me to store that value in a vector and use it in the inRange function.
// My current function which takes the Minimum and Maximum values of Blue Color
Mat range_out;
inRange(blur_out, Scalar(100, 100, 100), Scalar(120, 255, 255), range_out);
So I would imagine the vector to go somewhere here:
Scan the image above and compute the blue value
Store this value in an array
Recall the array in an inRange function
Could someone suggest a solution to this problem or direct me to a source of information where I can look for answers ?
Since you are detecting boxing gloves in motion, first use motion to separate them from other elements in the scene: use frame differencing or optical flow to separate the glove and other moving areas from the non-moving areas. Then try color detection in those moving areas.
Separate luminosity and chromaticity - your fixed range will not work very well under different lighting conditions. Your range is wide probably because you are trying to see "blue" both in the dark and in the light at the same time. Convert your image to HSV (or La*b*) and discard V (or L), keeping H and S (or a* and b*).
Learn a color distribution instead of a simple range - take some samples and compute a 2D color histogram on H and S (or a* and b*) for pixels on the glove. This histogram will be a model of the color distribution of your object. Then use cv::calcBackProject to detect the pixels of interest in your scene.
Clean the result using a morphological close operation.
Important: in step 2, play a little with different quantization values (i.e., different numbers of bins).
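A sketch of steps 1-3 (my illustration; `gloveSample` is a close-up image of the glove and `frame` is the current camera frame, both 8-bit BGR):
cv::Mat sampleHSV, frameHSV;
cv::cvtColor(gloveSample, sampleHSV, cv::COLOR_BGR2HSV);
cv::cvtColor(frame, frameHSV, cv::COLOR_BGR2HSV);

int   channels[] = {0, 1};                        // use H and S only, discard V
int   histSize[] = {30, 32};                      // quantization: worth experimenting with
float hRange[]   = {0, 180}, sRange[] = {0, 256};
const float* ranges[] = {hRange, sRange};

cv::Mat hist;
cv::calcHist(&sampleHSV, 1, channels, cv::Mat(), hist, 2, histSize, ranges);  // model of the glove color
cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

cv::Mat backProj;
cv::calcBackProject(&frameHSV, 1, channels, hist, backProj, ranges);          // likelihood of "glove" per pixel

cv::Mat mask = backProj > 50;                     // threshold the back-projection
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);                        // clean up with a morphological close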