OpenCV/C++ - Convert a gray picture to a BGR picture after some image processing - c++

I'm working on a project which consists of stabilizing a video.
During the stabilization process, I had to convert my frames to grayscale in order to use methods like goodFeaturesToTrack() or opticalFlow().
But at the end of the process, after applying my last transformation with warpAffine(), I would like to recover the colour information of the frame, but I'm not able to do this. I tried a few things.
I tried cvtColor(outFrame, outFrame, CV_GRAY2BGR), but it doesn't work (obviously): the frame is still black and white.
At the beginning of the loop, I extracted the three colour channels B, G, R of my original picture like this:
Mat channel[3];
split(frameColor, channel);
And then at the end of the process, I'm doing this:
merge(channel,3,outFrame);
So I get the colour of my frame, but it is not stabilized; it's as if merging the channels had removed all the transformations.
I also tried to use the warpAffine() function on the colour picture, but I got the same result as above.
Please help me.
Thanks.

I solved my problem.
Actually, when you apply a transformation like warpAffine(), you have to apply it to the previous frame and not the current one. I didn't notice that I was applying it to my current colour frame instead of the previous colour frame, and therefore nothing changed.
By applying it to my previous colour frame, the image is in colour and stabilized.
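For illustration, a minimal sketch of that fix; the variable names and the use of estimateAffinePartial2D are assumptions, not necessarily the exact code used:
// Hedged sketch: estimate the 2x3 affine from tracked points (e.g. from
// goodFeaturesToTrack + calcOpticalFlowPyrLK on the gray frames), then warp
// the *previous colour* frame with the same matrix so the colour is preserved.
cv::Mat T = cv::estimateAffinePartial2D(prevPts, currPts);   // prevPts/currPts: matched points
cv::Mat outFrame;
cv::warpAffine(prevColorFrame, outFrame, T, prevColorFrame.size());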

Related

cv::detail::MultiBandBlender strange white streaks at the end of the photo

I'm working with OpenCV 3.4.8 in C++11 and I'm trying to blend images together.
In this example I have 2 images (their masks are shown in the screenshot below). I have georeferencing, so I can easily calculate the corners of these images in the final image.
The data outside the masks are black.
My code looks something like this:
std::vector<cv::UMat> inputImages;
std::vector<cv::UMat> masks;
std::vector<cv::Point> corners;
std::vector<cv::Size> imgSizes;
/*
here is code where I load images, create thier masks
(like in the screen above) and calculate corners.
*/
cv::Ptr<cv::detail::SeamFinder> seamFinder = new cv::detail::DpSeamFinder();
seamFinder->find(inputImages, corners, masks);
cv::Ptr<cv::detail::Blender> blender = new cv::detail::MultiBandBlender(false);
blender->prepare(corners, imgSizes);
for(size_t i = 0; i < inputImages.size(); i++)
{
blender->feed(inputImages[i], masks[i], corners[i]);
}
cv::UMat blendedImg, outMask;
blender->blend(blendedImg, outMask);
SeamFinder gives me a result like in the screenshot above. The found seam lines look good and I'm very satisfied with them. But another problem occurs in the next step: the MultiBandBlender produces strange white streaks where the seam line runs along the edge of the data.
This is an example:
When I don't use the blender, but instead use the masks to cut the original images and just add (cv::add()) the images together with an additional alpha channel (made from the masks), I get very good results without any holes or strange colours, but I need a smoother transition :/
Can anyone help me? When I create the MultiBandBlender with a smaller num_bands the white streaks are smaller, and with num_bands = 0 the result looks like just adding the images.
I looked at the feed() and blend() methods of MultiBandBlender and I think this is connected with the Gaussian or Laplacian pyramid and the final restoring of the image from the Laplacian pyramid in the blend() method.
EDIT1:
When the Gaussian and Laplacian pyramids are created, copyMakeBorder() is called, which prevents the MultiBandBlender from producing these white streaks when the images are fully filled with data. So in my case I think I need to create a blender almost the same as MultiBandBlender, but change the copyMakeBorder() call in the feed() method to something that will "extend" my image inside the mask, like #AlexanderKondratskiy suggested.
Now I don't know how to achieve a correct "extend" similar to BORDER_REFLECT or BORDER_REFLECT_101.
I suspect your input images contain white pixels outside those masks. The white banding occurs around the areas where the seam follows the mask exactly. For the Laplacian pyramid, for example, pixels outside the mask do influence the final result, because each layer of a pyramid is essentially a blurring kernel applied to the image.
If you have some kind of good data outside the mask, keep it. If you do not, I suggest "extending" your image beyond the mask to maintain a smooth transition.
Edit:
Here are two things you could try, unless someone with more experience with OpenCV comes along.
To prove/disprove my hypothesis, fill the black region with just the average or median colour within the mask (a small sketch of this fill is given after the list below). This should make the transition to the outside region less sharp and hopefully reduce the artefacts. If that does not happen, my answer is wrong.
In terms of what is probably a good generalization of "BORDER_REFLECT" when the edge is arbitrary, you could try something like this:
Find the centroid c of the mask polygon
For each pixel p outside the mask, think of the line between it and c
Calculate point p' along this line that is the same distance inside the mask area, as p is from the mask edge. (i.e. you're reflecting along the mask edge)
Linearly interpolate the colour from the neighbours of p' (as its position may not fall exactly in the middle of a pixel). That's the colour of pixel p.
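For the first suggestion, a minimal sketch of the fill; image and mask are placeholder names, assuming an 8-bit BGR input and an 8-bit mask that is non-zero inside the valid region:
// Hedged sketch: replace everything outside the mask with the mean colour of
// the area inside it, so the pyramid blur has no white pixels to pull in.
cv::Scalar meanColor = cv::mean(image, mask);   // mean over masked pixels only
cv::Mat filled = image.clone();
filled.setTo(meanColor, ~mask);                 // paint the region outside the mask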

retain original color of the object after thresholding opencv

I am doing a project where I need to find a red laser dot. After converting to the HSV colour space, thresholding the individual H, S, V components and merging them, I found the laser dot together with some noise. Now I need to remove all other image content except the laser dot and the noise, keeping their respective colours, so that I can process those frames further (e.g. with template matching) to isolate the laser dot and reduce the noise. I hope you understand the question. Thank you, any help is appreciated.
What you're looking to do is apply a mask to an image. A mask is an image where any non-zero value acts as an indicator. You use the mask to select which pixels of the original image you want to keep.
The easiest way to apply a mask is to use the cv2.bitwise_and() function, with your thresholded image as the mask:
masked_img = cv2.bitwise_and(img, img, mask=thresholded_img)
As an example, if this is my image and this is my mask, then this would be the masked image.

Multiply Images In OpenCv & Apply Laplacian Filter On It

In my previous question (here's the link), the answer let me obtain the desired image, which is white flood-filled.
Now, after applying the morphological erosion operation to the white flood-filled image, I get the new masked image.
Your answer helped a lot. What I am trying to do now is multiply the new masked image with the original grayscale image in order to get the vein pattern, but the result is the same image I get after performing erosion on the white flood-filled image. After this step I have to apply the Laplacian filter to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do something like that, you would obviously multiply both of them, as in:
finalimg = grayimg * veinmask;
If you have done the above, I think it would be more helpful to post a portion of your code so experts here can point out what's wrong; the output image you're getting and the one you want would also help.
I hope I understand you correctly. You have a grayscale image showing a hand (the first image in your question).
You create a mask image that looks like the second image you posted.
Multiplication of both results in the mask image?
If that is the case, check your values. If you work with a byte image, your mask must contain the values 0 and 1, not 0 and 255, because otherwise the multiplication results for non-zero mask pixels exceed 255!
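A minimal sketch of that fix; grayImg and veinMask are placeholder names, assumed to be 8-bit single-channel images with the mask stored as 0/255:
// Hedged sketch: scale the 0/255 mask down to 0/1 before multiplying, so the
// product keeps the original gray values under the mask instead of saturating.
cv::Mat maskBinary;
veinMask.convertTo(maskBinary, CV_8U, 1.0 / 255.0);   // 0/255 -> 0/1
cv::Mat veins = grayImg.mul(maskBinary);              // gray values where mask == 1, 0 elsewhere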

Create mask to select the black area

I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then using a threshold to make it binary, but that affects my image, since the result contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance.
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a more efficient solution in terms of runtime efficiency, but you should be able to prototype my solution quite quickly.
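A hedged sketch of those steps, assuming an 8-bit grayscale input called gray; the bounding-rectangle border test is just one possible way to implement step 3:
// Hedged sketch: build a mask that covers only the black border region.
cv::Mat bin;
cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);       // zero pixels -> 255, rest -> 0
std::vector<std::vector<cv::Point>> contours;
cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8U);
for (size_t i = 0; i < contours.size(); ++i)
{
    cv::Rect r = cv::boundingRect(contours[i]);
    bool touchesBorder = r.x == 0 || r.y == 0 ||
                         r.x + r.width == gray.cols || r.y + r.height == gray.rows;
    if (touchesBorder)                                          // keep only border-touching segments
        cv::drawContours(mask, contours, static_cast<int>(i), 255, cv::FILLED);
}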

Remove moving objects to get the background model from multiple images

I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images, so I need to build a simple background model image by processing 5 to 10 captured images of the same background.
Can someone help me, please?
Is your eventual goal to find the foreground? Can you show some images?
If the animals move fast enough they will create a lot of intensity changes, while background pixels will remain closely correlated among most of the frames. I won't write you real code, but will give you pseudo-code in OpenCV. The main idea is to average only correlated pixels:
Mat Iseq[10];                     // your sequence of N frames
Mat result, Iacc = 0, Icnt = 0;   // Iacc and Icnt are float types
// loop through your sequence: for (i = 0; i < N-1; i++)
    matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
    mask = 1 & (result > 0.9);    // 0/1 mask of correlated pixels, which are probably background
    Iacc += Iseq[i] & mask + Iseq[i+1] & mask;   // accumulate background evidence
    Icnt += 2 * mask;             // keep count of how often each pixel was background
// end of loop
Mat Ibackground = Iacc.mul(1.0/Icnt);   // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply a blur to enhance correlation. You can also clean every mask of small connected components by erosion, for example.
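A minimal sketch of that clean-up step, assuming mask is the 8-bit correlation mask from the pseudo-code above:
// Hedged sketch: a small erosion removes tiny, noisy components from the mask
// before it is used to accumulate background pixels.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
cv::erode(mask, mask, kernel);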
If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process will work for colour images too -- just treat each colour component separately). If a particular pixel appears as background in more than half the frames, then when you take the intensities of that pixel across all frames and sort them, a background-coloured pixel must appear at the half-way (median) position. (In the worst case, all background-coloured pixels get pushed to the very front or the very back in this order, but even then there are enough of them to cover the half-way point.)
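A minimal sketch of that median approach, assuming a std::vector of same-sized 8-bit grayscale frames and that the usual OpenCV headers plus <algorithm> are included (for colour, run it once per channel):
// Hedged sketch: per-pixel median over a stack of frames.
cv::Mat medianBackground(const std::vector<cv::Mat>& frames)
{
    CV_Assert(!frames.empty());
    cv::Mat bg(frames[0].size(), CV_8UC1);
    std::vector<uchar> vals(frames.size());
    for (int y = 0; y < bg.rows; ++y)
        for (int x = 0; x < bg.cols; ++x)
        {
            for (size_t k = 0; k < frames.size(); ++k)
                vals[k] = frames[k].at<uchar>(y, x);
            // partial sort just far enough to place the median element
            std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
            bg.at<uchar>(y, x) = vals[vals.size() / 2];
        }
    return bg;
}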
If you only have 5 images it's going to be hard to identify the background, and most sophisticated techniques probably won't work. For general background identification methods, see Link.