OpenCV: process parts of an image - C++

I'm trying to use a PNG with an alpha channel to 'mask' the current frame from a video stream.
My PNG has black pixels in the areas that I don't want processed and alpha elsewhere - currently it's saved as a 4-colour image with 4 channels, but it might as well be a binary image.
I'm doing background subtraction and contour finding on the image, so I imagine that if I copy the black pixels from my 'mask' image into the current frame, no contours would be found in the black areas. Is this a good approach? If so, how can I copy the black/non-transparent pixels from one cv::Mat on top of the other?

What you're describing sounds to me like the usage of an image mask. It's odd that you'd do it in the alpha channel, when so many methods available in the OpenCV libraries support masking. Rather than use the alpha channel, why not create a separate binary image with non-zero values everywhere you'd like to find contours?
Depending on which algorithms you use, you are correct in your assumption that you would not find contours in the black-pixel areas. Unfortunately, I don't know of any efficient way of copying pixels from one image to another selectively without getting into the nitty-gritty of the Mat structure and iterating byte by byte/pixel by pixel. Using the mask idea presented above with your pre-processing functions, and then sending the resulting binary image into findContours or the like, lets you take advantage of the already well-written and optimized code of the OpenCV library, and keep more of your hair on your head, where it belongs ;).
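For example, here is a minimal sketch of the mask idea. The file names, the retrieval mode, and the assumption that your pre-processing already produced a binary foreground image are all placeholders:

#include <opencv2/opencv.hpp>

int main()
{
    // 'foreground' is the binary result of your background subtraction step;
    // 'mask' is a separate binary image, non-zero wherever contours may appear.
    cv::Mat foreground = cv::imread("foreground.png", cv::IMREAD_GRAYSCALE);
    cv::Mat mask       = cv::imread("mask.png",       cv::IMREAD_GRAYSCALE);

    // Zero everything outside the mask; the masked-out areas can then
    // never produce contours.
    cv::Mat searchArea;
    cv::bitwise_and(foreground, mask, searchArea);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(searchArea, contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return 0;
}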

Related

Python: Reduce rectangles on images to their border

I have many grayscale input images which contain several rectangles. Some of them overlap and some go over the border of the image. An example image could look like this:
Now I have to reduce the rectangles to their border. My idea was to make all non-white pixels which are less than N (e.g. 3) pixels away from the border or a white pixel (using the Manhattan distance) white. The output should look like this (sorry for the different-sized borders):
It is not very hard to implement this. Unfortunately, the implementation must be fast, because the input may contain very many images (e.g. 100'000) and the user has to wait until this step is finished.
I thought about using fromimage and then doing everything with numpy, but I did not find a good solution.
Maybe someone has an idea or a hint how this problem could be solved very efficiently?
Calculate the distance transform of the image (OpenCV distanceTransform, http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html).
In the resulting image, zero all the pixels that have a value bigger than 3.
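A sketch of those two steps in C++ (the same calls exist in the Python bindings). The filename is a placeholder, and the code assumes dark rectangles on a white background, so "zeroing" the interior means setting it to white here:

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("rects.png", cv::IMREAD_GRAYSCALE);

    // Binary mask: non-white pixels are the rectangles (foreground).
    cv::Mat mask = img < 255;

    // Distance of every foreground pixel to the nearest background pixel,
    // measured with the L1 (Manhattan) metric.
    cv::Mat dist;
    cv::distanceTransform(mask, dist, CV_DIST_L1, 3);

    // Pixels deeper than 3 inside a rectangle are interior: make them white,
    // leaving only a 3-pixel border ring.
    img.setTo(255, dist > 3);

    cv::imwrite("borders.png", img);
    return 0;
}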

Find corresponding values between two images in OpenCV / C++

I am new to image processing and OpenCV. I have two images. I want to find the corresponding values in image 2 for image 1, and then show them. Is there any function in OpenCV to find corresponding values between images?
Thanks in advance.
Mat corrVals;
bitwise_and(image2, image1>0, corrVals);
image1>0 will create a temporary binary image with values 0 and 255. Then the only thing you need to do is perform an AND operation between the pixels of your images and store the result somewhere. This is done by bitwise_and.
This is similar to the approach suggested by @Mailerdaimon but uses much cheaper operations.
You can threshold your image1 such that all values you want are 1 and all others are 0.
Then you multiply image1 with image2.
cv::multiply(image1, image2, result, scale, dtype)
This will return an image with all values greater than zero from image2 that are marked in image1.
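A short sketch of that approach, assuming 8-bit single-channel images (the names image1/image2 follow the question):

// Turn image1 into a 0/1 mask: every non-zero value becomes 1.
cv::Mat mask01;
cv::threshold(image1, mask01, 0, 1, cv::THRESH_BINARY);

// Multiply: image2 values survive where the mask is 1, become 0 elsewhere.
cv::Mat result;
cv::multiply(mask01, image2, result);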
It is hard to say without looking at your images. This is a well-studied problem in computer vision, and OpenCV contains several algorithms for it. The problem you're looking at can be very easy or very hard, depending on:
your images: are they normal images? just shapes? binary?
where in the images the corresponding pixels lie
how fast this needs to run
how much variation there is between the images: is it exactly the same pixel value?
is there camera movement?
is there variation in illumination?
You can start by looking at stereo matching and optical flow inside OpenCV.
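For instance, a sketch of dense optical flow between the two images, assuming both are same-size 8-bit grayscale (the parameter values here are just common defaults, not tuned for your data):

// Dense optical flow: for every pixel of image1, 'flow' holds the (dx, dy)
// displacement to its best match in image2.
cv::Mat flow;
cv::calcOpticalFlowFarneback(image1, image2, flow,
                             0.5,  // pyramid scale
                             3,    // pyramid levels
                             15,   // window size
                             3,    // iterations
                             5,    // pixel neighborhood for polynomial fit
                             1.2,  // gaussian sigma for the fit
                             0);   // flags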

Dilation gradient with different ROIs (blob optimization) in OpenCV

I'm working on a dilation problem in C++ with OpenCV. I've captured video frames of a car park, and in order to obtain the best blobs I came up with this:
Erosion (5x5 kernel rectangular), 3 iterations
Dilation GRADIENT (think of it like a color gradient along the y-axis)
So what did I do to get this working? First I needed to know 2 points (x, y) and 2 good dilation kernel sizes at those points. With this information one can interpolate and extrapolate those values over the whole image, so I calculated ROIs (size and dilation kernel size) from those parameters. Each ROI thus has its own predefined kernel size used for dilation. Note that there isn't any space between two consecutive ROIs (opencv rectangles). Everything is working fine, but there are two side effects:
Bulges on the sides of the blobs. The black line is the border of the ROI!
[bulges picture]
Blobs which are 'cut off' from the main blob. These aren't actually cut off, but the ROI below the blob's own ROI dilates into separate blobs (picking up pixel information from the ROI above, I think). It should be one massive blob. [picture of a blob which shouldn't be there]
I've tried all sorts of changes to the ROI sizes and left some space between them, but the disadvantage is that a blob spanning two separated ROIs is not dilated.
So my questions are:
What causes those side effects exactly?
What do I have to do to make them go away?
EDIT
So I found my solution: when you call the OpenCV dilate function, one needs to be sure whether the same cv::Mat can be used as the destination image. If it can't (and you use it anyway), you'll be mixing parts of the original and the new image. So all I had to do was supply a separate destination cv::Mat.
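A sketch of what that fix looks like in code (the frame, ROI list, and per-ROI kernels are placeholder names standing in for the setup described above):

// 'frame' is the binary input; 'rois' and 'kernels' describe the per-region
// dilation, one kernel per ROI.
cv::Mat dst = frame.clone();   // separate destination image

for (size_t i = 0; i < rois.size(); ++i)
{
    // Read each ROI from the untouched source and write into 'dst', so no
    // ROI ever sees the already-dilated pixels of its neighbours.
    cv::Mat srcRoi = frame(rois[i]);
    cv::Mat dstRoi = dst(rois[i]);
    cv::dilate(srcRoi, dstRoi, kernels[i]);
}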
This doesn't answer your first question (What causes those side effects for sure), but to make them go away, you can do some variant of the following, assuming the ROI parameters are discrete and not continuous (as seems to be the case).
You can compute the dilation for the entire image using every possible kernel size. Then, after all of those binary images are computed, you can combine them, taking the correct samples from the correct image to get the desired output image. This absolutely will waste a good deal of time, but it should work with no artifacts.
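A sketch of that combination step (kernelSizes, rois, and the sizeIndex mapping from ROI to kernel size are all placeholder names):

// 'src' is the binary input; 'kernelSizes' lists every dilation size used,
// 'rois' the regions, and 'sizeIndex[i]' which kernel size ROI i uses.
std::vector<cv::Mat> dilations(kernelSizes.size());
for (size_t k = 0; k < kernelSizes.size(); ++k)
{
    cv::Mat kernel = cv::getStructuringElement(
        cv::MORPH_RECT, cv::Size(kernelSizes[k], kernelSizes[k]));
    cv::dilate(src, dilations[k], kernel);   // whole-image dilation, no seams
}

// Stitch the output together: each ROI is sampled from the image that
// was dilated with that ROI's own kernel size.
cv::Mat out = cv::Mat::zeros(src.size(), src.type());
for (size_t i = 0; i < rois.size(); ++i)
    dilations[sizeIndex[i]](rois[i]).copyTo(out(rois[i]));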
Once you've confirmed the results you've gotten above (which are pretty much guaranteed to be of as-good-as-possible quality), you can start trying to optimize. One thing I'd try is expanding each of the ROIs used for computing the dilation by the size of the kernel. This might get around artifacts that can arise from strange boundary conditions.
This leads to my guess as to what causes the artifacts in the first place: Whenever you take a finite image and run a convolution (or morphological operator) you need to choose what you'll do with the edge pixels. Normally, accessing the pixel at (-4, -1) is meaningless, but to perform the operator you'll have to if your kernel overlaps with it. If OpenCV is doing this edge padding for your subregions, it very easily could give you the artifacts you're seeing.
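If that padding is indeed the cause, note that cv::dilate lets you pick the border behaviour explicitly (a sketch; src, dst, and kernel are placeholders):

// The last two parameters control how pixels outside the image (or ROI)
// are treated; constant zero padding is one common choice for binary blobs.
cv::dilate(src, dst, kernel, cv::Point(-1, -1), /*iterations=*/1,
           cv::BORDER_CONSTANT, cv::Scalar::all(0));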
Hope this helps!

Extending a contour in OpenCV

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and therefore extend my contour in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensity goes back to the neither-black-nor-bright regions of the image.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, and the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries in the first image to the Contour.
Update: I've now used some morphological operations on the Sobel gradients and added a contour around them (see image below). The next step could be to find the adjacent pairs of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in figure 3. But it is still a bad solution for cases in which the lighted area is wider than in the image above. Any idea is still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult but you have to use multiple copies of the image to make it happen.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you threshold). This will connect nearby components
Find contours in the resultant image
Use those contours on your original image. This will work since all the images are the same size and all based on the original.
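A minimal sketch of that pipeline, where 'gray' is the grayscale original and the two threshold values and the kernel size are placeholders you would tune for your images:

cv::Mat dark, light, merged;

// Steps 1-2: threshold two copies, one for dark regions, one for bright ones.
cv::threshold(gray, dark,  50, 255, cv::THRESH_BINARY_INV); // dark  -> 255
cv::threshold(gray, light, 200, 255, cv::THRESH_BINARY);    // light -> 255

// Step 3: merge both thresholded images.
cv::bitwise_or(dark, light, merged);

// Step 4: closing connects nearby components across small gaps.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
cv::morphologyEx(merged, merged, cv::MORPH_CLOSE, kernel);

// Steps 5-6: find contours in the merged image; they can be drawn or
// measured on the original since all images share the same size.
std::vector<std::vector<cv::Point> > contours;
cv::findContours(merged, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);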

OpenCV C++: finding movement in a thresholded image

I am using OpenCV with C++ and I am trying to find a moving ball under different lighting conditions. So far I am able to filter an image by thresholding it in HSV color space. The problem with this is that it will pick up other objects that have a similar color. It is very tedious to figure out the exact HSV range every time there is a ball with a different color/background.
Is there a way for me to apply any filter on the thresholded binary image to detect only the objects that are moving? This way I will only find the ball and not other objects since they are usually stationary.
Thank you,
Varun
The simplest approach would be frame differencing / background learning on an image sequence.
Frame differencing: subtract two successive frames; the result is the moving part (you will probably only get the edges of moving objects).
Background learning: e.g. build an average over 50 frames; this is your learned background; then subtract the current frame; again, the difference is the moving part.
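A minimal sketch of both ideas (the video filename, the 0.02 learning rate, and the threshold of 25 are placeholder values to tune):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("ball.avi");      // placeholder input video
    cv::Mat frame, gray, prevGray, diff, moving, background;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Frame differencing: pixels that changed since the last frame.
        if (!prevGray.empty())
        {
            cv::absdiff(gray, prevGray, diff);
            cv::threshold(diff, moving, 25, 255, cv::THRESH_BINARY);
        }
        gray.copyTo(prevGray);

        // Background learning: a slowly updated running average.
        if (background.empty())
            gray.convertTo(background, CV_32F);
        cv::accumulateWeighted(gray, background, 0.02);

        // Subtracting the learned background also isolates the moving part.
        cv::Mat bg8u, fgMask;
        background.convertTo(bg8u, CV_8U);
        cv::absdiff(gray, bg8u, fgMask);
        cv::threshold(fgMask, fgMask, 25, 255, cv::THRESH_BINARY);
    }
    return 0;
}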