I want to take a video and create a binary image from each frame: if a pixel is within a certain range, it is included in the binary image. In other words, I want an upper and a lower bound, as in the inRange() function, rather than a single cutoff point, as in the threshold() function.
I also want to use adaptive thresholding to account for lighting differences in my video. I know inRange() does the former and adaptiveThreshold() does the latter, but is there a way to do both?
Apply adaptiveThreshold() to the whole original image, then apply inRange() to the original image and use the result of inRange() as a mask:
// adaptiveThreshold() expects a single-channel 8-bit image; the trailing parameters are example values
adaptiveThreshold(original_image, dst_image, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 11, 2);
inRange(original_image, minArray, maxArray, mask);
Mat output = dst_image & mask; // nonzero only where both results are nonzero
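For a video you would run this per frame. Below is a minimal self-contained sketch of that loop; the file name, the HSV bounds, and the adaptiveThreshold parameters are illustrative assumptions, not values from the question:
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");
    cv::Mat frame, gray, hsv, adaptive, mask, output;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // adaptive threshold compensates for uneven lighting on the gray frame
        cv::adaptiveThreshold(gray, adaptive, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv::THRESH_BINARY, 11, 2);
        // inRange keeps only pixels inside the lower/upper color band
        cv::inRange(hsv, cv::Scalar(0, 40, 40), cv::Scalar(30, 255, 255), mask);
        cv::bitwise_and(adaptive, mask, output); // a pixel must pass both tests
        cv::imshow("binary", output);
        if (cv::waitKey(30) == 27) break; // Esc to quit
    }
    return 0;
}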
I'm using OpenCV and I have a gray-scale image that is the result of a smoothing operation on a binary mask:
I would like to apply this mask to a given RGB image, but using the copyTo method with the mask option takes into account all the non-zero pixels of the mask. However, what I'm interested in is to obtain an output image whose RGB pixel values are the input values 'scaled' pixel-wise by the factor given by the gray-scale mask.
I have the feeling that this is possible by using the built-in functions of OpenCV, but so far I couldn't find any way to do what I want.
I would know how to do that from scratch in a brute force fashion, but I'd prefer - if possible - to use built-in functions.
Thank you in advance!
As @api55 pointed out, the solution to my problem is:
Normalize the mask through the function cv::normalize
Multiply the normalized mask with the input image through the function cv::multiply
In particular, the type of the normalized mask must be set to CV_32F (otherwise it won't work). As a consequence, the input image has to be converted as well (e.g., with convertTo). Note also that cv::multiply requires both inputs to have the same number of channels, so for an RGB image the single-channel mask has to be replicated to three channels first.
Example code:
cv::normalize(mask, mask, 0., 1., cv::NORM_MINMAX, CV_32F);
image.convertTo(image, CV_32F);
// cv::multiply needs matching channel counts, so replicate the mask to 3 channels
cv::Mat mask3;
cv::cvtColor(mask, mask3, cv::COLOR_GRAY2BGR);
cv::multiply(image, mask3, image);
image.convertTo(image, CV_8U); // convert the image back to its original type
I am working on a project that detects hematomas on skin. I am having an issue with color after conversion from RGB to HSV. My algorithm detects a hematoma by its color.
With some images I have good results like here:
Original img: http://imgur.com/WHiOWdj
Result img: http://imgur.com/PujbnHa
But with some images I get bad results, like this:
Original img: http://imgur.com/OshB99r
Result img: http://imgur.com/CuNzAId
The same original image after conversion to HSV: http://imgur.com/lkVwtCs
Do you have any ideas how to fix it?
Thanks
Looking at your result image, I think you are only using the H channel of the original image in your algorithm. The false positive detections can come from the fact that some parts of the healthy skin have almost the same H value as the hematoma. You can see on the grey-scale image of the H channel that both parts have similar values:
The difference between the two parts is the saturation value. On the following image you can see the S channel of the original image, and it shows clearly that at the hematoma the saturation is much higher than at the other parts of the arm:
This is expected, because the hematoma has a much stronger color than the healthy skin.
So I suggest you use both the H and S channels in your algorithm; that is, you should take into account only those parts of the H image where the S image contains high saturation values. A possible and simple solution is to binarize both the H and S images and combine them with an AND operation (a code sketch of this filtering follows the images below):
H image after binarisation:
S image after binarisation:
Image after H&S operation:
You can see that on the result image only the hematoma part is white (except for some noise, which you can eliminate easily, for example by size filtering or morphological filtering).
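A minimal C++ sketch of this H & S filtering, assuming a BGR input image; the threshold values (90..130 for H, 80 for S) and the file names are illustrative only and must be tuned for the actual images:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("arm.png");
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> channels;
    cv::split(hsv, channels); // channels[0] = H, channels[1] = S

    cv::Mat hBin, sBin, result;
    cv::inRange(channels[0], cv::Scalar(90), cv::Scalar(130), hBin); // hematoma-like hues
    cv::threshold(channels[1], sBin, 80, 255, cv::THRESH_BINARY);    // high saturation only
    cv::bitwise_and(hBin, sBin, result);                             // H AND S

    cv::imwrite("hematoma_mask.png", result);
    return 0;
}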
EDIT
It is important to note that binarization is one of the most important (and sometimes also very complicated) steps in object-detection algorithms, since it is the step that first highlights the objects to be detected.
If the external conditions (lighting, color of objects, etc.) do not change significantly from image to image, you can use fixed binarization thresholds. If such a constant environment cannot be ensured, you have to use more complicated methods. There are a lot of possibilities; here you can read about some examples:
Wikipedia - Thresholding
Wikipedia - Balanced histogram thresholding
Several solutions are based on histogram analysis: the histograms of images containing objects always have several local maxima, whose positions can vary depending on the environment, and if you find them you can adapt the binarization threshold easily.
For example, the histogram of the H channel of the original image is the following:
The first maximum belongs to the background, the second to the skin, and the last to the hematoma. It can be assumed that these three maxima can be found in each image; only their positions vary, depending on the lighting or other conditions. Putting a threshold between the 2nd and the 3rd local maximum can be a good choice to highlight the hematoma (a code sketch follows).
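A hypothetical sketch of that idea; the histogram smoothing, the naive peak search, and the function name are my assumptions, not from the answer:
#include <opencv2/opencv.hpp>
#include <vector>

// find the valley between the last two peaks of the H-channel histogram
int thresholdBetweenLastPeaks(const cv::Mat& hChannel) {
    int histSize = 180; // H ranges over 0..179 in OpenCV
    float range[] = {0, 180};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&hChannel, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);
    cv::GaussianBlur(hist, hist, cv::Size(1, 9), 0); // smooth out histogram noise

    std::vector<int> peaks;
    for (int i = 1; i < histSize - 1; ++i)
        if (hist.at<float>(i) > hist.at<float>(i - 1) &&
            hist.at<float>(i) > hist.at<float>(i + 1))
            peaks.push_back(i);

    if (peaks.size() < 2) return -1; // caller falls back to a fixed threshold
    int a = peaks[peaks.size() - 2], b = peaks.back();
    int valley = a;
    for (int i = a; i <= b; ++i) // lowest bin between the two peaks
        if (hist.at<float>(i) < hist.at<float>(valley)) valley = i;
    return valley;
}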
Finally, I suggest you read the following article about thresholding in OpenCV:
OpenCV - Thresholding
I tried to use both of the methods, but adaptive thresholding seems to give a better result. I used
cvSmooth(temp, dst, CV_GAUSSIAN, 9, 9, 0);
on the original image, and only then applied the threshold.
Is there anything I can tweak with the Otsu method to make the image better, like adaptive thresholding does? And one more thing: there is some unwanted fingerprint residue on the sides; any idea how I can get rid of it?
I read in a journal that by comparing the percentage of white pixels in a self-defined square, I can get the ROI. However, this method requires a threshold value, which can be found using the Otsu method, but I'm not too sure about adaptive thresholding.
cvAdaptiveThreshold(temp, dst, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 13, 1);
Result:
cvThreshold(temp, dst, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
To get rid of the unwanted background, you can do a simple masking operation. The Otsu threshold function provides a threshold value that separates the foreground from the background. Use that threshold value to create a binary mask: iterate through the entire input image, checking whether the current pixel value is greater than the threshold, and set the mask pixel to 1 if it is and 0 otherwise.
Then you can apply the binary mask to the original image with a simple element-wise multiplication or a bitwise AND operation to remove the background.
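A minimal sketch of this, assuming a single-channel 8-bit input; the function name is mine, and it uses the C++ API rather than the C API from the question:
#include <opencv2/opencv.hpp>

cv::Mat removeBackground(const cv::Mat& src) {
    cv::Mat mask;
    // THRESH_OTSU picks the threshold automatically and returns it
    double otsuValue = cv::threshold(src, mask, 0, 255,
                                     cv::THRESH_BINARY | cv::THRESH_OTSU);
    (void)otsuValue; // available if you need the raw threshold elsewhere
    cv::Mat foreground = cv::Mat::zeros(src.size(), src.type());
    src.copyTo(foreground, mask); // keep only pixels where the mask is nonzero
    return foreground;
}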
Try dividing the image into ROIs and applying Otsu to each one individually, then merge them back. The dividing strategy can be static or dynamic, depending on the maximum illumination; a sketch of the static variant is below.
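A rough sketch of the static variant; the fixed 4x4 grid and the function name are my assumptions:
#include <opencv2/opencv.hpp>

cv::Mat tiledOtsu(const cv::Mat& gray, int tiles = 4) {
    cv::Mat out(gray.size(), CV_8U);
    for (int ty = 0; ty < tiles; ++ty) {
        for (int tx = 0; tx < tiles; ++tx) {
            // integer arithmetic lets the last row/column absorb remainder pixels
            int x = tx * gray.cols / tiles, y = ty * gray.rows / tiles;
            int w = (tx + 1) * gray.cols / tiles - x;
            int h = (ty + 1) * gray.rows / tiles - y;
            cv::Mat dstTile = out(cv::Rect(x, y, w, h));
            // Otsu picks a separate threshold for each tile
            cv::threshold(gray(cv::Rect(x, y, w, h)), dstTile, 0, 255,
                          cv::THRESH_BINARY | cv::THRESH_OTSU);
        }
    }
    return out;
}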
Some people on this Q&A site suggested I use findContours to imitate what bwlabel does in Matlab. But I am not sure, because I think a contour is a closed shape of detected edges, while an element from bwlabel is a connected region. I guess they might be logically the same, but are they really the same in practice?
Use either of these two libraries: cvBlobsLib or cvblob. You will get many features of the connected components, such as size, contour, ellipticity, and bounding box, and you can filter blobs and merge two or more blobs. Try it. Under the hood, bwlabel is a two-scan connected-component algorithm, whereas cvblob and cvBlobsLib use a one-scan algorithm.
bwlabel will give you the image's connected components, i.e., a different label for each connected object on a background.
Probably what you mean is what the combination of im2bw and imcontour provides, i.e., binarizing the image and then trivially finding the single contour (boundary) per retained object in the output.
Consider the following example:
I = imread('coins.png'); % grayscale
level = graythresh(I); % find threshold
BW = im2bw(I, level); % threshold image
imcontour(BW, 1); % plot single contour
For a grayscale image you can increase the number of requested contours, though OpenCV's findContours operates on binary images.
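For comparison, here is a rough OpenCV C++ counterpart of the Matlab snippet above; in OpenCV 3.x and later, cv::connectedComponents is the closer analogue of bwlabel itself:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat I = cv::imread("coins.png", cv::IMREAD_GRAYSCALE);
    cv::Mat BW;
    cv::threshold(I, BW, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU); // graythresh + im2bw
    std::vector<std::vector<cv::Point>> contours;
    // note: findContours modifies its input in OpenCV versions before 3.2
    cv::findContours(BW.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat labels;
    int n = cv::connectedComponents(BW, labels); // bwlabel-like labeling; label 0 is background
    std::cout << contours.size() << " contours, " << n - 1 << " components\n";
    return 0;
}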
I found an article exactly about this. The quick answer is: "Yeah, their eventual output will be the same." So I might go with findContours after all, considering that cvblob still uses the old C-style API and has its own implementation of contour finding.
Does anyone know the simplest way to extract gray-level depth images from the Kinect using OpenCV and C++? Is there any source code in this field?
If you use the OpenNI SDK, you can simply point to the buffer:
//on setup:
xn::DepthGenerator depthGenerator;
xn::DepthMetaData depthMD;
cv::Mat depthWrapper;
//on update loop,
//after context.WaitAnyUpdateAll();
depthGenerator.GetMetaData(depthMD);
depthWrapper = cv::Mat(depthMD.YRes(), depthMD.XRes(), CV_16UC1, (void*) depthMD.Data());
Note that depthWrapper points to a const buffer, so you need to clone it in order to manipulate the data.
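For example, a one-line sketch:
cv::Mat depth = depthWrapper.clone(); // writable copy of the current depth frame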
The documentation has everything you need. Can't elaborate better than this.
You need to do two things (apart from reading about context, depth generator and initialization of Kinect):
1. Create a Mat of type CV_16U:
a. context.WaitOneUpdateAll(depth_map);
b. Mdepth_original = Mat(h_depth, w_depth, CV_16U, (void*) depth_map.GetData());
c. Copy the Mat, since it will be destroyed during the next read: Mdepth_original.copyTo(depth);
2. Map the depth to gray or color. Color seems like a good idea (256^3 levels), but the human eye is more sensitive to changes in luminance. Even with 256 levels you can map the roughly 10,000 Kinect levels reasonably well using a histogram equalization technique. The simplest way, though, is to lose precision and just do I(x, y) = 255.0 * z(x, y) / z_range.
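A minimal sketch of that linear mapping, assuming a CV_16U depth image and a maximum range of 10,000 (the function name and the range are assumptions):
#include <opencv2/opencv.hpp>

cv::Mat depthToGray(const cv::Mat& depth16, double zRange = 10000.0) {
    cv::Mat gray8;
    depth16.convertTo(gray8, CV_8U, 255.0 / zRange); // I(x, y) = 255.0 * z(x, y) / z_range
    return gray8;
}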
Here is how histogram equalization is implemented in OpenNI2:
https://github.com/OpenNI/OpenNI2/blob/master/Samples/Common/OniSampleUtilities.h