I need to apply a gradient operator to an RGB bitmap image. It works for an 8-bit image, but I'm having difficulty implementing the same for a 24-bit image. Here is my code. Can anyone see how to correct the horizontal gradient operation for an RGB image?
if (iBitPerPixel == 24) ////RGB 24 bits image
{
    for(int i=0; i<iHeight; i++)
        for(int j=1; j<iWidth-4; j++)
        {
            //pImg_Gradient[i*Wp+j] = pImg[i*Wp+j+1] - pImg[i*Wp+j-1];
            int level = pImg[i*Wp+j*3+1] - pImg[i*Wp+j*3-1];
            pImg_Gradient[i*Wp+j*3] = level;
            // pImg_Gradient[i*Wp+j*3]   = level;
            // pImg_Gradient[i*Wp+j*3+1] = level;
            // pImg_Gradient[i*Wp+j*3+2] = level;
        }
    for(int i=0; i<iHeight; i++)
        for(int j=0; j<iWidth; j++)
        {
            // Copy the converted values to the original image.
            pImg[i*Wp+j] = (BYTE) pImg_Gradient[i*Wp+j];
        }
    //delete pImg_Gradient;
}
Unfortunately, it is not clear how to define a gradient of an RGB image. The best way to go is to transform the image into a color space that separates intensity from color, such as HSV, and compute the gradient of the intensity component. Alternatively, you can compute the gradient of each color channel separately, and then combine the results in some way, such as taking the average.
Also see Edge detectors for RGB images?
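As an illustration of the intensity-based approach, here is a minimal sketch assuming a BGR input image (the file name and the Sobel parameters are my own placeholders, not from the question):

// Minimal sketch: horizontal gradient of the intensity (V) component.
cv::Mat bgr = cv::imread("input.bmp");       // hypothetical file name
cv::Mat hsv, gradX;
cv::Mat channels[3];
cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
cv::split(hsv, channels);                    // channels[2] is V, the intensity
cv::Sobel(channels[2], gradX, CV_16S, 1, 0); // dx=1, dy=0: horizontal derivative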
In order to calculate the gradient of an image (which is a vector) you need to calculate both the horizontal and the vertical derivative of the image.
Since we're dealing with a discrete image, we should use finite difference approximations of the derivative.
There are many ways to approximate it; many of them are listed on the Wikipedia pages:
http://en.wikipedia.org/wiki/Finite_difference
http://en.wikipedia.org/wiki/Finite_difference_method
http://en.wikipedia.org/wiki/Finite_difference_coefficients
Basically those are spatial coefficients, hence you can define a filter using them and just filter the image.
This would be the most efficient way to calculate the gradient.
So, all you need is to find a library (such as OpenCV) which supports filtering images and you're done.
For color images, you usually just calculate the gradient per color channel, as in the sketch below.
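As a hedged sketch of that, assuming OpenCV: filter2D with the central-difference kernel [-0.5, 0, 0.5] approximates the horizontal derivative f'(x) ≈ (f(x+1) - f(x-1))/2, and it processes each color channel independently (the names and file are my placeholders):

// Sketch: per-channel horizontal central difference via a 1x3 filter.
cv::Mat img = cv::imread("input.png");                  // hypothetical file name
cv::Mat kernel = (cv::Mat_<float>(1, 3) << -0.5f, 0.0f, 0.5f);
cv::Mat gradX;
cv::filter2D(img, gradX, CV_32F, kernel);               // applied to B, G and R independently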
Good Luck.
From your code, you are trying to calculate the gradient from RGB, but there is nothing to indicate how RGB is stored in your image. A complete guess is that your image holds BGRBGRBGR...etc.
In that case your code is getting the gradient from the green channel, then storing it in the red channel of the gradient image. You don't show the gradient image being cleared to 0 - if you don't do this, it will probably be full of junk.
My suggestion is to convert to a greyscale image first; then you can use your original code.
Or calculate a gradient for each colour channel.
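A minimal sketch of that greyscale conversion, assuming the guessed BGRBGR layout and the question's buffer names (pGrey and the standard BT.601 luma weights are my additions):

// Sketch: convert the assumed BGR buffer to greyscale, then run the 8-bit gradient code on pGrey.
BYTE* pGrey = new BYTE[iHeight * iWidth];
for (int i = 0; i < iHeight; i++)
    for (int j = 0; j < iWidth; j++)
    {
        BYTE b = pImg[i*Wp + j*3];
        BYTE g = pImg[i*Wp + j*3 + 1];
        BYTE r = pImg[i*Wp + j*3 + 2];
        pGrey[i*iWidth + j] = (BYTE)(0.114*b + 0.587*g + 0.299*r); // BT.601 luma
    }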
I have a question for you. I'm an OpenCV newbie and I need to understand if the library can help me reach my goal.
I need to use OpenCV to open a TIFF file (a big TIFF file) and split it into two different files using a mask like this one: in the end, file 1 should contain the pixels where the mask is black, and file 2 the negative - the pixels where the mask is white.
Any ideas or examples for me?
Thank you all!
To read the file, you can use the function imread. This stores the image in a cv::Mat object. Since your mask is black and white, I would read the mask image as grayscale using IMREAD_GRAYSCALE. This gives you each pixel as a value from 0-255. That should cover the first part of your question.
I have to admit I am having trouble understanding your question, but I expect you want to create two images: the first contains all the pixels where your mask has a black pixel, and the second contains all the pixels where the mask is white.
You could look at this thread. Additionally, I would like to give you the way that I would do it.
The problem you would run into is that your .tiff image has a different type than your chessboard. The TIFF is probably CV_8UC3 and the chessboard is probably CV_8UC1. But this should be easily solvable.
I think you would probably want to look at each individual pixel and leave it be if, at that same pixel of the chessboard, the color is white. If it is not, make that pixel of your original image black. I have not tested this, but it would look something like this:
for (int i = 0; i < originalImage.rows; i++) {
    for (int j = 0; j < originalImage.cols; j++) {
        if (chessboard.at<uchar>(Point(j, i)) != 255) {
            originalImage.at<Vec3b>(Point(j, i)) = Vec3b(0, 0, 0);
        }
        else {
            // Do nothing.
        }
    }
}
Vec3b is used, since originalImage has three channels instead of one. I hope this helps!
Try this to create the masks:
cv::Mat tiff;
cv::Mat maskDark = tiff == 0; // comparison like '< 10' also works
cv::Mat maskWhite = tiff == 255;
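Putting the pieces together, a hedged end-to-end sketch (the file names and variable names are hypothetical):

// Sketch: split a TIFF into two files using a black/white mask.
cv::Mat tiff = cv::imread("big.tiff", cv::IMREAD_COLOR);     // hypothetical file name
cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // hypothetical file name
cv::Mat maskWhite = mask == 255;   // pixels where the mask is white
cv::Mat maskBlack = ~maskWhite;    // pixels where the mask is black
cv::Mat out1, out2;
tiff.copyTo(out1, maskBlack);      // file 1: original pixels under the black mask
tiff.copyTo(out2, maskWhite);      // file 2: original pixels under the white mask
cv::imwrite("file1.tiff", out1);
cv::imwrite("file2.tiff", out2);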
I am building a real-time shape and color classification system that needs very high accuracy. It seems my preprocessing phase is not good enough, so the result is not as accurate as I expected. Here is what I'm doing:
Take data from the camera and crop it to get the ROI.
Convert the ROI image from RGB to HSV space.
Use a median filter to reduce noise in the HSV image.
Threshold the image.
Use dilate and erode to remove small holes and small objects in the image.
Use findContours and approxPolyDP to detect square objects.
This is my preprocessing phase:
image_cv = cv::cvarrToMat(image_camera);
Mat cropped = image_cv(cv::Rect(0, 190, 640, 110));
imshow("origin", cropped);
Mat croppedCon = CropConveyor(cropped);
cv::cvtColor(croppedCon, croppedCon, CV_RGB2HSV);
medianBlur(croppedCon, croppedCon, 3);
cv::Mat binRect;
cv::inRange(croppedCon, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), binRect);
This is the code for detecting squares:
vector<vector<Point>> contours;
findContours(binarizedIm, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
    //double arclength = arcLength(Mat(contours[i]), true);
    approxPolyDP(Mat(contours[i]), approx, 3.245, true); //0.04 for wood
    if (approx.size() != 4) continue;
    if (isContourConvex(Mat(approx)) && contourArea(Mat(approx)) > 250)
    {
        double MaxCos = 0;
        for (int j = 2; j < 5; j++)
        {
            double cos = angle(approx[j % 4], approx[j - 1], approx[j - 2]);
            MaxCos = MAX(cos, MaxCos);
        }
        if (MaxCos < 0.2)
            squares.push_back(approx);
    }
}
I think noise in the HSV image is the main reason. Here are some images illustrating my problem. I see a lot of noise in the HSV image; that's why I apply a median filter to reduce noise while preserving edges, because I think edge information is very important for the findContours function.
HSV and HSV in separate channels
My question is: what is causing the noise in the HSV image (see the image above), and how can I enhance my image's quality?
The reason for noise in your saturation image is noise in your input image. Caused by a bad camera / optics and further increased by JPEG compression.
That's by far the worst image I have seen in years. You shouldn't invest another second into processing that, unless you live on Mars and need results tomorrow.
Your input image is super noisy, undersampled, defocussed, underexposed, full of aliasing and compression artifacts and pretty much anything else you can do wrong with an image.
First rule of signal processing:
crap in = crap out
You can get much better cameras basically for free. Find and use one.
Part of the problem is that you're doing the noise reduction in HSV space. In your example you can see the V channel is better-behaved than H and S. It would be better to do noise-reduction in RGB (which is more linear and closer, though not identical, to the camera's native colour space where the noise originates; of course there's also gamma-correction).
Maybe consider a stronger edge-preserving noise-reducing filter such as Bilateral Filter.
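A hedged sketch of that suggestion, assuming the cropped BGR frame from the question (the filter parameters are common defaults of mine, not tuned for this image):

// Sketch: edge-preserving denoising on the BGR crop, before converting to HSV.
cv::Mat denoised;
cv::bilateralFilter(cropped, denoised, 9, 75.0, 75.0); // d=9, sigmaColor=75, sigmaSpace=75
// then convert 'denoised' (instead of the raw crop) to HSV and continue as before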
I don't get why you are using HSV for segmenting the objects; the RGB image is good enough. Separate the image into its 3 channels (r, g, b) and apply an adaptive threshold to each. Dilate and erode the images, then add (not merge) the 3 binary images to get one binary image. Finally, do step 6 of your recipe to extract the objects, as in the sketch below. If noise still affects the result, apply a bilateral filter to the r, g, b channels before thresholding.
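A minimal sketch of that recipe, assuming an 8-bit BGR frame; the block size, constant C and kernel size are placeholder values of mine:

// Sketch: per-channel adaptive threshold, morphology, then combine.
cv::Mat bgr[3], bin[3], combined;
cv::split(frame, bgr);                                  // frame: 8-bit BGR image
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
for (int c = 0; c < 3; c++) {
    cv::adaptiveThreshold(bgr[c], bin[c], 255,
                          cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 11, 2);
    cv::dilate(bin[c], bin[c], kernel);
    cv::erode(bin[c], bin[c], kernel);
}
combined = bin[0] | bin[1] | bin[2]; // for binary masks, OR is equivalent to a saturating add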
I've written code which detects squares (white) in real time and draws a frame around them. Each side of length l of a square is divided into 7 parts. Then I draw a line of length h=l/7 at each of the six resulting points, perpendicular to the side of the square (blue). The corners are marked in red. It then looks something like this:
For drawing the blue lines and circles I have a 3-channel (CV_8UC3) matrix drawing, which is zero everywhere except at the positions of the red, blue and white lines. To lay this matrix over my webcam image, I use the addWeighted function of OpenCV:
addWeighted(drawing, 1, webcam_img, 1, 0.0, dst); (Description for addWeighted here.)
But then, as you can see, the colors of my dashes and circles are wrong outside the black area (and probably not quite correct inside the black area either, but better there). It makes total sense why this happens, as addWeighted just adds the matrices with a weight.
I'd like to have the matrix drawing with the correct colors over my image. The problem is, I don't know how to do it. I somehow need a mask drawing_mask where my dashes are, sort of, superimposed onto my camera image. In Matlab it would be something like dst=webcam_img; dst(drawing>0)=drawing(drawing>0);
Anyone have an idea how to do this in C++?
1. Custom version
I would write it explicitly:
const int cols = drawing.cols;
const int rows = drawing.rows;
for (int j = 0; j < rows; j++) {
    const uint8_t* p_draw = drawing.ptr(j); // Pointer to the j-th row of the image to be drawn
    uint8_t* p_dest = webcam_img.ptr(j);    // Pointer to the j-th row of the destination image
    for (int i = 0; i < cols; i++) {
        // Check all three channels BGR
        if (p_draw[0] | p_draw[1] | p_draw[2]) { // Using binary OR should ease the optimization work for the compiler
            p_dest[0] = p_draw[0]; // If the pixel is not zero,
            p_dest[1] = p_draw[1]; // copy it (overwrite) into the destination image
            p_dest[2] = p_draw[2];
        }
        p_dest += 3; // Move to the next pixel
        p_draw += 3;
    }
}
Of course you can move this code into a function with arguments (const cv::Mat& drawing, cv::Mat& webcam_img).
2. OpenCV "purist" version
But the pure OpenCV way would be the following:
cv::Mat mask;
//Create a single channel image where each pixel != 0 if it is colored in your "drawing" image
cv::cvtColor(drawing, mask, CV_BGR2GRAY);
//Copy to destination image only pixels that are != 0 in the mask
drawing.copyTo(webcam_img, mask);
Less efficient (the color conversion to create the mask is somewhat expensive), but certainly more compact. Small note: it won't work if you have a very dark color, like (0,0,1), which in grayscale will be converted to 0.
Also note that it might be less expensive to redraw the same overlays (lines, circles) in your destination image, basically calling the same draw operations that you made to create your drawing image.
I want the hand image to be a black and white shape of the hand. Here's a sample of the input and the desired output:
Using a threshold doesn't give the desired output, because some of the colors inside the hand are the same as the background color. How can I get the desired output?
Adaptive threshold, find contours, floodfill?
Basically, adaptive threshold turns your image into black and white, but takes the threshold level based on local conditions around each pixel - that way, you should avoid the problem you're experiencing with an ordinary threshold. In fact, I'm not sure why anyone would ever want to use a normal threshold.
If that doesn't work, an alternative approach is to find the largest contour in the image, draw it onto a separate matrix and then floodfill everything inside it with black. (Floodfill is like the bucket tool in MSPaint - it starts at a particular pixel, and fills in everything connected to that pixel which is the same colour with another colour of your choice.)
Possibly the most robust approach against various lighting conditions is to do them all in the sequence at the top. But you may be able to get away with only the threshold or the contours/floodfill.
By the way, perhaps the trickiest part is actually finding the contours, because findContours returns an arraylist/vector/whatever (depends on the platform, I think) of MatOfPoints. MatOfPoint is a subclass of Mat, but you can't draw it directly - you need to use drawContours. Here's some code for OpenCV4Android that I know works:
private Mat drawLargestContour(Mat input) {
    /** Allocates and returns a black matrix with the
     * largest contour of the input matrix drawn in white. */
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(input, contours, new Mat() /* hierarchy */,
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    double maxArea = 0;
    int index = -1;
    for (MatOfPoint contour : contours) { // iterate over every contour in the list
        double area = Imgproc.contourArea(contour);
        if (area > maxArea) {
            maxArea = area;
            index = contours.indexOf(contour);
        }
    }
    if (index == -1) {
        Log.e(TAG, "Fatal error: no contours in the image!");
    }
    Mat border = new Mat(input.rows(), input.cols(), CvType.CV_8UC1); // initialized to 0 (black) by default because it's Java :)
    Imgproc.drawContours(border, contours, index, new Scalar(255)); // 255 = draw contours in white
    return border;
}
Two quick things you can try after thresholding:
Do a morphological closing,
or, the most straightforward: cv::findContours, keep the largest if there's more than one, then draw it using cv::fillConvexPoly and you will get this mask. (fillConvexPoly will fill the holes for you.) Both are sketched below.
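A hedged sketch of both options, assuming bin is your thresholded 8-bit image; the kernel size is a placeholder of mine, and drawContours with cv::FILLED stands in for fillConvexPoly here, since a hand outline is not convex:

// Option 1: morphological closing to seal small gaps and holes.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
cv::Mat closed;
cv::morphologyEx(bin, closed, cv::MORPH_CLOSE, kernel);

// Option 2: keep only the largest contour and fill its interior.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
if (!contours.empty()) {
    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); i++)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = i;
    cv::Mat mask = cv::Mat::zeros(closed.size(), CV_8UC1);
    cv::drawContours(mask, contours, (int)largest, cv::Scalar(255), cv::FILLED);
}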
I set up an area of interest somewhere near the center of my image using:
Mat frame;
//frame has been initialized as a frame from a camera input
Rect roi= cvRect(frame.cols*.45, frame.rows*.45, 10, 8);
image_roi= frame(roi);
//I stopped here, not knowing what to do next
I'm using a camera and at any time when I grab a frame, the ROI will be anywhere between 30% to 100% filled with my desired color, which is Red in this case. What is the most efficient method to know if Red is present in my current frame?
Solution:
image_roi = frame(roi); // a frame from my camera as a cv::Mat
cvtColor(image_roi, image_roi, CV_BGR2HSV);
thrs = new Mat(image_roi.rows, image_roi.cols, CV_8UC1); // allocate space for the new img
inRange(image_roi, Scalar(0,100,100), Scalar(12,255,255), *thrs); // do HSV thresholding for red
for(int i = 0; i < thrs->rows; i++) // sum up
{
    for(int j = 0; j < thrs->cols; j++)
    {
        sum = sum + thrs->data[(thrs->cols) * i + j]; // row-major: i*cols + j
    }
}
if(sum > 100) // my application only cares about red
    cout << "Red" << endl;
else
    cout << "White" << endl;
sum = 0;
This solution should address not only red but any color distribution (sketched below):
Get a color histogram for your ROI: a two-dimensional hue and saturation histogram (follow the example here).
Use calcBackProject to project the histogram back onto the full image. You will get larger values at pixels whose color is near the modes of the histogram (in this case, reds).
Threshold the result to get the pixels that best match the distribution (in this case, the "best reds").
This solution can be used, for example, to get a simple but very functional skin detector.
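A minimal sketch of those three steps, assuming a BGR frame and the ROI rectangle from the question; the histogram sizes and the threshold are placeholder values of mine:

// Sketch: 2-D hue/saturation histogram of the ROI, back-projected onto the frame.
cv::Mat hsv;
cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
cv::Mat roiHsv = hsv(roi);

int channels[] = {0, 1};  // hue and saturation
int histSize[] = {30, 32};
float hRange[] = {0, 180};
float sRange[] = {0, 256};
const float* ranges[] = {hRange, sRange};

cv::Mat hist;
cv::calcHist(&roiHsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

cv::Mat backproj, bestMatch;
cv::calcBackProject(&hsv, 1, channels, hist, backproj, ranges);
cv::threshold(backproj, bestMatch, 50, 255, cv::THRESH_BINARY); // keep the "best reds"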
I'm assuming you just want to know the percentage of red in the ROI. If that's not correct, please clarify.
I'd scan the ROI and convert each pixel into a color space better suited to color comparison, such as YCbCr or HSV. I'd then count the number of pixels whose hue is within some delta of red's hue (usually 0 degrees on the color wheel). You might need to deal with some edge cases where the brightness or saturation are too low for a human to consider the pixel red, even though technically it is, depending on what you're trying to achieve.
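A hedged sketch of that count, assuming image_roi is the BGR ROI from the question (before its in-place HSV conversion). Note that in OpenCV's 8-bit HSV the hue wraps around at 180, so "near red" needs two inRange calls; the thresholds are placeholder values of mine:

// Sketch: fraction of ROI pixels whose hue is within a delta of red.
cv::Mat hsv, lowRed, highRed, redMask;
cv::cvtColor(image_roi, hsv, cv::COLOR_BGR2HSV);
cv::inRange(hsv, cv::Scalar(0, 100, 100),   cv::Scalar(10, 255, 255),  lowRed);  // hues just above 0
cv::inRange(hsv, cv::Scalar(170, 100, 100), cv::Scalar(180, 255, 255), highRed); // hues just below 180
redMask = lowRed | highRed; // the S and V floors skip pixels too dull to look red
double redFraction = cv::countNonZero(redMask) / (double)redMask.total();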