Vignetting correction on an RGB image with OpenCV - C++

First of all: I'm new to OpenCV :-)
I want to perform vignetting correction on a 24-bit RGB image. I used an area scan camera as a line camera and assembled an image from 1780x2 px parts to get a complete image of 1780x3000 px. Because of the vignetting, I took a white reference picture of 1780x2 px to calculate a LUT (with the correction factors in it) for the vignetting removal. Here is my code idea:
Mat white = imread("WHITE_REF_2L.bmp", 0);
Mat lut(2, 1780, CV_8UC3, Scalar(0));
lut = 255 / white;
imwrite("lut_test.bmp", lut*white);
As I understand it, what the second-to-last line will (hopefully) do is divide 255 by every intensity value of every channel and store the result in the lut matrix.
I then want to use that lut to calculate the "real" (undistorted) intensity level of each pixel by multiplying every element of the src image with the corresponding element of the lut matrix.
Obviously it's not working the way I intended; I get a memory exception.
Can anybody help me with this problem?
Edit: I'm using OpenCV 3.1.0, and I solved the problem like this:
// read white reference image
Mat white = imread("WHITE_REF_2L_D.bmp", IMREAD_COLOR);
white.convertTo(white, CV_32FC3);
// calculate LUT with vignetting correction factors
Mat vLUT(2, 1780, CV_32FC3, Scalar(0.0f));
divide(240.0f, white, vLUT);
Of course that's not optimal; I will read in more white references and calculate the mean value to improve it.
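For example, a minimal sketch of that averaging step (the second file name is a placeholder; accumulating in CV_32FC3 avoids 8-bit overflow):
std::vector<std::string> refs = { "WHITE_REF_2L_D.bmp", "WHITE_REF_2L_E.bmp" }; // placeholder names
Mat acc = Mat::zeros(2, 1780, CV_32FC3);
for (const auto& name : refs) {
    Mat w = imread(name, IMREAD_COLOR);
    w.convertTo(w, CV_32FC3); // float, so the sum cannot overflow
    acc += w;
}
acc /= static_cast<float>(refs.size()); // mean white reference
divide(240.0f, acc, vLUT); // same LUT computation as above, on the mean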
Here's the 2-line white reference; you can see the shadows at the image borders that I want to correct.
When I multiply vLUT with the white reference, I obviously get a homogeneous image as the result.
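To correct a full scan, one way (just a sketch, assuming the 1780x3000 image really is assembled from those 2-row strips; the file names are placeholders) is to multiply each 2-row strip element-wise with vLUT:
Mat src = imread("scan.bmp", IMREAD_COLOR); // placeholder name
src.convertTo(src, CV_32FC3);
for (int y = 0; y + 2 <= src.rows; y += 2) {
    Mat strip = src.rowRange(y, y + 2); // a view into src, so the result lands in place
    multiply(strip, vLUT, strip); // element-wise, per channel
}
Mat corrected;
src.convertTo(corrected, CV_8UC3); // back to 8 bit for saving
imwrite("scan_corrected.bmp", corrected);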
Thanks, maybe this can help someone else ;)

Related

How to dedistort an image without pixel loss using OpenCV

As we all know, we can use the function cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix, and then use the function cv::undistort() with that new camera matrix to get the undistorted image. However, I find that the undistorted image is only as large as the original image, and part of it is covered by black.
So my question is: does this mean that some of the original image's pixels are lost? And is there any way to avoid pixel loss, or to get an output image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
The sizes of image and show_image are both 640x480; however, from my point of view, the undistorted image should be larger than 640x480, because some part of the output is meaningless black filler.
Thanks!
In order to correct distortion, you basically have to reverse the process that caused the initial distortion. This implies that pixels are stretched and squashed along various directions to correct the distortion. In some cases, this would move the pixels away from the image edge. In OpenCV, this is handled by inserting black pixels. There is nothing wrong with this approach. You can then choose how to crop it to remove the black pixels at the edges.
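One possible workaround (a sketch only; the 2x margin is an assumption to tune for your lens): ask cv::getOptimalNewCameraMatrix() for a larger output size, then render with cv::initUndistortRectifyMap() and cv::remap(), so the stretched pixels get extra canvas instead of being pushed past the border:
cv::Size bigSize(image.cols * 2, image.rows * 2); // assumed margin
cv::Mat newK = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, image.size(), 1, bigSize);
cv::Mat map1, map2;
cv::initUndistortRectifyMap(KMatrixLeft, DistMatrixLeft, cv::Mat(), newK, bigSize, CV_16SC2, map1, map2);
cv::Mat undistorted;
cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);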

OpenCV Binary Image Mask for Image Analysis in C++

I'm trying to analyse some images which have a lot of noise around the outside of the image, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem, please? I've had a look at this solution: How to black out everything outside a circle in Open CV, but some of my constraints are different, and I'm confused by the way the source images are loaded.
Thank you in advance!
//First load your source image, here load as gray scale
cv::Mat srcImage = cv::imread("sourceImage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
//Then define your mask image
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255, 0, 0), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. (The original answer showed four example images here: the source image, the grayscale input, the binary mask, and the final result after the masking operation.)
Since you are looking for a clear circular center with a shape inside, you could use the Hough transform to find that area; a careful selection of parameters will help you get it exactly.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
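A rough sketch of that idea (not from the tutorial; all parameter values here are assumptions you will need to tune):
cv::Mat gray = cv::imread("sourceImage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2); // smoothing suppresses false circles
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, gray.rows / 4, 100, 50, 0, 0);
if (!circles.empty()) {
    cv::Point center(cvRound(circles[0][0]), cvRound(circles[0][1]));
    int radius = cvRound(circles[0][2]);
    // feed center and radius into the masking code above
}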
For setting pixels outside a region to black:
Create a mask image (zero-initialized, so it starts out all black):
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type());
Mark the points inside with white color:
cv::circle(mask, center, radius, cv::Scalar(255, 255, 255), -1, 8, 0);
You can now use bitwise_and to get an output image containing only the pixels enclosed by the mask:
cv::bitwise_and(mask, img_src, output);

Best practice to detect if a Mat is black and white in OpenCV

I would like to know if the image I read is a black-and-white or a colored image.
I use OpenCV for all my processing.
To detect it, I currently read my image, convert it from BGR to grayscale, and compare the histogram of the original (read as BGR) to the histogram of the second (known black-and-white) image.
In pseudocode this looks like:
cv::Mat img = cv::imread("img.png", -1);
cv::Mat bw;
cv::cvtColor(img, bw, CV_BGR2GRAY);
if (computeHistogram(img) == computeHistogram(bw))
    std::cout << "Black and white!" << std::endl;
Is there a better way to do it? I am searching for the lightest algorithm I can implement, and for best practices.
Thanks for the help.
Edit: I forgot to say that I convert my images to HSL in order to compare the luminance histograms.
Storing a grayscale image in RGB format makes all three channels equal: for every pixel of a grayscale image saved in RGB format, we have R = G = B. So you can easily check this condition directly on your image.
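A minimal sketch of that per-pixel check (isGrayscale is just an illustrative name): split the channels and verify that the pairwise differences are zero everywhere.
#include <opencv2/opencv.hpp>
bool isGrayscale(const cv::Mat& bgr) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);
    cv::Mat diffBG, diffGR;
    cv::absdiff(ch[0], ch[1], diffBG); // |B - G|
    cv::absdiff(ch[1], ch[2], diffGR); // |G - R|
    return cv::countNonZero(diffBG) == 0 && cv::countNonZero(diffGR) == 0;
}
This avoids the grayscale conversion and the histogram comparison entirely; two absdiff/countNonZero passes are about as light as the check can get.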

Noisy hue in OpenCV

This question is regarding OpenCV with C++ in VS2008 Express.
I'm doing a very simple thing: trying to get skin values from a camera image.
As you can see in the screenshot, the camera image looks fairly good. I'm converting it to HSV and separating out the Hue channel so I can later threshold the skin values. But the Hue channel seems very noisy and grainy, and the HSV image window shows a degradation of information. Why is that happening, and how can I solve it? If it can't be avoided, can we remove the noise with some kind of smoothing? The code is below:
#include <opencv2/opencv.hpp>
int main(){
    cv::VideoCapture cap(0); // open the default camera
    cv::Mat hsv, bgr; // image containers
    cap >> bgr; // grab a single frame
    if (bgr.empty()) return -1; // guard against a failed capture
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    cv::Mat hue = channels[0];
    cv::imshow("Image", bgr); cv::moveWindow("Image", 0, 0);
    cv::imshow("HSV", hsv); cv::moveWindow("HSV", 660, 0);
    cv::imshow("Hue", hue); cv::moveWindow("Hue", 0, 460);
    cv::waitKey(0); // wait for a key press
    return 0;
}
The Hue channel seems very noisy and grainy. Why is that happening?
In the real-world colors we see, the portion of information that is represented by "hue" varies. The color red is completely described by hue. Black has no hue information at all.
However, when a color is represented in HSV as you have done, the hue is always one-third of the color information.
So as colors approach any shade of gray, the hue component will be artificially inflated. That's the graininess you're seeing. The closer to gray, the more hue must be amplified, including error in captured hue.
Also, the HSV image window shows a degradation of information. Why is that happening?
There will be rounding errors in the conversion, but it's probably not as much as you think. Try converting your HSV image back to BGR to see how much degradation actually happened.
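For example, a quick sketch of that round-trip check, reusing the bgr and hsv Mats from the question:
cv::Mat back, diff;
cv::cvtColor(hsv, back, CV_HSV2BGR);
cv::absdiff(bgr, back, diff);
std::cout << "mean round-trip error per channel: " << cv::mean(diff) << std::endl; // Scalar of per-channel means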
And how can I solve it?
Realistically, you have two options: use a higher-quality camera, or don't use the HSV format.
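A partial software mitigation is also conceivable (a sketch, not from the original answer): since hue is unreliable where saturation is low, discard the hue wherever saturation falls below a cutoff before thresholding for skin. The cutoff of 40 is an assumption to tune:
cv::Mat satMask;
cv::threshold(channels[1], satMask, 40, 255, cv::THRESH_BINARY); // keep only well-saturated pixels
cv::Mat hueMasked = cv::Mat::zeros(hue.size(), hue.type());
hue.copyTo(hueMasked, satMask); // hue is zeroed wherever saturation is too low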
If you look at the formulae in the OpenCV documentation for the function cvtColor with the parameter CV_BGR2HSV, you will find (with R, G, B scaled to [0, 1]):
V = max(R, G, B)
S = (V - min(R, G, B)) / V if V != 0, otherwise 0
H = 60 (G - B) / (V - min(R, G, B)) if V = R
H = 120 + 60 (B - R) / (V - min(R, G, B)) if V = G
H = 240 + 60 (R - G) / (V - min(R, G, B)) if V = B
Now, when you have a shade of gray, i.e. when R, G and B are all equal, the fraction in the H formula is undefined, since it is zero divided by zero. It seems to me that the documentation does not describe what happens in that case.

Difference between adaptive thresholding and normal thresholding in OpenCV

I have this gray video stream:
The histogram of this image:
When I threshold the image with:
threshold(image, image, 150, 255, CV_THRESH_BINARY);
I get:
which is what I expect.
When I do adaptive thresholding with:
adaptiveThreshold(image, image, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 15, -5);
I get:
which looks like edge detection and not thresholding. What I expected were black and white areas. So my question is: why does this look like edge detection and not thresholding?
Thanks in advance.
Adaptive Threshold works like this:
The function transforms a grayscale image to a binary image according
to the formulas:
THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), otherwise 0
THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), otherwise maxValue
where T(x, y) is a threshold calculated individually for each pixel.
Threshold works differently:
The function applies fixed-level thresholding to a single-channel array.
So it sounds like adaptiveThreshold calculates a threshold pixel-by-pixel, whereas threshold calculates it for the whole image -- it measures the whole image by one ruler, whereas the other makes a new "ruler" for each pixel.
I had the same issue doing adaptive thresholding for OCR purposes. (Sorry, this is Python, not C++.)
import sys
import cv

img = cv.LoadImage(sys.argv[1])
bwsrc = cv.CreateImage(cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
bwdst = cv.CreateImage(cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
cv.CvtColor(img, bwsrc, cv.CV_BGR2GRAY)
# note: in the old cv API the adaptive method comes before the threshold type
cv.AdaptiveThreshold(bwsrc, bwdst, 255.0, cv.CV_ADAPTIVE_THRESH_MEAN_C, cv.CV_THRESH_BINARY, 11)
cv.ShowImage("threshold", bwdst)
cv.WaitKey()
The last parameter is the size of the neighborhood used to calculate the threshold for each pixel. If your neighborhood is too small (mine was 3), it works like edge detection. Once I made it bigger, it worked as expected. Of course, the "correct" size will depend on the resolution of your image and the size of the features you're looking at.
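Translated back to the original C++ question, the same fix would look like this (the block size of 51 is an assumption; it must be odd, and the right value depends on your image):
adaptiveThreshold(image, image, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 51, -5);
With a block this large, each pixel's threshold is averaged over a whole region rather than a thin band around edges, which is what produces solid black and white areas instead of outlines.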