Isolate image shadow with OpenCV C++

I have seen some algorithms on how to remove a shadow from an image using OpenCV with C++. I have looked around but haven't found a way to not just erase the shadow, but also store it in a new image on its own.
What I am doing with this code is converting the original image (which I obtained from the Internet) to the HSV color space, setting the value channel to V=180, which somehow removes the shadow, and then converting the image back to the BGR color space. I am clueless about how to 'extract' the removed shadow and save it to a different image...
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat srcImg;
    Mat hsvImg;
    Mat bgrImg;

    srcImg = imread("pcb-2008.jpg");

    cvtColor(srcImg, hsvImg, CV_BGR2HSV);
    imwrite("1.hsv.jpg", hsvImg);

    Mat channel[3];
    split(hsvImg, channel);
    channel[2] = Mat(hsvImg.rows, hsvImg.cols, CV_8UC1, Scalar(180));
    merge(channel, 3, hsvImg);
    imwrite("2.hsvNoShadow.jpg", hsvImg);

    cvtColor(hsvImg, bgrImg, CV_HSV2BGR);
    imwrite("3.backToBgr.jpg", bgrImg);

    return 0;
}
Sample image of a PCB
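One possible way to 'extract' the shadow (my own rough sketch, not a complete answer): in this approach the shadow is essentially the dark part of the V channel, so you can threshold V on the original image to build a shadow mask and copy only those pixels into a new image. The cutoff of 90 below is a hand-picked assumption and would need tuning per image.

Mat origHsv, origChannel[3], shadowMask, shadowOnly;
cvtColor(srcImg, origHsv, CV_BGR2HSV);
split(origHsv, origChannel);
// Shadow candidates = low-brightness pixels of the original image
threshold(origChannel[2], shadowMask, 90, 255, THRESH_BINARY_INV);
// Copy only the masked pixels; shadowOnly is created and zero-filled by copyTo
srcImg.copyTo(shadowOnly, shadowMask);
imwrite("4.shadowOnly.jpg", shadowOnly);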

Related

why doesn't the following code show the Red channel of an image?

#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main()
{
    Mat src = imread("image.png", 1);

    namedWindow("src", 1);
    imshow("src", src);

    vector<Mat> rgbChannels(3);
    split(src, rgbChannels);

    namedWindow("R", 1);
    imshow("R", rgbChannels[2]);

    waitKey(0);
    return 0;
}
I was expecting something like the following:
Why doesn't the above code show the red channel? Why does it show a grayscale image?
If the image is split into 3 channels, shouldn't each matrix show one of the colors (R, G, and B)?
Your code is correct; however, OpenCV is showing the channel as grayscale. Mat does not keep the information about "where" the data came from. In other words, it does not know it was a red channel, so when you call imshow, it displays it as a single-channel image.
What you can do is build up an image with two zeroed channels plus the one you want to visualize, then merge and display that.
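A minimal sketch of that idea (my own illustration, reusing src and rgbChannels from the question's code):

// Keep the red plane, zero the other two, and merge back into a 3-channel
// BGR image so imshow renders it in color instead of grayscale.
Mat zeros = Mat::zeros(src.size(), CV_8UC1);
vector<Mat> redOnly(3);
redOnly[0] = zeros;          // blue
redOnly[1] = zeros;          // green
redOnly[2] = rgbChannels[2]; // red
Mat redImg;
merge(redOnly, redImg);
imshow("R (in color)", redImg);
waitKey(0);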

Rotate area to align the major axis horizontally with opencv

Can someone here please help me with this? I'm trying to rotate a segmented region of an image to align its major axis horizontally.
I have a segmented region in the center of the image, following the steps used here: Move area of an image to the center using OpenCV.
I read this OPENCV: PCA application error in image_proc, but it did not help me solve my problem.
I have this
I want this
Slightly different from what Miki suggested, I used findNonZero, minAreaRect, and warpAffine.
You can pass either 270 or 90 to getRotationMatrix2D to align the major axis with the horizontal.
#include "stdafx.h"
#include <opencv/cxcore.h>
#include <opencv2\core\mat.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <opencv/cxcore.h>
#include <opencv/highgui.h>
#include <opencv/cv.h>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/videoio/videoio.hpp>
using namespace cv;
using namespace std;
int main() {
//getting the image
Mat image = imread("C:/this/is/a/path/to/an/image.png");
//create new image that looks exactly like old image
Mat rot_image = image.clone();
rot_image = Scalar(0);
//showing the image
namedWindow("Image", CV_WINDOW_NORMAL| CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED);
namedWindow("Rotated Image", CV_WINDOW_NORMAL| CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED);
imshow("Image", image);
waitKey(0);
imshow("Rotated Image", rot_image);
waitKey(0);
//convert image
Mat img_bw;
inRange(image, Scalar(1,1,1), Scalar(255,255,255), img_bw);
imshow("Rotated Image", img_bw);
waitKey(0);
//find coordinates
Mat nonZeroCoordinates;
findNonZero(img_bw, nonZeroCoordinates);
RotatedRect rect = minAreaRect(nonZeroCoordinates);
rect.center = Point(image.cols/2, image.rows/2);
//get the Rotation Matrix
Mat M = getRotationMatrix2D(rect.center, 270, 1.0);
// perform the affine transformation
warpAffine(image, rot_image, M, image.size(), INTER_CUBIC);
//displaying the image
imshow("Rotated Image", rot_image);
waitKey(0);
//saving the new image
imwrite("C:/this/is/a/path/to/a/rotatedImage.png", rot_image);
}
That code turns this:
to this:
You can take the rect.center line out if you're sure your object is already going to be in the center.
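A possible refinement, not part of the original answer: instead of hard-coding 90 or 270, you can derive the rotation from the RotatedRect itself. A minimal sketch, assuming the classic minAreaRect convention where the returned angle lies in [-90, 0):

// Hypothetical variant: compute the angle that puts the long side on the horizontal.
RotatedRect rect = minAreaRect(nonZeroCoordinates);
double angle = rect.angle;
if (rect.size.width < rect.size.height)
    angle += 90.0; // the long side is currently closer to vertical
Mat M = getRotationMatrix2D(rect.center, angle, 1.0);
warpAffine(image, rot_image, M, image.size(), INTER_CUBIC);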

Copy Mat in opencv

I am trying to copy an image to another image using OpenCV, but I ran into a problem: the two images are not the same, like this:
This is the code I used:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <cstring>
#include <iostream>

int main()
{
    cv::Mat inImg = cv::imread("C:\\Users\\DUY\\Desktop\\basic_shapes.png");

    // Data pointer copy
    unsigned char *pData = inImg.data;
    int width = inImg.rows;
    int height = inImg.cols;
    cv::Mat outImg(width, height, CV_8UC1);

    // Data copy using the memcpy function
    memcpy(outImg.data, pData, sizeof(unsigned char) * width * height);

    // Processing and copy check
    cv::namedWindow("Test");
    imshow("Test", inImg);
    cv::namedWindow("Test2");
    imshow("Test2", outImg);
    cvWaitKey(0);
}
Simply use the .clone() function of cv::Mat:
cv::Mat source = cv::imread("basic_shapes.png");
cv::Mat dst = source.clone();
This will do the trick.
With CV_8UC1 you are making an image with only one channel (which means only shades of gray are possible); you could use CV_8UC3 or CV_8UC4, but for simple copying it is best to stick with the clone function.
You actually don't want to copy the data, since you start with a BGR CV_8UC3 image (which is what imread gives you) and you want to work on a grayscale CV_8UC1 image.
You should use cvtColor, which will convert your BGR data into grayscale.
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
    Mat inImg = cv::imread("C:\\Users\\DUY\\Desktop\\basic_shapes.png"); // inImg is CV_8UC3 (BGR)
    Mat outImg;
    cvtColor(inImg, outImg, COLOR_BGR2GRAY); // Now outImg is CV_8UC1

    // Processing and copy check
    imshow("Test", inImg);
    imshow("Test2", outImg);
    waitKey();
}
With a simple memcpy you're copying a sequence of uchar values laid out like this:
BGR BGR BGR BGR ...
into an image that expects them to be (G for gray):
G G G G ...
and that is what makes your outImg incorrect.
Your code would be correct if you defined outImg with a matching type (and copied width*height*3 bytes accordingly):
cv::Mat outImg(width, height, CV_8UC3); // Instead of CV_8UC1
The best way, though, is to use the OpenCV clone method:
cv::Mat outImg = inImg.clone();
Your original image is in color. cv::Mat outImg(width, height, CV_8UC1); says that your new image is of data type CV_8UC1, which is an 8-bit grayscale image. So you know that is not correct. Then you try to copy an amount of data from the original image to the new image that corresponds to total pixels * 8 bits, which is at best 1/3 of the actual image (assuming the original was 3 colors, 8 bits per color, a.k.a. a 24-bit image) and perhaps even 1/4 (if it had an alpha channel, making it 4 channels of 8 bits, or a 32-bit image).
TL;DR: your matrices aren't the same type, and you are sizing the copy based on an incorrect, and incorrectly sized, type.
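If you really do need a raw byte copy instead of clone() or copyTo(), a minimal sketch (my own illustration, not from the answers above) is to create the destination with the same type as the source and size the copy from the Mat itself:

// Match the source type and copy the exact number of bytes it occupies.
cv::Mat outImg(inImg.rows, inImg.cols, inImg.type());
if (inImg.isContinuous())
    memcpy(outImg.data, inImg.data, inImg.total() * inImg.elemSize());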
Here is a simple piece of code to copy an image.
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>

int main()
{
    cv::Mat inImg = cv::imread("1.jpg");
    cv::Mat outImg = inImg.clone();

    cv::namedWindow("Test");
    imshow("Test", inImg);
    cv::namedWindow("Test2");
    imshow("Test2", outImg);
    cvWaitKey(0);
}
Mat source = imread("1.png", 0);
Mat dest;
source.copyTo(dest);

Image edge smoothing with opencv

I am trying to smooth the edges of an output image using the OpenCV framework, following the steps below (taken from https://stackoverflow.com/a/17175381/790842).
int lowThreshold = 10;
int ratio = 3;
int kernel_size = 3;
Mat src_gray,detected_edges,dst,blurred;
/// Convert the image to grayscale
cvtColor( result, src_gray, CV_BGR2GRAY );
/// Reduce noise with a kernel 3x3
cv::blur( src_gray, detected_edges, cv::Size(5,5) );
/// Canny detector
cv::Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
//Works fine upto here I am getting perfect edge mask
cv::dilate(detected_edges, blurred, result);
//I get Assertion failed (src.channels() == 1 && func != 0) in countNonZero ERROR while doing dilate
result.copyTo(blurred, blurred);
cv::blur(blurred, blurred, cv::Size(3,3));
blurred.copyTo(result, detected_edges);
UIImage *image = [UIImageCVMatConverter UIImageFromCVMat:result];
I would like to know whether I am going about this the right way, or what I am missing.
Thanks for any suggestions and help.
Updated:
I have an image like the one below, obtained from the GrabCut algorithm; now I want to apply edge smoothing to it, as you can see the image is not smooth.
Do you want to get something like this?
If yes, then here is the code:
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    cv::namedWindow("result");
    Mat img = imread("TestImg.png");
    Mat whole_image = imread("D:\\ImagesForTest\\lena.jpg");
    whole_image.convertTo(whole_image, CV_32FC3, 1.0/255.0);
    cv::resize(whole_image, whole_image, img.size());
    img.convertTo(img, CV_32FC3, 1.0/255.0);

    Mat bg = Mat(img.size(), CV_32FC3);
    bg = Scalar(1.0, 1.0, 1.0);

    // Prepare mask
    Mat mask;
    Mat img_gray;
    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);
    img_gray.convertTo(mask, CV_32FC1);
    threshold(1.0 - mask, mask, 0.9, 1.0, cv::THRESH_BINARY_INV);
    cv::GaussianBlur(mask, mask, Size(21, 21), 11.0);
    imshow("result", mask);
    cv::waitKey(0);

    // Re-get the image fragment with the smoothed mask
    Mat res;
    vector<Mat> ch_img(3);
    vector<Mat> ch_bg(3);
    cv::split(whole_image, ch_img);
    cv::split(bg, ch_bg);
    ch_img[0] = ch_img[0].mul(mask) + ch_bg[0].mul(1.0 - mask);
    ch_img[1] = ch_img[1].mul(mask) + ch_bg[1].mul(1.0 - mask);
    ch_img[2] = ch_img[2].mul(mask) + ch_bg[2].mul(1.0 - mask);
    cv::merge(ch_img, res);
    cv::merge(ch_bg, bg);
    imshow("result", res);
    cv::waitKey(0);
    cv::destroyAllWindows();

    return 0;
}
And I think this link will be interesting for you too: Poisson Blending
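As an aside (not from the original answer): if you are on OpenCV 3.0 or later, the photo module ships a Poisson-blending routine, cv::seamlessClone. A minimal sketch, with illustrative file names:

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>
using namespace cv;

int main()
{
    // Illustrative file names: src is pasted into dst with Poisson blending.
    Mat src  = imread("object.png");
    Mat dst  = imread("background.png");
    Mat mask = imread("object_mask.png", IMREAD_GRAYSCALE); // white where src should be used
    Point center(dst.cols / 2, dst.rows / 2);                // where to place src inside dst
    Mat blended;
    seamlessClone(src, dst, mask, center, blended, NORMAL_CLONE);
    imwrite("blended.png", blended);
    return 0;
}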
I have followed the following steps to smooth the edges of the foreground I got from GrabCut (a rough sketch of these steps is given after the list).
Create a binary image from the mask I got from GrabCut.
Find the contours of the binary image.
Create an edge mask by drawing the contour points. It gives the boundary edges of the foreground image I got from GrabCut.
Then follow the steps defined in https://stackoverflow.com/a/17175381/790842.
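A minimal sketch of those steps (my own illustration; the variable names, in particular grabcutMask for the GrabCut foreground mask, are assumptions):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// result: the foreground image from GrabCut; grabcutMask: its 8-bit foreground mask.
void smoothForegroundEdges(Mat &result, const Mat &grabcutMask)
{
    // 1. Binary image from the GrabCut mask
    Mat binary;
    threshold(grabcutMask, binary, 0, 255, THRESH_BINARY);

    // 2. Contours of the binary image
    vector<vector<Point> > contours;
    findContours(binary.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // 3. Edge mask: draw the contours with a small thickness
    Mat edgeMask = Mat::zeros(result.size(), CV_8UC1);
    drawContours(edgeMask, contours, -1, Scalar(255), 5);

    // 4. Blur the whole image, then copy the blurred pixels back only along the edges
    Mat blurred;
    blur(result, blurred, Size(3, 3));
    blurred.copyTo(result, edgeMask);
}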

watershed segmentation opencv xcode

I am currently working through code from the OpenCV cookbook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
cv::Mat markers;
public:
void setMarkers(const cv::Mat& markerImage){
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(const cv::Mat &image){
cv::watershed(image,markers);
return markers;
}
};
int main ()
{
cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
// Create watershed segmentation object
WatershedSegmenter segmenter;
// Set markers and process
segmenter.setMarkers(markers);
segmenter.process(image);
imshow("a",image);
std::cout<<".";
cv::waitKey(0);
}
However, it doesn't work. How could I initialize a binary image? And how could I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed is actually markers and not image, as your code suggests; because of that, you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>

using namespace cv;
using namespace std;

class WatershedSegmenter {
private:
    cv::Mat markers;
public:
    void setMarkers(cv::Mat& markerImage)
    {
        markerImage.convertTo(markers, CV_32S);
    }

    cv::Mat process(cv::Mat &image)
    {
        cv::watershed(image, markers);
        markers.convertTo(markers, CV_8U);
        return markers;
    }
};

int main(int argc, char* argv[])
{
    cv::Mat image = cv::imread(argv[1]);
    cv::Mat binary; // = cv::imread(argv[2], 0);
    cv::cvtColor(image, binary, CV_BGR2GRAY);
    cv::threshold(binary, binary, 100, 255, THRESH_BINARY);

    imshow("originalimage", image);
    imshow("originalbinary", binary);

    // Eliminate noise and smaller objects
    cv::Mat fg;
    cv::erode(binary, fg, cv::Mat(), cv::Point(-1,-1), 2);
    imshow("fg", fg);

    // Identify image pixels without objects
    cv::Mat bg;
    cv::dilate(binary, bg, cv::Mat(), cv::Point(-1,-1), 3);
    cv::threshold(bg, bg, 1, 128, cv::THRESH_BINARY_INV);
    imshow("bg", bg);

    // Create markers image
    cv::Mat markers(binary.size(), CV_8U, cv::Scalar(0));
    markers = fg + bg;
    imshow("markers", markers);

    // Create watershed segmentation object
    WatershedSegmenter segmenter;
    segmenter.setMarkers(markers);

    cv::Mat result = segmenter.process(image);
    result.convertTo(result, CV_8U);
    imshow("final_result", result);

    cv::waitKey(0);
    return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is the simplified version of your code, and it works fine for me. Check it out:
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
Mat image = imread("sofwatershed.jpg");
Mat binary = imread("sofwsthresh.png",0);
// Eliminate noise and smaller objects
Mat fg;
erode(binary,fg,Mat(),Point(-1,-1),2);
// Identify image pixels without objects
Mat bg;
dilate(binary,bg,Mat(),Point(-1,-1),3);
threshold(bg,bg,1,128,THRESH_BINARY_INV);
// Create markers image
Mat markers(binary.size(),CV_8U,Scalar(0));
markers= fg+bg;
markers.convertTo(markers, CV_32S);
watershed(image,markers);
markers.convertTo(markers,CV_8U);
imshow("a",markers);
waitKey(0);
}
Below is my input image :
Below is my output image :
See the code explanation here : Simple watershed Sample in OpenCV
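One more detail that might help when inspecting the result (not from the answers above): watershed marks boundary pixels with -1 in the CV_32S markers matrix, so if you read the markers right after the watershed call (while they are still CV_32S, before convertTo), you can overlay the segment boundaries on the original image. A minimal sketch, reusing image and markers from the simplified code above:

Mat boundaries = (markers == -1);             // 8-bit mask of watershed boundary pixels
Mat overlay = image.clone();
overlay.setTo(Scalar(0, 0, 255), boundaries); // paint the boundaries red
imshow("boundaries", overlay);
waitKey(0);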
I had the same problem as you, following the exact same code sample from the cookbook (great book, by the way).
Just for context, I was coding under Visual Studio 2013 and OpenCV 2.4.8. After a lot of searching and no solutions, I decided to change the IDE.
It's still Visual Studio, BUT it's 2010!!!! And boom, it works!
Be careful of how you configure Visual Studio with OpenCV. There's a great tutorial for installation here.
Good day to all.