I have two photos:
and
I am computing the difference between these photos, but the difference includes changes in lighting, camera shake, etc. I want to see only the man in the difference image. I found a threshold value that works for this pair of photos, but the same threshold fails on other photos. I don't have enough reputation on Stack Overflow to post the failing examples, but you can run my code on other photos and see the artifacts yourself. My code is given below. How else can I choose this threshold?
#include <Windows.h>
#include <opencv\highgui.h>
#include <iostream>
#include <opencv2\opencv.hpp>
using namespace cv;
using namespace std;
int main() {
    Mat siyah;
    Mat resim = imread("C:/Users/toshiba/Desktop/z.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat resim2 = imread("C:/Users/toshiba/Desktop/t.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (resim.empty() || resim2.empty())
    {
        cout << "File could not be opened" << "\n";
        return 0;
    }
    // Mark a pixel as foreground (255) when the difference exceeds 30, otherwise background (0)
    for (int i = 0; i < resim.rows; i++)
    {
        for (int j = 0; j < resim.cols; j++)
        {
            if (resim.data[resim.channels()*(resim.cols*i + j)] -
                resim2.data[resim2.channels()*(resim2.cols*i + j)] > 30) {
                resim.data[resim.channels()*(resim.cols*i + j)] = 255;
            }
            else
                resim.data[resim.channels()*(resim.cols*i + j)] = 0;
            //inRange(resim, 150, 255, siyah);
        }
    }
    //inRange(resim, 150, 255, siyah);
    namedWindow("Resim", CV_WINDOW_NORMAL);
    imshow("Resim", resim);
    waitKey();
    return 0;
}
If your background is always the same, and images containing an object occur rarely enough, then you can update your reference image frequently, so that the lighting change between the reference image and the image being analyzed is always small. You could then measure/compute a threshold that works most of the time when computing the image difference. I am not sure why your camera is moving; is it not fixed?
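To illustrate that idea, here is a minimal C++ sketch (my addition, not the answerer's code) that maintains a running-average reference image with cv::accumulateWeighted and thresholds the absolute difference; the camera source and the learning rate alpha are assumptions you would adapt:
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // hypothetical frame source; replace with your own images
    cv::Mat frame, gray, background, diff, mask;
    const double alpha = 0.05; // learning rate: higher adapts faster to lighting changes

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (background.empty())
            gray.convertTo(background, CV_32F);

        // Slowly blend the current frame into the floating-point reference image
        cv::accumulateWeighted(gray, background, alpha);

        // Difference against the reference, then threshold
        cv::Mat background8u;
        background.convertTo(background8u, CV_8U);
        cv::absdiff(gray, background8u, diff);
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);

        cv::imshow("mask", mask);
        if (cv::waitKey(30) == 27) break; // Esc to quit
    }
    return 0;
}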
I made the following code with Otsu thresholding and the GrabCut algorithm. It doesn't use any preset threshold values, but I am still not sure how well it will perform on other images (it might help if you provide several more pictures to test with). The code is in Python, but it mostly consists of calls to OpenCV functions and filling matrices, so it should be easy to convert to C++ or whatever. The result for your image:
Using Otsu alone on the difference gives the following mask:
The legs are fine but the rest is messed up. But there seem to be no false positives, so I took the mask as definite foreground, everything else as probable background, and fed it to GrabCut.
import cv2
import numpy as np
#read the empty background image and the image with the guy in it,
#convert them to float32, so we don't get integer overflow
img_empty = cv2.imread("000_empty.png", 0).astype(np.float32)
img_guy = cv2.imread("001_guy.jpg", 0).astype(np.float32)
#absolute difference -> back to uint8 for thresholding etc.
diff = np.abs(img_empty - img_guy).astype(np.uint8)
#otsu thresholding
ret2, th = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#read our image again for GrabCut
img = cv2.imread("001_guy.jpg")
#fill GrabCut mask
mask = np.zeros(th.shape, np.uint8)
mask[th == 255] = cv2.GC_FGD #the value is GC_FGD (foreground) when our thresholded value is 255
mask[th == 0] = cv2.GC_PR_BGD #GC_PR_BGD (probable background) otherwise
#some internal stuff for GrabCut...
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
#run GrabCut (the rect argument is only used with GC_INIT_WITH_RECT;
#here the mask drives the initialization)
cv2.grabCut(img, mask, (0, 0, 1365, 767), bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
#convert the `mask` we got from GrabCut into a binary mask,
#then apply it to the original image
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img = img*mask2[:,:,np.newaxis]
#save the results
cv2.imwrite("003_grabcut.jpg", img)
First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to count the vertical lines in an image.
Here is my code:
First, I convert the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);
    UIImage *grayscale = MatToUIImage(gray);
    return grayscale;
}
Then I detect edges so I can find the gray lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    //Prepare the image for findContours
    cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
    //Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = mat.clone();
    cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
    NSLog(@"Count => %lu", contours.size());
    //For Blur
    /*cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
    UIImage *grayscale = MatToUIImage(mat);
    return grayscale;
}
Both of these functions are written in Objective-C. Here is how I call them from Swift:
override func viewDidLoad() {
    super.viewDidLoad()
    let img = UIImage(named: "imagenamed")
    let img1 = Wrapper.convert(toGrayscale: img)
    self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for a few days and found some useful references:
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OpenCV documentation
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white, and then use cvtColor, threshold, and findContours to find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count of 10, so I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already have a clear output image, and I used that output in my code. Here are the steps, before the code:
Preprocess the input image so the lines show up clearly.
Scan each row, checking pixels until you reach one whose value is higher than 100 (the threshold value I chose).
Increase the line counter for that row.
Continue along the row until you reach a pixel whose value is lower than 100.
Repeat from step 2 until the row is finished, and do this for every row.
At the end, find the most repeated element in the array in which you stored the line count for each row. That number is the number of vertical lines.
Note: if the steps are difficult to understand, think of it this way:
"I am checking the first row. I found a pixel whose value is higher than 100; this is the start of a line edge, so I increase the counter for this row. I keep searching along this row until I get a pixel smaller than 100, and then search again for a pixel bigger than 100. When the row is finished, I append the line count for this row to a big array. I do this for the whole image. At the end, since some lines look like two lines at the top and some noise can occur, I take the most repeated element in the big array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("/ur/img/dir/img.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<int> numberOfVerticalLinesForEachRow;
    cv::Rect r(0, 0, img.cols - 10, 200);
    img = img(r);

    for (int i = 0; i < img.rows; i++)
    {
        int numberOfLines = 0;
        bool blackCheck = true; // reset per row so a line starting at column 0 is counted
        for (int j = 0; j < img.cols; j++)
        {
            if ((int)img.at<uchar>(cv::Point(j, i)) > 100 && blackCheck)
            {
                numberOfLines++;
                blackCheck = false;
            }
            if ((int)img.at<uchar>(cv::Point(j, i)) < 100)
                blackCheck = true;
        }
        numberOfVerticalLinesForEachRow.push_back(numberOfLines);
    }

    // In this part you need a simple algorithm to find the most repeated element (see the sketch below)
    for (int k : numberOfVerticalLinesForEachRow)
        std::cout << k << std::endl;

    cv::namedWindow("WinWin", 0);
    cv::imshow("WinWin", img);
    cv::waitKey(0);
}
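The code above prints the per-row counts but leaves the "most repeated element" step open. A minimal sketch of that step (my addition, not part of the original answer) could tally the counts with a std::map:
#include <map>
#include <vector>

// Return the most frequent value in the per-row line counts
int mostRepeatedElement(const std::vector<int>& counts)
{
    std::map<int, int> histogram;
    for (int c : counts)
        histogram[c]++; // tally each distinct count

    int bestValue = 0, bestCount = 0;
    for (const auto& entry : histogram)
    {
        if (entry.second > bestCount)
        {
            bestCount = entry.second;
            bestValue = entry.first;
        }
    }
    return bestValue;
}
Calling mostRepeatedElement(numberOfVerticalLinesForEachRow) after the row loop would then give the estimated number of vertical lines.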
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and also to link it before compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding; let's try to remove those isolated blobs of white pixels by applying some morphology. Maybe an opening, which is an erosion followed by a dilation. The structuring elements and iteration counts, though, are not the same for both steps; these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yields the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image; these artifacts resemble thickened anchors at the first and last rows of the image. To prevent them, we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduced this image directly to one row, some overlap could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions from 255 to 0. That happens when the current pixel is 0 and the pixel visited one iteration earlier was 255. There's more than one way to loop through a cv::Mat in C++; I prefer to use a cv::MatIterator_ together with a running column index. Since the skeleton ROI is single-channel, we iterate over uchar values:
// Set the loop variables:
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;
// Loop thru the image ROI and count 255-0 jumps
// (the skeleton ROI is single-channel, so we iterate over uchar):
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
    // Get current pixel
    uchar &currentPixel = *it;
    // Compare it with past pixel:
    if ( (currentPixel == 0) && (pastPixel == 255) ){
        // We have a jump:
        jumpsCounter++;
        // Draw the point on the BGR version of the image:
        cv::line( colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1 );
    }
    // current pixel is now past pixel:
    pastPixel = currentPixel;
    i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", CV_WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9
I have a sample image:
and I use different thresholding methods in order to count the number of pixels.
The first method is simple thresholding, since the source image has only one colored object against a white background.
Mat image = imread("/$image_path", IMREAD_GRAYSCALE);
Mat binary_image;
threshold(image, binary_image, 120, 255, THRESH_BINARY);
int TotalNumberOfPixels = binary_image.rows * binary_image.cols;
int PixelCount = TotalNumberOfPixels - cv::countNonZero(binary_image);
return PixelCount;
The second method assumes I have an image with multiple colored objects (i.e. multiple colored marks), hence I need to filter for red and apply a red mask. I did it via:
Mat image2 = imread("/$image_path", IMREAD_COLOR);
Mat blurred, edge;
Mat bgrInv = ~image2;
Mat hsvIm;
Mat maskRed;
cvtColor(bgrInv, hsvIm, COLOR_BGR2HSV);
inRange(hsvIm, Scalar(80, 70, 239), Scalar(100, 255, 255), maskRed);
imshow("Mask", maskRed);
//blur(maskRed, blurred, Size(3, 3));
//Canny(blurred, edge, 75, 200, 3);
cout << "Pixel Count: " << countNonZero(maskRed)<< endl;
The output for both methods are:
Method 1: 406
Method 2: 155
I will be operating on color images, hence I was using the second method at first. But I do not know whether it will be "accurate" or correct.
Here is the sample template that I am working on. It's basically a survey-type template with small colored blocks, with red circles as mark placeholders for easier pre/post-processing.
Let's have a look at the outputs of your two methods.
This is binary_image from method #1:
You count the black pixels in a somewhat cumbersome way, which corresponds to your task of counting the red pixels. (By the way: invert the threshold, and you can just count the white pixels.)
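For illustration, a minimal sketch of that inverted variant (my addition; image is the grayscale Mat from your first method):
Mat binary_image_inv;
threshold(image, binary_image_inv, 120, 255, THRESH_BINARY_INV);
// The (dark) object pixels are now white, so they can be counted directly
int PixelCount = countNonZero(binary_image_inv);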
This is edge from method #2:
You count the white pixels, but as you can see, this is only the outline of the initial red object. And this does NOT correspond to your original task.
So, given both methods as they are, the first is more "accurate" at the moment and for the given example. Nevertheless, you mentioned objects of different colors, so method #2 should be reworked to count the proper pixels.
Could you please give examples for images with multiple objects of different color?
Also, I edited your question (the edit isn't reviewed yet). The image-loading part is important, since I guess you used imread(..., IMREAD_GRAYSCALE) in method #1 and imread(..., IMREAD_COLOR) in method #2.
I think no method will be fully accurate if your image is blurred (as JPEG compression tends to cause). But let's assume it's clean.
To count colored object pixels, we can count all colored pixels or count all RED pixels.
(1) Find the colored regions in HSV:
How to detect colored patches in an image using OpenCV?
# count colored pixels in S(HSV)
def countColored(img):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
num = np.sum(s>20)
return num
(2) Find the Red regions:
How to find the RED color regions using OpenCV?
def countRed(img):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
mask1 = cv2.inRange(hsv, (0,50,20), (5,255,255))
mask2 = cv2.inRange(hsv, (175,50,20), (180,255,255))
mask = cv2.bitwise_or(mask1, mask2 )
num = cv2.countNonZero(mask)
return num
#!/usr/bin/python3
# 2019/02/28
import cv2
import numpy as np
def cvshow(img):
cv2.imshow("OpenCV", img)
cv2.waitKey();cv2.destroyAllWindows()
def countColored(img):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
num = np.sum(s>20)
return num
def countRed(img):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
mask1 = cv2.inRange(hsv, (0,50,20), (5,255,255))
mask2 = cv2.inRange(hsv, (175,50,20), (180,255,255))
mask = cv2.bitwise_or(mask1, mask2 )
num = cv2.countNonZero(mask)
return num
if __name__ == "__main__":
fpath = "fQipc.jpg"
img = cv2.imread(fpath)
num1 = countColored(img)
num2 = countRed(img)
print(num1, num2)
# 643, 555
I'm working on image processing. First, I have to segment the image and extract only its boundary; then this boundary is converted to a Freeman chain code. The Freeman chain code part is okay. But when I segment the image, some unwanted white pixels remain inside the object, and so the next step, the Freeman chain code, is not successful: it gives an incorrect chain code because of the unwanted pixels. So I have to remove the unwanted pixels from the inside of the object. I will share my code; can you tell me what I should change in this code, or what kind of filter I should write? The code is here:
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <opencv2/imgproc/imgproc_c.h>
using namespace cv;
using namespace std;
int main(){
    Mat img = imread("<image-path>");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    Mat binary;
    threshold(gray, binary, 200, 255, CV_THRESH_BINARY);
    Mat kernel = (Mat_<float>(3,3) <<
        1,  1, 1,
        1, -8, 1,
        1,  1, 1);
    Mat imgLaplacian;
    Mat sharp = binary;
    filter2D(binary, imgLaplacian, CV_32F, kernel);
    binary.convertTo(sharp, CV_32F);
    Mat imgResult = sharp - imgLaplacian;
    imgResult.convertTo(imgResult, CV_8UC1);
    imgLaplacian.convertTo(imgLaplacian, CV_8UC1);
    //Find contours
    vector<vector<Point>> contours;
    vector<uchar> chaincode;
    vector<char> relative;
    findContours(imgLaplacian, contours, CV_RETR_LIST, CHAIN_APPROX_NONE);
    for (size_t i = 0; i < contours.size(); i++){
        chain_freeman(contours[i], chaincode);
        FileStorage fs("<file-path>", 1);
        fs << "chain" << chaincode;
    }
    for (size_t i = 0; i < chaincode.size()-1; i++){
        int relative1 = 0;
        relative1 = abs(chaincode[i] - chaincode[i+1]);
        cout << relative1;
        relative.push_back(relative1);
        FileStorage fs("<file-path>", 1);
        fs << "chain" << relative;
    }
    imshow("binary", imgLaplacian);
    cvWaitKey();
    return 0;
}
original image
Result
In this result, I want to remove the white pixels inside the object. I tried every filter in OpenCV, but I could not achieve this. It's very important because of the chain code.
Okay, now I see it. As said, you can ignore small contours simply by their length. For the rest, you need maximally thin contours (it seems 4-connected is the case here). There you have a couple of options:
1) Thinning of the current boundary. If you can grab Matlab's lookup table, you can load it into OpenCV as in How to use Matlab's 512 element lookup table array in OpenCV?
2) It's pretty simple to label the boundary pixels by hand after binarization. To make it more efficient, you can first fill small cavities (islets) by applying connected component labeling on the background (using the opposite connectivity this time, i.e. 8); a sketch of this cavity-filling step is given below.
2i & 2ii) If you do the labeling by hand, you can either continue collecting the contour vector by hand or switch to cv::findContours.
Hope this helps
I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to somehow extend this code to filter/ignore/remove any contours that touch the image boundaries. However, I am unsure how to go about this. Should I filter the thresholded image, or can I filter the contours afterwards? I hope somebody knows an elegant solution, since surprisingly I could not find one by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
Using cv::findContours actually alters the input image, so make sure that you either work on a separate copy specifically for this function or do not use the image further at all
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know about a contour is whether any of its points touches the image border. This info can be extracted easily by one of the following two procedures:
Check each point of your contour regarding its location. If it lies at the image border (x = 0 or x = width - 1 or y = 0 or y = height - 1), simply ignore it; a sketch of this variant follows after this list.
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
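A minimal sketch of the first procedure (my addition; only the second one has code in this answer):
// Returns true if any point of the contour lies directly on the image border
bool contourHasBorderPoint(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    for (const cv::Point& p : contour)
    {
        if (p.x == 0 || p.y == 0 ||
            p.x == imageSize.width - 1 || p.y == imageSize.height - 1)
        {
            return true;
        }
    }
    return false;
}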
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);
    bool retval = false;

    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;

    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // However, note that bounding boxes of size 1 will have their start point
    // included (of course), but also have their width/height values set to 1,
    // yet should not contain 2 pixels.
    // That is why we have to subtract 1 from the "search grid".
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;

    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }
    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
        int asdf = 0;
    }
}
...
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };

    // Load example image
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";

    for (int i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);

        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);

        std::vector<std::vector<cv::Point>> contours;
        // Extract contours
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);

        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);

        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);

        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);

        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        // std::cout << "lies on border: " << std::to_string(liesOnBorder);
        std::cout << filename << " lies on border: " << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;

        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();

        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
                int asdf = 0;
            }
        }
    }
}

int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct, since a part of the object lies along x = 0, but the contour lies at x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is in C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be used in C++ as well) is to pad the image with 1 pixel on each border, call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
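For the C++ side, a rough equivalent of this workaround (my sketch, not from the original answer) uses cv::copyMakeBorder and then crops the pad away with a ROI:
// Pad the image with 1 replicated pixel on each side
cv::Mat padded;
cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);

// ... run cv::findContours etc. on `padded` here;
// note that contour coordinates are now shifted by (1, 1)

// Crop the border away again to restore the original dimensions
cv::Mat restored = padded(cv::Rect(1, 1, img.cols, img.rows)).clone();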
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C,im_row_max,im_col_max)
    %C is a bwconncomp instance
    touch=0;
    S = regionprops(C,'PixelList');
    c_row_max = max(S.PixelList(:,1));
    c_row_min = min(S.PixelList(:,1));
    c_col_max = max(S.PixelList(:,2));
    c_col_min = min(S.PixelList(:,2));
    if (c_row_max==im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
        touch = 1;
    end
end
I have one grayscale image, which is just the R channel of a photo, and now I'm trying to write that R channel into a new image, an RGB image. Ideally, the new image would look just like the old image, but red.
What happens instead is that in the new image, the old image appears three times, squished next to each other.
Here you can see the gray scale image and the output image.
Here is my code, I think it's pretty straightforward:
Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);

for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
    }
}

imwrite("test_img_in.png", img_in);
imwrite("test_img_out.png", img_out);
At first I thought it was some kind of index mixup, but I've tried a lot of combinations, and it always repeats the output image three times horizontally, never vertically.
Now my thought is that it comes from some OpenCV specification, like the CV_8UC3 type (I've tried others too), which I chose because I think it supports RGB images. Unfortunately, I don't know much about OpenCV itself, which is why I'm seeking help here.
PS: This is part of a whole bigger program which wants to generate a color image from three gray scale channel images, but I'm currently stuck on combining the aligned gray scale images, since this happens. The code I posted is isolated from the rest of the program and works like this on its own.
My OpenCV version is 2.4.11.
The problem is here:
img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
As you said, the input image is grayscale. So, just use:
img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
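As a side note (my addition, not part of the original answer), the explicit loop can also be avoided by building the three channels and merging them; remember that OpenCV stores channels in BGR order:
// Build a 3-channel BGR image whose red channel is the grayscale input
std::vector<Mat> channels;
channels.push_back(Mat::zeros(img_in.size(), CV_8UC1)); // blue
channels.push_back(Mat::zeros(img_in.size(), CV_8UC1)); // green
channels.push_back(img_in);                             // red
Mat img_out_merged;
merge(channels, img_out_merged);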
You will get the same result by loading your image as a 3-channel image and subtracting Scalar(255,255,0):
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    Mat src = imread(argv[1]);
    imshow("src", src);

    src -= Scalar(255,255,0); // saturating subtraction zeroes the B and G channels

    imshow("Red channel", src);
    waitKey();
    return 0;
}
}