My project's goal is to detect lung nodules. After filtering and classification I got a binary image like this:
My problem is that I don't know how to mark these points of interest on the original image. In Matlab I can do it easily with hold on/off, a loop and a few plot() calls, but how do I do that in C++? I don't want to translate the Matlab code to C++; I just need to mark these PoIs on the original image by any means necessary.
Here is the result I want:
(I did it in Matlab.)
EDIT: I already have the points' positions from my program (as you can see in the first image); all I want to do is draw them on the original image, as in the second one.
Try this: from your binary image you can extract contours, then either draw the contours directly or extract bounding circles that cover each whole contour. I'll present both methods.
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input = cv::imread("../inputData/markMatlab.png");
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);
    cv::Mat binaryImage = gray > 0;
    cv::imshow("binary image", binaryImage);

    // here you start
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // either this: draw a bounding circle around each blob
    cv::Mat inputBlobs = input.clone(); // create output image
    for(unsigned int i=0; i<contours.size(); ++i)
    {
        cv::Point2f blobCenter;
        float blobRadius;
        cv::minEnclosingCircle(contours[i], blobCenter, blobRadius);
        cv::circle(inputBlobs, blobCenter, blobRadius, cv::Scalar(0,0,255), 2);
    }

    // or this one: draw the contours themselves
    cv::Mat inputContours = input.clone(); // create output image
    for(unsigned int i=0; i<contours.size(); ++i)
    {
        cv::drawContours(inputContours, contours, i, cv::Scalar(0,0,255), 2);
    }

    cv::imshow("input", input);
    cv::imshow("input blobs", inputBlobs);
    cv::imshow("input contours", inputContours);

    cv::imwrite("../outputData/markMatlab.png", input);
    cv::imwrite("../outputData/markMatlabBlobs.png", inputBlobs);
    cv::imwrite("../outputData/markMatlabContours.png", inputContours);

    cv::waitKey(0);
    return 0;
}
Bounding circles:
Drawing contours directly:
To mark the points on the original image, just use the original image as the input for the drawing functions.
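For example, if you already have the nodule positions as a list of points (as in your edit), a minimal sketch could look like this (the coordinates, vector name and image path are placeholders for your own data):
cv::Mat original = cv::imread("original.png");   // your original scan (placeholder path)
std::vector<cv::Point> pois;                     // your detected positions
pois.push_back(cv::Point(120, 80));              // example coordinates, replace with your own
pois.push_back(cv::Point(200, 150));
for (size_t i = 0; i < pois.size(); ++i)
    cv::circle(original, pois[i], 5, cv::Scalar(0, 0, 255), 2); // small red circle around each PoI
cv::imwrite("marked.png", original);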
Have you tried:
Mat image;
image = imread(filename, CV_LOAD_IMAGE_COLOR);
And then using ellipse() on it?
Or maybe, if you do not know the coordinates but just have the resulting filtered image, using a blend?
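For reference, a rough sketch of the ellipse idea; the center, axes and color here are made up:
Mat image = imread(filename, CV_LOAD_IMAGE_COLOR);
// green ellipse centered at (150, 100), half-axes 20 x 10 px, no rotation, full arc
ellipse(image, Point(150, 100), Size(20, 10), 0, 0, 360, Scalar(0, 255, 0), 2);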
Any thoughts on where I might be going wrong? PS: new to coding and Stack Overflow.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

// Declare the image variables
cv::Mat img, imgGray, imgBlur, imgCanny, imgDil;

void GetContours(cv::Mat dilatedImg, cv::Mat originalImg);

int main(int argc, char** argv)
{
    std::string path = "E://Trial//Resources//Resources//shapes.png";
    img = cv::imread(path);

    // pre-processing
    cv::cvtColor(img, imgGray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(imgGray, imgBlur, cv::Size(3,3), 3, 0);
    cv::Canny(imgBlur, imgCanny, 25, 75);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3));
    cv::dilate(imgCanny, imgDil, kernel);

    // processing
    GetContours(imgDil, img);

    // display contours
    cv::imshow("Image", img);
    cv::waitKey(0);
    return 0;
}

void GetContours(cv::Mat dilatedImg, cv::Mat originalImg)
{
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> conPoly(contours.size());
    double area = 0;

    // finds the contours in the shapes
    cv::findContours(dilatedImg, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for(int i = 0; i < contours.size(); i++)
    {
        area = cv::contourArea(contours[i]);
        std::cout << area << std::endl;
        if(area > 1000)
        {
            // Draw contours around shapes
            cv::drawContours(originalImg, contours, i, cv::Scalar(255,0,255), 2);
            // create a bounding box around the shapes
            cv::approxPolyDP(cv::Mat(contours[i]), conPoly[i], 3, true);
            // draw contours using the contour points
            cv::drawContours(originalImg, conPoly, i, cv::Scalar(255,255,0), 2);
        }
    }
}
approxPolyDP is where I think the code is failing: I am getting an assertion failed error with "vector out of range". I think I am making some silly mistake, but I have not been able to debug the issue.
The vector conPoly, declared as shown below,
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
std::vector<std::vector<cv::Point>> conPoly(contours.size());
is an empty vector, because at that point contours.size() is equal to 0: the vector contours is itself declared empty.
So using the subscript operator with the vector
cv::approxPolyDP(cv::Mat(contours[i]), conPoly[i], 3, true);
invokes undefined behavior.
So this works:
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
double area=0;
//finds the contours in the shapes
cv::findContours(dilatedImg, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
std::vector<std::vector<cv::Point>> conPoly(contours.size());
I would like to set all the pixels in a region of an image to the same color. I couldn't find how to do that in C++ using OpenCV; can anyone please help? I found this code in Python, but it doesn't work in C++:
It starts from the x, y coordinates of the region (bottom right of the image segment) up to the given x+width and y+height.
img[child.getX():child.getX()+child.getW(), child.getY():child.getY()+child.getH(),0]=val.B;
img[child.getX():child.getX()+child.getW(), child.getY():child.getY()+child.getH(),1]=val.G;
img[child.getX():child.getX()+child.getW(), child.getY():child.getY()+child.getH(),2]=val.R;
Define the rectangle. OpenCV calls it a "region of interest" (ROI).
Then call the Mat instance with that rectangle... sounds strange but bear with me.
Now you have a view into the original Mat. Any changes happen to the same memory, and you see the changes in either object.
Now you can use the setTo method to fill the ROI with your color.
#include <string>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // load a picture
    std::string image_path { cv::samples::findFile("lena.jpg") };
    cv::Mat img { cv::imread(image_path) };

    // define the region
    cv::Rect rect { 100, 50, 200, 100 };

    // here the img object is called with the rectangle
    cv::Mat roi { img(rect) };

    // show both Mat instances
    cv::imshow("img", img);
    cv::imshow("roi", roi);
    cv::waitKey(-1); // waits forever. press a key?

    // fill with one color
    roi.setTo(cv::Scalar(255, 0, 255));
    cv::imshow("img", img);
    cv::imshow("roi", roi);
    cv::waitKey(-1);

    // all in one expression
    img(cv::Rect(150, 100, 100, 100)).setTo(cv::Scalar(0, 255, 0));
    cv::imshow("img", img);
    cv::imshow("roi", roi);
    cv::waitKey(-1);

    return 0;
}
I'm using OpenCV + C++ to extract the contours of an image.
See these lines:
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
cvtColor(image, image, CV_BGR2GRAY);
findContours( image, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
for( int i = 0; i < contours.size(); i++ ) {
    // do stuff with contours[i];
}
I was wondering if there is a way to extract the main color of the image.
From this image --> extract the color "blue" as RGB.
Any help on how to implement this in C++ would be very much appreciated.
Note: this question (Finding contour color using opencv c++) was all I found during my research, but it looks like 1. the question is off-topic and 2. it never got answered.
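One possible approach (not necessarily the best one) is to build a mask from a contour and average the colors inside it. Note that you would need to keep a colour copy of the image, since the snippet above converts it to grayscale in place. A rough sketch:
Mat colorImage = image.clone();                    // keep a colour copy before cvtColor
// ... cvtColor and findContours as above ...
Mat mask = Mat::zeros(colorImage.size(), CV_8UC1);
drawContours(mask, contours, i, Scalar(255), -1);  // fill contour i into the mask
Scalar meanBGR = mean(colorImage, mask);           // average B, G, R inside the contour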
What I'm trying to do is measure the thickness of eyeglasses frames. I had the idea of measuring the thickness of the frame's contours (there may be a better way?). So far I have outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure if this is what I need.
So far I have conducted the following steps:
Convert image to grayscale
Create ROI around the eye/glasses area
Blur the image
Dilate the image (done to remove any thin-framed glasses)
Conduct Canny edge detection
Find contours
These are the results:
This is my code so far:
//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );
//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);
//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);
//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;
cv::Mat element = getStructuringElement(dilate_type,
                                        cv::Size(2*dilate_size + 1, 2*dilate_size + 1),
                                        cv::Point(dilate_size, dilate_size));
cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);
//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);
//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);
dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);
//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;
cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);
I'm not entirely sure what the next steps would be, or as I said above, if I should use HoughLinesP and how to implement it. Any help is very much appreciated!
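For what it's worth, a minimal HoughLinesP call on the edge image would look roughly like the sketch below; the thresholds are guesses and would need tuning for your images, and a contour-based segmentation may well be the better route:
vector<Vec4i> lines;
// rho = 1 px, theta = 1 degree, 50 votes, min line length 30 px, max gap 10 px
cv::HoughLinesP(dilated_dst, lines, 1, CV_PI / 180, 50, 30, 10);
for (size_t i = 0; i < lines.size(); ++i)
    cv::line(dst, cv::Point(lines[i][0], lines[i][1]),
             cv::Point(lines[i][2], lines[i][3]), cv::Scalar(255), 1);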
I think there are 2 main problems:
1. segment the glasses frame
2. find the thickness of the segmented frame
I'll now post a way to segment the glasses of your sample image. Maybe this method will work for different images too, but you'll probably have to adjust parameters, or you might be able to use the main ideas.
Main idea is:
First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within the previously found biggest contour, which should be the glasses within the frame!
I use this image as input (which should be your blurred but not dilated image):
// this function finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if(amount <= 0) amount = contours.size();
    if(amount > contours.size()) amount = contours.size();

    for(int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for(unsigned int i=0; i<contours.size() && contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if(tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if(biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found biggest contour
            // add contour to sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;
            // remove biggest contour from original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }
    }
    return sortedContours;
}
int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    // edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::imshow("closed mask", mask);

    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours, 1); // find the one biggest contour
    if(biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame, biggestContour, 0, cv::Scalar(255), -1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask, glassesMask, cv::Mat());
    cv::imshow("eroded outer", glassesMask);

    // after erosion, if we dilate it's an open operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask, cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer", cleanedOuter);

    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation so as not to modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);

    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner, 2); // find the two biggest contours
    if(biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses, biggestInnerContours, i, cv::Scalar(255), -1);
    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses, innerGlasses, cv::Mat());

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);

    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for(int y=0; y<fullGlassesMask.rows; ++y)
        for(int x=0; x<fullGlassesMask.cols; ++x)
        {
            if(!fullGlassesMask.at<unsigned char>(y,x))
                outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y,x)[1] = 255;
        }
    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}
I get this result for segmentation:
The overlay on the original image will give you an impression of the quality:
and the inverse:
There are some tricky parts in the code and it's not tidied up yet. I hope it's understandable.
The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the segmented frame mask. From this you would perform ridge detection or skeletonize the mask to find the ridge, and then use the median value of the ridge distances.
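A rough sketch of the distance-transform part; it skips the ridge/skeleton step and simply doubles the maximum distance, which only gives a crude upper estimate of the frame thickness:
// fullGlassesMask: frame pixels are non-zero, background is 0
cv::Mat dist;
cv::distanceTransform(fullGlassesMask, dist, CV_DIST_L2, 3); // distance of each frame pixel to the nearest background pixel
double minVal, maxVal;
cv::minMaxLoc(dist, &minVal, &maxVal);
std::cout << "rough max frame thickness: " << 2.0 * maxVal << " px" << std::endl;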
Anyway, I hope this post helps you a little, although it's not a complete solution yet.
Depending on lighting, frame color etc. this may or may not work, but how about simple color detection to separate the frame? The frame color will usually be a lot darker than human skin. You'll end up with a binary image (just black and white), and by counting the number (area) of black pixels you get the area of the frame.
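A sketch of that idea; the threshold of 50 is a guess and depends entirely on the lighting:
cv::Mat gray, frameMask;
cv::cvtColor(img, gray, CV_BGR2GRAY);
cv::threshold(gray, frameMask, 50, 255, CV_THRESH_BINARY_INV); // dark pixels become white in the mask
int frameArea = cv::countNonZero(frameMask);                   // number of "frame" pixels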
Another possible way is to get better edge detection by adjusting, dilating or eroding (or both) until you get better contours. You will also need to differentiate the frame contour from the lenses and then apply cvContourArea.
I'm trying to locate some regions of a frame; the frame is in the YCbCr color space, and I have to select those regions based on their Y values.
So I wrote this code:
Mat frame, yframe;
VideoCapture cap(1);
int key = 0;
double maxV, minV;
Point max, min;

while(key != 27){
    cap >> frame;
    cvtColor(frame, yframe, CV_BGR2YCrCb); // converting to the YCbCr color space
    extractChannel(yframe, yframe, 0);     // extracting the Y channel
    cv::minMaxLoc(yframe, &minV, &maxV, &min, &max);
    cv::threshold(yframe, yframe, (maxV-10), maxV, CV_THRESH_TOZERO);
    /**
    Now I want to use:
        cv::rectangle()
    but I want to draw a rect around any pixel (see the picture below) that's higher than (maxV-10),
    and do that during the streaming.
    **/
    key = waitKey(1);
}
I drew this picture hoping that it helps to understand what I want to do.
Thanks for your help.
Once you have applied your threshold, you will end up with a binary image containing a number of connected components. If you want to draw a rectangle around each component, you first need to detect those components.
The OpenCV function findContours does just that, pass it your binary image, and it will provide you with a vector of vectors of points which trace the boundary of each component in your image.
cv::Mat binaryImage;
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Then all you need to do is find the bounding rectangle of each of those sets of points and draw them to your output image.
for (int i=0; i<contours.size(); ++i)
{
    cv::Rect r = cv::boundingRect(contours.at(i));
    cv::rectangle(outputImage, r, CV_RGB(255,0,0));
}
You have to find each of the connected components and draw their bounding boxes.
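If your OpenCV version is 3.0 or newer, connectedComponentsWithStats is an alternative that returns the bounding boxes directly; a sketch:
cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(binaryImage, labels, stats, centroids);
for (int i = 1; i < n; ++i) // label 0 is the background
{
    cv::Rect r(stats.at<int>(i, cv::CC_STAT_LEFT),  stats.at<int>(i, cv::CC_STAT_TOP),
               stats.at<int>(i, cv::CC_STAT_WIDTH), stats.at<int>(i, cv::CC_STAT_HEIGHT));
    cv::rectangle(outputImage, r, CV_RGB(255, 0, 0));
}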