Detecting slightly bright areas (fawns of roe deer) in thermal images - C++

I am looking into detecting slightly bright areas (fawns of roe deer) in thermal images with OpenCV.
So far I have managed to get some code that works somewhat, but with too many false negatives and false positives.
I basically know my way around OpenCV, but on the algorithmic side I am not sure what the best approach is to get the most reliable detection.
So far I use a cascade of something like this:
Gaussian blur
some sort of hysteresis thresholding
blob detection
Code snippet:
// Smooth the thermal image to suppress sensor noise
cv::GaussianBlur(gray, gray, cv::Size(gauss_size, gauss_size), 0);
// Hysteresis thresholding: a strict threshold gives seeds, a looser one
// gives the regions the seeds may grow into
cv::Mat threshUpper, threshLower;
cv::threshold(gray, threshUpper, mask_min, mask_max, cv::THRESH_BINARY);
cv::threshold(gray, threshLower, mask_min - mask_thresh, mask_max, cv::THRESH_BINARY);
cv::imshow("threshUpper", threshUpper);
cv::imshow("threshLower", threshLower);
// Grow each strong seed into the lower-threshold mask via flood fill
std::vector<std::vector<cv::Point>> contoursUpper;
cv::findContours(threshUpper, contoursUpper, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
for (const auto& cnt : contoursUpper) {
    cv::floodFill(threshLower, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
}
cv::Mat out;
cv::threshold(threshLower, out, 200, 255, cv::THRESH_BINARY);
// Keep only blobs within a plausible size range
std::vector<std::vector<cv::Point>> contours2clean;
cv::findContours(out, contours2clean, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
for (const auto& cnt : contours2clean) {
    double area = cv::contourArea(cnt);
    if (area > cut_max_size || area < cut_min_size) {
        cv::floodFill(out, cnt[0], 0, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
    } else {
        cv::floodFill(out, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
    }
}
// Blob detection on the cleaned-up mask
std::vector<cv::KeyPoint> points;
detector_->detect(out, points);
cv::drawKeypoints(out, points, out, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
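For reference, detector_ is never shown in the snippet; a minimal sketch of how it might be configured, assuming OpenCV 3's cv::SimpleBlobDetector tuned for bright blobs (all parameter values below are illustrative, not from the original code):

// Hypothetical setup for detector_, assuming a cv::SimpleBlobDetector
// that looks for bright spots and reuses the size limits from above
cv::SimpleBlobDetector::Params params;
params.filterByColor = true;
params.blobColor = 255;            // bright blobs on a dark background
params.filterByArea = true;
params.minArea = cut_min_size;
params.maxArea = cut_max_size;
cv::Ptr<cv::SimpleBlobDetector> detector_ = cv::SimpleBlobDetector::create(params);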
I am looking for advice on better approaches. Two images (raw and marked) are here:
Thanks!

Related

Performance issues while capturing and processing a video

I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
cv::Mat frame;
cap >> frame;
cv::Mat mirrored;
cv::flip(frame, mirrored, 1);
cv::imshow("Current Capture", process_image(mirrored));
if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs a circle detection in the image, needs some time to finish and causes the displaying to be rather a slideshow than a video.
My question is: how can I speed up the processing without manipulating the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any other ideas?
PS.: I'm not expecting you to write code for me, I only need a point to start from ;)
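One way the threading idea could look is a producer/consumer split: grab frames in a background thread and always process only the newest one. This is a sketch only, assuming C++11 <thread> and that process_image stays untouched; the displayed rate is still limited by how long process_image takes, but grabbing no longer stalls behind it:

#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

cv::Mat process_image(cv::Mat img); // the existing, unmodified function

std::mutex frame_mutex;
cv::Mat latest_frame;               // most recent frame from the camera
std::atomic<bool> running{true};

// Producer: grab frames as fast as the camera delivers them
void capture_loop(cv::VideoCapture& cap)
{
    cv::Mat frame;
    while (running) {
        cap >> frame;
        if (frame.empty()) continue;
        std::lock_guard<std::mutex> lock(frame_mutex);
        frame.copyTo(latest_frame);
    }
}

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    std::thread grabber(capture_loop, std::ref(cap));
    for (;;) {
        cv::Mat frame;
        { // Consumer: copy out the newest frame, skipping any we missed
            std::lock_guard<std::mutex> lock(frame_mutex);
            latest_frame.copyTo(frame);
        }
        if (!frame.empty()) {
            cv::Mat mirrored;
            cv::flip(frame, mirrored, 1);
            cv::imshow("Current Capture", process_image(mirrored));
        }
        if (cv::waitKey(30) >= 0) break;
    }
    running = false;
    grabber.join();
    return 0;
}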
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
    cv::Mat hsv;
    cv::medianBlur(img, img, 7);
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    cv::Mat lower_hue_range; // lower and upper hue range in case of red color
    cv::Mat upper_hue_range;
    cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
    cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE2, 255, 255), upper_hue_range);
    /// Combine the above two images
    cv::Mat hue_image;
    cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
    /// Reduce the noise so we avoid false circle detection
    cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
    /// Store all found circles here
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
    for (size_t i = 0; i < circles.size(); i++)
    {
        /// Circle center
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        /// Circle outline
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
    }
    cv::Mat newI;
    cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
    return newI;
}
Is there a huge performance issue I can do anything about?
If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality of the video capture, or reducing its size? At the moment I can see you have it set to 1280x720. If the process_image function has less data to work with, it should execute faster.
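For example, a quick sketch of requesting a smaller capture size before the loop (the exact resolution here is just an illustration):

// Ask the camera for smaller frames so process_image has less data per call
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);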

Detecting circles with OpenCV in a Win10 C++ Universal App

I have some issues detecting circles in a Win10 Universal App (C++).
I need to detect the blue circles in the following image:
For that reason I am using OpenCV with the following code:
Mat img_image = imread("template_rund.png");
Mat img_hsv;
Mat img_result;
Mat img_blue;
Mat img_canny;
cvtColor(img_image, img_image, CV_BGR2BGRA);
cv::cvtColor(img_image, img_hsv, cv::COLOR_BGR2HSV);
cv::inRange(img_hsv, cv::Scalar(100, 50, 0), cv::Scalar(140, 255, 255), img_blue);
cv::Canny(img_blue, img_canny, 300, 350);
std::vector<cv::Vec3f> circles;
GaussianBlur(img_canny, img_canny, cv::Size(9, 9), 2, 2);
cv::HoughCircles(img_canny, circles, CV_HOUGH_GRADIENT, 2, 5, 1000, 1000, 0, 1000);
for (size_t current_circle = 0; current_circle < circles.size(); ++current_circle) {
...
}
The algorithm is working fine until the HoughCircles call.
All found circles should be stored in the circles vector.
But the size of the vector is always about 1537228453755672812.
At that point I thought it would be a good idea to change the parameters of the HoughCircles call. But if I change the min/max radius to, let's say, 10/100, the algorithm still finds around 1517229... circles.
What could be the problem?
Further info:
I compiled the OpenCV libraries for Windows myself:
https://msopentech.com/blog/2015/05/15/uap-in-action-running-opencv-on-raspberry-pi-ii/#

OpenCV: how can I interpret the results of inRange?

I am processing video images and I would like to detect if the video contains any pixels of a certain range of red. Is this possible?
Here is the code I am adapting from a tutorial:
#ifdef __cplusplus
- (void)processImage:(Mat&)image
{
    cv::Mat orig_image = image.clone();
    cv::medianBlur(image, image, 3);
    cv::Mat hsv_image;
    cv::cvtColor(image, hsv_image, cv::COLOR_BGR2HSV);
    // Red wraps around the hue axis, so two ranges are needed
    cv::Mat lower_red_hue_range;
    cv::Mat upper_red_hue_range;
    cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
    cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);
    // Interpret values here
}
Interpreting values
I would like to detect whether the results from the inRange operations are nil or not. In other words, I want to know if there are any pixels in the original image with a colour in range of the given lower and upper red bounds. How can I interpret the results?
First you need to OR the lower and upper masks:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Then you can use countNonZero to see if there are any non-zero pixels (i.e. you found something):
int number_of_non_zero_pixels = countNonZero(mask);
It could be better to first apply morphological erosion or opening to remove small (probably noisy) blobs:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // or MORPH_ERODE
or find connected components (findContours, connectedComponentsWithStats) and prune / search according to some criteria:
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
double threshold_on_area = 100.0;
for (size_t i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area < threshold_on_area)
    {
        // don't consider this contour
        continue;
    }
    else
    {
        // do something (e.g. draw a bounding box around the contour)
        Rect box = boundingRect(contours[i]);
        rectangle(hsv_image, box, Scalar(0, 255, 255));
    }
}
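Putting the pieces together, a sketch of how this could slot into the processImage method from the question (hsv_image, lower_red_hue_range and upper_red_hue_range are the variables defined there):

// Inside processImage, after the two inRange calls:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // drop small noisy blobs
if (countNonZero(mask) > 0)
{
    // at least one pixel in the red range survived the cleanup
}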

Derivatives in OpenCV

I'm writing a program using OpenCV that does text detection and extraction.
I'm using the Sobel derivative in order to do edge detection and have gotten the following result:
But I wish to get the following result:
(I apologize for the blurry image.)
The problem I'm having is that the "blank areas" inside the edges "confuse" the algorithm I'm using, so when the algorithm detects the "blank part" separating two lines, it gets confused and starts running into the letters themselves instead of keeping between the two lines. This error, I believe, would be solved by achieving the second result.
Does anyone know what changes I need to make? In the Sobel derivative? Maybe use a different derivative?
Code:
Mat ProfileSeamTextLineExtractor::computeDerivative(){
    Mat img = _image;
    Mat gradient_mat;
    int scale = 2;
    int delta = 0;
    int ddepth = CV_16S;
    GaussianBlur(img, img, Size(3, 3), 0, 0, BORDER_DEFAULT);
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    // First derivative in x
    Sobel(img, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_x, abs_grad_x);
    // First derivative in y
    Sobel(img, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_y, abs_grad_y);
    /// Total gradient (approximate)
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, gradient_mat);
    return gradient_mat;
}
Regards,
Try using the second Sobel derivative, adding the two results, normalizing (this may do the same as addWeighted), and then thresholding optimally. I had results similar to yours with different threshold values.
Here's an example:
cv::Mat gray;
cv::cvtColor(image, gray, CV_BGR2GRAY);
cv::medianBlur(gray, gray, 3);
cv::Mat sobel_x, sobel_y, result;
// Second derivative in x and y
cv::Sobel(gray, sobel_x, CV_32FC1, 2, 0, 5);
cv::Sobel(gray, sobel_y, CV_32FC1, 0, 2, 5);
cv::Mat sum = sobel_x + sobel_y;
cv::normalize(sum, result, 0, 255, CV_MINMAX, CV_8UC1);
// Determine a threshold value using THRESH_OTSU.
// This didn't give me optimal results, but was a good starting point.
cv::Mat temp, final;
double threshold = cv::threshold(result, temp, 0, 255, CV_THRESH_BINARY + CV_THRESH_OTSU);
cv::threshold(result, final, threshold * 0.9, 255, CV_THRESH_BINARY);
I was able to clearly extract both light text on a dark background, and dark text on a light background.
If you need the final image to consistently be white background with black text, you can do this:
cv::Scalar avgPixelIntensity = cv::mean( final );
if(avgPixelIntensity[0] < 127.0)
cv::bitwise_not(final, final);
I tried a lot of different text extraction methods and couldn't find any that worked across the board, but this one seems to. It took a lot of trial and error to figure out, so I hope this helps.
I don't really understand what your final aim is. Do you eventually want a nice filled in version of the text so you can recognise the characters? I can give that a shot if that's what you are looking for.
This is what I did while trying to remove the inner holes:
For this one I didn't bother:
It fails at the edges where the text is cut off.
Obviously, I had to work with an image that had already gone through some processing. I might be able to give you more help and produce a better output if I had the original. You might not even need to use derivatives at all if the background is clean enough.
Here is the code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

void printInnerContours (int contourPos, Mat &filled, vector<vector<Point2i> > &contours, vector<Vec4i> &hierarchy, int area);

int main() {
    int areaThresh;
    vector<vector<Point2i> > contours;
    vector<Vec4i> hierarchy;
    Mat text = imread ("../wHWHA.jpg", 0); // read as greyscale
    threshold (text, text, 50, 255, THRESH_BINARY);
    imwrite ("../text1.jpg", text);
    areaThresh = (0.01 * text.rows * text.cols) / 100;
    findContours (text, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
    Mat filled = Mat::zeros(text.rows, text.cols, CV_8U);
    cout << contours.size() << endl;
    for (int i = 0; i < contours.size(); i++) {
        int area = contourArea(contours[i]);
        if (area > areaThresh) {
            // only top-level contours that have children
            if ((hierarchy[i][2] != -1) && (hierarchy[i][3] == -1)) {
                drawContours (filled, contours, i, 255, -1);
                printInnerContours (hierarchy[i][2], filled, contours, hierarchy, area);
            }
        }
    }
    imwrite("../output.jpg", filled);
    return 0;
}

void printInnerContours (int contourPos, Mat &filled, vector<vector<Point2i> > &contours, vector<Vec4i> &hierarchy, int area) {
    int areaFrac = 5;
    // ignore holes smaller than areaFrac percent of the parent area
    if (((contourArea (contours[contourPos]) * 100) / area) < areaFrac) {
        //drawContours (filled, contours, contourPos, 0, -1);
    }
    // recurse into children, then continue with siblings
    if (hierarchy[contourPos][2] != -1) {
        printInnerContours (hierarchy[contourPos][2], filled, contours, hierarchy, area);
    }
    if (hierarchy[contourPos][0] != -1) {
        printInnerContours (hierarchy[contourPos][0], filled, contours, hierarchy, area);
    }
}

opencv houghcircles differences c c++

I'm introducing myself to OpenCV (for a software project at university) and found a tutorial for color circle detection which I adapted and tested. It was written with OpenCV 1 in C, so I converted it to the OpenCV 2 class API. Everything was fine, but I ran into one problem:
The C function cvHoughCircles produces different results than the C++ function HoughCircles.
The C version finds my test circle and has a low rate of false positives, but the C++ version has a significantly higher error rate.
//My C implementation
IplImage *img = cvQueryFrame( capture );
CvSize size = cvGetSize(img);
IplImage *hsv = cvCreateImage(size, IPL_DEPTH_8U, 3);
cvCvtColor(img, hsv, CV_BGR2HSV);
CvMat *mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvInRangeS(hsv, cvScalar(107, 61, 0, 0), cvScalar(134, 255, 255, 0), mask);
/* Copy mask into a grayscale image */
IplImage *hough_in = cvCreateImage(size, 8, 1);
cvCopy(mask, hough_in, NULL);
cvSmooth(hough_in, hough_in, CV_GAUSSIAN, 15, 15, 0, 0);
cvShowImage("mask",hough_in);
/* Run the Hough function */
CvMemStorage *storage = cvCreateMemStorage(0);
CvSeq *circles = cvHoughCircles(hough_in, storage, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 40, 0, 0);
// ... iterating over all found circles
this works pretty well
//My C++ implementation
cv::Mat img;
cap.read(img);
cv::Size size(img.cols,img.rows);
cv::Mat hsv(size, CV_8UC3);
cv::cvtColor(img, hsv, CV_BGR2HSV);
cv::Mat mask(size.height, size.width, CV_8UC1);
cv::inRange(hsv, cv::Scalar(107, 61, 0, 0), cv::Scalar(134, 255, 255, 0), mask);
GaussianBlur( mask, mask, cv::Size(15, 15), 0, 0 );
/* Run the Hough function */
imshow("mask",mask);
vector<cv::Vec3f> circles;
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 140, 0, 0);
// ... iterating over all found circles
As you can see, I use the same arguments in all calls. I tested this with a webcam and a static sample object. One requirement is to use the OpenCV 2 C++ API.
Does anybody know why I get such different results under equivalent conditions?
Edit
The different threshold values were just a mistake from when I was testing to make the results more equal. These screenshots were taken with the threshold set to 40 for both versions:
Screenshots: (Sorry, cannot yet post images)
C and C++ version
I see the Hough parameters in the C version as "..., 100, 40, 0, 0);" while in the C++ version as "..., 100, 140, 0, 0);". This difference in the accumulator threshold probably explains the difference in results.
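For reference, a sketch of the C++ call with the accumulator threshold matched to the C version (40 instead of 140); everything else stays exactly as in the question:

// Same parameters as the C call: accumulator threshold 40 instead of 140
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
                 4, size.height / 4, 100, 40, 0, 0);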