OpenCV HoughCircles: differences between C and C++

I'm getting started with OpenCV (for a software project at university) and found a tutorial for colored circle detection, which I adapted and tested. It was written with OpenCV 1 in C, so I tried to convert it to the OpenCV 2 class API. Everything went fine, but I ran into one problem:
The C function cvHoughCircles produces different results from the C++ function HoughCircles.
The C version finds my test circle with a low rate of false positives, but the C++ version has a significantly higher error rate.
//My C implementation
IplImage *img = cvQueryFrame( capture );
CvSize size = cvGetSize(img);
IplImage *hsv = cvCreateImage(size, IPL_DEPTH_8U, 3);
cvCvtColor(img, hsv, CV_BGR2HSV);
CvMat *mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvInRangeS(hsv, cvScalar(107, 61, 0, 0), cvScalar(134, 255, 255, 0), mask);
/* Copy mask into a grayscale image */
IplImage *hough_in = cvCreateImage(size, 8, 1);
cvCopy(mask, hough_in, NULL);
cvSmooth(hough_in, hough_in, CV_GAUSSIAN, 15, 15, 0, 0);
cvShowImage("mask",hough_in);
/* Run the Hough function */
CvMemStorage *storage = cvCreateMemStorage(0);
CvSeq *circles = cvHoughCircles(hough_in, storage, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 40, 0, 0);
// ... iterating over all found circles
This works pretty well.
//My C++ implementation
cv::Mat img;
cap.read(img);
cv::Size size(img.cols,img.rows);
cv::Mat hsv(size, IPL_DEPTH_8U, 3);
cv::cvtColor(img, hsv, CV_BGR2HSV);
cv::Mat mask(size.height, size.width, CV_8UC1);
cv::inRange(hsv, cv::Scalar(107, 61, 0, 0), cv::Scalar(134, 255, 255, 0), mask);
GaussianBlur( mask, mask, cv::Size(15, 15), 0, 0 );
/* Run the Hough function */
imshow("mask",mask);
vector<cv::Vec3f> circles;
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 140, 0, 0);
// ... iterating over all found circles
As you can see, I use the same arguments in all calls. I tested this with a webcam and a static sample object. One requirement is to use the OpenCV 2 C++ API.
Does anybody know why I get such different results under equivalent conditions?
Edit
The different threshold values were just a leftover from my attempts to make the results more similar. These screenshots were taken with the threshold set to 40 for both versions:
Screenshots: (Sorry, cannot yet post images)
C and C++ version

I see the Hough parameters in the C version as "..., 100, 40, 0, 0);" while in the C++ version they are "..., 100, 140, 0, 0);". This difference in the accumulator threshold (param2) probably explains the difference in results.
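As an illustration of that suggestion, here is a minimal sketch of the C++ call with the accumulator threshold set to 40 to match the C version (same mask, dp and minDist as in the question):
std::vector<cv::Vec3f> circles;
// Same call as in the question, but with param2 = 40 instead of 140
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
                 4, size.height / 4, // dp, minDist
                 100,                // param1: upper Canny threshold
                 40,                 // param2: accumulator threshold
                 0, 0);              // minRadius, maxRadius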

Related

Detecting slightly bright areas (fawn of deer) in thermal images

I am looking into detecting slightly bright areas (fawns of roe deer) in thermal images with OpenCV.
So far I have managed to get some code that works somewhat, but with too many false negatives and false positives.
I basically know my way around OpenCV, but from the algorithmic side I am not sure what the best approach is for the most reliable detection.
So far I use a cascade of something like this:
Gaussian blur
some sort of hysteresis thresholding
blob detection
Code snippet:
cv::GaussianBlur(gray, gray, cv::Size(gauss_size, gauss_size), 0);
// Hysteresis thresholding: a strict (upper) and a relaxed (lower) threshold
Mat threshUpper, threshLower;
threshold(gray, threshUpper, mask_min, mask_max, cv::THRESH_BINARY);
threshold(gray, threshLower, mask_min - mask_thresh, mask_max, cv::THRESH_BINARY);
imshow("threshUpper", threshUpper);
imshow("threshLower", threshLower);
// Keep only the relaxed blobs that contain at least one strict seed:
// flood-fill the lower-threshold image from each upper-threshold contour
vector<vector<Point>> contoursUpper;
cv::findContours(threshUpper, contoursUpper, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
for (auto cnt : contoursUpper) {
    cv::floodFill(threshLower, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
}
threshold(threshLower, out, 200, 255, cv::THRESH_BINARY);
// Discard blobs that are too large or too small
vector<vector<Point>> contours2clean;
cv::findContours(out, contours2clean, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
for (const auto& cnt : contours2clean) {
    double area = cv::contourArea(cnt);
    if (area > cut_max_size || area < cut_min_size) {
        cv::floodFill(out, cnt[0], 0, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
    }
    else {
        cv::floodFill(out, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
    }
}
// Blob detection on the cleaned-up mask
std::vector<cv::KeyPoint> points;
detector_->detect(out, points);
cv::drawKeypoints(out, points, out, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
I am looking for advice on better approaches. Two images (raw and marked) are here:
Thanks!

Performance issues while capturing and processing a video

I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
cv::Mat frame;
cap >> frame;
cv::Mat mirrored;
cv::flip(frame, mirrored, 1);
cv::imshow("Current Capture", process_image(mirrored));
if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs circle detection in the image, needs some time to finish and causes the display to look more like a slideshow than a video.
My question is: how can I speed up the processing without changing the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any ideas other than this?
PS.: I'm not expecting you to write code for me, I only need a point to start from ;)
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
cv::Mat hsv;
cv::medianBlur(img, img, 7);
cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
cv::Mat lower_hue_range; // lower and upper hue range in case of red color
cv::Mat upper_hue_range;
cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), upper_hue_range);
/// Combine the above two images
cv::Mat hue_image;
cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
/// Reduce the noise so we avoid false circle detection
cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
/// store all found circles here
std::vector<cv::Vec3f> circles;
cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
for (size_t i = 0; i < circles.size(); i++)
{
/// circle center
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
/// circle outline
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
}
cv::Mat newI;
cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
return newI;
}
Is there a big performance issue I can do anything about?
If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality of the video capture or reducing the size? At the moment I can see you have it set to 1280x720. If the process_image function has less data to work with, it should execute faster.
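For example, a minimal sketch of both options (the 640x360 resolution and the 0.5 scale factor are just illustrative values to tune):
// Option 1: capture at a lower resolution so process_image gets less data
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);
// Option 2: downscale each frame before handing it to process_image
cv::Mat frame, downscaled;
cap >> frame;
cv::resize(frame, downscaled, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
cv::imshow("Current Capture", process_image(downscaled));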

Detecting circles with OpenCV in a Win10 C++ Universal App

I have some issues detecting circles in a Win10 Universal App (C++).
I need to detect the blue circles in the following image:
For that reason I am using OpenCV with the following code:
Mat img_image = imread("template_rund.png");
Mat img_hsv;
Mat img_result;
Mat img_blue;
Mat img_canny;
cvtColor(img_image, img_image, CV_BGR2BGRA);
cv::cvtColor(img_image, img_hsv, cv::COLOR_BGR2HSV);
cv::inRange(img_hsv, cv::Scalar(100, 50, 0), cv::Scalar(140, 255, 255), img_blue);
cv::Canny(img_blue, img_canny, 300, 350);
std::vector<cv::Vec3f> circles;
GaussianBlur(img_canny, img_canny, cv::Size(9, 9), 2, 2);
cv::HoughCircles(img_canny, circles, CV_HOUGH_GRADIENT, 2, 5, 1000, 1000, 0, 1000);
for (size_t current_circle = 0; current_circle < circles.size(); ++current_circle) {
...
}
The algorithm works fine until the HoughCircles call.
All found circles should be stored in the circles vector.
But the size of the vector is always about 1537228453755672812.
At that point I thought it would be a good idea to change the parameters of the HoughCircles call, but if I change the min/max radius to, let's say, 10/100, the algorithm still finds around 1517229... circles.
What could be the problem?
Further Info:
I compiled the OpenCV libraries for Windows myself:
https://msopentech.com/blog/2015/05/15/uap-in-action-running-opencv-on-raspberry-pi-ii/#

How to threshold images with texture? Recognition by Tesseract

Source image:
Destination image:
Code:
cv::Mat sharpenedLena;
cv::Mat kernel = (cv::Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);
cv::filter2D(matGrey, sharpenedLena, matGrey.depth(), kernel);
cv::adaptiveThreshold(sharpenedLena, matBinary, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 55, 30);
cv::Mat dst_img1;
//cv::GaussianBlur(matBinary, dst_img1, cv::Size(3,3), 0, 0);
cv::medianBlur(matBinary, dst_img1, 3);
UIImage *addrUIImage = [ImageUtil UIImageFromCVMat:dst_img1];
[self recognizeImageWithTesseract:addrUIImage withLauange:1];
Result:
三胡南省慈利昙龙三覃河镇文
I think this is an image preprocessing problem. Below is the result someone else achieved on a similar image. How can I achieve this effect?
Target image:
Here are my results & code snippet:
Mat mSource_Bgr,mSource_Gray,mSource_Hsv,mThreshold;
mSource_Bgr= imread(FileName_S.c_str(),1);
namedWindow("Source Image",WINDOW_AUTOSIZE);
imshow("Source Image",mSource_Bgr);
cvtColor(mSource_Bgr,mSource_Hsv,COLOR_BGR2HSV);
mSource_Hsv = mSource_Hsv + Scalar(0,0,-25); // subtract 25 from the V (brightness) channel of every pixel
cvtColor(mSource_Hsv,mSource_Bgr,COLOR_HSV2BGR); // back to BGR, just for debugging
imshow("Improved Darkness",mSource_Bgr);
imwrite(FileName_S+"_Res.bmp",mSource_Bgr);
cvtColor(mSource_Bgr,mSource_Gray,COLOR_BGR2GRAY); // for Adaptive Thresholding the input Image
adaptiveThreshold(mSource_Gray,mThreshold,255,ADAPTIVE_THRESH_GAUSSIAN_C,THRESH_BINARY,59,10);
imshow("Adaptive Thres",mThreshold);
imwrite(FileName_S+"_Thres.bmp",mThreshold);
You can remove the noise, i.e. the small dots, by using the contour area or by morphological processing. Hope this helps you!
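A minimal sketch of the contour-area idea on the thresholded image (min_area is an assumed value to tune):
// Erase blobs whose area is below min_area
vector<vector<Point>> contours;
findContours(mThreshold.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
double min_area = 30.0; // assumed value, tune for your images
for (size_t i = 0; i < contours.size(); ++i) {
    if (contourArea(contours[i]) < min_area)
        drawContours(mThreshold, contours, (int)i, Scalar(0), -1); // fill the small blob with black
}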
You can try using adaptiveThreshold() or algorithms such as MSER in OpenCV.
These will perform better; in particular, MSER and its variant CSER are designed to detect text-like structures.
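A minimal sketch of MSER using the newer OpenCV 3.x+ C++ API (default parameters; img is assumed to be the BGR source image):
// MSER regions tend to correspond to text-like blobs
cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
cv::Ptr<cv::MSER> mser = cv::MSER::create();
std::vector<std::vector<cv::Point>> regions;
std::vector<cv::Rect> boxes;
mser->detectRegions(gray, regions, boxes);
for (const cv::Rect& box : boxes)
    cv::rectangle(img, box, cv::Scalar(0, 255, 0)); // mark each candidate region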
You can try using binary thresholding with an OPEN morphology operation.
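A minimal sketch of that combination (Otsu picks the threshold level automatically; the 3x3 kernel is an assumed size to tune):
// Binary threshold, then a morphological opening to remove small speckles
cv::Mat bw, opened;
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::morphologyEx(bw, opened, cv::MORPH_OPEN, kernel);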

Using cvSplit(): I'm getting a BW pic instead of a colored pic

I'm using cvSplit() to separate the RGB channels and display them in 3 different images showing the colors R, G, and B. But I only get black-and-white images. Is this the correct output when using cvSplit(), or do I have to do something to make them colored?
Below is my code so far.
#include <iostream>
#include <cv.h>
#include <highgui.h>
#include "rgb.h"
using namespace std;
int main(){
IplImage* img = cvLoadImage("rgb.jpg");
IplImage* channelRed = cvCreateImage(cvGetSize(img), 8, 1);
IplImage* channelGreen = cvCreateImage(cvGetSize(img), 8, 1);
IplImage* channelBlue = cvCreateImage(cvGetSize(img), 8, 1);
IplImage* Result1 = cvCreateImage(cvGetSize(img), 8, 1);
IplImage* Result2 = cvCreateImage(cvGetSize(img), 8, 1);
IplImage* Result3= cvCreateImage(cvGetSize(img), 8, 1);
cvSplit(img, channelBlue, channelGreen, channelRed, NULL);
cvThreshold(channelBlue, Result1, 20, 255, CV_THRESH_BINARY);
cvThreshold(channelGreen, Result2, 20, 255, CV_THRESH_BINARY);
cvThreshold(channelRed, Result3, 20, 255, CV_THRESH_BINARY);
cvShowImage("original", img);
cvShowImage("blue", Result1);
cvShowImage("green", Result2);
cvShowImage("red", Result3);
cvWaitKey(0);
return 0;
}
It's not going to work this way. When you have a single-channel image, OpenCV assumes it's grayscale.
What you can do is create blue, red, and green filter images of the same size, filled with 255 in the channel of interest and zeroes in the other channels.
Then you just run the following function to get your blue image:
cvAnd(original_img, bluefilter_img, blue_result_img)
Repeat for the red and green filters.
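In the OpenCV 2 C++ API, the same idea looks roughly like this (a sketch, assuming a BGR image loaded as in the question):
// Blue filter: 255 in the blue channel, 0 in the others (BGR order)
cv::Mat img = cv::imread("rgb.jpg");
cv::Mat blueFilter(img.size(), img.type(), cv::Scalar(255, 0, 0));
cv::Mat blueResult;
cv::bitwise_and(img, blueFilter, blueResult); // keeps only the blue channel values
cv::imshow("blue", blueResult);
// Repeat with cv::Scalar(0, 255, 0) for green and cv::Scalar(0, 0, 255) for red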
If you have a 3-channel image, cvSplit gives you 3 grayscale images (not B&W) because they are single-channel images.
img(R,G,B) ==> chRed(R), chGreen(G), chBlue(B) with R, G and B from 0 to 255.
img has 256*256*256 colors and each channel image only 256.
And you end up with B&W images only because you threshold them afterwards.
You might want to use the latest version, OpenCV 2.4.4, and do something more automatic and robust, like threshold(imgIn, imgOut, 100, 255, CV_THRESH_OTSU);
(see: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#cv.Threshold)
So if the goal is to have an artistic view, then what perfanoff said is the right way.
But if you are planning to do some kind of color-detection work, there is no point in having color in your single-channel images (it's not possible anyway).
I don't know if that was clear; let me know.