Performance issues while capturing and processing a video - C++

I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
    return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
    cv::Mat frame;
    cap >> frame;
    cv::Mat mirrored;
    cv::flip(frame, mirrored, 1);
    cv::imshow("Current Capture", process_image(mirrored));
    if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs circle detection on the image, takes some time to finish and makes the display look more like a slideshow than a video.
My question is: how can I speed up the processing without modifying the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any ideas other than this?
PS: I'm not expecting you to write code for me, I just need a starting point ;)
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
    cv::Mat hsv;
    cv::medianBlur(img, img, 7);
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    cv::Mat lower_hue_range; // lower and upper hue range in case of red color
    cv::Mat upper_hue_range;
    cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
    cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE2, 255, 255), upper_hue_range);
    /// Combine the above two images
    cv::Mat hue_image;
    cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
    /// Reduce the noise so we avoid false circle detection
    cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
    /// store all found circles here
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
    for (size_t i = 0; i < circles.size(); i++)
    {
        /// circle center
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        /// circle outline
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
    }
    cv::Mat newI;
    cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
    return newI;
}
Is there a big performance issue here that I can do anything about?

If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality or the size of the video capture? At the moment I can see you have it set to 1280x720. If the process_image function has less data to work with, it should execute faster.
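For example, a minimal sketch of that idea (the 640x360 size and the 0.5 scale factor are only illustrative values): either ask the camera for smaller frames up front, or shrink each captured frame before handing it to process_image:
// Option 1: request a smaller capture size
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);
// Option 2: keep capturing at 1280x720, but downscale before processing
cv::Mat frame, smallFrame;
cap >> frame;
cv::resize(frame, smallFrame, cv::Size(), 0.5, 0.5, cv::INTER_AREA); // half size in each dimension
cv::imshow("Current Capture", process_image(smallFrame));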

Related

Closing gaps in objects of an OpenCV mask

I've computed the following mask of an image containing corn:
The problem is that I now have black parts inside the seeds. Is there a way to get rid of them while making sure that the seeds remain disconnected from each other?
My ultimate goal is to count the seeds in the picture using the watershed algorithm. I observed that when the seeds touch each other it introduces imprecision into the algorithm, so I tried to introduce gaps between the seeds by running Canny and subtracting the detected borders from the mask, which gives the result above.
My code so far:
auto img = GetAreaOfInterest(orig, bt); // <- just a Gaussian blur with Otsu threshold
{
    cv::Mat gray, edges;
    cv::cvtColor(orig, gray, cv::COLOR_BGR2GRAY);
    // tried bilateralFilter here, but things only got worse
    cv::Canny(gray, edges, 40, 160);
    cv::dilate(edges, edges,
               cv::getStructuringElement(cv::MORPH_ELLIPSE, {5, 5}),
               {-1, -1}, 1);
    img.setTo(0, edges != 0);
}
cv::Mat background, foreground, unknown, markers;
cv::dilate(img, background,
           cv::getStructuringElement(cv::MORPH_ELLIPSE, {3, 3}));
/* Get area for which we are sure it's the foreground */
cv::distanceTransform(img, img, cv::DIST_L1, 3, CV_8U);
{
    double max;
    cv::minMaxLoc(img, nullptr, &max);
    cv::threshold(img, img, 0.46 * max, 255, cv::THRESH_BINARY);
}
img.convertTo(foreground, CV_8UC3);
show_resized("distance", foreground, 0.5);
/* Mark unknown areas */
cv::subtract(background, foreground, unknown);
/* Set up markers */
cv::connectedComponents(foreground, markers);
markers = markers + 1;
markers.setTo(0, unknown == 255);
/* Expand markers */
cv::watershed(orig, markers);
// Count objects
original image:
the most problematic image:
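One possible way to fill the internal holes while keeping separate blobs apart (a sketch, not the approach used in the code above) is to redraw the external contours of the mask as filled polygons; holes lie inside an external contour, while the gaps between blobs lie outside all of them. A small morphological closing is another option if the holes are thin:
// Sketch: fill holes inside each blob by redrawing its external contour filled.
// Each blob keeps its own contour, so gaps between separate blobs stay open.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(img.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
cv::Mat filled = cv::Mat::zeros(img.size(), CV_8UC1);
cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED);
// 'filled' can then replace 'img' in the rest of the pipeline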

cv::imshow in OpenCV is only displaying parts of a composite image, but displaying the parts separately works. Why?

Objective and problem
I'm trying to process a video file on the fly using OpenCV 3.4.1 by grabbing each frame, converting to grayscale, then doing Canny edge detection on it. In order to display the images (on the fly as well), I created a composite Mat that is three times as wide as the original frame, plus 3 additional Mat headers into it. The 3 extra headers represent the images I would like to display in the composite and point to the 1st, 2nd and 3rd horizontal segments of the composite.
After image processing, however, the display of the composite image is not as expected: the first segment (where the original frame should be) is completely black, while the other segments (the processed images) are displayed fine. If, on the other hand, I display the ROIs one by one in separate windows, all the images look fine.
These are the things I tried to overcome this issue:
Use .copyTo to actually copy the data into the appropriate image segments. The result was the same.
I put the Canny image into the compOrigPart ROI, and it did display in the first segment, so it is not a problem with the definition of the ROIs.
Define the composite as a three-channel image; in the loop, convert it to grayscale, put the processed images into it, convert back to BGR, and then put the original in.
This time around the whole composite was black; nothing showed.
As per gameon67's suggestion, I tried to create a namedWindow as well, but that didn't help either.
Code:
int main() {
    cv::VideoCapture vid("./Vid.avi");
    if (!vid.isOpened()) return -1;
    int frameWidth = vid.get(cv::CAP_PROP_FRAME_WIDTH);
    int frameHeight = vid.get(cv::CAP_PROP_FRAME_HEIGHT);
    int frameFormat = vid.get(cv::CAP_PROP_FORMAT);
    cv::Scalar fontColor(250, 250, 250);
    cv::Point textPos(20, 20);
    cv::Mat frame;
    cv::Mat compositeFrame(frameHeight, frameWidth*3, frameFormat);
    cv::Mat compOrigPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(0, frameWidth));
    cv::Mat compBwPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth, frameWidth*2));
    cv::Mat compEdgePart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth*2, frameWidth*3));
    while (vid.read(frame)) {
        if (frame.empty()) break;
        cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
        cv::Canny(compBwPart, compEdgePart, 100, 150);
        compOrigPart = frame;
        cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
        cv::imshow("Original", compOrigPart);
        cv::imshow("BW", compBwPart);
        cv::imshow("Canny", compEdgePart);
        cv::waitKey(33);
    }
}
Questions
Why can't I display the entirety of the composite image in a single window, while displaying the parts separately is OK?
What is the difference between these displays? The data is obviously there, as evidenced by the separate windows.
Why is only the original frame misbehaving?
Your compBwPart and compEdgePart are grayscale images, so the Mat type is CV_8UC1 (single channel), and therefore your compositeFrame is grayscale too. If you want to combine these two images with a color image, you have to convert the composite to BGR first and then fill compOrigPart.
while (vid.read(frame)) {
    if (frame.empty()) break;
    cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
    cv::Canny(compBwPart, compEdgePart, 100, 150);
    cv::cvtColor(compositeFrame, compositeFrame, cv::COLOR_GRAY2BGR);
    frame.copyTo(compositeFrame(cv::Rect(0, 0, frameWidth, frameHeight)));
    cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor); // the rest of your code
This is a combination of several issues.
The first problem is that you set the type of compositeFrame to the value returned by vid.get(cv::CAP_PROP_FORMAT). Unfortunately that property doesn't seem entirely reliable -- I've just had it return 0 (meaning CV_8UC1) after opening a color video, and then getting 3 channel (CV_8UC3) frames. Since you want to have the compositeFrame the same type as the input frame, this won't work.
To work around it, instead of using those properties, I'd lazy-initialize compositeFrame and the 3 ROIs after receiving the first frame (based on its dimensions and type).
The next set of problems lies in those two statements:
cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
cv::Canny(compBwPart, compEdgePart, 100, 150);
In this case the assumption is that frame is BGR (since you're trying to convert it), meaning compositeFrame and its ROIs are also BGR. Unfortunately, in both cases you're writing a grayscale image into a BGR ROI. This causes a reallocation, and the target Mat ceases to be a ROI of compositeFrame.
To correct this, use temporary Mats for the grayscale data, and use cvtColor to turn it back to BGR to write into the ROIs.
A similar problem lies in the following statement:
compOrigPart = frame;
That's a shallow copy, meaning it will just make compOrigPart another reference to frame (and therefore it will cease to be a ROI of compositeFrame).
What you need is a deep copy, using copyTo (note that the data types still need to match, but that was fixed earlier).
Finally, even though you try to be flexible regarding the type of the input video (judging by the vid.get(cv::CAP_PROP_FORMAT)), the rest of the code really assumes that the input is 3 channel, and will break if it isn't.
At the least, there should be some assertion to cover this expectation.
Putting this all together:
#include <opencv2/opencv.hpp>
int main()
{
    cv::VideoCapture vid("./Vid.avi");
    if (!vid.isOpened()) return -1;
    cv::Scalar fontColor(250, 250, 250);
    cv::Point textPos(20, 20);
    cv::Mat frame, frame_gray, edges_gray;
    cv::Mat compositeFrame;
    cv::Mat compOrigPart, compBwPart, compEdgePart; // ROIs
    while (vid.read(frame)) {
        if (frame.empty()) break;
        if (compositeFrame.empty()) {
            // The rest of code assumes video to be BGR (i.e. 3 channel)
            CV_Assert(frame.type() == CV_8UC3);
            // Lazy initialize once we have the first frame
            compositeFrame = cv::Mat(frame.rows, frame.cols * 3, frame.type());
            compOrigPart = compositeFrame(cv::Range::all(), cv::Range(0, frame.cols));
            compBwPart = compositeFrame(cv::Range::all(), cv::Range(frame.cols, frame.cols * 2));
            compEdgePart = compositeFrame(cv::Range::all(), cv::Range(frame.cols * 2, frame.cols * 3));
        }
        cv::cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);
        cv::Canny(frame_gray, edges_gray, 100, 150);
        // Deep copy data to the ROI
        frame.copyTo(compOrigPart);
        // The ROI is BGR, so we need to convert back
        cv::cvtColor(frame_gray, compBwPart, cv::COLOR_GRAY2BGR);
        cv::cvtColor(edges_gray, compEdgePart, cv::COLOR_GRAY2BGR);
        cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
        cv::imshow("Original", compOrigPart);
        cv::imshow("BW", compBwPart);
        cv::imshow("Canny", compEdgePart);
        cv::waitKey(33);
    }
}
Screenshot of the composite window (using some random test video off the web):

Watershed boundaries closely surround one area

I am trying to merge two blobs into an average shape in OpenCV. To achieve that, I was planning to use the watershed algorithm on an image preprocessed in the following way:
cv::Mat common, diff, processed, result;
cv::bitwise_and(blob1, blob2, common); //calc common area of the two blobs
cv::absdiff(blob1, blob2, diff); //calc area where they differ
cv::distanceTransform(diff, processed, CV_DIST_L2, 3); //idea here is that the highest intensity
//will be in the middle of the differing area
cv::normalize(processed, processed, 0, 255, cv::NORM_MINMAX, CV_8U); //convert floats to bytes
cv::Mat watershedMarkers, watershedOutline;
common.convertTo(watershedMarkers, CV_32S, 1. / 255, 1); //change background to label 1, common area to label 2
watershedMarkers.setTo(0, processed); //set 0 (unknown) for area where blobs differ
cv::cvtColor(processed, processed, CV_GRAY2RGB); //watershed wants 3 channels
cv::watershed(processed, watershedMarkers);
cv::rectangle(watershedMarkers, cv::Rect(0, 0, watershedMarkers.cols, watershedMarkers.rows), 1); //remove the outline
//draw the boundary in red (for debugging)
watershedMarkers.convertTo(watershedOutline, CV_16S);
cv::threshold(watershedOutline, watershedOutline, 0, 255, CV_THRESH_BINARY_INV);
watershedOutline.convertTo(watershedOutline, CV_8U);
processed.setTo(cv::Scalar(CV_RGB(255, 0, 0)), watershedOutline);
//convert computed labels back to mask (blob), less relevant but shows my ultimate goal
watershedMarkers.convertTo(watershedMarkers, CV_8U);
cv::threshold(watershedMarkers, watershedMarkers, 1, 0, CV_THRESH_TOZERO_INV);
cv::bitwise_not(watershedMarkers * 255, result);
My problem with the results is that the calculated boundary is (almost) always adjacent to the area common to both blobs. Here are the pictures:
Input markers (black = 0, gray = 1, white = 2)
Watershed input image (distance transform result) with resulting outline drawn in red:
I would expect the boundary to go along the maximum-intensity region of the input (that is, along the middle of the differing area). Instead (as you can see) it mostly goes around the area marked as 2, with parts shifted to touch the background (marked as 1). Am I doing something wrong here, or did I misunderstand how watershed works?
Starting from this image:
You can get the correct result simply by passing an all-zero image to the watershed algorithm. The "basin" is then filled with "water" equally, starting from each "side" (just remember to remove the outer border, which the watershed algorithm sets to -1 by default):
Code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
    Mat1i markers(img.rows, img.cols, int(0));
    markers.setTo(1, img == 128);
    markers.setTo(2, img == 255);
    Mat3b image(markers.rows, markers.cols, Vec3b(0, 0, 0));
    markers.convertTo(markers, CV_32S);
    watershed(image, markers);
    Mat3b result;
    cvtColor(img, result, COLOR_GRAY2BGR);
    result.setTo(Scalar(0, 0, 255), markers == -1);
    imshow("Result", result);
    waitKey();
    return 0;
}

OpenCV: how can I interpret the results of inRange?

I am processing video images and I would like to detect if the video contains any pixels of a certain range of red. Is this possible?
Here is the code I am adapting from a tutorial:
#ifdef __cplusplus
- (void)processImage:(Mat&)image;
{
    cv::Mat orig_image = image.clone();
    cv::medianBlur(image, image, 3);
    cv::Mat hsv_image;
    cv::cvtColor(image, hsv_image, cv::COLOR_BGR2HSV);
    cv::Mat lower_red_hue_range;
    cv::Mat upper_red_hue_range;
    cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
    cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);
    // Interpret values here
}
Interpreting values
I would like to detect whether the results from the inRange operations are empty (nil) or not. In other words, I want to know whether the original image contains any pixels whose colour falls within the given lower and upper red ranges. How can I interpret the results?
First you need to OR the lower and upper masks:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Then you can use countNonZero to see if there are any non-zero pixels (i.e. you found something).
int number_of_non_zero_pixels = countNonZero(mask);
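For the plain yes/no check the question asks about, a minimal sketch (the variable name is just illustrative):
bool contains_red = countNonZero(mask) > 0; // true if any pixel fell inside either red hue range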
It could be better to first apply morphological erosion or opening to remove small (probably noisy) blobs:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // or MORPH_ERODE
or find connected components (findContours, connectedComponentsWithStats) and prune / search them according to some criteria:
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
double threshold_on_area = 100.0;
for (size_t i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area < threshold_on_area)
    {
        // don't consider this contour
        continue;
    }
    else
    {
        // do something (e.g. draw a bounding box around the contour)
        Rect box = boundingRect(contours[i]);
        rectangle(hsv_image, box, Scalar(0, 255, 255));
    }
}

OpenCV HoughCircles differences: C vs. C++

I'm getting started with OpenCV (for a software project at university) and found a tutorial for colored circle detection, which I adapted and tested. It was written with OpenCV 1 in C. So I tried to convert it to the OpenCV 2 C++ API, and everything was fine, but I ran into one problem:
The C function cvHoughCircles produces different results than the C++ function HoughCircles.
The C version finds my test circle and has a low rate of false positives, but the C++ version has a significantly higher false-positive rate.
//My C implementation
IplImage *img = cvQueryFrame( capture );
CvSize size = cvGetSize(img);
IplImage *hsv = cvCreateImage(size, IPL_DEPTH_8U, 3);
cvCvtColor(img, hsv, CV_BGR2HSV);
CvMat *mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvInRangeS(hsv, cvScalar(107, 61, 0, 0), cvScalar(134, 255, 255, 0), mask);
/* Copy mask into a grayscale image */
IplImage *hough_in = cvCreateImage(size, 8, 1);
cvCopy(mask, hough_in, NULL);
cvSmooth(hough_in, hough_in, CV_GAUSSIAN, 15, 15, 0, 0);
cvShowImage("mask",hough_in);
/* Run the Hough function */
CvMemStorage *storage = cvCreateMemStorage(0);
CvSeq *circles = cvHoughCircles(hough_in, storage, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 40, 0, 0);
// ... iterating over all found circles
This works pretty well.
//My C++ implementation
cv::Mat img;
cap.read(img);
cv::Size size(img.cols,img.rows);
cv::Mat hsv(size, CV_8UC3);
cv::cvtColor(img, hsv, CV_BGR2HSV);
cv::Mat mask(size.height, size.width, CV_8UC1);
cv::inRange(hsv, cv::Scalar(107, 61, 0, 0), cv::Scalar(134, 255, 255, 0), mask);
GaussianBlur( mask, mask, cv::Size(15, 15), 0, 0 );
/* Run the Hough function */
imshow("mask",mask);
vector<cv::Vec3f> circles;
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 140, 0, 0);
// ... iterating over all found circles
As you can see, I use the same arguments in all calls. I tested this with a webcam and a static sample object. One requirement is to use the OpenCV 2 C++ API.
Does anybody know why I get such different results under equivalent conditions?
Edit
The different threshold values were just a mistake from when I was experimenting to make the results more equal. These screenshots were taken with the threshold set to 40 for both versions:
Screenshots: (Sorry, cannot yet post images)
C and C++ version
I see the Hough parameters in the C version as "..., 100, 40, 0, 0);" while in the C++ version they are "..., 100, 140, 0, 0);". This difference in the accumulator threshold probably explains the difference in results.
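For reference, a sketch of the C++ call with the accumulator threshold (the parameter after the Canny threshold of 100) lowered to 40 so that it matches the C version; all other arguments are taken unchanged from the question:
// dp = 4, minDist = size.height/4, upper Canny threshold = 100,
// accumulator threshold = 40 to match the C version
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
                 4, size.height / 4, 100, 40, 0, 0);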