OpenCV C++ Color Detection and Print (Mac)

I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), and have already figured out how to do this. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but alas it still doesn't work. The following code runs; however, instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OSX 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from webcam
    if ( !cap.isOpened() ) // if not successful, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat threshold;
        // Left cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left red detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), threshold);
        imshow("HSVLeftRed", threshold);
        // Left green detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(/* still need to find proper min values */), Scalar(/* still need to find proper max values */), threshold);
        imshow("HSVLeftGreen", threshold);
    }
    return 0;
}

You're cropping a 640x720 region, which might not match your actual frame size. Tip: check your real capture resolution with cap.get(CAP_PROP_FRAME_WIDTH) and cap.get(CAP_PROP_FRAME_HEIGHT). You might also want to rename Mat threshold to Mat thresholded, since threshold shadows the OpenCV function of the same name. This is just some ranting :)
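For example, a minimal sketch (reusing the cap and image variables from the question) that derives the left-half crop from the actual capture size instead of hard-coding it:

    // Query the real capture resolution instead of hard-coding 640x720
    int frameWidth  = (int)cap.get(CAP_PROP_FRAME_WIDTH);
    int frameHeight = (int)cap.get(CAP_PROP_FRAME_HEIGHT);
    // Left half of the frame, whatever the resolution turns out to be
    Mat leftimg = image(Rect(0, 0, frameWidth / 2, frameHeight));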
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, section on RGB to HSV conversion,
On output 0 <= V <= 1.
so you should use a float for your V threshold, i.e. 150 -> 150/255 ≈ 0.59, etc.
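As a rough sketch of what a red threshold can look like on the usual 8-bit scale (the exact bounds here are assumptions you would tune for your lighting): red sits at both ends of OpenCV's 0-179 hue range, so a common approach is to take two inRange masks and OR them together:

    Mat maskLow, maskHigh, redMask;
    cvtColor(leftimg, HSV, CV_BGR2HSV);
    // Red wraps around hue 0, so threshold both ends of the hue range.
    // These bounds are illustrative; tune S and V for your scene.
    inRange(HSV, Scalar(0, 100, 100),   Scalar(10, 255, 255),  maskLow);
    inRange(HSV, Scalar(170, 100, 100), Scalar(179, 255, 255), maskHigh);
    bitwise_or(maskLow, maskHigh, redMask);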

Related

Calculate the distance to a red point with opencv

I'm a newbie with OpenCV. I just managed to install it and set it up with Visual Studio 2013. I tested it with a sample live stream from my laptop's camera and it works. Now I want to calculate the distance from the webcam to a red laser spot that will be in the middle of the screen (live stream). Where should I start? I know that I must find the red pixel in the middle of the screen, but I don't know how to do that or which functions I can use. Some help, please?
The live-stream code that works is shown below:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>

int main()
{
    // Data structure to store the camera capture
    CvCapture* cap = cvCreateCameraCapture(0);
    // Image variable to store a frame
    IplImage* frame;
    // Window to show the live feed
    cvNamedWindow("Imagine Live", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        // Load the next frame
        frame = cvQueryFrame(cap);
        // If the frame is not loaded, break out of the loop
        if (!frame)
            break;
        // Show the current frame
        cvShowImage("Imagine Live", frame);
        // Escape sequence: if the key pressed is Esc (ASCII 27), break out of the loop
        char c = cvWaitKey(33);
        if (c == 27)
            break;
    }
    // Clean up
    cvReleaseCapture(&cap);
    cvDestroyAllWindows();
}
Your red dot is most likely going to show up as total white in the camera stream, so I would suggest the following (a C++ sketch follows these steps):
Convert to grayscale using cvtColor().
Threshold using threshold(); for parameters, use something like thresh=253, maxval=255, and mode THRESH_BINARY. That should give you an image that is all black with a small white dot where your laser is.
Then you can use findContours() to locate the dot in the image. Get the boundingRect() of a contour and calculate its center to get the precise coordinates of your dot.
Also as has been already mentioned, do not use the deprecated C API, use the C++ API instead.
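A minimal sketch of that pipeline with the C++ API might look like this (the threshold value and window name are assumptions you would tune/rename):

    #include "opencv2/opencv.hpp"
    using namespace cv;

    int main()
    {
        VideoCapture cap(0);
        if (!cap.isOpened()) return -1;
        Mat frame, gray, binary;
        while (true)
        {
            cap >> frame;
            if (frame.empty()) break;
            // 1. Convert to grayscale
            cvtColor(frame, gray, COLOR_BGR2GRAY);
            // 2. Keep only near-saturated pixels (the laser dot)
            threshold(gray, binary, 253, 255, THRESH_BINARY);
            // 3. Locate the dot via its contour
            std::vector<std::vector<Point> > contours;
            findContours(binary, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
            if (!contours.empty())
            {
                // Assume the first contour is the dot; with a clean threshold
                // there should be only one
                Rect box = boundingRect(contours[0]);
                Point center(box.x + box.width / 2, box.y + box.height / 2);
                circle(frame, center, 10, Scalar(0, 255, 0), 2); // mark the dot
            }
            imshow("Imagine Live", frame);
            if (waitKey(33) == 27) break;
        }
        return 0;
    }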

Using custom kernel in opencv 2DFilter - causing crash ... convolution how?

Thought I'd try my hand at a little (auto)correlation/convolution today in OpenCV and make my own 2D filter kernel.
Following OpenCV's 2D filter tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    // Loading the source image
    Mat src;
    src = imread( "1.png" );

    // Output image of the same size and the same number of channels as src
    Mat dst;
    //Mat dst = src.clone(); // didn't help...

    // Desired depth of the destination image;
    // negative so dst will have the same depth as src
    int ddepth = -1;

    // The convolution kernel, a single-channel floating point matrix:
    Mat kernel = imread( "kernel.png" );
    kernel.convertTo(kernel, CV_32F); // << not working
    //normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); // doesn't help
    //cout << kernel.size() << endl; // ... gives 11, 11

    // However, the example from the tutorial that does work:
    //kernel = Mat::ones( 11, 11, CV_32F ) / (float)(11*11);

    // Default value (-1,-1) here means that the anchor is at the kernel center
    Point anchor = Point(-1,-1);
    // Value added to the filtered pixels before storing them in dst
    double delta = 0;

    // Alright, let's do this...
    filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT);
    imshow("Source", src); // << unhandled exception here
    imshow("Kernel", kernel);
    imshow("Destination", dst);
    waitKey(1000000);
    return 0;
}
As you can see, using the tutorial's kernel works fine, but my image will crash the program. I've tried changing the bit depth, normalizing, checking sizes, and commenting out lots of blocks to see where it fails, but haven't cracked it yet.
The source image is '1.png' and the kernel I want is 'kernel.png' (images not shown here).
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (Autocorrelation, I think that's called?)
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error; it would help people answer you rather than guess at why it crashes. Anyway, I have posted below the likely error and the solution for convolution with filter2D.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero, file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: Your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) reads the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that could crash. The kernel size should be odd, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so we can help you out.
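A minimal sketch of that fix, assuming the file names from the question:

    // Read both the source and the kernel as single-channel grayscale
    Mat src = imread("1.png", 0);
    Mat kernelImg = imread("kernel.png", 0);

    // Convert the kernel to float and scale its weights to sum to 1,
    // so the filtered result stays in the source's intensity range
    Mat kernel;
    kernelImg.convertTo(kernel, CV_32F);
    kernel /= sum(kernel)[0];

    Mat dst;
    filter2D(src, dst, -1, kernel, Point(-1,-1), 0, BORDER_DEFAULT);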

OpenCV C++ Mat to Integer

I'm pretty new to OpenCV, so bear with me. I'm running a Mac Mini with OSX 10.8. I have a program that recognizes colors and displays them as a binary (black-and-white) picture. However, I want to store the number of white pixels as an integer (or float, etc.) to compare with other pixel counts. How can I do this? Here is my current code:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from webcam
    if ( !cap.isOpened() ) // if not successful, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat leftgreen;
        Mat leftred;
        // Left cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left red detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), leftred);
        //imshow("HSVLeftRed", leftred);
        // print pixel count
        // Left green detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(/* still need to find proper min values */), Scalar(/* still need to find proper max values */), leftgreen);
        //imshow("HSVLeftGreen", leftgreen);
        // compare pixel counts
    }
    return 0;
}
Thanks in advance!
To count the non-zero pixels, OpenCV has the function cv::countNonZero. It takes as input the image whose non-zero pixels we want to count, and its output is the number of non-zero pixels (an int). Here is the documentation.
In your case, since all the pixels are either black or white, all the non-zero pixels will be white pixels.
This is how to use it:
int cal = countNonZero(image);
Change image as per your code.
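Applied to the question's loop, a sketch of the comparison could look like this (leftred and leftgreen are the masks from the question; the printout format is an assumption about the desired output):

    // Count the white pixels in each binary mask
    int redCount   = countNonZero(leftred);
    int greenCount = countNonZero(leftgreen);

    // Print and compare the two counts
    cout << "red: " << redCount << "  green: " << greenCount << endl;
    if (redCount > greenCount)
        cout << "More red than green on the left side" << endl;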

Multiplication of 2 images where the images have different models in OpenCV

I'm trying to multiply two images of different color models, in my case HSV and YCrCb.
I get the "vector is out of bounds" error every time.
I have checked the sizes of the input images being multiplied, and the numbers of rows and columns. I know the values can exceed 255.
I tried to implement the method from "opencv - image multiplication", but that code has way too many Mats that have to be initialized. This also leads me to ask whether images with more than one channel can be multiplied at all. I also tried direct multiplication and it doesn't work, so I tried multiplying channel-wise. To make things easier I used a loop, but then the error occurred.
A short summary of the code and the reason for doing it: I'm using it for skin detection but want to further reduce noise. I think this can be done by multiplying the two output images generated by the threshold operations (for HSV and YCrCb). Since these images have different noise, the product should have even less noise (I have seen the outputs on different screens; the overlapping regions are very small), so this should detect skin color at almost all times with minimal noise, and thus help in tracking skin better.
The code given below is not complete because it never executes to the end. After this there are morphological and dilation operations, and that's it.
This is my first time asking a question on Stack Overflow and I'm still learning OpenCV. Sorry if I have been over-descriptive; all suggestions are welcome. Thank you.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;
using namespace std;

char key;
Mat image, hsv, ycr;
vector<Mat> channels, ycrs, threshold_output;

int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    while (1)
    {
        cap >> image;
        cvtColor( image, ycr, CV_BGR2YCrCb ); // convert into YCrCb
        cvtColor( image, hsv, CV_BGR2HSV );   // convert into HSV
        Mat imgThresholded;
        Mat imgThresholded1;
        inRange(ycr, Scalar(0, 140, 105), Scalar(255, 165, 135), imgThresholded1); // YCrCb range
        inRange(hsv, Scalar(0, 48, 150), Scalar(20, 150, 255), imgThresholded);    // HSV range
        split(imgThresholded1, channels);
        split(imgThresholded, ycrs);
        for( int i = 0; i < 3; i++ )
        {
            multiply(channels[i], ycrs[i], threshold_output[i], 1, -1);
        } // code breaks here
Even if the input to inRange is multi-channel, the output of inRange will be a single-channel CV_8UC1.
The reason is that inRange computes a Cartesian intersection:
Result (x, y) is true (a uchar of 255) if ALL of these are true:
For the first channel, lower[0] <= img(x, y)[0] <= upper[0], AND
For the second channel, lower[1] <= img(x, y)[1] <= upper[1], AND
And so on.
In other words, after it has checked each channel's pixel values against the lower and upper bounds, the logical result is "boiled down" by a logical-AND operation over the channels of the image.
"Boiled down" is my colloquial way of referring to a reduction, or fold, where a function accepts an arbitrary number of arguments and "reduces" them to a single value: summation, multiplication, string concatenation, etc.
It is therefore not necessary to use cv::split on the output of cv::inRange. In fact, because the output has only one channel, accessing channels[1] or ycrs[1] is undefined behavior: typically an assertion failure or exception in a debug build, and a crash or memory corruption in a release build. (Likewise, threshold_output is never sized, so threshold_output[i] is out of bounds as well.)
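Since both inRange outputs are already single-channel masks, one way to get their intersection (a sketch, reusing the variable names from the question) is to skip split() entirely and combine them directly:

    // Both masks are CV_8UC1 with values 0 or 255, so no split() is needed.
    Mat combined;
    // Keep only the pixels that are skin-colored in BOTH color spaces
    bitwise_and(imgThresholded, imgThresholded1, combined);
    // Equivalent via multiplication: scale by 1/255 so 255*255 maps back to 255
    // multiply(imgThresholded, imgThresholded1, combined, 1.0 / 255.0);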

C/C++ OpenCV video processing

Good day everyone! Currently I'm working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few sample codes and test them out. The first one uses C OpenCV and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;
    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");
    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        // A new image at half size
        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels);
        cvResize(frame, temp, CV_INTER_CUBIC); // resize
        cvSaveImage("test.jpg", temp, 0);      // save this image
        cvShowImage("Capture", frame);         // display the frame
        cvReleaseImage(&temp);
        if (cvWaitKey(5000) == 27) // Escape key; wait 5 sec per capture
            break;
    }
    // Note: frames returned by cvQueryFrame are owned by the capture
    // and must not be released with cvReleaseImage
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses C++ OpenCV:
#include "opencv2/opencv.hpp"
#include <string>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat edges;
    //namedWindow("edges", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_RGB2XYZ);
        imshow("edges", edges);
        //imshow("edges2", frame);
        //imwrite("test1.jpg", frame);
        if (waitKey(1000) >= 0) break;
    }
    // The camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises regarding an 'access violation reading location'. The same goes for imread() when I try to load an image.
So, the thing I wanted to ask: is it possible to use most of the functionality with C OpenCV? Or is it necessary to use C++ OpenCV? If yes, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. (Example images omitted.)
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread(), and imwrite() expect.
Quoted from there.
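As a small illustration of that channel order (a sketch; the file names are arbitrary):

    // imread() returns pixels in B, G, R order, and imwrite() expects the
    // same, so an image read with imread() can be written back unchanged.
    Mat img = imread("test1.jpg");   // BGR in memory
    Vec3b px = img.at<Vec3b>(0, 0);  // px[0]=blue, px[1]=green, px[2]=red
    imwrite("copy.jpg", img);        // correct: imwrite expects BGR

    // Only convert when something downstream expects RGB:
    Mat rgb;
    cvtColor(img, rgb, CV_BGR2RGB);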