I'm pretty new to OpenCV, so bear with me. I'm running a Mac Mini with OSX 10.8. I have a program that recognizes colors and displays them in a binary picture (black and white). However, I want to store the number of white pixels as an integer (or float, etc.) so I can compare it with other pixel counts. How can I do this? Here is my current code:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from webcam
    if ( !cap.isOpened() ) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat leftgreen;
        Mat leftred;
        // Left Cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left Red Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0, 0, 150), Scalar(0, 0, 255), leftgreen);
        //imshow("HSVLeftRed", leftgreen);
        // print pixel type
        // Left Green Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(/* still need to find proper min values */), Scalar(/* still need to find proper max values */), leftgreen);
        //imshow("HSVLeftGreen", leftgreen);
        // compare pixel types
    }
    return 0;
}
Thanks in advance!
To count the non-zero pixels, OpenCV has the function cv::countNonZero. It takes as input the image whose non-zero pixels we want to count, and it returns that count as an int. Here is the documentation.
In your case, since all the pixels are either black or white, all the non zero pixels will be white pixels.
This is how to use it:

int cal = countNonZero(image);

Replace image with the Mat you want to count, as per your code.
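For instance, here is a minimal sketch comparing the counts of two masks (redMask and greenMask are hypothetical stand-ins for the outputs of your two inRange calls):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    // Hypothetical masks standing in for your inRange() outputs.
    Mat redMask = Mat::zeros(480, 640, CV_8UC1);
    Mat greenMask = Mat::zeros(480, 640, CV_8UC1);
    circle(redMask, Point(320, 240), 100, Scalar(255), -1);  // fake "red" blob
    circle(greenMask, Point(320, 240), 50, Scalar(255), -1); // fake "green" blob

    int redCount = countNonZero(redMask);     // white pixels in the red mask
    int greenCount = countNonZero(greenMask); // white pixels in the green mask

    if (redCount > greenCount)
        cout << "More red pixels: " << redCount << " vs " << greenCount << endl;
    else
        cout << "More green pixels: " << greenCount << " vs " << redCount << endl;
    return 0;
}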
Related: http://inside.mines.edu/~whoff/courses/EENG512/lectures/HoughInOpenCV.pdf

Hi, I am going through the PDF tutorial in the link above and have run into a problem on page 6 of the slides. After inserting the Canny edge detector, the output should trace out all the edges in the photo, but I cannot reproduce what is shown on page 6.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
    printf("Hello world\n");
    // read an image
    Mat imgInput = imread("a.png");
    // create image window named "My Image"
    namedWindow("My Image");
    // Convert to gray if necessary
    if (imgInput.channels() == 3)
        cv::cvtColor(imgInput, imgInput, CV_BGR2GRAY);
    // Apply Canny edge detector
    Mat imgContours;
    double thresh = 105; // try different values to see effect
    Canny(imgInput, imgContours, 0.4*thresh, thresh); // low, high threshold
    // show the image on window
    imshow("My Image", imgInput);
    // wait for xx ms (0 means wait until keypress)
    waitKey(5000);
    return 0;
}
Also, there is a line double thresh = xxx; // try different values. What values should I put, and what do those values mean?
Thank you
Just replace your imshow call with

imshow("My Image", imgContours);

and a thresh value of around 200 is a reasonable starting point. Change the threshold value, watch its effect on the edge map, and select your threshold accordingly.
imgContours is your output map with all the edges, so that is what you should pass to imshow:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
    printf("Hello world\n");
    // read an image
    Mat imgInput = imread("a.png");
    // create image window named "My Image"
    namedWindow("My Image");
    // Convert to gray if necessary
    if (imgInput.channels() == 3)
        cv::cvtColor(imgInput, imgInput, CV_BGR2GRAY);
    // Apply Canny edge detector
    Mat imgContours;
    double thresh = 105; // try different values to see effect
    Canny(imgInput, imgContours, 0.4*thresh, thresh); // low, high threshold
    // show the edge map on window
    imshow("My Image", imgContours);
    // wait for xx ms (0 means wait until keypress)
    waitKey(5000);
    return 0;
}
Reference:
http://docs.opencv.org/modules/imgproc/doc/feature_detection.html?highlight=canny#canny
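To explore threshold values interactively, here is a minimal sketch that attaches a trackbar to the window (the window and trackbar names, and the a.png path, are assumptions carried over from the code above):

#include <opencv2/opencv.hpp>
using namespace cv;

Mat gray;         // grayscale input, shared with the callback
int thresh = 105; // high threshold; the low one is derived from it

// Re-run Canny whenever the slider moves.
static void onTrackbar(int, void*)
{
    Mat edges;
    Canny(gray, edges, 0.4 * thresh, thresh); // same low/high ratio as above
    imshow("Edges", edges);
}

int main()
{
    gray = imread("a.png", 0); // 0 = load as grayscale
    if (gray.empty())
        return -1;
    namedWindow("Edges");
    createTrackbar("thresh", "Edges", &thresh, 500, onTrackbar);
    onTrackbar(0, 0); // initial render
    waitKey(0);
    return 0;
}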
I've switched from Ubuntu to Windows for my OpenCV project. When displaying an image with imshow, the image itself appears, but the extra details (the x and y coordinates and the intensity values under the cursor) are not shown in the window. The same code works perfectly under the Ubuntu build. Here is my code:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
int main()
{
    cv::Mat imgrgb = imread("C:\\Users\\Len\\Documents\\project\\Images\\1-11.jpg", CV_LOAD_IMAGE_COLOR);
    // Check that the image read is a 3-channel image and not empty
    CV_Assert(imgrgb.channels() == 3);
    if (imgrgb.empty()) {
        std::cout << "Image is empty. Specify correct path" << std::endl;
        return -1;
    }
    cv::Mat img; // grayscale output
    cv::cvtColor(imgrgb, img, CV_BGR2GRAY);
    namedWindow("Test", cv::WINDOW_AUTOSIZE);
    imshow("Test", imgrgb);
    waitKey(0);
}
So, how can I display the intensity values along with the current x and y axis information?
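The coordinate and intensity readout is a feature of Qt-enabled OpenCV builds, which is likely what the Ubuntu build had. If rebuilding OpenCV with Qt support on Windows is not an option, a mouse callback can print the value under the cursor; a minimal sketch (the window name and image path are assumptions):

#include <opencv2/opencv.hpp>
#include <iostream>

// Print the pixel value under the cursor as the mouse moves.
static void onMouse(int event, int x, int y, int, void* userdata)
{
    if (event != cv::EVENT_MOUSEMOVE)
        return;
    const cv::Mat& img = *static_cast<cv::Mat*>(userdata);
    cv::Vec3b bgr = img.at<cv::Vec3b>(y, x); // note: row = y, col = x
    std::cout << "(" << x << "," << y << ") BGR = " << (int)bgr[0] << ","
              << (int)bgr[1] << "," << (int)bgr[2] << std::endl;
}

int main()
{
    cv::Mat img = cv::imread("test.jpg"); // hypothetical path
    if (img.empty())
        return -1;
    cv::namedWindow("Test");
    cv::setMouseCallback("Test", onMouse, &img);
    cv::imshow("Test", img);
    cv::waitKey(0);
    return 0;
}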
I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), which I have already figured out how to do. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but alas it still doesn't work. The following code runs, but instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OSX 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from webcam
    if ( !cap.isOpened() ) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat threshold;
        // Left Cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left Red Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0, 0, 150), Scalar(0, 0, 255), threshold);
        imshow("HSVLeftRed", threshold);
        // Left Green Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(/* still need to find proper min values */), Scalar(/* still need to find proper max values */), threshold);
        imshow("HSVLeftGreen", threshold);
    }
    return 0;
}
You're cropping a 640x720 area, which might not exactly match your capture resolution. Tip: check the actual resolution with capture.get(CAP_PROP_FRAME_WIDTH) and capture.get(CAP_PROP_FRAME_HEIGHT). You might also want to rename Mat threshold to Mat thresholded, since threshold is already the name of an OpenCV function. This is just some ranting :)
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, section on RGB to HSV conversion, for floating-point images the output satisfies

0 <= V <= 1

so in that case you should use a float for your V threshold, i.e. 150 -> 150/255 ~= 0.58 etc. (For 8-bit images, V is instead scaled to the 0..255 range, so integer thresholds apply there.)
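The more immediate problem, though, is that Scalar(0,0,150)..Scalar(0,0,255) pins both hue and saturation to exactly 0, which matches bright, fully desaturated (i.e. white) pixels rather than red ones. Here is a rough sketch with typical 8-bit starting ranges; the exact numbers are assumptions you will need to tune for your lighting:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;
    Mat frame, hsv, redMask, redMask2, greenMask;
    for (;;)
    {
        cap.read(frame);
        if (frame.empty())
            break;
        cvtColor(frame, hsv, CV_BGR2HSV);
        // Red wraps around hue 0 in OpenCV's 0..180 hue scale, so combine two ranges.
        inRange(hsv, Scalar(0, 100, 100), Scalar(10, 255, 255), redMask);
        inRange(hsv, Scalar(170, 100, 100), Scalar(180, 255, 255), redMask2);
        redMask |= redMask2;
        // Green sits around hue 60.
        inRange(hsv, Scalar(45, 100, 100), Scalar(75, 255, 255), greenMask);
        cout << "red: " << countNonZero(redMask)
             << "  green: " << countNonZero(greenMask) << endl;
        imshow("red", redMask);
        imshow("green", greenMask);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}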
I was recently assigned a school project to do some filtering on a live video feed. The idea is that only the biggest object currently in camera view is shown; I am planning to do this through the use of bounding boxes (deleting all objects except for the biggest one).
However, I have very limited coding experience with C++ and OpenCV, so I'm basically just picking code off the net where I can and (attempting to) edit it to fit my purpose.
At the moment I have attempted, without any success (probably because my attempt is horrible), to combine the bounding-boxes tutorial code found here: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
with this code right here (to grayscale each frame caught from the camera):
#include <stdlib.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace std;
using namespace cv;
void thresh_callback(int, void* );
int main(int argc, char** argv)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("edges", 1);
    cout << "Hello Application\n" << endl;
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, COLOR_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
If you guys have any advice on how I should go about this, then please tell! I'm kind of in a desperate mood, lol.
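As a starting point, here is a rough sketch of the "keep only the biggest object" idea, combining the tutorial's findContours/boundingRect steps with the camera loop above (the Canny thresholds and the choice of contour area as the size measure are assumptions to tune):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    Mat frame, gray, edges;
    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;

        cvtColor(frame, gray, COLOR_BGR2GRAY);
        GaussianBlur(gray, gray, Size(7,7), 1.5, 1.5);
        Canny(gray, edges, 0, 30, 3);

        // Find all external contours in the edge map.
        std::vector<std::vector<Point> > contours;
        findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

        // Keep only the contour with the largest area.
        int biggest = -1;
        double maxArea = 0;
        for (size_t i = 0; i < contours.size(); i++)
        {
            double area = contourArea(contours[i]);
            if (area > maxArea) { maxArea = area; biggest = (int)i; }
        }

        // Draw the bounding box of the biggest object only.
        if (biggest >= 0)
            rectangle(frame, boundingRect(contours[biggest]), Scalar(0, 255, 0), 2);

        imshow("biggest object", frame);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}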
I am just starting to use the OpenCV library, and one of my first pieces of code is a simple negative-transform function.
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
void negative(Mat& input, Mat& output)
{
    int row = input.rows;
    int col = input.cols;
    int x, y;
    uchar *input_data = input.data;
    uchar *output_data = output.data;
    for (x = 0; x < row; x++)
        for (y = 0; y < col; y++)
            output_data[x*col+y] = 255 - input_data[x*col+y];
    cout << x << y;
}
int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );
    Mat output = image.clone();
    negative(image, output);
    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", output );
    waitKey(0);
    return 0;
}
I added the extra cout line to check whether the entire image is processed. The problem I am facing with my output image is that the negative transform is applied only to the top half of the image. Also, the values of x and y are displayed only after I press a key (i.e. once the image is shown).
My question is: why is the window being called before the function is executed?
The fundamental problem in your code is that you are reading in a color image but you try to process it as grayscale. Therefore the indices shift and what really happens is that you only process the first third of the image (because of the 3-channel format).
See opencv imread manual
flags –
Specifies color type of the loaded image:
>0 the loaded image is forced to be a 3-channel color image
=0 the loaded image is forced to be grayscale
You've specified flags=1.
Here's a way of doing it:
Vec3b v(255, 255, 255);
for (int i = 0; i < input.rows; i++) // iterate over rows
{
    for (int j = 0; j < input.cols; j++) // iterate over columns
    {
        output.at<Vec3b>(i,j) = v - input.at<Vec3b>(i,j);
    }
}
Note that here Vec3b is a 3-channel pixel value as opposed to uchar which is a 1-channel value.
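Incidentally, the original byte loop can also be repaired by iterating over all bytes instead of rows*cols; a sketch keeping the uchar-pointer style of the question (it assumes a continuous Mat, which holds for images loaded with imread and for clone()d copies):

void negative(Mat& input, Mat& output)
{
    // Walk every byte: rows * cols * channels for a 3-channel image.
    size_t total = input.total() * input.channels();
    uchar *input_data = input.data;
    uchar *output_data = output.data;
    for (size_t k = 0; k < total; k++)
        output_data[k] = 255 - input_data[k];
}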
For a more efficient implementation you can have a look at Mat::ptr<Vec3b>(i).
EDIT:
If you are processing lots of images, the fastest way to do a general iteration over the pixels is:
Vec3b v(255, 255, 255); // or maybe Scalar v(255, 255, 255), I'm not sure
for (int i = 0; i < input.rows; i++)
{
    Vec3b *p = input.ptr<Vec3b>(i);  // pointer to row i of the input
    Vec3b *q = output.ptr<Vec3b>(i); // pointer to row i of the output
    for (int j = 0; j < input.cols; j++)
    {
        q[j] = v - p[j];
    }
}
See "The OpenCV Tutorials" -- "The efficient way" section.
Try writing:

cout << x << y << endl;

The function is called first, but the output is not flushed immediately, which is why your image appears before the text is written. By adding an endline, you force a flush. You could also use flush(cout); instead of adding an endline.
For the negative, you can use the OpenCV function subtract() directly:
subtract(Scalar(255, 255, 255), input, output);
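Since 255 - x is exactly a bitwise inversion for 8-bit values, bitwise_not(input, output); (or output = ~input; using Mat expressions) produces the same negative.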