Draw a vertical line on a VideoCapture frame (OpenCV, C++)

I followed a tutorial on face detection using C++ and Visual Studio 2012, and it worked well. But when I then tried to add a vertical line to the video capture (from the webcam), nothing happened. I don't know exactly what went wrong, and I would really appreciate your help. Here is the code I'm working on:
int main() {
    VideoCapture cap(0); // Open default camera
    Mat frame;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    line(frame, Point(frame.cols / 2 + 1, 0),
         Point(frame.cols / 2 + 1, frame.rows - 1),
         Scalar(255, 0, 128));
    // Load preconstructed classifier
    face_cascade.load("C:\\opencv24\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml");
    while (cap.read(frame)) {
        detectFaces(frame); // Call function to detect faces
        if (waitKey(30) >= 0) // Pause key
            break;
    }
    return 0;
}

After some modifications to the code I finally got the line drawn. Here is the running code:
while (cap.read(frame)) {
    // Set the capture resolution (better done once, before the loop)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    //cvtColor(frame, frame, COLOR_BGR2GRAY);
    // Draw the vertical line on the freshly captured frame
    line(frame, Point(frame.cols / 2 + 1, 0),
         Point(frame.cols / 2 + 1, frame.rows - 1),
         Scalar(255, 0, 0));
    imshow("edges", frame);
    detectFaces(frame); // Call function to detect faces
    if (waitKey(30) >= 0) // Pause key
        break;
}
return 0;
}
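For reference, here is a minimal, self-contained sketch of the same idea (face detection omitted for brevity; the window name "camera" is just illustrative): set the resolution once before the loop, and draw the line only after a frame has been read, so frame.cols and frame.rows are valid.

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    VideoCapture cap(0);                   // Open the default camera
    if (!cap.isOpened())
        return -1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640); // Set the resolution once
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    Mat frame;
    while (cap.read(frame)) {              // frame now has a valid size
        line(frame, Point(frame.cols / 2, 0),
             Point(frame.cols / 2, frame.rows - 1),
             Scalar(255, 0, 0), 2);        // Vertical line down the middle
        imshow("camera", frame);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}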

Related

OpenCV findContours() error with VS2015 and OpenCV 2.4.13

I'm having a problem with OpenCV findContours() at runtime; I don't quite understand what the error is. The build itself completes without errors.
Here is the error message (originally posted as a screenshot, not reproduced here).
Here is my code:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    VideoCapture cap(0); //capture the video from web cam
    if (!cap.isOpened()) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("Control", CV_WINDOW_AUTOSIZE); //create a window called "Control"
    int iLowH = 0;
    int iHighH = 179;
    int iLowS = 0;
    int iHighS = 255;
    int iLowV = 0;
    int iHighV = 255;
    //Create trackbars in "Control" window
    cvCreateTrackbar("LowH", "Control", &iLowH, 179); //Hue (0 - 179)
    cvCreateTrackbar("HighH", "Control", &iHighH, 179);
    cvCreateTrackbar("LowS", "Control", &iLowS, 255); //Saturation (0 - 255)
    cvCreateTrackbar("HighS", "Control", &iHighS, 255);
    cvCreateTrackbar("LowV", "Control", &iLowV, 255); //Value (0 - 255)
    cvCreateTrackbar("HighV", "Control", &iHighV, 255);
    while (true)
    {
        Mat imgOriginal;
        bool bSuccess = cap.read(imgOriginal); // read a new frame from video
        if (!bSuccess) //if not success, break loop
        {
            cout << "Cannot read a frame from video stream" << endl;
            break;
        }
        Mat imgHSV;
        cvtColor(imgOriginal, imgHSV, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV
        Mat imgThresholded;
        inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); //Threshold the image
        //morphological opening (remove small objects from the foreground)
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        //morphological closing (fill small holes in the foreground)
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        vector<vector<Point> > contours;
        vector<Vec4i> hierarchy;
        findContours(imgThresholded, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
        imshow("Thresholded Image", imgThresholded); //show the thresholded image
        imshow("Original", imgOriginal); //show the original image
        if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
Actually, this is a known compatibility issue between VS2013/VS2015 and this OpenCV build. The temporary workaround that worked for me is to add the two lines shown below just before the return:
cv::imshow("img", imgThresholded); // imgThresholded is the thresholded Mat from the code above
cv::destroyAllWindows();
I also ran into these errors, but while developing in Visual Studio 2010, and I spent a lot of time on them.
In the end I found that the reason was simply that I linked against the wrong OpenCV libraries: you need the vc10 builds for VS2010 and the vc14 builds for VS2015.
Switching to the matching OpenCV libraries fixes the error.
Building the vc10 OpenCV libraries also cost me a lot of time. If someone has tried but still can't get OpenCV to build successfully, please contact me; I may be able to help.
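For example (the paths and version numbers below are illustrative, not taken from the question): with a prebuilt OpenCV 2.4.13 package the import libraries live under build\x64\<compiler>\lib, and the compiler folder has to match your Visual Studio version.

// Illustrative Visual Studio settings for a 64-bit Debug build with VS2015 (vc14)
// and OpenCV 2.4.13; for VS2010 the folder would be vc10, and Release builds
// use the library names without the trailing 'd'.
//
//   Additional Include Directories:  C:\opencv\build\include
//   Additional Library Directories:  C:\opencv\build\x64\vc14\lib
//   Linker -> Input -> Additional Dependencies:
//       opencv_core2413d.lib
//       opencv_highgui2413d.lib
//       opencv_imgproc2413d.lib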

OpenCV: Changing code that uses video to use an image for color detection with trackbars

I have a project where I need to detect specific colors in images of leaves, like green, brown and yellow.
I found this tutorial (http://opencv-srf.blogspot.com.br/2010/09/object-detection-using-color-seperation.html) that explains how to create real-time trackbars to find the best values for that, but it uses images from a webcam, and I want to use it with still pictures.
Can you please help me do that?
Thank you.
Here is the code for thresholding an HSV image, selecting the ranges with trackbars.
Note that, differently from the video version (as described in the linked tutorial), I used morphologyEx to perform the morphological operations, and replaced the C-style cvCreateTrackbar with the C++ function createTrackbar.
The comments in the code should be clear; please ping me if something is not.
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Load BGR image
    Mat3b bgr = imread("path_to_image");
    if (bgr.empty())
    {
        cout << "Cannot open the image" << endl;
        return -1;
    }
    // Transform to HSV
    Mat3b hsv;
    cvtColor(bgr, hsv, COLOR_BGR2HSV);
    // Create a window called "Control"
    namedWindow("Control", CV_WINDOW_AUTOSIZE);
    // Set starting values for ranges
    int iLowH = 0;
    int iHighH = 179;
    int iLowS = 0;
    int iHighS = 255;
    int iLowV = 0;
    int iHighV = 255;
    //Create trackbars in "Control" window
    createTrackbar("LowH", "Control", &iLowH, 179); //Hue (0 - 179)
    createTrackbar("HighH", "Control", &iHighH, 179);
    createTrackbar("LowS", "Control", &iLowS, 255); //Saturation (0 - 255)
    createTrackbar("HighS", "Control", &iHighS, 255);
    createTrackbar("LowV", "Control", &iLowV, 255); //Value (0 - 255)
    createTrackbar("HighV", "Control", &iHighV, 255);
    //Show the original image
    imshow("Original", bgr);
    // Create kernel for morphological operation
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    // Infinite loop, until the user presses "esc"
    while (true)
    {
        Mat mask;
        inRange(hsv, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), mask); //Threshold the image
        //morphological opening (remove small objects from the foreground)
        morphologyEx(mask, mask, MORPH_OPEN, kernel);
        //morphological closing (fill small holes in the foreground)
        morphologyEx(mask, mask, MORPH_CLOSE, kernel);
        //Show the thresholded image
        imshow("Thresholded Image", mask);
        if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
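If you also want to see or reuse the result, a small addition inside the while loop (after the morphology calls) can apply the mask to the original image and print the currently selected range. This is a sketch added here for illustration, not part of the original answer; it uses the variables bgr, mask and the iLow*/iHigh* trackbar values defined above.

// Show only the pixels that fall inside the selected HSV range
Mat detected = Mat::zeros(bgr.size(), bgr.type());
bgr.copyTo(detected, mask);   // copy only where the mask is non-zero
imshow("Detected regions", detected);

// Print the current range so it can be hard-coded once it looks right
cout << "H: [" << iLowH << ", " << iHighH << "]  "
     << "S: [" << iLowS << ", " << iHighS << "]  "
     << "V: [" << iLowV << ", " << iHighV << "]" << endl;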

Simple way to get Z "depth" in OpenCV

I need to detect a blue object and a red object with two different cameras. The task for now is to locate each object's position in 3D space, which means that for each object we need its x, y, z coordinates. I've seen this video and here, which do exactly what I'm trying to do, but there was no sample code in the case of the first video. My code currently looks like this; it gets me the x, y of the red/blue object, but no depth:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv/highgui.h>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
//int func(int argc, char** argv)
{
    VideoCapture cap(0);  //capture the video from webcam
    VideoCapture cap1(1); //capture the video from the external camera
    if (!cap.isOpened()) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    if (!cap1.isOpened()) // if not success, exit program
    {
        cout << "Cannot open the External camera" << endl;
        return -1;
    }
    namedWindow("Control", CV_WINDOW_AUTOSIZE); //create a window called "Control"
    int iLowH = 170;
    int iHighH = 179;
    int iLowS = 150;
    int iHighS = 255;
    int iLowV = 60;
    int iHighV = 255;
    //Create trackbars in "Control" window to control the range of red detection
    createTrackbar("LowH", "Control", &iLowH, 179); //Hue (0 - 179)
    createTrackbar("HighH", "Control", &iHighH, 179);
    createTrackbar("LowS", "Control", &iLowS, 255); //Saturation (0 - 255)
    createTrackbar("HighS", "Control", &iHighS, 255);
    createTrackbar("LowV", "Control", &iLowV, 255); //Value (0 - 255)
    createTrackbar("HighV", "Control", &iHighV, 255);
    int iLastX = -1; //last known coordinates of the red object
    int iLastY = -1;
    int iLastX1 = -1;
    int iLastY1 = -1;
    //Capture a temporary image from both cameras to obtain the size
    Mat imgTmp;
    cap.read(imgTmp);
    cap1.read(imgTmp);
    //Create a black image with the same size as the camera output
    Mat imgLines = Mat::zeros(imgTmp.size(), CV_8UC3);
    Mat imgLines1 = Mat::zeros(imgTmp.size(), CV_8UC3);
    //Loop that continuously captures frames from video
    while (true)
    {
        Mat imgOriginal;
        Mat imgOriginal1;
        bool bSuccess = cap.read(imgOriginal);   // read a new frame from the webcam
        bool bSuccess1 = cap1.read(imgOriginal1); // read a new frame from the external camera
        if (!bSuccess || !bSuccess1) //if not success, break loop
        {
            cout << "Cannot read a frame from video stream" << endl;
            break;
        }
        //Webcam code for image and tracking/detecting
        Mat imgHSV;
        cvtColor(imgOriginal, imgHSV, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV to control the range of color to detect
        Mat imgThresholded;
        inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); //Threshold the image to the colors within the specified range
        //morphological opening (removes noise and similarly colored objects appearing in the thresholded image)
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        //morphological closing (removes noise appearing inside our object in the thresholded image)
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        //Calculate the moments of the thresholded image to calculate the object position
        Moments oMoments = moments(imgThresholded);
        double dM01 = oMoments.m01;
        double dM10 = oMoments.m10;
        double dArea = oMoments.m00;
        // if the area <= 10000, assume there is no object in the image and the non-zero area is just noise
        if (dArea > 10000)
        {
            //calculate the position of the ball
            int posX = dM10 / dArea;
            int posY = dM01 / dArea;
            if (iLastX >= 0 && iLastY >= 0 && posX >= 0 && posY >= 0)
            {
                //Draw a red line from the previous point to the current point
                line(imgLines, Point(posX, posY), Point(iLastX, iLastY), Scalar(0, 0, 255), 2);
            }
            iLastX = posX; //current point becomes the last known point and the loop continues
            iLastY = posY;
        }
        imshow("Thresholded Image", imgThresholded); //show the thresholded image
        imgOriginal = imgOriginal + imgLines;
        imshow("Original", imgOriginal); //show the original image with the tracking lines, if any
        //External camera code for tracking/detecting
        Mat imgHSV1;
        cvtColor(imgOriginal1, imgHSV1, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV
        Mat imgThresholded1;
        inRange(imgHSV1, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded1); //Threshold the image
        //morphological opening (removes noise and similarly colored objects appearing in the thresholded image)
        erode(imgThresholded1, imgThresholded1, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        dilate(imgThresholded1, imgThresholded1, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        //morphological closing (removes noise appearing inside our object in the thresholded image)
        dilate(imgThresholded1, imgThresholded1, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        erode(imgThresholded1, imgThresholded1, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        //Calculate the moments of the thresholded image to calculate the object position
        Moments oMoments1 = moments(imgThresholded1);
        double dM011 = oMoments1.m01;
        double dM101 = oMoments1.m10;
        double dArea1 = oMoments1.m00;
        // if the area <= 10000, assume there is no object in the image and the non-zero area is just noise
        if (dArea1 > 10000)
        {
            //calculate the position of the ball
            int posX1 = dM101 / dArea1;
            int posY1 = dM011 / dArea1;
            if (iLastX1 >= 0 && iLastY1 >= 0 && posX1 >= 0 && posY1 >= 0)
            {
                //Draw a red line from the previous point to the current point
                line(imgLines1, Point(posX1, posY1), Point(iLastX1, iLastY1), Scalar(0, 0, 255), 2);
            }
            iLastX1 = posX1;
            iLastY1 = posY1;
        }
        imshow("Thresholded Image 2", imgThresholded1); //show the thresholded image
        imgOriginal1 = imgOriginal1 + imgLines1;
        imshow("Original 2", imgOriginal1); //show the original image
        if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
The answer to your question is stereo vision. You need to perform a stereo calibration of the two cameras in order to obtain the transformation matrices that let you produce a depth map of the scene from the two views. OpenCV provides functions to do that.
Here is a tutorial to begin with.
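As a rough illustration of the idea (an addition for this write-up, not part of the original answer): once the two cameras are calibrated and horizontally aligned, you can estimate the depth of the tracked object from the disparity between its centroids in the two views. In the question's code the last known centroids are iLastX/iLastY (webcam) and iLastX1/iLastY1 (external camera), so a snippet like the following, placed inside the capture loop, would give a crude Z estimate. The values fx and baselineMeters are placeholders that must come from your stereo calibration.

// Minimal sketch: depth of the tracked object from the horizontal disparity
// between the two cameras. Assumes rectified, horizontally aligned cameras.
const double fx = 700.0;            // focal length in pixels (placeholder, from calibration)
const double baselineMeters = 0.10; // distance between the cameras (placeholder, from calibration)
if (iLastX >= 0 && iLastX1 >= 0)    // both cameras have seen the object
{
    double disparity = (iLastX > iLastX1) ? (iLastX - iLastX1) : (iLastX1 - iLastX);
    if (disparity > 0.5)            // avoid dividing by (almost) zero
    {
        double Z = fx * baselineMeters / disparity; // depth in meters
        cout << "Object at x=" << iLastX << " y=" << iLastY
             << " Z=" << Z << " m" << endl;
    }
}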

OpenCV crop live feed from camera

How can I properly get one resolution feed from a camera in OpenCV (640x480), but cut it in half and display only one half of the frame (320x240)? So not scaling down, but actually cropping. I am using OpenCV 2.4.5, VS2010 and C++.
This fairly standard code gets a 640x480 input resolution, and I made some changes to crop it to 320x240. Should I use Mat instead of IplImage, and if so, what would be the best way?
#include "stdafx.h"
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace std;
char key;
int main()
{
cvNamedWindow("Camera_Output", 1); //Create window
CvCapture* capture = cvCaptureFromCAM(1); //Capture using camera 1 connected to system
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
while(1){ //Create loop for live streaming
IplImage* framein = cvQueryFrame(capture); //Create image frames from capture
/* sets the Region of Interest - rectangle area has to be __INSIDE__ the image */
cvSetImageROI(framein, cvRect(0, 0, 320, 240));
/* create destination image - cvGetSize will return the width and the height of ROI */
IplImage *frameout = cvCreateImage(cvGetSize(framein), framein->depth, framein->nChannels);
/* copy subimage */
cvCopy(framein, frameout, NULL);
/* always reset the Region of Interest */
cvResetImageROI(framein);
cvShowImage("Camera_Output", frameout); //Show image frames on created window
key = cvWaitKey(10); //Capture Keyboard stroke
if (char(key) == 27){
break; //ESC key loop will break.
}
}
cvReleaseCapture(&capture); //Release capture.
cvDestroyWindow("Camera_Output"); //Destroy Window
return 0;
}
I think you are not checking whether you actually get a CvCapture. On my system, which has only one camera, your code doesn't work because you query camera 1, but the first camera is index 0. So change this code:
CvCapture* capture = cvCaptureFromCAM(1); //Capture using camera 1 connected to system
to (note that 1 changed to 0):
CvCapture* capture = cvCaptureFromCAM(0); //Capture using camera 0, the first camera connected to the system
if (!capture) {
    /* your error handling */
}
Other than that, your code seems to work for me. You might also check the other pointer values to make sure you are not getting NULL.
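Regarding the Mat-vs-IplImage part of the question: with the C++ API the crop is just a ROI view of the frame, so no explicit ROI set/reset is needed. A minimal sketch (not from the original answers; the window name is illustrative):

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main()
{
    VideoCapture cap(0);              // first camera
    if (!cap.isOpened())
        return -1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    Mat frame;
    while (cap.read(frame))
    {
        // ROI view into the frame: top-left 320x240 region, no pixel copy
        Mat cropped = frame(Rect(0, 0, 320, 240));
        imshow("Camera_Output", cropped);
        if (waitKey(10) == 27)        // ESC to quit
            break;
    }
    return 0;
}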
You can easily crop the video by registering a mouse callback on the display window:
cvSetMouseCallback("image", mouseHandler, NULL);
The mouseHandler function looks like this.
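It relies on a few globals that are not shown in the answer; a minimal set (hypothetical declarations matching the handler below) would be:

// Globals assumed by the handler (hypothetical, not in the original answer)
Mat img;             // current frame shown in the "image" window
Mat roiImg;          // cropped region once the selection is finished
Point point1, point2;
Rect rect;
int drag = 0;
int select_flag = 0;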
void mouseHandler(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        /* left button clicked. ROI selection begins */
        select_flag = 0;
        point1 = Point(x, y);
        drag = 1;
    }
    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        /* mouse dragged. ROI being selected */
        Mat img1 = img.clone();
        point2 = Point(x, y);
        rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 3, 8, 0);
        imshow("image", img1);
    }
    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        point2 = Point(x, y);
        rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        drag = 0;
        roiImg = img(rect);
    }
    if (event == CV_EVENT_LBUTTONUP)
    {
        /* ROI selected */
        select_flag = 1;
        drag = 0;
    }
}
For the details you can visit the following link: How to Crop Video from Webcam using OpenCV.
This is easy in Python, because the key idea is that cv2 arrays can be referenced and sliced; all you need is a slice of the frame.
The following code takes a slice from (0, 0) to (320, 240). Note that NumPy arrays are indexed row-first, so the slice is written as [y, x].
# Required modules
import cv2

# Constants for the crop size
xMin = 0
yMin = 0
xMax = 320
yMax = 240

# Open cam, decode image, show in window
cap = cv2.VideoCapture(0)  # use 1 or 2 or ... for another camera
cv2.namedWindow("Original")
cv2.namedWindow("Cropped")
key = -1
while key < 0:
    success, img = cap.read()
    cropImg = img[yMin:yMax, xMin:xMax]  # this is all there is to cropping
    cv2.imshow("Original", img)
    cv2.imshow("Cropped", cropImg)
    key = cv2.waitKey(1)
cv2.destroyAllWindows()
Working example of cropping faces from a live camera:
void CropFaces::DetectAndCropFaces(Mat frame, string locationToSaveFaces) {
    std::vector<Rect> faces;
    Mat frame_gray;
    // Convert to gray scale
    cvtColor(frame, frame_gray, COLOR_BGR2GRAY);
    // Equalize histogram
    equalizeHist(frame_gray, frame_gray);
    // Detect faces
    face_cascade.detectMultiScale(frame_gray, faces, 1.1, 3,
                                  0 | CASCADE_SCALE_IMAGE, Size(30, 30));
    // Iterate over all of the faces
    for (size_t i = 0; i < faces.size(); i++) {
        // Find center of faces
        Point center(faces[i].x + faces[i].width / 2, faces[i].y + faces[i].height / 2);
        Mat face = frame_gray(faces[i]);
        std::vector<Rect> eyes;
        Mat croppedRef(frame, faces[i]);
        cv::Mat cropped;
        // Copy the data into new matrix
        croppedRef.copyTo(cropped);
        string fileName = locationToSaveFaces + "\\face_" + to_string(faces[i].x) + ".jpg";
        resize(cropped, cropped, Size(65, 65));
        imwrite(fileName, cropped);
    }
    // Display frame
    imshow("DetectAndSave", frame);
}

void CropFaces::PlayVideoForCropFaces(string locationToSaveFaces) {
    VideoCapture cap(0); // Open default camera
    Mat frame;
    face_cascade.load("haarcascade_frontalface_alt.xml"); // load faces
    while (cap.read(frame)) {
        DetectAndCropFaces(frame, locationToSaveFaces); // Call function to detect faces
        if (waitKey(30) >= 0) // pause
            break;
    }
}
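For completeness, a minimal way to drive this class (an illustrative addition, assuming CropFaces has a CascadeClassifier member named face_cascade, which the methods above rely on; the output folder is just an example):

// Illustrative usage, assuming a class declaration roughly like:
//   class CropFaces {
//   public:
//       void PlayVideoForCropFaces(std::string locationToSaveFaces);
//       void DetectAndCropFaces(cv::Mat frame, std::string locationToSaveFaces);
//   private:
//       cv::CascadeClassifier face_cascade;
//   };
int main() {
    CropFaces cropper;
    cropper.PlayVideoForCropFaces("C:\\faces"); // example output folder
    return 0;
}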

OpenCV unable to capture image from iSight webcam

I cannot capture an image from my webcam using the following OpenCV code.
The code can show images from a local AVI file or a video device, and it works fine on a "test.avi" file.
When I use my default webcam (CvCapture* capture = cvCreateCameraCapture(0)), the program detects the size of the image from the webcam, but is unable to display the image.
(I forgot to mention that I can see the iSight is working, because its LED indicator turns on.)
Has anyone encountered the same problem?
cvNamedWindow( "Example2", CV_WINDOW_AUTOSIZE );
CvCapture* capture =cvCreateFileCapture( "C:\\test.avi" ) ;// display images from avi file, works well
// CvCapture* capture =cvCreateCameraCapture(0); //display the frame(images) from default webcam not work
assert( capture );
IplImage* image;
while(1) {
image = cvQueryFrame( capture );
if( !image ) break;
cvShowImage( "Example2", image );
char c = cvWaitKey(33);
if( c == 27 ) break;
}
cvReleaseCapture( &capture );
cvDestroyWindow( "Example2" );
Environment: OpenCV 2.2, debug libraries (*d.lib), iSight webcam, MacBook running Windows 7 32-bit, VS2008.
I'm working with OpenCV 2.3 on a MacBook Pro (mid-2012) and I had that problem with the iSight cam. I managed to make it work in OpenCV simply by adjusting the parameters of the CvCapture and setting the frame width and height:
CvCapture* capture = cvCaptureFromCAM(0);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 500 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 600 );
You can also change these numbers to the frame width and height you want.
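The same idea with the C++ API would look roughly like this (a sketch, not taken from the original answer; the values mirror the ones above):

VideoCapture cap(0);                    // the iSight is usually camera index 0
cap.set(CV_CAP_PROP_FRAME_WIDTH, 500);  // same illustrative width/height as above
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 600);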
Did you try the example from the OpenCV page? Namely:
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
This works on a MacBook Pro for me (although on OS X). If it doesn't work, some kind of error message would be helpful.
Try this:
int main(int, char**) {
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) { // check if we succeeded
        cout << "===couldn't open camera" << endl;
        return -1;
    }
    Mat edges, frame;
    frame = cv::Mat(10, 10, CV_8U);
    namedWindow("edges", 1);
    for (;;) {
        cap >> frame; // get a new frame from camera
        cout << "frame size: " << frame.cols << endl;
        if (frame.cols > 0 && frame.rows > 0) {
            imshow("edges", frame);
        }
        if (waitKey(30) >= 0)
            break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
Latest update: problem solved!
This happens to be one of OpenCV 2.2's bugs.
Here is how to fix it:
http://dusijun.wordpress.com/2011/01/11/opencv-unable-to-capture-image-from-isight-webcam/
Why don't you try
capture = cvCaptureFromCAM(0);
I think this may work.
Let me know whether or not it works.