I have a problem capturing from a webcam in OpenCV.
This builds successfully:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
int main() {
// VideoCapture cap(0);
// while(true){
// Mat Webcam;
// cap.read(Webcam);
// imshow("Webcam", Webcam);
// }
}
However, this one does not:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
int main() {
VideoCapture cap(0);
while(true){
Mat Webcam;
cap.read(Webcam);
imshow("Webcam", Webcam);
}
}
No error or warning message pops up, so I can't figure it out by myself.
Any ideas are appreciated!
Updates:
Error message
It seems something went wrong when I updated Xcode.
Error message 2
Your code is perfectly okay; you just need to add the following line after imshow:
waitKey(10);
It gives the UI thread a chance to draw the frames. Without this delay the UI thread cannot be updated or get a time slice from the CPU.
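For reference, a minimal sketch of the full loop with that line added (the isOpened check and the empty-frame break are extra safety checks, not part of the original question):

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;

int main() {
    VideoCapture cap(0);            // open the default camera
    if (!cap.isOpened()) return -1; // camera could not be opened
    while (true) {
        Mat Webcam;
        cap.read(Webcam);
        if (Webcam.empty()) break;  // stop if no frame was delivered
        imshow("Webcam", Webcam);
        waitKey(10);                // lets the UI thread draw the frame
    }
    return 0;
}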
Related
I am going through the book Learning OpenCV 3 and testing the video example 2.3. I can edit, compile and run it, but the problem is that it closes down immediately.
// DisplayPicture.cpp : Defines the entry point for the console application.
//
//#include "opencv2/opencv.hpp" // Include file for every supported OpenCV function
#include "opencv2\imgproc\imgproc.hpp"
#include "opencv2\highgui\highgui.hpp"
#include <opencv2/videoio.hpp>
#include <stdio.h>
#include <string.h>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
namedWindow("video3", WINDOW_AUTOSIZE);
VideoCapture cap;
cap.open( string(argv[1]));
int tell = 0;
Mat frame;
for (;;) {
cap >> frame;
//waitKey(30);
if (frame.empty())
{
break;
//end of film
}
imshow("video3", frame);
}
return 0;
}
I found that my computer processed the data too fast: it could not read the next frame fast enough, so if (frame.empty()) became true, the program reached the break statement, and it ended.
By adding a waitKey of 30 milliseconds before viewing the image frame, the video program works very well; at least I can view the video. Since this example is from the 'bible' it should work, but not on my computer.
I am running an MSI GT72 2PE with an NVIDIA GTX 880M; not sure if that matters.
I assume that adding a waitKey(30) is not appropriate, so I am seeking suggestions as to what could be done differently.
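For comparison, the playback loop in the OpenCV samples pairs every imshow with a waitKey and uses its return value to let the viewer quit. This is only a sketch of that pattern; the 33 ms delay is a hand-picked pacing value, not something the library mandates:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio.hpp>
using namespace cv;

int main(int argc, char** argv) {
    if (argc < 2) return -1;              // expects a video file path
    VideoCapture cap(argv[1]);
    if (!cap.isOpened()) return -1;
    namedWindow("video3", WINDOW_AUTOSIZE);
    Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;          // end of film
        imshow("video3", frame);
        if (waitKey(33) == 27) break;      // ~30 fps pacing; ESC quits early
    }
    return 0;
}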
I am using OpenCV to access the camera, and for some reason it will not connect to the Pi camera. Can anyone see any reason why that may be?
My code to test the camera is as follows:
#include <stdio.h>
#include <pigpio.h>
#include <stdlib.h>
#include <unistd.h>
#include <wiringPi.h>
#include <softPwm.h>
#include "move.h"
#include "init.h"
#include <iostream>
using namespace std;
#include "distance.h"
#include <linux/videodev2.h>
#include <../include/libv4l2.h>
#include <opencv2/core/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio.hpp>
using namespace cv;
void TestCamera(){
Mat initframe,blurr,B_W,edges;
//Mat image;
VideoCapture cap;
cap.open(0);
cap.set(CAP_PROP_FRAME_WIDTH,500);
cap.set(CAP_PROP_FRAME_HEIGHT,500);
if (cap.isOpened()){
for(;;)
{
cap.read(initframe);
if (initframe.empty()){ printf("error frame empty\n"); break;}
imshow("Live Stream",initframe); //normal bgr output
cvtColor(initframe,B_W,COLOR_BGR2GRAY);//b&w stream
imshow("greyscale",B_W);
GaussianBlur(B_W,blurr,Size(9,9),1.5,1.5);//blur applied so edge detection is smoother (fewer hard edges)
imshow("blurr",blurr);
Canny(blurr,edges,0,30,3);//edge detection
imshow("edges",edges);
if(waitKey(30)>= 0) break;
}
}else printf("error! unable to use camera\n");
}
and the build command is:
g++ -Wall -I/usr/local/include $(pkg-config opencv --libs) $(pkg-config opencv --cflags) -o "SdCar" "SdCar.cpp" -lwiringPi -lpigpio -lv4l2 (in directory: /home/pi/selfdrivingcar-17)
I can't figure out why it builds, but whenever the camera should run it says "error! unable to open camera", which is the error associated with the cap.isOpened() check.
Any ideas?
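One way to narrow it down is to take the GPIO/motor code out of the picture and test only the capture call. A minimal sketch, assuming the Pi camera is exposed to V4L2 as /dev/video0 (i.e. device index 0, the same index TestCamera() uses):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <cstdio>
using namespace cv;

int main() {
    VideoCapture cap(0);                        // same device index as TestCamera()
    if (!cap.isOpened()) {
        printf("open failed: OpenCV cannot see the device\n");
        return 1;
    }
    Mat frame;
    if (!cap.read(frame) || frame.empty()) {
        printf("device opened but no frame was delivered\n");
        return 1;
    }
    printf("got a %dx%d frame\n", frame.cols, frame.rows);
    imshow("test", frame);
    waitKey(0);
    return 0;
}

If this small program also fails to open the camera, the problem lies between the camera driver and OpenCV rather than in the robot code.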
I have this code which captures image from Webcam using OpenCV:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main( )
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat meter_image;
cap >> meter_image;
imwrite("/boneCV-master/img.jpg", meter_image);
return 0;
}
I get the following image as output.
Previously it was working fine, and I don't know what is happening. I tried the simplest code samples I could find by googling, but nothing worked. Please let me know what could be wrong with it.
Thanks in advance.
EDIT
One thing I forgot to mention is that I am working on a BeagleBone Black. This same piece of code works great on my Mac.
Maybe adding a frame check will help.
Mat meter_image;
while(meter_image.empty()){
cap >> meter_image;
}
But there's a risk of an infinite loop.
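A sketch of the same idea with a retry limit, so the grab cannot loop forever (the limit of 100 attempts is an arbitrary choice); it drops into main() in place of the single cap >> meter_image line:

Mat meter_image;
int attempts = 0;
while (meter_image.empty() && attempts < 100) {  // give up after 100 tries
    cap >> meter_image;
    ++attempts;
}
if (meter_image.empty())
    return -1;                                   // camera never delivered a frame
imwrite("/boneCV-master/img.jpg", meter_image);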
Every time I compile this code the webcam stream is mirrored: if I lift my right hand, it appears as if it were my left hand on the screen. After a couple of recompilations an error message appeared and the code never worked again.
The error:
Unhandled exception at 0x00007FFB3C6DA1C8 in Camera.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000D18AD5F610.
And there was no option left except to break the process.
The code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv\cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1);
namedWindow("Window", 1);
while (1){
cap >> image;
imshow("window", image);
waitKey(33);
}
}
I don't know if something is wrong with my code; I've just started learning OpenCV!
#include <opencv2/highgui/highgui.hpp>
#include <opencv\cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1);
namedWindow("Window", 1);
while (1){
cap >> image;
flip(image, image, 1); // mirror the frame horizontally
imshow("Window", image);
waitKey(33);
}
}
Just flip the image horizontally; that will do. Find more here.
There is nothing wrong when your image is mirrored about the vertical axis (I guess in your example you are using a built-in laptop webcam).
A (very) simple code for capturing and showing your image could be the following:
// imgcap in opencv3
#include "opencv2/highgui.hpp"
#include "opencv2/videoio.hpp"
int main() {
cv::VideoCapture cap(0); // camera 0
cv::Mat img;
while(true) {
cap >> img;
cv::flip(img,img,1);
cv::imshow("live view", img);
cv::waitKey(1);
}
return 0;
}
When using OpenCV 3 you should include headers like:
#include <opencv2/highgui.hpp>
I'm just now starting to learn how to use the OpenCV libraries. I've downloaded and installed OpenCV 2.4.0 and have run a few example projects. In this block of code, I'm trying to get the output from goodFeaturesToTrack and plot the points on an image. The code compiles, but every time I run it, it crashes, and I get the following error:
Windows has triggered a breakpoint in Corner.exe.
This may be due to a corruption of the heap, which indicates a bug in Corner.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while Corner.exe has focus.
The output window may have more diagnostic information.
The output window does not have more diagnostic information. I've traced the error to the goodFeaturesToTrack function. Here's the offending code:
// Corner.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <opencv.hpp>
#include <opencv_modules.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
using namespace cv; //If you don't have this, you won't be able to create a mat...
using namespace std;
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
#include <math.h>
//Whole bunch of #defines to make editing the code a lot easier
#define MAX_FEATURES 5
#define FILENAME "C:/Users/Mitchell/Desktop/lol.jpg"
int main(void)
{
namedWindow("Out", CV_WINDOW_AUTOSIZE);
namedWindow("In", CV_WINDOW_AUTOSIZE);
Mat Img;
Img = cvLoadImage(FILENAME, CV_LOAD_IMAGE_GRAYSCALE);
if(!Img.data)
{
fprintf(stderr, "ERROR: Couldn't open picture.");
waitKey();
return -1;
}
else
{
imshow("In", Img);
waitKey();
}
std::vector<cv::Point2f> Img_features;
int number_of_features = MAX_FEATURES;
Mat Out = Mat::zeros(Img.cols, Img.rows, CV_32F);
goodFeaturesToTrack(Img, Img_features, MAX_FEATURES, .01, .1, noArray(), 3, false);
fprintf(stdout, "Got here...");
/*for (int i = 0; i < MAX_FEATURES; i++)
{
Point2f p = Img_features[i];
ellipse(Img, p, Size(1,1), 0, 0, 360, Scalar(255,0,0));
}*/
imshow("Out", Out);
waitKey(0);
return 0;
}
Is this a bug in the library, or am I doing something dumb?
Maybe the Img_features vector should have MAX_FEATURES items before calling goodFeaturesToTrack? I.e., try Img_features.resize(MAX_FEATURES) before the goodFeaturesToTrack call.
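A sketch of that suggestion in context, purely to illustrate where the resize would go (whether it actually cures the crash is untested here):

std::vector<cv::Point2f> Img_features;
Img_features.resize(MAX_FEATURES);   // pre-size the output vector as suggested above
goodFeaturesToTrack(Img, Img_features, MAX_FEATURES, .01, .1, noArray(), 3, false);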