I am using a console-only Linux system and I have a camera capture application. I need to capture an image without a GUI (the camera should start, capture some images, save them to disk, and close). The following code works well on my laptop but doesn't start on the console. Any suggestions?
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
Mat frame;
namedWindow("feed",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("feed", frame);
imwrite("/home/zaif/output.png", frame);
if(waitKey(1) >= 0) break;
}
return 0;
}
After the release of OpenCV 2.4.6 there were bug fixes for video capture on Linux. Go straight to 2.4.6.2 and you should get the fixes. Specifically, this revision is probably the relevant fix for you, although there were a number of other revisions pertaining to video capture on Android that might affect Linux compilation too.
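Beyond the version bump, note that namedWindow(), imshow() and waitKey() need a display server, so a console-only run will fail on those calls. A minimal headless sketch, assuming the goal is just to save a frame to disk (the warm-up frame count of 5 is an assumption, not from the original code):
#include "opencv2/opencv.hpp"
using namespace cv;

int main()
{
    VideoCapture cap(0);                 // open the default camera
    if(!cap.isOpened()) return -1;       // no camera available
    Mat frame;
    for(int i = 0; i < 5; ++i)           // grab a few frames so exposure can settle (count is arbitrary)
        cap >> frame;
    imwrite("/home/zaif/output.png", frame); // save the last frame to disk, no GUI involved
    return 0;                            // the camera is released in VideoCapture's destructor
}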
I want to show a live stream from the camera connected to my Raspberry Pi in a Qt application (OS: Linux). After googling, I found out I must display the video inside a QLabel. When displaying a still image there's no problem and everything works fine, but when I want to display the live stream inside the QLabel, the live stream window opens separately (not inside the QLabel). Could you tell me how to solve this problem? Here's my code:
void Dialog::on_Preview_clicked()
{
    command = "raspistill";
    args << "-o" << "/home/pi/Pictures/Preview/" + Date1.currentDateTime().toString() + ".jpg"
         << "-t" << QString::number(20000);
    Pic.start(command, args, QIODevice::ReadOnly);
    QPixmap pix("//home//pi//Pictures//Preview//test.jpg");
    ui->label_2->setPixmap(pix);
    ui->label_2->setScaledContents(true);
}
This code opens the video capture screen and captures an image after 20 seconds. The only problem is that the capture screen (which could be used as a live stream) isn't being displayed inside label_2. Is there any way to do this without using the OpenCV library? If not, tell me how to do it using OpenCV.
Thanks
It is pretty simple in OpenCV:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    namedWindow("edges", 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("edges", frame);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
Streaming the camera with OpenCV and showing it in a QLabel is possible. When QCamera is not working and OpenCV is already used in the project, you can use cv::VideoCapture to stream the video instead of QCamera.
The problem can be decomposed into several steps. Basically, we need to:
Create a QThread for streaming (so the GUI thread is not blocked).
In the sub-thread, use cv::VideoCapture to capture each frame into a cv::Mat.
Convert the cv::Mat to a QImage (see "how to convert an opencv cv::Mat to qimage").
Pass the QImage frame from the sub-thread to the main GUI thread.
Paint the QImage on the QLabel.
I put the complete demo code on GitHub; it can paint the frame on a QLabel and on a QML VideoOutput. A minimal sketch of the core pieces follows below.
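Here is a minimal sketch of steps 2-5, assuming Qt 5 and OpenCV are available; the class and function names (CaptureWorker, frameReady, matToQImage) are illustrative and not taken from the linked demo:
#include <opencv2/opencv.hpp>
#include <QThread>
#include <QImage>
#include <QLabel>

// Step 3: convert a BGR cv::Mat to a QImage (deep copy so the Mat buffer can be reused).
static QImage matToQImage(const cv::Mat &mat)
{
    cv::Mat rgb;
    cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB);
    return QImage(rgb.data, rgb.cols, rgb.rows,
                  static_cast<int>(rgb.step), QImage::Format_RGB888).copy();
}

// Steps 1, 2 and 4: a worker thread that grabs frames and emits them as QImage.
class CaptureWorker : public QThread
{
    Q_OBJECT
signals:
    void frameReady(const QImage &img);
protected:
    void run() override
    {
        cv::VideoCapture cap(0);   // default camera
        cv::Mat frame;
        while (!isInterruptionRequested() && cap.read(frame))
            emit frameReady(matToQImage(frame));
    }
};

// Step 5, in the GUI thread: connect the signal to the QLabel (queued across threads).
// connect(worker, &CaptureWorker::frameReady, this, [this](const QImage &img) {
//     ui->label_2->setPixmap(QPixmap::fromImage(img));
// });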
I am taking my first steps with OpenCV and I am trying to run this piece of code. It is supposed to open the specified video in a new window and wait for the user to press ESC. I tried passing both the relative and absolute path to VideoCapture but VideoCapture::isOpened() always fails. Why is this happening?
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a nice little window.
Note that I am using VS15 and OpenCV 2.4 (with the x86 libs)
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(path_to_video); // open the video file
// VideoCapture cap(0);
if(!cap.isOpened()) // check if we succeeded
return -1;
namedWindow("Video",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("Video", frame);
if(waitKey(30) >= 0) break;
}
return 0;
}
EDIT: I solved this by reinstalling OpenCV and creating a new Visual Studio project. The above code miraculously started working.
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a nice little window. Why is this happening?
Because the VideoCapture class has two different constructors. The one that takes a string attempts to read from a file. The one that takes an integer attempts to read from a device. Passing 0 to the second version specifies the default device / camera.
VideoCapture::open: Open a video file or a capturing device for video capturing.
C++: bool VideoCapture::open(const string& filename)
C++: bool VideoCapture::open(int device)
Parameters:
filename – name of the opened video file (e.g. video.avi) or image sequence (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
device – id of the opened video capturing device (i.e. a camera index).
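For illustration, both forms side by side (the file name here is only a placeholder):
cv::VideoCapture fromFile("video.avi"); // string argument: read from a video file
cv::VideoCapture fromCam(0);            // int argument: read from capture device 0 (the default camera)
if (!fromFile.isOpened()) { /* wrong path, or no codec/backend for the file */ }
if (!fromCam.isOpened())  { /* no camera at that index */ }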
I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;
cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }
while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera it seems that the code does not capture and show the current frame, but shows all the captured frames from the previous position and only then from the new one.
So when I capture the wall it shows the correct frames (the wall itself) with the correct delay, but when I turn the camera toward my computer, I first see about 3 frames of the wall and only then the computer; it seems that the frames are stuck.
I've tried to use the VideoCapture::set() functions to set the FPS to 1, and I tried to switch the capturing method to cam >> matTestingNumbers (adjusting the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
BTW, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result (when I pointed the camera at one spot and pressed a key, it showed one more of the previous frames that were captured at the other spot). It is just like trying to photograph one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to properly freeze the window for X amount of milliseconds. Get rid of usleep()!
Good day everyone! Currently I'm working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few code samples and test them out. The first one uses the C OpenCV interface and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;
    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");
    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new image, half size
        cvResize(frame, temp, CV_INTER_CUBIC); // Resize
        cvSaveImage("test.jpg", temp, 0); // Save this image
        cvShowImage("Capture", frame); // Display the frame
        cvReleaseImage(&temp);
        if (cvWaitKey(5000) == 27) // Escape key and wait, 5 sec per capture
            break;
    }
    // Note: the frame returned by cvQueryFrame() is owned by the capture,
    // so it must not be released explicitly.
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ OpenCV interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises with 'access violation reading location'. The same goes for imread() when I'm trying to load an image.
So, the thing I wanted to ask is: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so conversion is needed. But doing the BGR2XYZ conversion seems to invert colors, while RGB2XYZ preserves them. Examples:
[example images omitted]
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect.
Quoted from there.
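To illustrate, a minimal sketch (not from the original answer) that leaves the frame in its native BGR order when writing to disk and converts from BGR, not RGB, when a color-space change is needed; the file name is a placeholder:
#include "opencv2/opencv.hpp"
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if(!cap.isOpened()) return -1;
    Mat frame, xyz;
    cap >> frame;                      // frames come out of VideoCapture in BGR order
    imwrite("test1.jpg", frame);       // imwrite() expects BGR, so write the frame as-is
    cvtColor(frame, xyz, CV_BGR2XYZ);  // convert starting from BGR, not RGB
    return 0;
}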
I need a program to capture pictures from multiple webcams and save them automatically in Windows Vista. I got the basic code from this link. The code runs on Windows XP, but when I tried using it on Vista it says "failed." Different errors pop up every time it is executed. Would it help if I used the Platform SDK? Does anyone have any suggestions?
I can't test this on multiple webcams since I only have one, but I'm sure OpenCV 2.0 should be able to handle it. Here's some sample code (I use Vista) with one webcam to get you started.
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main()
{
    // Start capturing on camera 0
    VideoCapture cap(0);
    if(!cap.isOpened()) return -1;

    // This matrix will store the edges of the captured frame
    Mat edges;

    namedWindow("edges", 1);
    for(;;)
    {
        // Acquire the frame from cap into frame
        Mat frame;
        cap >> frame;

        // Now, find the edges by converting to grayscale, blurring and then Canny edge detection
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);

        // Display the edges and the frame
        imshow("edges", edges);
        imshow("frame", frame);

        // Terminate by pressing a key
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
Note:
The matrix edges is allocated during the first frame processing and unless the resolution will suddenly change, the same buffer will be reused for every next frame's edge map.
As you can see, the code is quite clean and readable! I lifted this from the OpenCV 2.0 documentation (opencv.pdf).
The code not only displays the image from the webcam (under frame) but also does real-time edge detection! Here's a screenshot when I pointed the webcam at my monitor :)
screenshot http://img245.imageshack.us/img245/5014/scrq.png
If you want code to just display the frames from one camera:
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if(!cap.isOpened()) return -1;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        imshow("frame", frame);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
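Since the question asks about multiple webcams, here is a hedged sketch that extends the loop above to two cameras and saves a numbered snapshot from each; the camera indices (0 and 1) and the output file names are assumptions, and whether both devices open at once depends on your drivers:
#include <cv.h>
#include <highgui.h>
#include <stdio.h>

using namespace cv;

int main()
{
    VideoCapture cam0(0), cam1(1);                // two capture devices
    if(!cam0.isOpened() || !cam1.isOpened()) return -1;
    for(int i = 0; ; ++i)
    {
        Mat f0, f1;
        cam0 >> f0;                               // grab one frame from each camera
        cam1 >> f1;
        imshow("cam0", f0);
        imshow("cam1", f1);
        char name[64];
        sprintf(name, "cam0_%04d.jpg", i);        // save numbered snapshots to disk
        imwrite(name, f0);
        sprintf(name, "cam1_%04d.jpg", i);
        imwrite(name, f1);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}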
If the program works with UAC off or when running as administrator, make sure the place you choose to save the results is a writable location, like the user's My Documents folder. Generally speaking, root folders and the Program Files folder are read-only for normal users. A small sketch of building such a path follows below.
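As a rough illustration (an assumption on my part, not from the original answer), the output path can be built from the USERPROFILE environment variable so the file lands in a per-user, writable folder:
#include <cstdlib>
#include <string>
#include <cv.h>
#include <highgui.h>

int main()
{
    // USERPROFILE points at the user's home directory on Windows, e.g. C:\Users\Name
    const char *profile = std::getenv("USERPROFILE");
    std::string outPath = std::string(profile ? profile : ".") + "\\Documents\\capture.jpg";

    cv::VideoCapture cap(0);
    if(!cap.isOpened()) return -1;
    cv::Mat frame;
    cap >> frame;
    cv::imwrite(outPath, frame);  // a location a normal user can write to without elevation
    return 0;
}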