Good Day,
I am trying to figure out how to close the camera on a BeagleBone in OpenCV. I have tried numerous calls such as release(&camera), but none of them exist, and the camera stays on when I don't want it to.
VideoCapture capture(0);
capture.set(CV_CAP_PROP_FRAME_WIDTH, 320);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
if (!capture.isOpened()) {
    cout << "Failed to connect to the camera." << endl;
}
Mat frame, edges, cont;
while (1) {
    cout << sending << endl;
    if (sending) {
        for (int i = 0; i < frames; i++) {
            capture >> frame;
            if (frame.empty()) {
                cout << "Failed to capture an image" << endl;
                return 0;
            }
            cvtColor(frame, edges, CV_BGR2GRAY);
The code is something like this. At the end of the for loop I want to close the camera, but of course it still stays open.
The camera will be deinitialized automatically in the VideoCapture destructor.
Check this example from the OpenCV documentation:
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
Also
cvQueryFrame
Grabs and returns a frame from camera or file
IplImage* cvQueryFrame( CvCapture* capture );
capture – video capturing structure.
The function cvQueryFrame grabs a frame from a camera or video file, decompresses it, and returns it. This function is just a combination of cvGrabFrame and cvRetrieveFrame in one call. The returned image should not be released or modified by the user.
Also check: http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/
I hope this works for you. Best of luck.
Related
I would like to put an image on a video, and I'm wondering if it's possible in OpenCV without multithreading.
I would like to avoid multithreading because my project runs on a Raspberry Pi Zero W.
I can't find anything about it on the internet. I have some basic code in C++. I'm new to OpenCV.
int main(){
    VideoCapture cap(0);
    if (!cap.isOpened())
    {
        cout << "error" << endl;
        return -1;
    }
    Mat edges;
    namedWindow("edges", 1);
    Mat img = imread("logo.png");
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("edges", WINDOW_AUTOSIZE );
        imshow("edges", img);
        imshow("edges", frame);
        if (waitKey(30) >= 0) break;
    }
}
In OpenCV, showing two things in the same window overwrites the previous one, which I think is what is happening in your case.
You can use OpenCV addWeighted() function or bitwise operations.
OpenCV has good documentation on this. You can find it here
I'm attempting to stitch two videos together by matching their key points and finding the homography between the overlapping frames. I have successfully got this to work with two still images.
For the video, I load the two separate video files, loop over the frames, and copy each one into the blank matrices cap1frame and cap2frame.
I then send each pair of frames to the stitching function, which matches the keypoints based on the homography between the two frames, stitches them, and displays the resultant image (matching based on the OpenCV example).
The stitching is successful; however, it results in very slow playback of the video and some graphical anomalies on the side of the frame, as seen in the photo.
I'm wondering how I can make this more efficient and get fast video playback.
int main(int argc, char** argv){
    // Create a VideoCapture object and open the input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");
    // Check if camera opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }
    //Trying to loop frames
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;
        // If the frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;
        //sending each frame from each video to the stitch function then displaying
        imshow( "Result", Stitching(cap1frame,cap2frame));
        if(waitKey(30) >= 0) break;
        //destroyWindow("Stitching");
        // waitKey(0);
    }
    return 0;
}
I was able to resolve my issue by pre-calculating the homography with just the first frame of video, so the function is only called once.
I then looped through the rest of the video to apply the warping of the video frames so they could be stitched together based on the pre-calculated homography. This step was initially inside my stitching function.
At that point playback was still really slow when calling imshow. But I decided to export the resultant video instead, and this worked once the correct fps was set in the VideoWriter object. I wonder if I just needed to adjust the fps playback of imshow, but I'm not sure about that bit.
I've got my full code below:
#include <stdio.h>
#include <iostream>
#include <algorithm>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/xfeatures2d/cuda.hpp>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

//To get homography from images passed in. Matching points in the images.
Mat Stitching(Mat image1, Mat image2){
    Mat I_1 = image1;
    Mat I_2 = image2;
    //based on https://docs.opencv.org/3.3.0/d7/dff/tutorial_feature_homography.html
    cv::Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
    // Step 1: Detect the keypoints:
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    f2d->detect( I_1, keypoints_1 );
    f2d->detect( I_2, keypoints_2 );
    // Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_1, descriptors_2;
    f2d->compute( I_1, keypoints_1, descriptors_1 );
    f2d->compute( I_2, keypoints_2, descriptors_2 );
    // Step 3: Matching descriptor vectors using BFMatcher:
    BFMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );
    // Keep best matches only to have a nice drawing.
    // We sort distances between descriptor matches.
    Mat index;
    int nbMatch = int(matches.size());
    Mat tab(nbMatch, 1, CV_32F);
    for (int i = 0; i < nbMatch; i++)
        tab.at<float>(i, 0) = matches[i].distance;
    sortIdx(tab, index, SORT_EVERY_COLUMN + SORT_ASCENDING);
    vector<DMatch> bestMatches;
    // cap at the number of matches actually found, to avoid reading past the end
    for (int i = 0; i < std::min(200, nbMatch); i++)
        bestMatches.push_back(matches[index.at<int>(i, 0)]);
    // 1st image is the destination image and the 2nd image is the src image
    std::vector<Point2f> dst_pts;    //1st
    std::vector<Point2f> source_pts; //2nd
    for (vector<DMatch>::iterator it = bestMatches.begin(); it != bestMatches.end(); ++it) {
        //cout << it->queryIdx << "\t" << it->trainIdx << "\t" << it->distance << "\n";
        //-- Get the keypoints from the good matches
        dst_pts.push_back( keypoints_1[ it->queryIdx ].pt );
        source_pts.push_back( keypoints_2[ it->trainIdx ].pt );
    }
    Mat H_12 = findHomography( source_pts, dst_pts, CV_RANSAC );
    return H_12;
}
int main(int argc, char** argv){
    //Mats to get the first frame of video and pass to Stitching function.
    Mat I1, h_I1;
    Mat I2, h_I2;
    // Create a VideoCapture object and open the input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");
    cap1.set(CV_CAP_PROP_BUFFERSIZE, 10);
    cap2.set(CV_CAP_PROP_BUFFERSIZE, 10);
    //Check if camera opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }
    //passing first frame to Stitching function
    if (cap1.read(I1)){
        h_I1 = I1;
    }
    if (cap2.read(I2)){
        h_I2 = I2;
    }
    Mat homography;
    //passing here.
    homography = Stitching(h_I1,h_I2);
    std::cout << homography << '\n';
    //creating VideoWriter object with defined values.
    VideoWriter video("video/output.avi",CV_FOURCC('M','J','P','G'),30, Size(1280,720));
    //Looping through frames of both videos.
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;
        // If the frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;
        Mat warpImage2;
        //warping the second video cap2frame so it matches with the first one.
        //size is defined as the final video size
        warpPerspective(cap2frame, warpImage2, homography, Size(1280,720), INTER_CUBIC);
        //final is the final canvas where both videos will be warped onto.
        Mat final (Size(1280,720), CV_8UC3);
        //Mat final(Size(cap1frame.cols*2 + cap1frame.cols, cap1frame.rows*2),CV_8UC3);
        //Using roi getting the relevant areas of each video.
        Mat roi1(final, Rect(0, 0, cap1frame.cols, cap1frame.rows));
        Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));
        //warping images on to the canvases which are linked with the final canvas.
        warpImage2.copyTo(roi2);
        cap1frame.copyTo(roi1);
        //writing to video.
        video.write(final);
        //imshow ("Result", final);
        if(waitKey(30) >= 0) break;
    }
    video.release();
    return 0;
}
I need to access the pixel data from a video camera attached to my Windows PC in real time. Once accessed, I will modify it and output it as part of the video stream. In other words, I need to find the easiest way to modify a video stream in real time. I know about OpenCV and Matlab functionality, but I am wondering if anyone has found a simpler way to do this.
If you want to do this with C++, OpenCV, as long as it works with your camera, is one of the simplest ways there is. The code below is from the OpenCV documentation VideoCapture. The only trick is instantiating the VideoCapture instance. How much simpler can it be?
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, COLOR_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
There is even a python version at Capture Video from Camera that looks very similar to the C++ version above.
This is my code, which I copied/pasted from here:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
But I get this error:
OpenCV error: Assertion failed (scn==3 || scn==4)
in unknown function, file ..\..\..\..\opencv\modules\imgproc\src\color.cpp, line 3737
I am using Windows 7 x64, Visual Studio 2008, OpenCV 2.4.7
What can be the problem?
EDIT:
Sometimes it works, sometimes it does not.
EDIT 2:
I edited VideoCapture cap(0); to cv::VideoCapture cap(0);, then rebuilt my solution and ran it. It worked the first time; when I tried to run it a second time, it gave me the same error.
EDIT 3:
I have even edited for(;;):
for(;;)
{
    Mat frame;
    cap >> frame; // get a new frame from camera
    imshow("edges", frame);
    if(waitKey(30) >= 0) break;
}
This time I receive another error:
OpenCV error: Assertion failed (size.width>0 && size.height>0)
in unknown function, file ..\..\..\..\opencv\modules\highgui\src\window.cpp, line 261
I guess the problem is with imshow.
I had a similar problem. I solved it by putting everything after cap >> frame into an if statement:
for(;;)
{
    Mat frame;
    cap >> frame; // get a new frame from camera
    if (!frame.empty()) {
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
    }
    if(waitKey(30) >= 0) break;
}
I tested your code in my environment (Win XP 32-bit OS, VS2008, OpenCV 2.4.7). It works normally every time. You can also do it like this:
IplImage* frame, *edges;
CvCapture* pcapture = cvCreateCameraCapture(0);
cvNamedWindow("edges", CV_WINDOW_AUTOSIZE);
while (1)
{
    frame = cvQueryFrame(pcapture);
    if (!frame) break;
    edges = cvCreateImage(cvGetSize(frame), 8, 1);
    cvCvtColor(frame, edges, CV_BGR2GRAY);
    cvSmooth(edges, edges, CV_GAUSSIAN, 7, 7, 1.5, 1.5);
    cvCanny(edges, edges, 0, 30, 3);
    cvShowImage("edges", edges);
    cvReleaseImage(&edges);
    if (cvWaitKey(30) >= 0) break;
}
cvReleaseCapture(&pcapture);
cvDestroyWindow("edges");
You can give it a try and see whether it also has problems in your environment.
I hope this helps a little!
I am using a Mac OS X 10.6 machine. I have OpenCV 2.1 x64 compiled from source using Xcode and its GCC compiler.
I am having trouble using the C++ video reading features of OpenCV. Here is the simple test code I am using (came straight from OpenCV documentation):
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(200) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
The program compiles fine, but when I try to run it, I see the green light on my webcam come on for a few seconds, then the program exits with the error message:
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat, file /Users/mark/Downloads/OpenCV-2.1.0/src/cxcore/cxarray.cpp, line 2476
terminate called after throwing an instance of 'cv::Exception'
what(): /Users/mark/Downloads/OpenCV-2.1.0/src/cxcore/cxarray.cpp:2476: error: (-206) Unrecognized or unsupported array type in function cvGetMat
Under debug mode, the matrix still seems to be empty after the cap >> frame line.
I get similar behavior when I try to capture from a video file or an image, so it's not the camera. What is wrong, do you think? Anything I can do to make this work?
EDIT: I'd like to add that if I use the C features, everything works fine. But I would like to stick with C++ if I can.
Thanks
I've seen the same problem. When I use the C features, a similar issue sometimes comes up. From the error message of the C code, I think it happens because the camera returns a NULL frame. So I think it can be solved this way:
do
{
    capture >> frame;
} while (frame.empty());
That way it works on my machine.
I encountered the same problem. It seems that the first couple of attempts to grab a frame don't return any signal, so if you try to use the frame you'll get an error. Here is how I got around it: simply add a counter and check the size of the frame.
int cameraNumber = 0;
if ( argc > 1 )
    cameraNumber = atoi(argv[1]);
cv::VideoCapture camera;
camera.open(cameraNumber);
if ( !camera.isOpened() ) {
    cerr << "ERROR: Could not access the camera or video!" << endl;
    exit(1);
}
// give the camera 40 attempts to return a frame;
// if it still fails after 40 attempts the app will terminate,
// and until then it displays an 'Accessing camera' note
int CAMERA_CHECK_ITERATIONS = 40;
while (true) {
    Mat cameraFrame;
    camera >> cameraFrame;
    if ( cameraFrame.total() > 0 ) {
        Mat displayFrame( cameraFrame.size(), CV_8UC3 );
        doSomething( cameraFrame, displayFrame );
        imshow("Image", displayFrame );
    } else {
        cout << "::: Accessing camera :::" << endl;
        if ( CAMERA_CHECK_ITERATIONS > 0 ) CAMERA_CHECK_ITERATIONS--;
        if ( CAMERA_CHECK_ITERATIONS < 0 ) break;
    }
    int key = waitKey(200);
    if (key == 27) break;
}
Try simplifying the program so that you can identify the exact location of the problem, e.g. change your loop so that it looks like this:
for(;;)
{
    Mat frame;
    cap >> frame; // get a new frame from camera
    // cvtColor(frame, edges, CV_BGR2GRAY);
    // GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
    // Canny(edges, edges, 0, 30, 3);
    // imshow("edges", edges);
    imshow("edges", frame);
    if(waitKey(200) >= 0) break;
}
If that works OK then try adding the processing calls back in, one at a time, e.g
for(;;)
{
    Mat frame;
    cap >> frame; // get a new frame from camera
    cvtColor(frame, edges, CV_BGR2GRAY);
    // GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
    // Canny(edges, edges, 0, 30, 3);
    imshow("edges", edges);
    if(waitKey(200) >= 0) break;
}
and so on...
Once you've identified the problematic line you can then focus on that and investigate further.
Go to Project -> Project Properties -> Configuration Properties -> Linker -> Input.
In Additional Dependencies, paste cv210.lib cvaux210.lib cxcore210.lib highgui210.lib.
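On MSVC you can alternatively pull those same libraries in from code rather than the project dialog, using link pragmas (a Visual C++-specific feature; the library names assume the same OpenCV 2.1 build as above):

```cpp
// MSVC-only alternative to the Linker -> Input dialog
#pragma comment(lib, "cv210.lib")
#pragma comment(lib, "cvaux210.lib")
#pragma comment(lib, "cxcore210.lib")
#pragma comment(lib, "highgui210.lib")
```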
Hi, I got the solution for you :)
VideoCapture san_cap(0);
Mat san, edges;                      // declared here so the loop below compiles
if (san_cap.isOpened()) {
    while (1) {
        san_cap.read(san);
        imshow("Video", san);
        Mat frame;
        san_cap.read(frame);         // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        imshow("Video2", edges);
        int key = cv::waitKey(30);   // delay in ms between frames
        if (key == 27) {             // ESC to quit
            break;
        }
    }
}