First of all, sorry for my bad English.
I am new to OpenCV and I want to save an image captured from my webcam.
I got the code from this link: http://www.samundra.com.np/opencv-code-to-save-image/403
#include <iostream>
#include <cstring>
#include <opencv/cv.h>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/imgproc/imgproc.hpp"

int main(int argc, char** argv)
{
    char path[255];
    // For Linux, path to save the captured image
    //strcpy(path, "/home/samundra/sample.jpg");
    // For Windows, path to save the captured image
    strcpy(path, "c:\\home.jpg");

    // Pointer to the current frame
    IplImage *frame;
    // Resized (small) image
    IplImage *small;

    // Create the capture device;
    // here 0 indicates that we want to use the camera at index 0
    CvCapture *capture = cvCaptureFromCAM(0);

    // highgui.h is required
    cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);
    frame = cvQueryFrame(capture);

    // Must be initialized before cvResize is actually used;
    // we create an image half the size of the captured frame
    // 8 = bits, 3 = channels
    small = cvCreateImage(cvSize(frame->width/2, frame->height/2), 8, 3);

    while (1)
    {
        // Query a frame from the camera
        frame = cvQueryFrame(capture);
        // Resize the grabbed frame
        cvResize(frame, small); // cvResize(source, destination)
        // Display the captured image
        cvShowImage("capture", small);
        char ch = cvWaitKey(25); // Wait 25 ms for the user to hit a key
        if (ch == 27) break;     // If the Escape key was hit, exit the loop
        // Save the image if 's' was pressed
        if (ch == 's')
        {
            cvSaveImage(path, small);
        }
    }

    // Release the resized image, window and capture
    // (the frame returned by cvQueryFrame belongs to the capture and must not be released)
    cvReleaseImage(&small);
    cvDestroyWindow("capture");
    cvReleaseCapture(&capture);
    return 0;
}
I am using OpenCV 2.3.1 with Eclipse. The code compiles and the webcam opens successfully, but the image is not saved when I press "s". Thanks for the help.
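One way to narrow this down is to check whether the save call itself reports failure. Below is a minimal sketch of the same loop written against the C++ interface (cv::VideoCapture, cv::resize, cv::imwrite, all available in 2.3.1); the path "c:\\home.jpg" is kept from the question and simply assumed to be writable:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture capture(0);                  // camera at index 0
    if (!capture.isOpened())
        return -1;

    cv::Mat frame, resized;
    cv::namedWindow("capture", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        capture >> frame;                         // grab a frame
        if (frame.empty())
            break;
        cv::resize(frame, resized, cv::Size(frame.cols / 2, frame.rows / 2));
        cv::imshow("capture", resized);
        char ch = static_cast<char>(cv::waitKey(25));
        if (ch == 27)                             // Esc exits
            break;
        if (ch == 's')
        {
            // imwrite() returns false when the path is not writable
            // or the extension is not supported
            if (!cv::imwrite("c:\\home.jpg", resized))
                std::printf("Failed to save image\n");
        }
    }
    return 0;
}

If the save call fails or succeeds without a file appearing, the path is the usual suspect: writing directly to the root of C:\ is often blocked for non-elevated programs on newer Windows versions, so a path inside the user's own directories is worth trying first.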
I am using OpenCV 3.0.0-rc1 on Ubuntu 14.04 LTS Guest in VirtualBox with Windows 8 Host. I have an extremely simple program to read in frames from a webcam (Logitech C170) (from the OpenCV documentation). Unfortunately, it doesn't work (I have tried 3 different webcams). It throws an error "select timeout" every couple of seconds and reads a frame, but the frame is black. Any ideas?
The code is the following:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

// Main
int main(int argc, char **argv) {
    /* webcam setup */
    VideoCapture stream;
    stream.open(0);
    // check if the video device has been initialized
    if (!stream.isOpened()) {
        fprintf(stderr, "Could not open Webcam device");
        return -1;
    }

    int image_width = 640; // image resolution
    int image_height = 480;
    Mat colorImage, currentImage;
    bool loop = true;

    /* infinite loop for the video stream */
    while (loop) {
        loop = stream.read(colorImage); // read the webcam stream
        cvtColor(colorImage, currentImage, CV_BGR2GRAY); // color to gray for the current image
        imshow("Matches", currentImage);
        if (waitKey(30) >= 0) break;
    } // end of stream while-loop
    return 0;
}
I found the problem: when using a webcam, make sure to connect it to the virtual machine using Devices->Webcams and NOT Devices->USB. Even though the webcam is detected as video0 when attached via Devices->USB, for some reason it does not work.
I am trying to write a very basic video capturing program using OpenCV, but despite all my efforts nothing gets written. I am fairly sure that I am following all the tutorials one can find on the subject.
Here is the code I am using:
#include <opencv2\core\core.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <videoInput.h>
#include <memory>   // std::auto_ptr
#include <cstdlib>  // exit()

static const cv::Size _SIZE(640, 480);
static const int FPS = 20;

int main()
{
    int webcam = 0;
    std::auto_ptr<videoInput> VI(NULL);
    std::auto_ptr<cv::VideoWriter> outputVideo(NULL);
    cv::Mat frame(_SIZE.height, _SIZE.width, CV_8UC3);

    // get device list, create videoInput
    videoInput::listDevices();
    VI.reset(new videoInput());

    // choose the first device
    VI->setupDevice(webcam, _SIZE.width, _SIZE.height);
    // always check
    if (!VI->isDeviceSetup(webcam))
        return -1; //-->

    // set device frame rate
    VI->setIdealFramerate(webcam, FPS);

    // create named window and image
    cv::namedWindow("CAM");

    do
    {
        if (VI->isFrameNew(webcam))
        {
            VI->getPixels(webcam, (unsigned char*)frame.data, false, true); // get the pixels in BGR order
            if (outputVideo.get())
            {
                (*outputVideo).write(frame);
            }
            char c = cv::waitKey(5);
            if (!IsWindowVisible((HWND)cvGetWindowHandle("CAM")) || c == 27)
            {
                exit(0);
            }
            cv::imshow("CAM", frame);
        }
    } while (1);
}
I have tried various extensions and various FourCC values. With any extension except .avi, the writer is created but writes nothing; with .avi, the writer simply fails to be created.
I have read that the codecs are probably missing, but what does that mean - what exactly do I need to put where for them to be found by OpenCV?
All in all, I am very confused. This is tutorial code; it should work.
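For what it's worth, here is a minimal sketch of how a cv::VideoWriter is usually created and checked before use, written against the 2.x API and using cv::VideoCapture for the capture side just to keep it self-contained. The "output.avi" name, the XVID FourCC and the 640x480 / 20 fps settings are only example values, and whether a given FourCC actually works depends on the codecs installed on the machine:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    // Example values only: XVID-encoded AVI, 20 fps, 640x480.
    // If the codec is not available, isOpened() returns false.
    cv::VideoWriter writer("output.avi",
                           CV_FOURCC('X', 'V', 'I', 'D'),
                           20.0, cv::Size(640, 480));
    if (!writer.isOpened())
        return -1;

    cv::Mat frame;
    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;
        writer.write(frame);          // the frame size must match the size given above,
                                      // otherwise the frame is silently dropped
        cv::imshow("CAM", frame);
        if (cv::waitKey(5) == 27)     // Esc stops recording
            break;
    }
    return 0;
}

Two silent failure modes are worth ruling out explicitly: isOpened() returning false because the requested codec is missing (on Windows an .avi container with an MJPG or XVID FourCC is a combination that often works out of the box), and frames whose size does not match the size passed to the constructor, which the writer drops without any error.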
I'm working on a project using OpenCV 2.4.3. I need to get the foreground during a stream; my problem is that using cv::absdiff to get it doesn't really help. Here is my code and the result.
#include <iostream>
#include <opencv2\opencv.hpp>
#include <opencv2\calib3d\calib3d.hpp>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, frame1, frame2;

    cap >> frame;
    frame.copyTo(frame1);
    cv::imwrite("background.jpeg", frame1);

    int key = 0;
    while (key != 27) {
        cap >> frame;
        cv::absdiff(frame, frame1, frame2); // frame2 = frame - frame1
        cv::imshow("foreground", frame2);
        if (key == 'c') {
            //frame.copyTo(frame2);
            cv::imwrite("foreground.jpeg", frame2);
            key = 0;
        }
        cv::imshow("frame", frame);
        key = cv::waitKey(10);
    }
    cap.release();
    return 0;
}
As you can see, the subtraction works, but what I want to get are only the values of the pixels that changed. For example, if a pixel in the background is [130,130,130] and the same pixel is [200,200,200] in the frame, I want to get exactly the latter values, not [70,70,70].
I've already seen this tutorial: http://mateuszstankiewicz.eu/?p=189
but I can't understand the code, and I have problems setting up cv::BackgroundSubtractorMOG2 with my OpenCV version.
Thanks in advance for your help.
BackgroundSubtractorMOG2 should work with #include "opencv2/video/background_segm.hpp"
The samples shipped with OpenCV include two nice C++ examples (in the samples/cpp directory):
bgfg_segm.cpp shows how to use the BackgroundSubtractorMOG2
bgfg_gmg.cpp uses BackgroundSubtractorGMG
To get the last values (assuming you meant the foreground pixel values), you could copy the frame using the foreground mask. This is also done in the first example, in the following snippet:
bg_model(img, fgmask, update_bg_model ? -1 : 0);
fgimg = Scalar::all(0);
img.copyTo(fgimg, fgmask);
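Putting that together with a webcam loop, a minimal sketch against OpenCV 2.4.x could look like the following (default MOG2 parameters, arbitrary window names); note that in 2.4 the class can be instantiated directly, while in 3.x you would create it with createBackgroundSubtractorMOG2() instead:

#include <opencv2/opencv.hpp>
#include <opencv2/video/background_segm.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::BackgroundSubtractorMOG2 bg_model;   // default parameters
    cv::Mat frame, fgmask, fgimg;

    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;

        bg_model(frame, fgmask);             // update the model and get the foreground mask
        if (fgimg.empty())
            fgimg.create(frame.size(), frame.type());
        fgimg = cv::Scalar::all(0);
        frame.copyTo(fgimg, fgmask);         // keep the original pixel values where the mask is set

        cv::imshow("frame", frame);
        cv::imshow("foreground", fgimg);
        if (cv::waitKey(10) == 27)           // Esc to quit
            break;
    }
    return 0;
}

The copyTo call with the mask is what gives you the [200,200,200] values from the current frame rather than the difference.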
Good day everyone! I'm currently working on a project involving video processing, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few sample codes and test them out. The first one uses the C OpenCV interface and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;

    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");

    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);

    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;

        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2),
                                       frame->depth, frame->nChannels); // a new image, half size
        cvResize(frame, temp, CV_INTER_CUBIC); // resize
        cvSaveImage("test.jpg", temp, 0);      // save this image
        cvShowImage("Capture", frame);         // display the frame
        cvReleaseImage(&temp);

        if (cvWaitKey(5000) == 27) // Escape key; wait 5 sec per capture
            break;
    }
    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But the problems begin with the next sample, which uses the C++ OpenCV interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises regarding an 'access violation reading location'. The same goes for imread() when I try to load an image.
So, the thing I wanted to ask: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution to the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. Examples:
images
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this
is what imshow(), imread() and imwrite() expect
Quoted from there
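In other words, cap >> frame hands back data in BGR channel order, so the CV_BGR2* conversion codes are the ones that match the data. A small sketch, keeping the XYZ target from the question:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, xyz;
    for (;;)
    {
        cap >> frame;                         // frames arrive in BGR channel order
        if (frame.empty())
            break;
        cv::cvtColor(frame, xyz, CV_BGR2XYZ); // so the BGR2* codes are the matching ones
        cv::imshow("xyz", xyz);
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}

Keep in mind that imshow() interprets 3-channel 8-bit data as BGR regardless of the actual color space of the pixels, so judging a conversion purely by how it looks in the preview window can be misleading.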
I am using OpenCV and trying to apply a Gaussian blur to an incoming video stream. I basically use cvQueryFrame to grab a frame, blur it, and display the frame on the screen. The thing is, though, my video gets stuck on the first frame after I apply the blur... anyone know why? It's basically showing one frame instead of a video. The second I remove the blur, it starts outputting video again.
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
int main()
{
//declare initial data
IplImage *grabCapture= 0; //used for inital video frame capture
IplImage *process =0; //used for processing
IplImage *output=0; //displays final output
CvCapture* vidStream= cvCaptureFromCAM(0);
cvNamedWindow ("Output", CV_WINDOW_AUTOSIZE);
int createimage=1;
while (1)
{
grabCapture= cvQueryFrame (vidStream);
if (createimage==1)
{
process= cvCreateImage (cvGetSize(grabCapture), IPL_DEPTH_16U, 3);
createimage=0;
}
*process=*grabCapture;
cvSmooth (process,process,CV_GAUSSIAN,7,7); //line that makes it display frame instead of video
cvShowImage("Output",process);
}
//clean up data
cvReleaseImage (&grabCapture);
cvReleaseImage (&process);
cvReleaseImage (&output);
cvReleaseCapture (&vidStream);
return 0;
}
You are missing a call to cvWaitKey. This is the only way to tell OpenCV to process events and thus prevent the GUI from freezing.
Try adding this line:
cvWaitKey(10);
after cvShowImage("Output",process);.
Edit: here is the documentation for cvWaitKey
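For reference, a minimal self-contained version of that loop with the cvWaitKey call added. Note that this sketch also smooths from the captured frame into a separate image whose depth matches the source, instead of the 16-bit image and in-place smoothing from the question; that is an additional adjustment, not part of the answer above:

#include "cv.h"
#include "highgui.h"

int main()
{
    CvCapture *vidStream = cvCaptureFromCAM(0);
    IplImage *grabCapture = 0;
    IplImage *process = 0;

    cvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        grabCapture = cvQueryFrame(vidStream);
        if (!grabCapture)
            break;
        if (!process)
            process = cvCreateImage(cvGetSize(grabCapture),
                                    grabCapture->depth, grabCapture->nChannels);
        cvSmooth(grabCapture, process, CV_GAUSSIAN, 7, 7); // blur into the separate image
        cvShowImage("Output", process);
        if (cvWaitKey(10) == 27)   // let HighGUI process events; Esc exits
            break;
    }
    cvReleaseImage(&process);
    cvReleaseCapture(&vidStream);
    cvDestroyWindow("Output");
    return 0;
}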