Converting Live Video Frames to Grayscale (OpenCV) - C++

First and foremost, I should say that I'm a beginner to OpenCV. I'm trying to convert a live video stream from my webcam from RGB to Grayscale.
I have the following code in my function:
VideoCapture cap(0);
while (true)
{
    Mat frame;
    Mat grayscale;
    cvtColor(frame, grayscale, CV_RGB2GRAY);
    imshow("Debug Window", grayscale);
    if (waitKey(30) >= 0)
    {
        cout << "End of Stream";
        break;
    }
}
I know it isn't complete. I'm trying to find a way to grab a frame of the video into frame, convert it with cvtColor, store the result in grayscale, and then display it on my screen.
If anyone could help, it would be much appreciated.

Please see this example; the complete code is here. I hope this works for you:
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
VideoCapture cap(0); // open the video camera no. 0
if (!cap.isOpened()) // if not success, exit program
{
cout << "Cannot open the video cam" << endl;
return -1;
}
namedWindow("MyVideo",CV_WINDOW_AUTOSIZE);
while (1)
{
Mat frame;
bool bSuccess = cap.read(frame); // read a new frame from video
if (!bSuccess)
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
Mat grayscale;
cvtColor(frame, grayscale, CV_RGB2GRAY);
imshow("MyVideo", grayscale);
if (waitKey(30) == 27)
{
cout << "esc key is pressed by user" << endl;
break;
}
}
return 0;
}

You just initialized the variable "frame" and forgot to assign an image to it. Since the variable "frame" is empty, you won't get any output. Grab an image from the video sequence "cap" and copy it to frame. This piece of code will do the job for you.
Mat frame;
bool bSuccess = cap.read(frame); // read a frame from the video
if (!bSuccess)
{
    cout << "Cannot read a frame from video stream" << endl;
    break;
}

Related

VideoWriter function for greyscale images using OpenCV

I have a GigE camera which gives me a greyscale image, and I want to record it as a video and process it later.
As an initial step I tried recording video using my webcam, which worked. But if I convert the frames to greyscale before writing them to the video, I don't get any video.
My code is below:
int main(int argc, char* argv[])
{
    VideoCapture cap(0);
    VideoWriter writer;
    if (!cap.isOpened())
    {
        cout << "not opened" << endl;
        return -1;
    }
    const char* windowName = "Webcam Feed";
    namedWindow(windowName, CV_WINDOW_AUTOSIZE);
    string filename = "D:\\myVideo_greyscale.avi"; // backslashes in Windows paths must be escaped
    int fcc = CV_FOURCC('8', 'B', 'P', 'S');
    int fps = 30;
    Size frameSize(cap.get(CV_CAP_PROP_FRAME_WIDTH), cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    writer = VideoWriter(filename, -1, fps, frameSize);
    if (!writer.isOpened())
    {
        cout << "Error not opened" << endl;
        getchar();
        return -1;
    }
    while (1)
    {
        Mat frame;
        bool bSuccess = cap.read(frame);
        if (!bSuccess)
        {
            cout << "ERROR READING FRAME FROM CAMERA FEED" << endl;
            break;
        }
        cvtColor(frame, frame, CV_BGR2GRAY);
        writer.write(frame);
        imshow(windowName, frame);
    }
    return 0;
}
I used fcc as -1 and tried all the possibilities; none of them were able to record video.
I also tried creating a grayscale video using OpenCV with fcc as CV_FOURCC('8','B','P','S'), but it did not help me.
I get this error in debug after using the breakpoint.
VideoWriter has an optional parameter which tells it whether the video is grayscale or color. The default is color = true. Try:
bool isColor = false;
writer = VideoWriter(filename,-1,fps,frameSize, isColor);
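For context, a minimal sketch of the whole write path with isColor = false, following the question's setup (the output filename and FPS here are placeholders, not the asker's values; on Windows a fourcc of -1 typically opens a codec selection dialog):
bool isColor = false; // the writer will expect single-channel frames
Size frameSize(cap.get(CV_CAP_PROP_FRAME_WIDTH), cap.get(CV_CAP_PROP_FRAME_HEIGHT));
VideoWriter writer("out_greyscale.avi", -1, 30, frameSize, isColor);
Mat frame, grey;
while (cap.read(frame))
{
    cvtColor(frame, grey, CV_BGR2GRAY); // single-channel output, matches isColor = false
    writer.write(grey);
    imshow("Webcam Feed", grey);
    if (waitKey(30) == 27) break;
}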

OpenCV Video Capture

I am using OpenCV 2.4.8 and Visual Studio 2013 to run the following simple VideoCapture program. The program is intended to capture video from the webcam and display it.
However, the program works fine only the FIRST TIME (after signing in to Windows), and doesn't work the second time.
The problem I get on debugging is:
after executing the line "bool bSuccess = cap.read(frame);", the frame variable is still empty.
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
char key;
int main(int argc, char* argv[])
{
VideoCapture cap(0); // open the video camera no. 0
if(!cap.isOpened()) // if not success, exit program
{
cout << "Cannot open the video file" << endl;
return -1;
}
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
cout << "Frame size : " << dWidth << " x " << dHeight << endl;
namedWindow("MyVideo",CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"
Mat frame;
while(1)
{
bool bSuccess = cap.read(frame); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "Cannot read a frame from video file" << endl;
break;
}
imshow("MyVideo", frame); //show the frame in "MyVideo" window
if(waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
{
cout << "esc key is pressed by user" << endl;
break;
}
}
return 0;
}
This happens because the camera is not correctly closed after the first instance of the program. You should close the program with the Esc key rather than by clicking the X on the console window.
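A related precaution (my addition, not part of the original answer) is to release the camera explicitly before main() returns, so the device is freed however the program ends; a minimal sketch:
cap.release();       // free the camera device
destroyAllWindows(); // close any HighGUI windows
return 0;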
Could you try and read more than a single frame before breaking the loop? This may be similar to this problem, where a corrupted first frame / slow camera set up was the only problem.
Unable to read frames from VideoCapture from secondary webcam with OpenCV
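A minimal sketch of that idea, retrying a few reads instead of giving up on the first failure (the retry count of 10 is arbitrary, not taken from the linked question):
Mat frame;
int attempts = 0;
while (!cap.read(frame) || frame.empty())
{
    if (++attempts > 10) // give up only after several tries, not on the first failed read
    {
        cout << "Camera did not deliver a frame after 10 attempts" << endl;
        return -1;
    }
    waitKey(30); // give the camera time to initialise
}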

How to draw a circle in a video

I'm trying to draw a circle on a video from my webcam. I am using this function:
cv::circle(cap,points(1,0),3,cv::Scalar(255,255,255),-1);
I found it in a document, but I don't know why it doesn't work. I have edited my code many times but it still gives me an error. Here is my full code; I am using OpenCV 3:
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
#include <vector>
#include <iostream>
#include <sstream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0); // open the video file for reading
    if (!cap.isOpened()) // if not success, exit program
    {
        cout << "Cannot open the video file" << endl;
        return -1;
    }
    //cap.set(CV_CAP_PROP_POS_MSEC, 300); //start the video at 300ms
    double fps = cap.get(CV_CAP_PROP_FPS); //get the frames per second of the video
    cout << "Frame per seconds : " << fps << endl;
    namedWindow("MyVideo", CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"
    while (1)
    {
        Mat frame;
        bool bSuccess = cap.read(frame); // read a new frame from video
        if (!bSuccess) //if not success, break loop
        {
            cout << "Cannot read the frame from video file" << endl;
            break;
        }
        imshow("MyVideo", frame); //show the frame in "MyVideo" window
        cv::circle(cap, points(1,0), 3, cv::Scalar(255,255,255), -1);
        if (waitKey(30) == 27) //wait for 'esc' key press for 30 ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
circle accepts a Mat object, not a VideoCapture object. So you need to draw the circle on frame.
Also you need to show the image after you actually draw the circle.
So replace the imshow / circle part of your code with:
...
cv::circle(frame, points(1,0), 3, cv::Scalar(255,255,255), -1);
imshow("MyVideo", frame);
...
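Note also that points(1,0) is not defined anywhere in the posted code; circle expects a cv::Point for the centre. As an illustration with a hard-coded centre (the coordinates here are arbitrary, for testing only):
cv::circle(frame, cv::Point(100, 100), 3, cv::Scalar(255, 255, 255), -1); // filled white dot at (100, 100)
imshow("MyVideo", frame);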

RaspberryPi C++ app compile issue using OpenCV and Raspicam

Using the code below, I am trying to create an app in C++ with OpenCV + Raspicam. The app should stream video in real time from the Raspberry Pi camera module connected to my Pi to an X window.
I get the following error on compile:
videofeed.cpp: In function ‘int main(int, char**)’:
videofeed.cpp:36:37: error: ‘cv::imread’ is not a member of ‘raspicam::RaspiCam_Cv’
How do I remedy this?
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/videoio.hpp"
#include <iostream>
#include </home/pi/raspicam-0.1.3/src/raspicam_cv.h>
using namespace std;
int main ( int argc,char **argv ) {
raspicam::RaspiCam_Cv Camera;
cv::Mat image;
int nCount=100;
//set camera params
Camera.set( CV_CAP_PROP_FORMAT, CV_8UC1 );
//Open camera
cout<<"Opening Camera..."<<endl;
if (!Camera.open())// if not success, exit program
{
cout << "Cannot open the video cam" << endl;
return -1;
}
double dWidth = Camera.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames o$
double dHeight = Camera.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frame$
cout << "Frame size : " << dWidth << " x " << dHeight << endl;
cv::namedWindow("MyVideo",CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"
while (1)
{
cv::Mat frame;
bool bSuccess = Camera.cv::imread(frame); // get a new frame from camera
if (!bSuccess) //if not success, break loop
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
cv::imshow("MyVideo", frame); //show the frame in "MyVideo" window
if (cv::waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' ke$
{
cout << "esc key is pressed by user" << endl;
break;
}
}
return 0;
}
bool bSuccess = Camera.cv::imread(frame); is an error; there is no imread function in raspicam::RaspiCam_Cv. If you want to grab a frame, you can use the function grab() and then retrieve().
For example:
Mat img;
camera.grab();
camera.retrieve(img);
You can see the declaration:
/**
 * Grabs the next frame from video file or capturing device.
 */
bool grab();

/**
 * Decodes and returns the grabbed video frame.
 */
void retrieve ( cv::Mat& image );
You can also find an example at: https://github.com/cedricve/raspicam
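Putting that into the posted loop, a rough sketch of the corrected read step (based only on the grab/retrieve declarations quoted above) would be:
cv::Mat frame;
if (!Camera.grab()) // grab the next frame from the camera
{
    cout << "Cannot read a frame from video stream" << endl;
    break;
}
Camera.retrieve(frame); // decode the grabbed frame into frame
cv::imshow("MyVideo", frame);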

OpenCV VideoCapture in C++ not working the second time

I tried the following code for capturing a video from my webcam:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
VideoCapture cap(0); // open the video camera no. 0
if (!cap.isOpened()) // if not success, exit program
{
cout << "Cannot open the video cam" << endl;
return -1;
}
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
cout << "Frame size : " << dWidth << " x " << dHeight << endl;
namedWindow("MyVideo", CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"
namedWindow("Changed", CV_WINDOW_AUTOSIZE);
while (1)
{
Mat frame;
bool bSuccess = cap.read(frame); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
Mat imgH = frame + Scalar(75, 75, 75);
imshow("MyVideo", frame); //show the frame in "MyVideo" window
imshow("Changed", imgH);
if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
{
cout << "esc key is pressed by user" << endl;
break;
}
}
return 0;
}
Now here's my problem:
After debugging the program for the first time, everything works as expected. But when debugging it a second time (after changing some lines in the code), it cannot read from the camera.
Does anyone have a hint on how to solve this problem?
Thanks!
The code you posted seems to be working absolutely fine in my case, and the output is as intended.
However, please make sure that your webcam is switched on before you run the program; this is important.
Since I have a YouCam client on my computer for the webcam, it tells me that I need to start YouCam.
Since I don't have enough reputation to post an image, please see the following link to view the output I got when the webcam was not already switched on.
http://i.imgur.com/h4bTZ7z.png
Hope this helps!