I have a camera application that displays the camera's live stream:
void MainWindow::on_start()
{
    if(video.isOpened())
    {
        video.release();
        return;
    }
    if(!isCameraFound || !video.open(pipeline.trimmed().toStdString(), cv::CAP_GSTREAMER))
    {
        QMessageBox::critical(this,
                              "Video Error",
                              "Must be the correct USB Camera connected!");
        return;
    }
    Mat frame;
    while(video.isOpened())
    {
        video >> frame;
        if(!frame.empty())
        {
            QImage qimg(frame.data,
                        frame.cols,
                        frame.rows,
                        frame.step,
                        QImage::Format_RGB888);
            pixmap.setPixmap( QPixmap::fromImage(qimg.rgbSwapped()) );
            ui->graphicsView->fitInView(&pixmap, Qt::KeepAspectRatio);
        }
...
I also have a function that does processing, with a loop that inspects the first 250 frames, because the camera emits a specific frame when something is found. While this loop runs, the video stream lags so badly it essentially freezes until the loop is done. How can I remove the lag caused by the frame-grabbing loop, so that the loop still runs but the video stream keeps playing?
void MainWindow::on_push()
{
    Mat test;
    for(int i = 0; i < 250; i++){
        video >> test;
        ...
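Both loops here run on the GUI thread, so while on_push() spins, the display code never gets a chance to run. A minimal sketch of one fix is to pump the Qt event loop between grabs (this needs #include <QCoreApplication>; note that re-entering the event loop can interleave the two grab loops, so a dedicated capture QThread is the cleaner long-term design):

void MainWindow::on_push()
{
    Mat test;
    for(int i = 0; i < 250; i++)
    {
        video >> test;
        if(test.empty())
            break;
        // ... inspect the frame for the marker here ...

        // Let Qt process pending paint/input events so the live
        // stream keeps updating while this loop runs.
        QCoreApplication::processEvents();
    }
}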
I'm using OpenCV for some face recognition stuff with a webcam. The thing is, whenever there is no camera installed, I get an exception. I handled that at the beginning with this code:
if (!realTime.isOpened())
{
    cout << "No webcam installed!" << endl;
    system("pause");
    return 0;
}
realTime is a VideoCapture object, so when I start the program with no webcam plugged in, I get "No webcam installed!" in the console.
But now I want the program to stop immediately whenever the webcam is unplugged. This seems to be really hard, because my face recognition runs in a while loop:
namedWindow("Face Detection", WINDOW_KEEPRATIO);
string trained_classifier_location = "C:/opencv/sources/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier faceDetector;
faceDetector.load(trained_classifier_location);
vector<Rect> faces;
while (true)
{
realTime.read(videoStream);
faceDetector.detectMultiScale(videoStream, faces, 1.1, 4, CASCADE_SCALE_IMAGE, Size(20, 20));
for (int i = 0; i < faces.size(); i++)
{
Mat faceROI = videoStream(faces[i]);
int x = faces[i].x;
int y = faces[i].y;
int h = y + faces[i].height;
int w = x + faces[i].width;
rectangle(videoStream, Point(x, y), Point(w, h), Scalar(255, 0, 255), 2, 8, 0);
}
imshow("Face Detection", videoStream);
if (waitKey(10) == 27)
{
break;
}
}
I also tried it with a try-catch statement, but the exception is thrown at ...
Check the return value of read (you should do that anyway). From the doc:
The method/function combines VideoCapture::grab() and VideoCapture::retrieve() in one call. This is the most convenient method for reading video files or capturing data from decode and returns the just grabbed frame. If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the method returns false and the function returns empty image (with cv::Mat, test it with Mat::empty()).
So:
bool valid_frame = false;
while (true)
{
    valid_frame = realTime.read(videoStream);
    if(!valid_frame)
    {
        std::cout << "camera disconnected, or no more frames in video file";
        break;
    }
    ...
}
I am using OpenCV to show frames from a camera, and I want to show them in two separate windows: the first window showing the live frames (a new frame every 30 milliseconds), and the second window showing the frames with a delay (updating every 1 second). Is it possible to do that? I tried with the code below, but it does not work well. Please give me a solution to this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code:
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()){
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break; // Delay 30ms for the first window; "Window 2" is never updated
}
You could write the frame-display loop as a single function taking the video file name and frame rate as arguments, then call it from two threads simultaneously. The pseudocode would look like:
void* play_video(void* frame_rate)
{
    // play at specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
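The pseudocode above maps onto pthreads or std::thread, but OpenCV's highgui calls are safest kept on one thread. For the asker's actual goal (a live window plus a once-per-second window) a single loop also works: refresh the second window only every Nth iteration. A minimal single-threaded sketch, assuming a 30 ms tick so every 33rd frame is roughly one second apart:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::namedWindow("Window 1");
    cv::namedWindow("Window 2");

    cv::Mat frame;
    long count = 0;
    while (true)
    {
        cap >> frame;
        if (frame.empty())
            break;

        cv::imshow("Window 1", frame);   // refreshed every ~30 ms

        if (count % 33 == 0)             // ~once per second at a 30 ms tick
            cv::imshow("Window 2", frame);

        ++count;
        if (cv::waitKey(30) >= 0)        // one 30 ms wait drives both windows
            break;
    }
    return 0;
}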
I have a few vision algorithms which perform well enough on live camera streams; however, they are far from good when I run them on video files: the stream slows down far too much, although it is fine when no vision algorithm is running. The algorithms are executed by calling VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
Here is how I read video so far:
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Play()
{
    if(IsVideoPause)
    {
        ui->pausePushButton->setEnabled(true);
        ui->playPushButton->setEnabled(false);
        IsVideoPause = false;
        TimerOpen->start( (int) (1000/FPS_open) );
        return;
    }
    else
    {
        ui->pausePushButton->setEnabled(true);
        ui->stopPushButton ->setEnabled(true);
        ui->rewPushButton  ->setEnabled(true);
        ui->ffdPushButton  ->setEnabled(true);
        ui->videoSlider    ->setEnabled(true);
        ui->playPushButton ->setEnabled(false);
        if(!VideoCap)
            VideoCap = new VideoCapture( FileName.toStdString() );
        else
            VideoCap->open( FileName.toStdString() );
        if( VideoCap->isOpened() )
        {
            FrameH    = (int) VideoCap->get(CV_CAP_PROP_FRAME_HEIGHT);
            FrameW    = (int) VideoCap->get(CV_CAP_PROP_FRAME_WIDTH);
            FPS_open  = (int) VideoCap->get(CV_CAP_PROP_FPS);
            NumFrames = (int) VideoCap->get(CV_CAP_PROP_FRAME_COUNT);
            ui->videoSlider->setMaximum( (int)NumFrames );
            ui->videoSlider->setEnabled(true);
            READCOUNT = 0;
            TimerOpen->start( (int) (1000/FPS_open) );
        }
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Pause()
{
    ui->playPushButton->setEnabled(true);
    ui->pausePushButton->setEnabled(false);
    TimerOpen->stop();
    IsVideoPause = true;
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Stop()
{
    ui->stopPushButton->setEnabled(false);
    ui->playPushButton->setEnabled(false);
    ui->pausePushButton->setEnabled(false);
    ui->rewPushButton->setEnabled(false);
    ui->ffdPushButton->setEnabled(false);
    FileName = "";
    TimerOpen->stop();
    READCOUNT = 0;
    ui->videoSlider->setSliderPosition(0);
    ui->videoSlider->setEnabled(false);
    ui->frameLabel->setText( "No camera connected" );
    delete TimerOpen;
    TimerOpen = 0;
    if(VideoCap)
    {
        delete VideoCap;
        VideoCap = 0;
    }
    if(VideoAlgoVision)
    {
        delete VideoAlgoVision;
        VideoAlgoVision = 0;
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Read()
{
    READCOUNT++;
    // Update Video Player's slider
    ui->videoSlider->setValue(READCOUNT);
    if(READCOUNT >= NumFrames) // if avi ends
    {
        Pause();
        return;
    }
    Mat grabbedFrame;
    // Get next frame
    if(VideoCap->isOpened() && VideoCap->read(grabbedFrame))
    {
        // Execute the vision filter
        if(VideoAlgoVision)
            VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
        // Convert Mat to QImage
        QImage frame = MatToQImage( grabbedFrame );
        // Update the display
        UpdateFrame( frame );
    }
}
//////////////////////////////////////////////////////////////////
QImage VisionUnit_widget::MatToQImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if(mat.type()==CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i=0; i<256; i++)
        {
            colorTable.push_back(qRgb(i,i,i));
        }
        // Wrap the Mat data (this QImage constructor does not copy it)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img.copy(); // deep copy: the wrapped buffer dies with the Mat
    }
    // 8-bit unsigned, 3 channels
    else if(mat.type()==CV_8UC3)
    {
        // Wrap the Mat data; rgbSwapped() below returns a deep copy
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    else
    {
        return QImage();
    }
}
So, my question is: is there a better way than this to read video files with Qt and OpenCV, including video processing? Should I adapt the QTimer interval at run time? I start it with TimerOpen->start( (int) (1000/FPS_open) );, but obviously the vision algorithm slows the whole thing down. Any thoughts?
There may be some optimization to make in the vision algorithms, but my point here is that they do well on my webcam and IP cameras, which makes me think there is something wrong with the way I read/use video files.
Thanks.
You did not provide the whole code, but I guess you have connected TimerOpen's timeout() signal to the VisionUnit_widget::Read() slot. If that is the case, you are accumulating the 1000/FPS_open interval and the processing time.
Changing your design to something like the following will solve it:
int fSpace = 1000 / FPS;
QTime timer;
timer.start();
forever {
    // Sleep off the remainder of the frame interval, instead of
    // adding a full interval on top of the processing time.
    while(timer.elapsed() < fSpace)
        msleep(1); // assumes this runs inside a QThread subclass
    timer.restart();
    // ... process the frame ...
}
And it is probably better to move this off the main thread.
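A minimal sketch of that suggestion, assuming a hypothetical CaptureThread that owns the VideoCapture, paces itself, and hands frames to the GUI through a queued signal (matToQImage is a placeholder for a deep-copying conversion like MatToQImage above):

#include <QThread>
#include <QTime>
#include <QImage>
#include <opencv2/opencv.hpp>

// Hypothetical worker: grabs and processes frames off the GUI thread,
// so the main thread stays free to repaint.
class CaptureThread : public QThread
{
    Q_OBJECT
public:
    explicit CaptureThread(const QString &file) : fileName(file) {}
signals:
    void frameReady(QImage img); // delivered to the GUI thread (queued)
protected:
    void run() override
    {
        cv::VideoCapture cap(fileName.toStdString());
        if (!cap.isOpened())
            return;
        const int fSpace = 1000 / qMax(1, (int)cap.get(CV_CAP_PROP_FPS));
        QTime timer;
        timer.start();
        cv::Mat frame;
        while (!isInterruptionRequested() && cap.read(frame))
        {
            // ... run the vision algorithm on 'frame' here ...
            while (timer.elapsed() < fSpace)
                msleep(1);                        // pace to the file's FPS
            timer.restart();
            emit frameReady(matToQImage(frame)); // placeholder, deep-copied QImage
        }
    }
private:
    QString fileName;
};

Connecting frameReady to a slot that updates the widget keeps all painting on the GUI thread; across threads the connection is queued automatically.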
I am trying to get the fps from my camera so that I can pass it to the VideoWriter for outputting the video. However, I am getting 0 fps by calling VideoCapture::get(CV_CAP_PROP_FPS) from my camera. If I hardcode it, my video may be too slow or too fast.
#include "opencv2/opencv.hpp"
#include <stdio.h>
#include <stdlib.h>
using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
cv::VideoCapture cap;
int key = 0;
if(argc > 1){
cap.open(string(argv[1]));
}
else
{
cap.open(CV_CAP_ANY);
}
if(!cap.isOpened())
{
printf("Error: could not load a camera or video.\n");
}
Mat frame;
cap >> frame;
waitKey(5);
namedWindow("video", 1);
double fps = cap.get(CV_CAP_PROP_FPS);
CvSize size = cvSize((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),(int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));
int codec = CV_FOURCC('M', 'J', 'P', 'G');
if(!codec){ waitKey(0); return 0; }
std::cout << "CODEC: " << codec << std::endl;
std::cout << "FPS: " << fps << std::endl;
VideoWriter v("Hello.avi",-1,fps,size);
while(key != 'q'){
cap >> frame;
if(!frame.data)
{
printf("Error: no frame data.\n");
break;
}
if(frame.empty()){ break; }
v << frame;
imshow("video", frame);
key = waitKey(5);
}
return(0);
}
How can I get VideoCapture::get(CV_CAP_PROP_FPS) to return the right fps, or pass the VideoWriter an fps that works universally for all webcams?
CV_CAP_PROP_FPS only works on video files as far as I know. If you want to capture video data from a webcam, you have to time it correctly yourself; for example, use a timer to capture a frame from the webcam every 40 ms and then save it as 25 fps video.
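A minimal sketch of that timing idea, assuming a 25 fps target, an MJPG-encoded output file named "timed.avi", and the 2.4-era constants used in the question (frames are paced by wall-clock time rather than by whatever the driver reports):

#include <opencv2/opencv.hpp>
#include <chrono>
#include <thread>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    const double targetFps = 25.0;                       // assumed target
    const auto interval = std::chrono::milliseconds(40); // 1000 ms / 25

    cv::VideoWriter out("timed.avi",
                        CV_FOURCC('M', 'J', 'P', 'G'),
                        targetFps,
                        cv::Size((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                                 (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT)));

    cv::Mat frame;
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < 250; ++i)            // ~10 s worth of frames
    {
        cap >> frame;
        if (frame.empty()) break;
        out << frame;                        // written at a steady 40 ms spacing
        next += interval;
        std::this_thread::sleep_until(next); // pace the loop, not the driver
    }
    return 0;
}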
You can use VideoCapture::set(CV_CAP_PROP_FPS) to set the desired FPS for a webcam; however, for some reason get won't return it.
Note that sometimes the driver will choose a different FPS than what you have requested depending on the limitations of the webcam.
My workaround: capture frames for a few seconds (4 is fine in my tests, with 0.5 seconds of initial delay) and estimate the fps the camera actually delivers.
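A minimal sketch of that estimate, assuming the 0.5 s warm-up and 4 s measurement window described:

#include <opencv2/opencv.hpp>
#include <chrono>

// Grab frames for a fixed window and divide the count by elapsed time.
double estimate_fps(cv::VideoCapture &cap,
                    double warmupSec = 0.5, double measureSec = 4.0)
{
    using clock = std::chrono::steady_clock;
    cv::Mat frame;

    // Warm-up: let exposure/auto-gain settle before timing.
    auto t0 = clock::now();
    while (std::chrono::duration<double>(clock::now() - t0).count() < warmupSec)
        cap >> frame;

    int frames = 0;
    t0 = clock::now();
    double elapsed = 0.0;
    while ((elapsed = std::chrono::duration<double>(clock::now() - t0).count())
           < measureSec)
    {
        cap >> frame;
        if (frame.empty()) break;
        ++frames;
    }
    return elapsed > 0.0 ? frames / elapsed : 0.0;
}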
I've never observed CV_CAP_PROP_FPS to work. I have tried with various flavors of OpenCV 2.4.x (currently 2.4.11) using file inputs.
As a workaround in one scenario, I directly used libavformat (from ffmpeg) to get the frame rate, which I can then use in my other OpenCV code:
static double get_frame_rate(const char *filePath) {
    av_register_all();
    AVFormatContext *gFormatCtx = avformat_alloc_context();
    if (avformat_open_input(&gFormatCtx, filePath, NULL, NULL) != 0) {
        return -1;
    } else if (avformat_find_stream_info(gFormatCtx, NULL) < 0) {
        avformat_close_input(&gFormatCtx);
        return -1;
    }
    double fps = -1;
    for (unsigned int i = 0; i < gFormatCtx->nb_streams; i++) {
        if (gFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            AVRational rate = gFormatCtx->streams[i]->avg_frame_rate;
            fps = av_q2d(rate);
            break;
        }
    }
    avformat_close_input(&gFormatCtx); // also frees the context
    return fps;
}
Aside from that, undoubtedly one of the slowest possible (although sure to work) methods to get the average fps, would be to step through each frame and divide the current frame number by the current time:
for (;;) {
    currentFrame = cap.get(CV_CAP_PROP_POS_FRAMES);
    currentTime = cap.get(CV_CAP_PROP_POS_MSEC);
    if (currentTime > 0) // avoid dividing by zero on the very first frame
        fps = currentFrame / (currentTime / 1000);
    // ... code ...
    // stop this loop when you're satisfied ...
}
You'd probably only want to do the latter if the other methods of directly finding the fps failed, and further, there were no better way to summarily get overall duration and frame count information.
The example above works on a file -- to adapt to a camera, you could use elapsed wallclock time since beginning of capture, instead of getting CV_CAP_PROP_POS_MSEC. Then the average fps for the session would be the elapsed wall clock time divided by the current frame number.
For live video from a webcam in Python, use cap.get(cv2.CAP_PROP_FPS).
I have a grabber which can get the images and show them on the screen with the following code:
while((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr+1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to use the pointer to the image data and display the frames with OpenCV, because I need to do processing on the pixels. My camera is a mono CCD camera with 8-bit pixel depth. I am new to OpenCV: is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc) and display it on the screen, or get the data from the iPtr pointer and let me use the image data?
Creating an IplImage from unsigned char* raw_data takes two important instructions: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera; RGB would be 3
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error: failed to allocate the image header!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
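For the C++ API, the same no-copy wrap can be done with a cv::Mat header around the grabber's pointer; a sketch assuming an 8-bit mono image whose width and height are known from the grabber configuration:

#include <opencv2/opencv.hpp>

// Wrap the grabber's raw buffer in a Mat header (no pixel copy).
// iPtr, width, and height come from the grabber, as in the question.
cv::Mat wrap_frame(unsigned char* iPtr, int width, int height)
{
    // CV_8UC1: 8-bit, single channel (mono CCD). The Mat does not own
    // the data, so iPtr must stay valid while the Mat is in use.
    return cv::Mat(height, width, CV_8UC1, iPtr);
}

// usage:
// cv::Mat img = wrap_frame(iPtr, width, height);
// cv::imshow("win1", img);
// cv::waitKey(10);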
That said, if your camera is supported by OpenCV, I don't understand why you are not doing it the simple way, like this:
cv::VideoCapture capture(0);
if(!capture.isOpened()) {
    // print error
    return -1;
}
cv::namedWindow("viewer");
cv::Mat frame;
while( true )
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);
    int c = cv::waitKey(10);
    if( (char)c == 'c' ) { break; } // press c to quit
}
I would recommend starting with the OpenCV docs and tutorials.