I want to use OpenCV to capture photos.
I use two labels for display: one for the video stream and the other for the captured photo.
The captured photo works well, but the video stream cannot be displayed and always causes a segmentation fault!
Actually, the two functions are almost exactly the same:
void CameraCap::on_CapBtn_clicked()
{
frame = cvQueryFrame(cam);
QImage image = QImage((const uchar*)frame->imageData, frame->width, \
frame->height, QImage::Format_RGB888).rgbSwapped();
ui->pictureLabel->setPixmap(QPixmap::fromImage(image));
}
void CameraCap::readFrame()
{
frame = cvQueryFrame(cam);
QImage image = QImage((const uchar*)frame->imageData, frame->width, \
frame->height, QImage::Format_RGB888).rgbSwapped();
ui->videoLabel->setPixmap(QPixmap::fromImage(image));
}
void CameraCap::on_openBtn_clicked()
{
cam = cvCreateCameraCapture(0);
timer->start(33);
}
timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(readFrame()));
That is to say, on_CapBtn_clicked() works well, but readFrame() always leads to a segmentation fault!
Why does this happen?
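For what it is worth, here is a more defensive version of readFrame() (only a sketch: the NULL checks and the explicit row stride are guesses at the failure mode, since cvQueryFrame() can return NULL and IplImage rows may be padded):
void CameraCap::readFrame()
{
    if (!cam)                           // camera was never opened
        return;
    frame = cvQueryFrame(cam);          // may return NULL if grabbing fails
    if (!frame || !frame->imageData)
        return;
    // Pass the row stride (widthStep) explicitly instead of letting
    // QImage assume width * 3 bytes per line.
    QImage image = QImage((const uchar*)frame->imageData, frame->width,
                          frame->height, frame->widthStep,
                          QImage::Format_RGB888).rgbSwapped();
    ui->videoLabel->setPixmap(QPixmap::fromImage(image));
}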
Related
I am using Qt to build a custom component for QML. I want this component to be able to play a WEBM video that contains an alpha channel. However, all of my attempts have resulted in the transparent pixels of the video being replaced with black ones. This is my current code:
MyClass::MyClass()
{
m_pixmap = new QPixmap(1920, 1080); // Create a canvas to draw on:
// Create something that can be drawn:
m_painter = new QPainter(m_pixmap);
m_rect = new QRect(0, 0, 1920, 1080);
// Create an area to present on:
m_label = new QLabel();
m_label->setPixmap(*m_pixmap);
m_label->show();
// Play video:
m_videoSink = new QVideoSink();
m_mediaPlayer = new QMediaPlayer();
m_mediaPlayer->setVideoSink(m_videoSink);
m_mediaPlayer->setSource(QUrl::fromLocalFile("path..."));
m_mediaPlayer->setLoops(QMediaPlayer::Infinite);
m_mediaPlayer->play();
// Add an event for when the video frame changes:
connect(m_videoSink, SIGNAL(videoFrameChanged(QVideoFrame)), this, SLOT(SetFrame(QVideoFrame)));
qDebug() << "Constructed";
}
void MyClass::SetFrame(QVideoFrame frame)
{
frame.paint(m_painter, *m_rect, m_options); // Stores the frame in m_pixmap
m_label->setPixmap(*m_pixmap);
}
In this example I attempt to use a QMediaPlayer to play the video, then a QVideoSink to extract the currently playing QVideoFrame and paint it to a QPixmap that is finally displayed in a QLabel.
I have also tried hooking the QMediaPlayer directly up to a QVideoWidget.
I know that my WEBM video works, as it displays as expected when imported into other programs.
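One variant worth noting (a sketch that bypasses QVideoFrame::paint() entirely, under the assumption that the alpha channel survives decoding and is only lost at the painting step; toImage() is the Qt 6 conversion helper):
void MyClass::SetFrame(QVideoFrame frame)
{
    // Convert the frame into an alpha-capable image format.
    QImage img = frame.toImage()
                      .convertToFormat(QImage::Format_ARGB32_Premultiplied);
    // QPixmap::fromImage() keeps the alpha channel intact.
    m_label->setPixmap(QPixmap::fromImage(img));
}
For the transparency to actually be visible, whatever sits behind the label must itself be translucent (for example, a window with Qt::WA_TranslucentBackground set); that part is an assumption about the surrounding setup.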
The following is the method used to display a frame on the frontend:
The frame is grabbed from a cv::VideoCapture object inside a QRunnable function:
void CameraFeedGrabber::run()
{
while (!mStopCapture) // keep grabbing until a stop is requested
{
cv::Mat mat;
bool isFrameGrabbed = cap->read(mat); // gets the next frame
if (isFrameGrabbed)
{
emit frameAvailable(mViewId, mat.clone());
}
}
}
A signal is emitted from the above class to an interfacing class:
void QmlInterface::processFrames(int cameraId, cv::Mat mat, bool sessionStatus)
{
if(GlobalSettings::mQmlVideoIntList.size() > 0)
{
cv::Mat matToSend = mat.clone();
cv::cvtColor(matToSend, matToSend, cv::COLOR_BGR2RGB);
QImage qFrame1 = QImage((uchar*)matToSend.data, matToSend.cols, matToSend.rows, matToSend.step, QImage::Format_RGB888);
GlobalSettings::mQmlVideoIntList[0]->setFrame(qFrame1);
}
if(sessionStatus)
mFrameAssigner->setFrame(mat);
}
The interfacing class calls a function inside the display class to update the frame:
QmlVideoInterface::QmlVideoInterface(QQuickItem *parent):
QQuickPaintedItem(parent)
{
}
void QmlVideoInterface::paint(QPainter *painter)
{
    QMutexLocker locker(&mutex); // guard mCurrentImage against a concurrent setFrame()
    painter->drawImage(0, 0, mCurrentImage.scaled(_imageWidth, _imageHeight, Qt::IgnoreAspectRatio, Qt::SmoothTransformation));
}
void QmlVideoInterface::setFrame(const QImage &image)
{
    QMutexLocker locker(&mutex);
    mCurrentImage = image.copy();
    update();
}
Using this method works fine when displaying smaller frames, but for larger frames, for instance 1920x1080, it uses around 20% CPU (i7 6700 @ 3.40 GHz). With video analytics also running in the backend, this is too costly for simply displaying the frame. Is there an alternative to this method?
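For comparison, here is a version of processFrames() with the redundant deep copies removed (a sketch: whether each clone()/copy() can really be dropped depends on the threading model, so treat the removals as assumptions to verify):
void QmlInterface::processFrames(int cameraId, cv::Mat mat, bool sessionStatus)
{
    if (GlobalSettings::mQmlVideoIntList.size() > 0)
    {
        // cvtColor writes into a fresh buffer, so the extra clone() is unnecessary.
        cv::Mat rgb;
        cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB);
        // One copy() detaches the QImage from the temporary Mat buffer;
        // setFrame() then no longer needs its own copy().
        QImage qFrame = QImage(rgb.data, rgb.cols, rgb.rows,
                               static_cast<int>(rgb.step),
                               QImage::Format_RGB888).copy();
        GlobalSettings::mQmlVideoIntList[0]->setFrame(qFrame);
    }
    if (sessionStatus)
        mFrameAssigner->setFrame(mat);
}
Independently of the copies, mCurrentImage.scaled(..., Qt::SmoothTransformation) runs on every repaint; scaling once when the frame arrives, or passing a target QRect to drawImage() and letting the painter scale, is usually cheaper.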
I am struggling with writing a video player which uses OpenCV to read frames from a video and display them in a QWidget.
This is my code:
// video capture is opened here
...
void VideoPlayer::run()
{
int sleep = 1000 / static_cast<unsigned long>(video_capture_.get(CV_CAP_PROP_FPS));
forever
{
QScopedPointer<cv::Mat> frame(new cv::Mat);
if(!video_capture_.read(*frame))
break;
cv::resize(*frame, *frame, cv::Size(640, 360), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(*frame, *frame, CV_BGR2RGB);
QImage image(frame->data, frame->cols, frame->rows, QImage::Format_RGB888);
emit signalFrame(image.copy()); // deep-copy: the QImage must not reference the Mat's buffer after this iteration
msleep(sleep); // wait before we read another frame
}
}
and on the QWidget side, I am just taking this image and drawing it in paintEvent.
It just looks to me like the sleep parameter doesn't play an important role here: however much I decrease it (to get more FPS), the video is just not smooth.
The only thing left seems to be giving up on this approach since it doesn't work, but I wanted to ask here one more time, just to be sure: am I doing something wrong here?
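One possible culprit (an assumption, since nothing above accounts for processing time) is that msleep(sleep) waits a full frame interval on top of however long reading, resizing and converting took, so the effective frame rate drifts and stutters. A sketch that subtracts the processing time from the wait:
void VideoPlayer::run()
{
    const int interval = 1000 / static_cast<int>(video_capture_.get(CV_CAP_PROP_FPS));
    QElapsedTimer timer;
    forever
    {
        timer.start();
        cv::Mat frame;
        if (!video_capture_.read(frame))
            break;
        cv::resize(frame, frame, cv::Size(640, 360), 0, 0, cv::INTER_CUBIC);
        cv::cvtColor(frame, frame, CV_BGR2RGB);
        QImage image(frame.data, frame.cols, frame.rows, QImage::Format_RGB888);
        emit signalFrame(image.copy());
        // Sleep only for whatever is left of this frame's time slot.
        const qint64 remaining = interval - timer.elapsed();
        if (remaining > 0)
            msleep(static_cast<unsigned long>(remaining));
    }
}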
In my program, I'm reading from a webcam or a video file, via OpenCV and displaying it via Qt.
I get the fps from the video properties and set the timer accordingly.
I have no problem reading the videos, and the fps calibration is good (since the webcam reports 0 fps, I set it to 30).
But when I record the images, I set the output video's fps to the same value as the original video; yet when I play the recording in VLC or even Windows Media Player, the video is accelerated.
The most curious thing is that when I play the recorded video in my program, the fps is good and the video isn't accelerated.
Here's how I do it :
Constructor()
{
// Initializing the video resources
cv::VideoCapture capture;
cv::VideoWriter writer;
cv::Mat frame;
int fps;
if (webcam)
{
capture.open(0);
fps = 30;
}
else
{
capture.open(inputFilePath);
fps = this->capture.get(CV_CAP_PROP_FPS);
}
// .avi
writer = cv::VideoWriter(outputFilePath, CV_FOURCC('M', 'J', 'P', 'G'), fps, frame.size());
// Defining the refresh timer;
timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(updateFeed()));
this->timer->start(1000 / fps);
}
updateFeed()
{
// ... display the image in a QLabel ... at a reasonable speed.
writer.write(frame);
}
Now, is there anything I am doing wrong?
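Since the webcam reports 0 fps, the 30 written into the file header is a guess; if the timer plus capture actually delivers fewer frames per second than that, every player honoring the header will play the recording accelerated. One way to check (a sketch: the warm-up length of 60 frames is an arbitrary choice) is to measure the effective rate before creating the writer:
// Measure the effective capture rate over a short warm-up.
const int probeFrames = 60;
cv::Mat probeFrame;
QElapsedTimer clock;
clock.start();
for (int i = 0; i < probeFrames; ++i)
    capture.read(probeFrame);
double measuredFps = probeFrames * 1000.0 / clock.elapsed();
writer = cv::VideoWriter(outputFilePath, CV_FOURCC('M', 'J', 'P', 'G'),
                         measuredFps, probeFrame.size());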
Thanks
I am using Qt Creator to implement an application that reads a video, and by clicking a button I will save the frame that is being shown. Then I will process that frame with OpenCV.
Having a video displayed with QMediaPlayer, how can I extract a frame from the video? I should then be able to convert that frame to an OpenCV Mat image.
Thanks
QMediaPlayer *player = new QMediaPlayer();
QVideoProbe *probe = new QVideoProbe;
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
probe->setSource(player); // Returns true, hopefully.
processFrame slot:
void processFrame(QVideoFrame const &) {
    if (!isButtonClicked) return;
    isButtonClicked = false;
    // ... process the frame ...
}
QVideoProbe reference
QVideoFrame reference
You can use QVideoFrame::bits() to process the image with OpenCV.
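A minimal sketch of that idea (assuming a Qt 5 setup to match QVideoProbe, and assuming the probed frames arrive in a 32-bit format such as Format_RGB32; other pixel formats would need a different conversion):
#include <QAbstractVideoBuffer>
#include <QVideoFrame>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat frameToMat(const QVideoFrame &input)
{
    QVideoFrame frame(input);                  // shallow copy so we can map it
    if (!frame.map(QAbstractVideoBuffer::ReadOnly))
        return cv::Mat();
    // Wrap the mapped bytes without copying them.
    cv::Mat wrapped(frame.height(), frame.width(), CV_8UC4,
                    (void *)frame.bits(), frame.bytesPerLine());
    // RGB32 data is laid out as BGRA bytes on little-endian hardware,
    // so BGRA2BGR yields a standard OpenCV BGR image.
    cv::Mat bgr;
    cv::cvtColor(wrapped, bgr, cv::COLOR_BGRA2BGR);
    frame.unmap();                             // safe: bgr owns its own buffer
    return bgr;
}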