Qt video frames from camera corrupted - C++

EDIT: The first answer solved my problem. Apart from that, I had to set the ASI_BANDWIDTH_OVERLOAD value to 0.

I am programming a Linux application in C++/Qt 5.7 to track stars with my telescope. I use a camera (ZWO ASI 120MM, with the accompanying SDK v0.3) and grab its frames in a while loop in a separate thread. These are then emitted to a QOpenGLWidget to be displayed. I have the following problem: when the mouse is inside the QOpenGLWidget area, the displayed frames get corrupted, especially when the mouse is moved. The problem is worst with an exposure time of 50 ms and disappears for lower exposure times. When I feed the pipeline with alternating images from disk, the problem disappears. I assume this is some sort of thread-synchronization problem between the camera thread and the main thread, but I couldn't solve it. The same problem appears in the openastro software. Here are parts of the code:
MainWindow:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) {
    mutex = new QMutex;
    camThread = new QThread(this);
    camera = new Camera(nullptr, mutex);
    display = new GLViewer(this, mutex);
    setCentralWidget(display);
    cameraHandle = camera->getHandle();
    connect(camThread, SIGNAL(started()), camera, SLOT(connect()));
    connect(camera, SIGNAL(exposureCompleted(const QImage)), display, SLOT(showImage(const QImage)),
            Qt::BlockingQueuedConnection);
    camera->moveToThread(camThread);
    camThread->start();
}
The routine that grabs the frames:
void Camera::captureFrame() {
    while (cameraIsReady && capturing) {
        mutex->lock();
        error = ASIGetVideoData(camID, buffer, bufferSize, int(exposure*2*1e-3) + 500);
        if (error == ASI_SUCCESS) {
            frame = QImage(buffer, width, height, QImage::Format_Indexed8)
                        .convertToFormat(QImage::Format_RGB32); // Indexed8 is for 8 bit
            mutex->unlock();
            emit exposureCompleted(frame);
        }
        else {
            cameraStream << "timeout" << endl;
            mutex->unlock();
        }
    }
}
The slot that receives the image:
bool GLViewer::showImage(const QImage image)
{
    mutex->lock();
    mOrigImage = image;
    mRenderQtImg = mOrigImage;
    recalculatePosition();
    updateScene();
    mutex->unlock();
    return true;
}
And the GL function that sets the image:
void GLViewer::renderImage()
{
    makeCurrent();
    glClear(GL_COLOR_BUFFER_BIT);
    if (!mRenderQtImg.isNull())
    {
        glLoadIdentity();
        glPushMatrix();
        {
            if (mResizedImg.width() <= 0)
            {
                if (mRenderWidth == mRenderQtImg.width() && mRenderHeight == mRenderQtImg.height())
                    mResizedImg = mRenderQtImg;
                else
                    mResizedImg = mRenderQtImg.scaled(QSize(mRenderWidth, mRenderHeight),
                                                      Qt::IgnoreAspectRatio,
                                                      Qt::SmoothTransformation);
            }
            glRasterPos2i(mRenderPosX, mRenderPosY);
            glPixelZoom(1, -1);
            glDrawPixels(mResizedImg.width(), mResizedImg.height(), GL_RGBA, GL_UNSIGNED_BYTE, mResizedImg.bits());
        }
        glPopMatrix();
        glFlush();
    }
}
I stole this code from here: https://github.com/Myzhar/QtOpenCVViewerGl
And lastly, here is how my problem looks:
[screenshot: corrupted camera frame]
This looks awful.

The image producer should produce new images and emit them through a signal. Since QImage is implicitly shared, it will automatically recycle frames to avoid new allocations. Only when the producer thread outruns the display thread will image copies be made.
Instead of using an explicit loop in the Camera object, you can run the capture using a zero-duration timer, and have the event loop invoke it. That way the camera object can process events, e.g. timers, cross-thread slot invocations, etc.
There's no need for explicit mutexes, nor for a blocking connection: Qt's event loop provides cross-thread synchronization. Finally, the QtOpenCVViewerGl project performs image scaling on the CPU and is really an example of how not to do it. You can get image scaling for free by drawing the image on a quad; even though that's also an outdated technique from the fixed-pipeline days, it works just fine.
The ASICamera class would look roughly as follows:
// https://github.com/KubaO/stackoverflown/tree/master/questions/asi-astro-cam-39968889
#include <QtOpenGL>
#include <QOpenGLFunctions_2_0>
#include "ASICamera2.h"
class ASICamera : public QObject {
    Q_OBJECT
    ASI_ERROR_CODE m_error;
    ASI_CAMERA_INFO m_info;
    QImage m_frame{640, 480, QImage::Format_RGB888};
    QTimer m_timer{this};
    int m_exposure_ms = 0;
    inline int id() const { return m_info.CameraID; }
    void capture() {
        m_error = ASIGetVideoData(id(), m_frame.bits(), m_frame.byteCount(),
                                  m_exposure_ms*2 + 500);
        if (m_error == ASI_SUCCESS)
            emit newFrame(m_frame);
        else
            qDebug() << "capture error" << m_error;
    }
public:
    explicit ASICamera(QObject * parent = nullptr) : QObject{parent} {
        connect(&m_timer, &QTimer::timeout, this, &ASICamera::capture);
    }
    ASI_ERROR_CODE error() const { return m_error; }
    bool open(int index) {
        m_error = ASIGetCameraProperty(&m_info, index);
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASIOpenCamera(id());
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASIInitCamera(id());
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASISetROIFormat(id(), m_frame.width(), m_frame.height(), 1, ASI_IMG_RGB24);
        if (m_error != ASI_SUCCESS)
            return false;
        return true;
    }
    bool close() {
        m_error = ASICloseCamera(id());
        return m_error == ASI_SUCCESS;
    }
    Q_SIGNAL void newFrame(const QImage &);
    QImage frame() const { return m_frame; }
    Q_SLOT bool start() {
        m_error = ASIStartVideoCapture(id());
        if (m_error == ASI_SUCCESS)
            m_timer.start(0); // zero-duration timer: capture as fast as the event loop allows
        return m_error == ASI_SUCCESS;
    }
    Q_SLOT bool stop() {
        m_timer.stop(); // stop polling before stopping the camera
        m_error = ASIStopVideoCapture(id());
        return m_error == ASI_SUCCESS;
    }
    ~ASICamera() {
        stop();
        close();
    }
};
Since I'm using a dummy ASI API implementation, the above is sufficient. Code for a real ASI camera would need to set appropriate controls, such as exposure.
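For a real camera, exposure and the bandwidth setting mentioned in the question's edit could be configured through ASISetControlValue from the SDK header. The following is an untested sketch of such an addition to ASICamera; the control names come from ASICamera2.h, but verify the units and the available controls against your SDK version (ASI exposure values are in microseconds):
// Hypothetical helper for ASICamera, to be called after open():
bool setControls(int exposure_ms) {
    m_exposure_ms = exposure_ms;
    // Exposure is given in microseconds; ASI_FALSE disables auto-exposure.
    m_error = ASISetControlValue(id(), ASI_EXPOSURE, exposure_ms * 1000L, ASI_FALSE);
    if (m_error != ASI_SUCCESS)
        return false;
    // The asker reported having to set the bandwidth overload value to 0.
    m_error = ASISetControlValue(id(), ASI_BANDWIDTHOVERLOAD, 0, ASI_FALSE);
    return m_error == ASI_SUCCESS;
}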
The OpenGL viewer is also fairly simple:
class GLViewer : public QOpenGLWidget, protected QOpenGLFunctions_2_0 {
    Q_OBJECT
    QImage m_image;
    void ck() {
        for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
            qDebug() << "gl error" << err;
    }
    void initializeGL() override {
        initializeOpenGLFunctions();
        glClearColor(0.2f, 0.2f, 0.25f, 1.f);
    }
    void resizeGL(int width, int height) override {
        glViewport(0, 0, width, height);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, width, height, 0, 0, 1);
        glMatrixMode(GL_MODELVIEW);
        update();
    }
    // From http://stackoverflow.com/a/8774580/1329652
    void paintGL() override {
        auto scaled = m_image.size().scaled(this->size(), Qt::KeepAspectRatio);
        GLuint texID;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glGenTextures(1, &texID);
        glEnable(GL_TEXTURE_RECTANGLE);
        glBindTexture(GL_TEXTURE_RECTANGLE, texID);
        glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB, m_image.width(), m_image.height(), 0,
                     GL_RGB, GL_UNSIGNED_BYTE, m_image.constBits());
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2f(0, 0);
        glTexCoord2f(m_image.width(), 0);
        glVertex2f(scaled.width(), 0);
        glTexCoord2f(m_image.width(), m_image.height());
        glVertex2f(scaled.width(), scaled.height());
        glTexCoord2f(0, m_image.height());
        glVertex2f(0, scaled.height());
        glEnd();
        glDisable(GL_TEXTURE_RECTANGLE);
        glDeleteTextures(1, &texID);
        ck();
    }
public:
    GLViewer(QWidget * parent = nullptr) : QOpenGLWidget{parent} {}
    void setImage(const QImage & image) {
        Q_ASSERT(image.format() == QImage::Format_RGB888);
        m_image = image;
        update();
    }
};
Finally, we hook the camera and the viewer together. Since the camera initialization may take some time, we perform it in the camera's thread.
The UI should emit signals that control the camera, e.g. to open it, start/stop acquisition, etc., and have slots that provide feedback from the camera (e.g. state changes). A free-standing function would take the two objects and hook them together, using functors as appropriate to adapt the UI to a particular camera. If adapter code would be extensive, you'd use a helper QObject for that, but usually a function should suffice (as it does below).
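For illustration, such a hookup function might look like the sketch below; ControlPanel and its openClicked/startClicked/stopClicked signals are hypothetical stand-ins for a real UI class (the main() further down simply performs the hookup inline):
// Sketch only: ControlPanel is an assumed UI class, not part of this project.
void hookup(ControlPanel *panel, ASICamera *camera, GLViewer *viewer) {
    // With the camera living in its own thread, these become queued connections.
    QObject::connect(panel, &ControlPanel::openClicked, camera, [camera]{ camera->open(0); });
    QObject::connect(panel, &ControlPanel::startClicked, camera, &ASICamera::start);
    QObject::connect(panel, &ControlPanel::stopClicked, camera, &ASICamera::stop);
    // Frames flow back to the GUI thread, also via an automatically queued connection.
    QObject::connect(camera, &ASICamera::newFrame, viewer, &GLViewer::setImage);
}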
class Thread : public QThread {
public:
    ~Thread() { quit(); wait(); }
};

// See http://stackoverflow.com/q/21646467/1329652
template <typename F>
static void postToThread(F && fun, QObject * obj = qApp) {
    QObject src;
    QObject::connect(&src, &QObject::destroyed, obj, std::forward<F>(fun),
                     Qt::QueuedConnection);
}
int main(int argc, char ** argv) {
    QApplication app{argc, argv};
    GLViewer viewer;
    viewer.setMinimumSize(200, 200);
    ASICamera camera;
    Thread thread;
    QObject::connect(&camera, &ASICamera::newFrame, &viewer, &GLViewer::setImage);
    QObject::connect(&thread, &QThread::destroyed, [&]{ camera.moveToThread(app.thread()); });
    camera.moveToThread(&thread);
    thread.start();
    postToThread([&]{
        camera.open(0);
        camera.start();
    }, &camera);
    viewer.show();
    return app.exec();
}
#include "main.moc"
The GitHub project includes a very basic ASI camera API test harness and is complete: you can run it and see the test video rendered in real time.

Related

QImage without using scaled() makes program crash when rendering on QML

I want to render images from the webcam onto a QML view, and I have to convert the frames from OpenCV's format to QImage. I then implement a QQuickPaintedItem singleton class to render the QImage.
If I don't use QImage::scaled() in my code when I start grabbing and invoking rendering, my program crashes, and I don't know why.
image = cv::Mat(stImageInfo.nHeight, stImageInfo.nWidth, CV_8UC3, m_pBufForSaveImage);
//cv::Size dsize = cv::Size(round(0.33 * stImageInfo.nWidth), round(0.27 * stImageInfo.nHeight));
//cv::Mat shrink;
//resize(image, shrink, dsize, 0, 0, CV_INTER_AREA);
QImage Qimag = MatImageToQt(image);
Qimag = Qimag.scaled(image.cols*0.33, image.rows*0.27, Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
MyImage *myimg = MyImage::instance();
myimg->setM_Image(Qimag);
// render QImage to QML
#include "myimage.h"

MyImage::MyImage(QQuickPaintedItem *parent)
{
    Q_UNUSED(parent)
}

MyImage* MyImage::myImage = new MyImage;

MyImage *MyImage::instance()
{
    return myImage;
}
void MyImage::paint(QPainter *painter)
{
    QRectF target(0.0, 0.0, 800.0, 550.0); // width*0.33, height*0.27
    QRectF source(0.0, 0.0, 800.0, 550.0);
    painter->setRenderHint(QPainter::Antialiasing, true);
    painter->drawImage(target, this->m_Image, source);
}

const QImage &MyImage::getM_Image() const
{
    return m_Image;
}

void MyImage::setM_Image(const QImage &mimage)
{
    if (mimage != m_Image) {
        m_Image = mimage;
        emit m_ImageChanged();
    }
}
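A plausible cause, offered as a guess rather than a confirmed diagnosis: a QImage constructed over an external buffer (as MatImageToQt likely does with the cv::Mat's data) does not copy the pixels, so the QImage dangles once the Mat or the underlying capture buffer is released; QImage::scaled() returns a deep copy, which would explain why the crash disappears when it is used. If so, an explicit deep copy decouples the image from the buffer:
// Sketch, assuming MatImageToQt wraps the cv::Mat's buffer without copying it.
QImage Qimag = MatImageToQt(image).copy(); // deep copy: the QImage now owns its pixels
MyImage::instance()->setM_Image(Qimag);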

Display cv::Mat as QVideoFrame in a QML VideoOutput

I have an OpenCV backend which retrieves the video frame from a camera device through cv::VideoCapture, does some processing and then passes the cv::Mat frame to a Qt5 application for display in a QML VideoOutput.
The problem is the frames drawn are empty/white.
The CameraService class receives a cv::Mat from a backend object which runs on its own thread, through a Qt::QueuedConnection signal. I then convert it to a QImage, which I use to initialize a QVideoFrame, and pass that to a QAbstractVideoSurface received from a QML VideoOutput, after setting a pixel format on it.
I have checked whether the cv::Mat has valid content before conversion to QVideoFrame, so this is not the case.
Or am I doing it completely wrong and should instead draw an image?
Relevant code:
CameraService.cpp
CameraService::CameraService(Video::Backend *backend)
    : QObject(),
      surface(nullptr),
      isFormatSet(false) {
    this->backend = backend;
    connect(
        backend, &Video::Backend::onFrameReady,
        this, &CameraService::onVideoFrameReady,
        Qt::QueuedConnection);
}

CameraService::~CameraService() {
    backend->deleteLater();
}

QAbstractVideoSurface *CameraService::getVideoSurface() const {
    return surface;
}

void CameraService::setVideoSurface(QAbstractVideoSurface *surface) {
    if (!this->surface && surface)
        backend->start();
    if (this->surface && this->surface != surface && this->surface->isActive())
        this->surface->stop();
    this->surface = surface;
    if (this->surface && format.isValid()) {
        format = this->surface->nearestFormat(format);
        this->surface->start(format);
    }
}

void CameraService::setFormat(
    int width,
    int height,
    QVideoFrame::PixelFormat frameFormat
){
    QSize size(width, height);
    QVideoSurfaceFormat format(size, frameFormat);
    this->format = format;
    if (surface) {
        if (surface->isActive())
            surface->stop();
        this->format = surface->nearestFormat(this->format);
        surface->start(this->format);
    }
}

void CameraService::onVideoFrameReady(cv::Mat currentFrame) {
    if (!surface || currentFrame.empty())
        return;
    cv::Mat continuousFrame;
    if (!currentFrame.isContinuous())
        continuousFrame = currentFrame.clone();
    else
        continuousFrame = currentFrame;
    if (!isFormatSet) {
        setFormat(
            continuousFrame.cols,
            continuousFrame.rows,
            QVideoFrame::PixelFormat::Format_BGR32);
        isFormatSet = true;
    }
    frame = QImage(
        (uchar *)continuousFrame.data,
        continuousFrame.cols,
        continuousFrame.rows,
        continuousFrame.step,
        QVideoFrame::imageFormatFromPixelFormat(
            QVideoFrame::PixelFormat::Format_BGR32));
    surface->present(QVideoFrame(frame));
}
QML object:
VideoOutput {
    objectName: "videoOutput";
    anchors.fill: parent;
    fillMode: VideoOutput.PreserveAspectCrop;
    source: CameraService;
}
The CameraService object is made available as a singleton to QML using this statement:
qmlRegisterSingletonInstance<Application::CameraService>("Application.CameraService", 1, 0, "CameraService", service);
Analyzing the code, I noticed that the conversion is not supported (I recommend you check whether the format is valid or not). For this I have made some changes:
#ifndef CAMERASERVICE_H
#define CAMERASERVICE_H

#include "backend.h"

#include <QObject>
#include <QPointer>
#include <QVideoFrame>
#include <QVideoSurfaceFormat>
#include <opencv2/core/mat.hpp>

class QAbstractVideoSurface;

class CameraService : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QAbstractVideoSurface* videoSurface READ videoSurface WRITE setVideoSurface NOTIFY surfaceChanged)
public:
    explicit CameraService(Backend *backend, QObject *parent = nullptr);
    QAbstractVideoSurface* videoSurface() const;
public Q_SLOTS:
    void setVideoSurface(QAbstractVideoSurface* surface);
Q_SIGNALS:
    void surfaceChanged(QAbstractVideoSurface* surface);
private Q_SLOTS:
    void onVideoFrameReady(cv::Mat currentFrame);
private:
    void setFormat(int width, int height, QVideoFrame::PixelFormat frameFormat);
    QPointer<QAbstractVideoSurface> m_surface;
    QScopedPointer<Backend> m_backend;
    QVideoSurfaceFormat m_format;
    bool m_isFormatSet;
    QImage m_image;
};
#endif // CAMERASERVICE_H
#include "backend.h"
#include "cameraservice.h"
#include <QAbstractVideoSurface>
#include <iostream>
CameraService::CameraService(Backend *backend, QObject *parent)
: QObject(parent), m_backend(backend), m_isFormatSet(false)
{
connect(m_backend.data(), &Backend::frameReady, this, &CameraService::onVideoFrameReady);
}
QAbstractVideoSurface *CameraService::videoSurface() const
{
return m_surface;
}
void CameraService::setVideoSurface(QAbstractVideoSurface *surface){
if (m_surface == surface)
return;
if(m_surface && m_surface != surface && m_surface->isActive())
m_surface->stop();
m_surface = surface;
Q_EMIT surfaceChanged(m_surface);
m_backend->start();
if (m_surface && m_format.isValid()) {
m_format = m_surface->nearestFormat(m_format);
m_surface->start(m_format);
}
}
void CameraService::setFormat(
int width,
int height,
QVideoFrame::PixelFormat frameFormat
){
QSize size(width, height);
QVideoSurfaceFormat format(size, frameFormat);
m_format = format;
if (m_surface) {
if (m_surface->isActive())
m_surface->stop();
m_format = m_surface->nearestFormat(m_format);
m_surface->start(m_format);
}
}
void CameraService::onVideoFrameReady(cv::Mat currentFrame){
if (!m_surface || currentFrame.empty())
return;
cv::Mat continuousFrame;
if (!currentFrame.isContinuous())
continuousFrame = currentFrame.clone();
else
continuousFrame = currentFrame;
if (!m_isFormatSet) {
setFormat(continuousFrame.cols,
continuousFrame.rows,
QVideoFrame::Format_RGB32);
m_isFormatSet = true;
}
m_image = QImage(continuousFrame.data,
continuousFrame.cols,
continuousFrame.rows,
continuousFrame.step,
QImage::Format_RGB888);
m_image = m_image.rgbSwapped();
m_image.convertTo(QVideoFrame::imageFormatFromPixelFormat(QVideoFrame::Format_RGB32));
m_surface->present(QVideoFrame(m_image));
}
You can find the complete example here.

Synchronizing OpenGL with RtAudio (or PortAudio)

I need to synchronize some OpenGL drawing with a metronome. The metronome is built with libpd and played with RtAudio.
Both things work well separately, but I need to move an object (a triangle) in time with the pulse of the metronome. The application must play the clicks too; both actions must happen in parallel (playing and drawing). I should add MIDI recording too. My application is in C++.
I tried to run this with one thread, but it doesn't work.
I tried to follow this explanation: How to make my metronome play at the same time as recording in my program?
The GUI library is wxWidgets. The threads are done with Poco::Runnable in this way:
class MyThread : public Poco::Runnable {
public:
    MyThread(BasicGLPane *pane, std::shared_ptr<SoundManager> man);
    virtual void run();
private:
    BasicGLPane *_pane;
    std::shared_ptr<SoundManager> _man;
};

MyThread::MyThread(BasicGLPane *pane, std::shared_ptr<SoundManager> man) {
    _pane = pane;
    _man = man;
}

void MyThread::run() {
    _man->play();
    _pane->startAnimation();
}
BasicGLPane is a wxGLCanvas. The play function of the SoundManager class is the following:
void SoundManager::play() {
    // Init pd
    if (!lpd->init(0, 2, sampleRate)) {
        std::cerr << "Could not init pd" << std::endl;
        exit(1);
    }
    // Receive messages from pd
    lpd->setReceiver(object.get());
    lpd->subscribe("metro-bang");
    lpd->subscribe("switch");
    // send DSP 1 message to pd
    lpd->computeAudio(true);
    // load the patch
    open_patch("metro-main.pd");
    std::cout << patch << std::endl;
    // Use the RtAudio API to connect to the default audio device.
    if (audio->getDeviceCount() == 0) {
        std::cout << "There are no available sound devices." << std::endl;
        exit(1);
    }
    RtAudio::StreamParameters parameters;
    parameters.deviceId = audio->getDefaultOutputDevice();
    parameters.nChannels = 2;
    RtAudio::StreamOptions options;
    options.streamName = "Pd Metronome";
    options.flags = RTAUDIO_SCHEDULE_REALTIME;
    if (audio->getCurrentApi() != RtAudio::MACOSX_CORE) {
        options.flags |= RTAUDIO_MINIMIZE_LATENCY; // CoreAudio doesn't seem to like this
    }
    try {
        if (audio->isStreamOpen()) {
            audio->closeStream();
        }
        else {
            audio->openStream(&parameters, NULL, RTAUDIO_FLOAT32, sampleRate, &bufferFrames, &audioCallback, lpd.get(), &options);
            audio->startStream();
        }
    }
    catch (RtAudioError& e) {
        std::cerr << e.getMessage() << std::endl;
        exit(1);
    }
}
The OpenGL drawing methods are the following:
void BasicGLPane::startAnimation() {
    std::cout << "Start Animation" << std::endl;
    triangle_1(p1, p2, p3);
    Refresh();
}

void BasicGLPane::triangle_1(std::shared_ptr<vertex2f> _p1, std::shared_ptr<vertex2f> _p2, std::shared_ptr<vertex2f> _p3) {
    CGLContextObj ctx = CGLGetCurrentContext(); // enable multithreading (Apple only)
    CGLError err = CGLEnable(ctx, kCGLCEMPEngine);
    if (err != kCGLNoError) {
        glEnable(GL_MULTISAMPLE);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, getWidth(), getHeight(), 0, -1, 1);
        glShadeModel(GL_SMOOTH);
        glBegin(GL_POLYGON); // Drawing using triangles
        glColor3f(157.0/255.0, 44.0/255.0, 44.0/255.0);
        glVertex3f(p1->x, p1->y, 0.0f); // Top left
        glVertex3f(p2->x, p2->y, 0.0f); // Top right
        glVertex3f(p3->x, p3->y, 0.0f); // Bottom
        glEnd();
        glMatrixMode(GL_MODELVIEW);
        glEnable(GL_BLEND);
        glLoadIdentity();
        glDisable(GL_MULTISAMPLE);
    }
}
And the thread is started with the following function:
void BasicGLPane::startThread() {
    while (_object->getCounter() < 10) { // this is only to test the functionality
        thread.start(work);
    }
    thread.join();
    manager->stop();
}
And after that, this function is called in render:
void BasicGLPane::render( wxPaintEvent& evt ) {
//some code here, not important....
startThread();
SwapBuffers();
}
Maybe I'm going to change this object; that is not important now. My problem is the synchronization. I think RtAudio is causing problems, because I get an EXC_BAD_ACCESS in getDeviceCount() or in any other function from RtAudio. That occurs only in the thread context.
Would it be better to do this with PortAudio? It would be nice to know what I'm doing wrong, or if there is another way to solve this problem.
I found a solution. The problem was in the interaction between the wxWidgets main loop and OpenGL. The solution is to create an idle event in the following way:
// on wxApp
void MyApp::activateRenderLoop(bool on) {
    if (on && !render_loop_on) {
        Connect(wxID_ANY, wxEVT_IDLE, wxIdleEventHandler(MyApp::onIdle));
        render_loop_on = true;
    }
    else if (!on && render_loop_on) {
        Disconnect(wxEVT_IDLE, wxIdleEventHandler(MyApp::onIdle));
        render_loop_on = false;
    }
}

void MyApp::onIdle(wxIdleEvent &evt) {
    activateRenderLoop(glPane->render_on);
    if (render_loop_on) {
        std::cout << "MyApp on Idle, render_loop_on" << std::endl;
        glPane->paint_now();
        evt.RequestMore();
    }
}

// on the event table:
EVT_PAINT(BasicGLPane::paint_rt)

void BasicGLPane::rightClick(wxMouseEvent& event) {
    render_on = true;
    manager->init();
    SLEEP(2000);
    manager->play();
    wxGetApp().activateRenderLoop(true);
}

void BasicGLPane::paint_rt(wxPaintEvent &evt) {
    wxPaintDC dc(this);
    render_rt(dc);
}

void BasicGLPane::paint_now() {
    wxClientDC dc(this);
    std::cout << "paint now()" << std::endl;
    render_rt(dc);
}

void BasicGLPane::render_rt(wxDC &dc) {
    wxGLCanvas::SetCurrent(*m_context);
    if (_object->getCounter() >= 10) {
        wxGetApp().activateRenderLoop(false);
        manager->stop();
        render_on = false;
    }
    else {
        ctx = CGLGetCurrentContext();         // OS X only
        err = CGLEnable(ctx, kCGLCEMPEngine); // OS X only
        std::cout << "render_rt CGLError: " << err << std::endl;
        if (err == 0) {
            glTranslatef(p3->x, p3->y, 0);
            Refresh(false);
        }
    }
}
The synchronization works perfectly now.

QMutex with QtConcurrent::run not working as expected?

I am making a Qt GUI application that uses a custom QLabel class (named ImageInteraction) to show images from a streaming camera while also allowing mouse interaction on the image. As the GUI has other functionality, the customized QLabel class does the job of extracting the image from the camera and updating the shown image through a while loop in a function which runs in another thread. The code for that is below:
void ImageInteraction::startVideo()
{
    if (!capture.open(streamUrl))
    {
        QMessageBox::warning(this, "Error", "No input device available!");
    }
    else
    {
        QFuture<void> multiprocess = QtConcurrent::run(this, &ImageInteraction::loadVideo);
    }
}

void ImageInteraction::loadVideo()
{
    while (loopContinue) {
        cv::Mat frame;
        capture.read(frame);
        if (!frame.empty())
        {
            cv::cvtColor(frame, frame, CV_BGR2RGBA);
            cv::resize(frame, frame, cv::Size(this->width(), this->height()), 0, 0);
            QImage image(frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGBA8888);
            this->setPixmap(QPixmap::fromImage(image));
        }
    }
    capture.release();
}
Here capture is of type cv::VideoCapture and loopContinue is a boolean which is initially set to true. There is a closeEvent() function that invokes the method for stopping the capture of images from the camera:
void MainWindow::closeEvent(QCloseEvent *event)
{
    liveVideo->stopVideoThread(); // liveVideo is a pointer to an ImageInteraction object
    event->accept();
}
where stopVideoThread simply sets the boolean flag loopContinue to false and has the following simple code:
void ImageInteraction::stopVideoThread()
{
    mutex.lock(); // QMutex mutex;
    loopContinue = false;
    mutex.unlock();
}
In my understanding, the while loop in the loadVideo method should stop once the stopVideoThread method is invoked and loopContinue is set to false. But in reality, when the close button is pressed, it apparently doesn't stop the while loop, and the application crashes with this message:
The inferior stopped because it received a signal from the operating system.
Signal name : SIGSEGV
Signal meaning : Segmentation fault
Am I using the QtConcurrent::run method and the QMutex object erroneously? Could you identify what the problem is? FYI, the OS is Ubuntu 14.04 and the IDE is Qt Creator.
Thanks!
The following is just an idea of the improvements mentioned in the above comments.
class ImageInteraction
{
public:
    ~ImageInteraction()
    {
        multiprocess_.waitForFinished();
    }
    void startVideo()
    {
        if (!capture.open(streamUrl))
        {
            QMessageBox::warning(this, "Error", "No input device available!");
        }
        else
        {
            multiprocess_ = QtConcurrent::run(this, &ImageInteraction::loadVideo);
        }
    }
    void loadVideo()
    {
        while (loopContinue_)
        {
            cv::Mat frame;
            capture.read(frame);
            if (!frame.empty())
            {
                cv::cvtColor(frame, frame, CV_BGR2RGBA);
                cv::resize(frame, frame, cv::Size(this->width(), this->height()), 0, 0);
                QImage image(frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGBA8888);
                this->setPixmap(QPixmap::fromImage(image));
            }
        }
        capture.release();
    }
    void stopVideoThread()
    {
        loopContinue_ = false;
        //multiprocess_.waitForFinished(); // you can call this here if you want to make sure that the thread has finished before returning
    }
private:
    QFuture<void> multiprocess_;
    std::atomic<bool> loopContinue_;
};
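One further improvement worth sketching, since QWidget methods such as setPixmap() are only safe to call from the GUI thread: have the worker emit the converted image and let an automatically queued signal-slot connection deliver it, as in the first answer above. The frameReady signal is an assumed addition for illustration, not part of the original code:
// Sketch: deliver frames to the GUI thread instead of calling setPixmap() from the worker.
class ImageInteraction : public QLabel {
    Q_OBJECT
public:
    explicit ImageInteraction(QWidget *parent = nullptr) : QLabel(parent) {
        // Emitted from the worker thread, delivered on the GUI thread.
        connect(this, &ImageInteraction::frameReady, this, &ImageInteraction::showFrame);
    }
    Q_SIGNAL void frameReady(const QImage &image);
private:
    Q_SLOT void showFrame(const QImage &image) { setPixmap(QPixmap::fromImage(image)); }
    // In loadVideo(), replace this->setPixmap(...) with:
    //     emit frameReady(image.copy()); // copy() detaches from the local cv::Mat's buffer
};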

Using Qt's QTimer function to make animations in OpenGL

How exactly do you use a QTimer to set off an animation in OpenGL?
I want to draw a simple circle and change the radius every 30 milliseconds, so it appears to grow and shrink smoothly.
Here's what I've come up with so far:
Header File
#include <QGLWidget>
#include <QTimer>
class GLWidget : public QGLWidget
{
Q_OBJECT
public:
explicit GLWidget(QWidget *parent = 0);
protected:
void initializeGL();
void paintGL();
void resizeGL(int width, int height);
void timerEvent(QTimerEvent *event);
private:
QBasicTimer timer;
private slots:
void animate();
};
CPP File
int circRad = 0;

GLWidget::GLWidget(QWidget *parent) :
    QGLWidget(parent)
{
    QTimer *aTimer = new QTimer;
    connect(aTimer, SIGNAL(timeout(QPrivateSignal)), SLOT(animate()));
    aTimer->start(30);
}

void GLWidget::initializeGL()
{
    glClearColor(1, 1, 1, 0);
}

void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0, 0, 1);
    const float DEG2RAD = 3.14159/180;
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i <= 360; i++)
    {
        float degInRad = i*DEG2RAD;
        glVertex2f(cos(degInRad)*circRad, sin(degInRad)*circRad);
    }
    glEnd();
}

void GLWidget::resizeGL(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void GLWidget::animate()
{
    if (circRad < 6)
    {
        circRad = circRad + 1;
    }
    else
    {
        circRad = circRad - 1;
    }
    update();
}
This (surprise, surprise) does nothing. Am I supposed to handle a QTimerEvent? If so, does that mean I remove the animate slot and replace it with the QTimerEvent? Do I put the code from animate() into the QTimerEvent?
Typically you would only use a timer to trigger repaints, e.g. to limit the frame rate to 60 FPS. In the paint method, you would then check the current time, and do what you need to do to animate stuff. E.g. store the time t_start when the circle started growing, then offset the radius by sin(t - t_start).
By using the time (instead of the number of frames) you get animation that is independent of the frame rate. Keep in mind that Qt's timers are not exact. If you set a repeat interval of 30 ms, Qt doesn't guarantee that the slot is going to get called every 30 ms. Sometimes it might be 30 ms, sometimes 40 or even 100, depending on what else is in the event queue, or what's blocking the UI thread. If these hiccups occur, you don't want your animation to slow down.
Oh, and don't use int for the circle radius. If you want smooth animation, always use float or double.
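A minimal sketch of that idea, assuming GLWidget gains two members that are not in the original code: a QElapsedTimer m_clock (started in the constructor with m_clock.start()) and a float m_circRad. The 30 ms QTimer then merely triggers repaints, while the radius is derived from elapsed time:
// Requires <QElapsedTimer> and <cmath>.
void GLWidget::animate()
{
    // Base the radius on wall-clock time, so timer jitter can't slow the animation.
    const float t = m_clock.elapsed() / 1000.0f;   // seconds since m_clock.start()
    m_circRad = 3.0f + 3.0f * std::sin(2.0f * t);  // oscillates smoothly between 0 and 6
    update();                                      // schedule a repaint
}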
QPrivateSignal should not be part of the signal signature in the connect call:
connect(aTimer, SIGNAL(timeout()), SLOT(animate()));
Qt Creator's completion doesn't ignore it yet as it should (there is a bug report about that).