Image Streaming using QGraphicsView - C++

I am trying to display images from a CameraLink Camera into a QGraphicsView.
I am using a ring buffer with get() and set() functions, and two threads that produce and consume the images in this ring buffer.
This part works, because I already had successful streaming using a QWidget. I want to use a QGraphicsView because I want to draw over the incoming images.
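Roughly, the ring buffer looks like this (a simplified sketch; only the get() and set() names come from my real code, the rest is illustrative):
#include <array>
#include <condition_variable>
#include <mutex>

template <typename T, std::size_t N = 8>
class RingBuffer {
public:
    void set(const T &item)                       // called by the producer thread
    {
        std::lock_guard<std::mutex> lock(mutex_);
        buffer_[head_ % N] = item;
        ++head_;
        if (head_ - tail_ > N)                    // buffer full: drop the oldest image
            tail_ = head_ - N;
        cond_.notify_one();
    }
    T get()                                       // called by the consumer thread
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return tail_ < head_; });
        return buffer_[tail_++ % N];
    }
private:
    std::array<T, N> buffer_;
    std::size_t head_ = 0, tail_ = 0;
    std::mutex mutex_;
    std::condition_variable cond_;
};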
I have a display class which is promoted to QGraphicsView. Every time an image is produced, a signal is emitted from the consumer thread to the following function:
void display::drawImage(ImageBW* image)
{
    firstImage = image;
    QRect rect;
    rect.setHeight(1000);
    rect.setWidth(2000);
    QApplication::postEvent(this->viewport(), new QPaintEvent(rect));
}
I have reimplemented the paintEvent of the QGraphicsView class:
void display::paintEvent(QPaintEvent *event)
{
    if (firstImage)
        _drawImage();
    event->setAccepted(true);
}
and the drawing function is:
void display::_drawImage()
{
    int w, h;
    if (img)
    {
        delete img;
        img = NULL;
    }
    w = firstImage->getSizeX();
    h = firstImage->getSizeY();
    unsigned char *udata = (unsigned char *)firstImage->getData();
    img = new QImage(udata, w, h, QImage::Format_Grayscale8);
    pix_img = QPixmap::fromImage(*img);
    scene->clear();
    scene->addPixmap(pix_img);
    scene->setSceneRect(pix_img.rect());
    scene->update();
    update(); // I have tried here also setScene(scene)
}
In the above code, firstImage is an ImageBW* variable. ImageBW is the class where the image data from the camera is stored.
I have also tried QGraphicsItem and the addItem() function, but I get no image.
'scene' is initialized in the producer of this class, and the 'setScene(scene);' call is there as well.
The program gets stuck after a while, which probably means this code produces a stack overflow.
I have tried everything I can think of; any help would be greatly appreciated.
Thanks in advance

Related

Qt showing live image, escape from massive signals

I have a QThread running that decodes images from a camera:
struct ImageQueue
{
    enum {NumItems = 5};
    tbb::concurrent_bounded_queue<DecodedImage> camera_queue_;  // decoded image
    tbb::concurrent_bounded_queue<DecodedImage> display_queue_; // widget display image
    ImageQueue(int camid)
    {
        camera_queue_.set_capacity(NumItems);
        display_queue_.set_capacity(NumItems);
    }
};
std::shared_ptr<ImageQueue> camera_queue;
void Worker::process()
{
    while (1)
    {
        if (quit_)
            break;
        DecodedImage tmp_img;
        camera_queue->camera_queue_.pop(tmp_img);
        camera_queue->display_queue_.push(tmp_img);
        emit imageReady();
    }
    emit finished();
}
And this thread is part of the Camera Class:
void Camera::Start()
{
    work_ = new Worker();
    work_->moveToThread(workerThread_);
    QObject::connect(workerThread_, &QThread::finished, work_, &QObject::deleteLater);
    QObject::connect(this, &Camera::operate, work_, &Worker::process);
    QObject::connect(this, &Camera::stopDecode, work_, &Worker::stop);
    QObject::connect(work_, &Worker::imageReady, this, &Camera::DisplayImage);
    workerThread_->start();
    emit operate();
}
On the widget display side:
class CVImageWidget : public QGraphicsView
{
    Q_OBJECT
public:
    void display(DecodedImage& tmp_img);
    ~CVImageWidget();
private:
    QGraphicsScene *scene_;
};
CVImageWidget widget;
void Camera::DisplayImage()
{
    if (camera_queue != nullptr)
    {
        DecodedImage tmp_img;
        camera_queue->display_queue_.pop(tmp_img);
        widget.display(tmp_img);
    }
}
void CVImageWidget::display(DecodedImage& tmp_img)
{
    if (!tmp_img.isNull())
    {
        scene_->clear();
        scene_->addPixmap(QPixmap::fromImage(tmp_img));
    }
}
My question is:
Is there a way to save me from the massive number of imageReady signals? I use this signal to display the image because the image has to be shown in the main thread, otherwise it will not be displayed. But the massive amount of signals makes the GUI response slower.
Is there a way to fix that? Thanks.
You do not want to be altering a graphics scene each time an image arrives. You can just as well use a QLabel and its setPixmap method, or a simple custom widget.
There is no need for a display queue: you're only interested in the most recent frame. If the UI thread lags behind the camera thread, you want to drop obsolete frames.
You have to rethink if the camera_queue needs to be a concurrent queue, since you access it from a single thread only, and whether you need that queue at all.
The typical image producer-consumer would work as follows: a Camera class interfaces with a camera and emits a hasFrame(const DecodedImage &) signal each time it got a new frame. No need for a queue: any listening object thread's event queues are concurrent queues already.
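For instance, the camera side can be as small as this sketch (grabNextFrame() is a stand-in for whatever your camera SDK provides, and DecodedImage must be registered with qRegisterMetaType so it can travel through queued connections):
#include <QObject>
#include <atomic>

class Camera : public QObject {
    Q_OBJECT
public:
    Q_SIGNAL void hasFrame(const DecodedImage &frame);
    Q_SLOT void capture() {
        while (!stop_.load()) {
            DecodedImage frame = grabNextFrame();  // blocks until the camera has data
            emit hasFrame(frame);                  // delivered to the listeners' threads
        }
    }
    void stop() { stop_.store(true); }             // callable from any thread
private:
    DecodedImage grabNextFrame();                  // implemented against the camera SDK
    std::atomic_bool stop_{false};
};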
A display widget simply accepts the images to display:
class Display : public QWidget {
    Q_OBJECT
    QImage m_image;
protected:
    void paintEvent(QPaintEvent *event) override {
        QPainter painter{this};
        painter.drawImage(0, 0, m_image);
    }
public:
    Display(QWidget *parent = nullptr) : QWidget{parent} {}
    QSize sizeHint() const override { return m_image.size(); }
    Q_SLOT void setImage(const QImage & image) {
        auto oldSize = m_image.size();
        m_image = image;
        if (oldSize != m_image.size())
            updateGeometry();
        update();
    }
};
You then connect the image source to the display widget and you're done. The paintEvent or other GUI thread operations can take as long as you wish; even if the camera produces images much faster than the display can consume them, you'll still show the most recent image each time the widget gets a chance to repaint itself. See this answer for a demonstration of that fact. See this answer for a similar demonstration that works on a fixed-pipeline OpenGL backend and uses the GPU to scale the image.
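The wiring itself is then only a few lines. This assumes the Camera sketch above and that DecodedImage converts to QImage; the names are illustrative, not taken from the question:
// Inside main(), roughly: the camera lives in a worker thread, the Display
// widget in the GUI thread, and the queued connection hands each frame over.
QThread cameraThread;
auto *camera = new Camera;
camera->moveToThread(&cameraThread);

Display display;
QObject::connect(camera, &Camera::hasFrame,
                 &display, &Display::setImage);    // queued: runs in the GUI thread

cameraThread.start();
QMetaObject::invokeMethod(camera, "capture");      // start grabbing in the worker thread
display.show();

// On shutdown: camera->stop(); cameraThread.quit(); cameraThread.wait();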
A mutex approach should be faster than emitting Qt signals. I once tried to skip Qt signals by using an STL condition_variable and it worked just fine for me.
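For example, a sketch of that idea, under the assumption that the GUI side polls a shared "latest frame" slot with a QTimer rather than blocking on the condition variable (it reuses the Display widget from the answer above; everything else is made up here):
#include <QImage>
#include <QTimer>
#include <mutex>

// Shared mailbox holding only the newest frame; older frames are overwritten,
// so the GUI never falls behind the camera and no per-frame signal is emitted.
struct LatestFrame {
    std::mutex mutex;
    QImage image;
    bool fresh = false;
};

// Producer side (camera/decoder thread):
void publish(LatestFrame &slot, const QImage &frame)
{
    std::lock_guard<std::mutex> lock(slot.mutex);
    slot.image = frame;     // cheap: QImage is implicitly shared
    slot.fresh = true;
}

// Consumer side (GUI thread): repaint at display rate instead of camera rate.
void startPolling(LatestFrame &slot, Display &display)
{
    auto *timer = new QTimer(&display);
    QObject::connect(timer, &QTimer::timeout, [&slot, &display] {
        QImage frame;
        {
            std::lock_guard<std::mutex> lock(slot.mutex);
            if (!slot.fresh)
                return;
            frame = slot.image;
            slot.fresh = false;
        }
        display.setImage(frame);
    });
    timer->start(33);   // ~30 Hz refresh, independent of the camera rate
}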

QWidget::grab function for videos: cannot capture

I want to save the current window to an image (like Alt+PrintScreen) in Qt. For example, for the Qt Media Player Example (you can get all the code in the Qt Creator examples by searching for "player"), I added this code to player.h:
//private slots:
...
void exportVideoToImages();
and in player.cpp:
void Player::exportVideoToImages()
{
    qDebug() << "ok";
    QPixmap const& px = grab();
    px.save("File.png");
}
Add this line to the constructor to trigger the slot:
new QShortcut(QKeySequence(Qt::CTRL + Qt::Key_G), this, SLOT(exportVideoToImages()));
So, when I press Ctrl+G, I get an image "File.png". It worked, but the problem is that the playing video cannot be captured.
These are the two images, one from Alt+PrintScreen and one from the program:
Why is that? How can I grab the video in Qt? Can you show me?
Thank you very much!
I found the solution:
QScreen *screen = QGuiApplication::primaryScreen();
if (screen) {
    QPixmap px = screen->grabWindow(this->winId());
    px.save("screen.png");
}
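Put into the slot from my question, that looks roughly like this:
#include <QGuiApplication>
#include <QScreen>

void Player::exportVideoToImages()
{
    // Grab the window's pixels from the screen instead of asking the widget
    // to repaint itself into a pixmap, so the video frame is included.
    if (QScreen *screen = QGuiApplication::primaryScreen()) {
        QPixmap px = screen->grabWindow(winId());
        px.save("File.png");
    }
}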

Saving QwtPlot to image

I am using QwtPlotRenderer to save the plot to a file. I have also used QPixmap::grabWidget() to save the plot to a QPixmap.
But in both cases the resulting image looks like this:
As you can see, the curve is not visible in the resulting image, whereas when I call myPlot->show() I get the full output. How can I solve this?
Here is my code:
#include "QApplication"
#include<qwt_plot_layout.h>
#include<qwt_plot_curve.h>
#include<qwt_scale_draw.h>
#include<qwt_scale_widget.h>
#include<qwt_legend.h>
#include<qwt_plot_renderer.h>
class TimeScaleDraw:public QwtScaleDraw
{
public:
TimeScaleDraw(const QTime & base)
:baseTime(base)
{
}
virtual QwtText label(double v)const
{
QTime upTime = baseTime.addSecs((int)v);
return upTime.toString();
}
private:
QTime baseTime;
};
int main(int argc,char * argv[])
{
QApplication a(argc,argv);
QwtPlot * myPlot = new QwtPlot(NULL);
myPlot->setAxisScaleDraw(QwtPlot::xBottom,new TimeScaleDraw(QTime::currentTime()));
myPlot->setAxisTitle(QwtPlot::xBottom,"Time");
myPlot->setAxisLabelRotation(QwtPlot::xBottom,-50.0);
myPlot->setAxisLabelAlignment(QwtPlot::xBottom,Qt::AlignLeft|Qt::AlignBottom);
myPlot->setAxisTitle(QwtPlot::yLeft,"Speed");
QwtPlotCurve * cur = new QwtPlotCurve("Speed");
QwtPointSeriesData * data = new QwtPointSeriesData;
QVector<QPointF>* samples=new QVector<QPointF>;
for ( int i=0;i<60;i++)
{
samples->push_back(QPointF(i,i*i));
}
data->setSamples(*samples);
cur->setData(data);
cur->attach(myPlot);
//myPlot->show();
QPixmap pix = QPixmap::grabWidget(myPlot);
std::cout<<pix.save("der.png","PNG")<<std::endl;
QPixmap pixmap (400,400);
QPainter * painter=new QPainter(&pixmap);
QwtPlotRenderer rend;
rend.render(myPlot,painter,myPlot->geometry());
pixmap.save("Dene.jpg");
return a.exec();
}
The replot() function is what actually draws the graph.
Since you want to save a picture of the graph, calling replot() first makes sense.
I found the solution by trial and error. It solved my problem, but I am not sure it is the right solution. If anyone can explain the reason, I would be happy.
After attaching the curve to the plot and before saving it to an image, I called myPlot->replot() and the result became what I expected.
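In terms of the code from my question, the change is just one call after attaching the curve (sketch; the pixmap fill and the render rectangle are small extra adjustments, not part of the original fix):
    // ... same setup as above ...
    cur->attach(myPlot);
    myPlot->replot();                              // force the plot to lay out and draw the curve

    QPixmap pix = QPixmap::grabWidget(myPlot);     // now contains the curve
    pix.save("der.png", "PNG");

    QPixmap pixmap(400, 400);
    pixmap.fill(Qt::white);                        // avoid an uninitialized background
    QPainter painter(&pixmap);
    QwtPlotRenderer rend;
    rend.render(myPlot, &painter, pixmap.rect());  // render into the pixmap's own rect
    pixmap.save("Dene.jpg");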

How to pass detected faces from a function to a separate push button slot in Qt

I am using Qt for creating the front-end. I have a method called 'facedetect' in which I detect all the faces. Now I need to pass those detected faces to another function that compares the detected faces with all the faces in the database. But the actual problem is that I compare faces when I click a push button 'compare' (which is in a different slot). I need to access the detected faces in the 'compare' push button slot.
Here is my code:
void third::facedetect(IplImage* image) //face detection
{
    int j = 0, l = 0, strsize, count = 0;
    char numstr[50];
    CvPoint ul, lr, w, h;
    string path;
    IplImage* image1; IplImage* tmpsize; IplImage* reimg;
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*) cvLoad(cascade_name);
    CvMemStorage* storage = cvCreateMemStorage(0);
    if (!cascade)
    {
        qDebug() << "Could not load classifier cascade";
    }
    if (cascade)
    {
        CvSeq* faces = cvHaarDetectObjects(image, cascade, storage, 1.1, 1, CV_HAAR_DO_CANNY_PRUNING, cvSize(10,10));
        for (int i = 0; i < (faces ? faces->total : 0); i++)
        {
            string s1 = "im", re, rename, ex = ".jpeg";
            sprintf(numstr, "%d", k);
            re = s1 + numstr;
            rename = re + ex;
            char *extract1 = new char[rename.size() + 1];
            extract1[rename.size()] = 0;
            memcpy(extract1, rename.c_str(), rename.size());
            strsize = rename.size();
            CvRect *r = (CvRect*) cvGetSeqElem(faces, i);
            ul.x = r->x;
            ul.y = r->y;
            w.x = r->width;
            h.y = r->height;
            lr.x = (r->x + r->width);
            lr.y = (r->y + r->height);
            cvSetImageROI(image, cvRect(ul.x, ul.y, w.x, h.y));
            image1 = cvCreateImage(cvGetSize(image), image->depth, image->nChannels);
            cvCopy(image, image1, NULL);
            reimg = resizeImage(image1, 40, 40, true);
            saveImage(reimg, extract1);
            img_1 = cvarrToMat(reimg); // this img_1 contains the detected faces
            cvResetImageROI(image);
            cvRectangle(image, ul, lr, CV_RGB(1,255,0), 3, 8, 0);
            j++, count++, k++;
        }
    }
    qDebug() << "Number of images:" << count << endl;
    cvNamedWindow("output");       // creating a window
    cvShowImage("output", image);  // showing resized image
    cvWaitKey(0);                  // wait for user response
    cvReleaseImage(&image);        // releasing the memory
    cvDestroyWindow("output");     // destroying the window
}
And this is the push button 'compare' slot:
void third::on_pushButton_5_clicked()
{
}
Only if I can access those detected faces from 'void third::on_pushButton_5_clicked()' can I use them in the 'compare' push button slot. Please help me fix this.
It sounds like a declaration problem... the image holding the detected face is not declared as a "public" member (or maybe a "global" variable). If so, it's simple.
In your header file, third.h,
class third : public QWidget   // base class is an assumption here; use whatever your class actually derives from
{
    Q_OBJECT
public:
    explicit third(QWidget *parent = 0);
    ~third();
    // declare your image holding the detected face here, which from your .cpp file is img_1
    cv::Mat img_1;             // cvarrToMat() returns a cv::Mat, so that is its type
    ...
};
Following which, for your void third::on_pushButton_5_clicked(),
void third::on_pushButton_5_clicked()
{
    cv::namedWindow("img_1");    // creating a window
    cv::imshow("img_1", img_1);  // img_1 is a cv::Mat member, so use the C++ API to show it
    cv::waitKey(0);              // wait for user response
    cv::destroyWindow("img_1");  // destroying the window (no manual release needed for cv::Mat)
}
The code may have compilation errors, as I am typing this directly into Stack Overflow.
Do let me know if that solves the problem; if so, do consider clicking the tick button under the arrows and upvoting it. If it doesn't work, comment, and I will see how else I can help. Cheers!

What is the fastest way to get QWidget pixel color under mouse?

I need to get the color of the pixel under the mouse, inside the mouseMoveEvent of a QWidget (Breadboard). Currently I have this code:
void Breadboard::mouseMoveEvent(QMouseEvent *e)
{
    QPixmap pixmap = QPixmap::grabWindow(winId());
    QRgb color = pixmap.toImage().pixel(e->x(), e->y());
    if (QColor(color) == terminalColor)
        QMessageBox::information(this, "Ter", "minal");
}
Take a look at the (scaled down) screenshot below:
When the user moves the mouse over the breadboard, the hole should get highlighted with a different color (like in the red circle). And when the mouse exits, the previous color (grey) should be restored. So I need to do the following steps:
Get the color under the mouse
According to the color, flood-fill the hole. (Different holes are distinguished using color.)
On mouse out, restore the color. There will be wires going over holes, so I can't update just the small rectangle (hole).
What is the fastest way of doing this? My attempt to extract the color is not working, i.e. the message box in my code above never displays. Moreover, I doubt my existing code is fast enough for my purpose. Remember how fast you will be moving the mouse on the breadboard.
Note: I was able to do this using the wxWidgets framework, but due to some issues that project got stalled, and I am rewriting it using Qt now.
You are invited to look at the code: https://github.com/vinayak-garg/dic-sim
The "idiomatic" way of doing this in Qt is completely different from what you're describing. You'd use the Graphics View Framework for this type of thing.
Graphics View provides a surface for managing and interacting with a large number of custom-made 2D graphical items, and a view widget for visualizing the items, with support for zooming and rotation.
You'd define your own QGraphicsItem type for the "cells" in the breadboard that would react to hover enter/leave events by changing their color. The connections between the cells (wires, resistors, whatever) would also have their own graphics item types with the features you need for those.
Here's a quick and dirty example for you. It produces a 50x50 grid of green cells that become red when the mouse is over them.
#include <QtGui>

class MyRect : public QGraphicsRectItem
{
public:
    MyRect(qreal x, qreal y, qreal w, qreal h)
        : QGraphicsRectItem(x, y, w, h)
    {
        setAcceptHoverEvents(true);
        setBrush(Qt::green);
    }
protected:
    void hoverEnterEvent(QGraphicsSceneHoverEvent *)
    {
        setBrush(Qt::red);
        update();
    }
    void hoverLeaveEvent(QGraphicsSceneHoverEvent *)
    {
        setBrush(Qt::green);
        update();
    }
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QGraphicsScene scene;
    for (int i = 0; i < 50; i++)
        for (int j = 0; j < 50; j++)
            scene.addItem(new MyRect(10 * i, 10 * j, 8, 8));
    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}
You could modify the hover event handlers to talk to your "main window" or "controller" indicating what's currently under the mouse so you can update your caption, legend box or tool palette.
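For example, since QGraphicsRectItem is not a QObject and cannot emit signals itself, you could hand each cell a callback to your controller; the row/column identifiers and the callback type below are made up for the sketch:
#include <functional>

class BreadboardCell : public QGraphicsRectItem
{
public:
    using HoverCallback = std::function<void(int row, int col, bool entered)>;

    BreadboardCell(int row, int col, qreal x, qreal y, qreal w, qreal h, HoverCallback cb)
        : QGraphicsRectItem(x, y, w, h), row_(row), col_(col), callback_(cb)
    {
        setAcceptHoverEvents(true);
        setBrush(Qt::green);
    }

protected:
    void hoverEnterEvent(QGraphicsSceneHoverEvent *)
    {
        setBrush(Qt::red);
        if (callback_) callback_(row_, col_, true);   // tell the controller what is under the mouse
        update();
    }
    void hoverLeaveEvent(QGraphicsSceneHoverEvent *)
    {
        setBrush(Qt::green);
        if (callback_) callback_(row_, col_, false);
        update();
    }

private:
    int row_, col_;
    HoverCallback callback_;
};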
For best speed, render only the portion of the widget you're interested in into a QPaintDevice (like a QPixmap). Try something like this:
void Breadboard::mouseMoveEvent(QMouseEvent *e)
{
    // Just 1 pixel.
    QPixmap pixmap(1, 1);
    // Target coordinates inside the pixmap where drawing should start.
    QPoint targetPos(0, 0);
    // Source area inside the widget that should be rendered: the pixel under the mouse.
    QRegion sourceArea(e->x(), e->y(), 1, 1);
    // Render it (the default flags are usually fine; adjust them if you need
    // the background or child widgets excluded).
    this->render(&pixmap, targetPos, sourceArea);
    // Extract the color from the 1 pixel pixmap.
    QRgb color = pixmap.toImage().pixel(0, 0);
}
Mat's answer is better if you're willing to refactor your application to use the graphics view API.