QWidget::grab function for videos: cannot capture - C++

I want to save the current window to an image (as Alt+PrintScreen does) in Qt. For example, with the Qt Media Player example (you can find all the code in Qt Creator's examples by searching for "player"), I added this slot declaration to player.h:
//private slots:
...
void exportVideoToImages();
and in player.cpp:
void Player::exportVideoToImages()
{
    qDebug() << "ok";
    QPixmap px = grab();
    px.save("File.png");
}
Add this line to the constructor to trigger the slot:
new QShortcut(QKeySequence(Qt::CTRL + Qt::Key_G), this, SLOT(exportVideoToImages()));
So when I press Ctrl+G, I get an image "File.png". It works, but the problem is that the playing video is not captured.
Here are the two images, one from Alt+PrintScreen and one from the program:
Why does this happen? How can I grab the video in Qt? Can you show me?
Thank you very much!

I found the solution: grab the window through QScreen instead of QWidget::grab(). QWidget::grab() only renders the widget itself, so it misses video that is composited outside the widget's paint pipeline, while QScreen::grabWindow() reads the actual screen contents:

QScreen *screen = QGuiApplication::primaryScreen();
if (screen) {
    QPixmap px = screen->grabWindow(this->winId());
    px.save("screen.png");
}
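Folding that fix back into the slot from the question, the working version might look like this sketch (assuming the same Player class and shortcut wiring):

```cpp
// Hypothetical revised slot, based on the Player class from the question.
// QScreen::grabWindow() reads the actual screen contents, so it also
// captures video that QWidget::grab() misses.
void Player::exportVideoToImages()
{
    if (QScreen *screen = QGuiApplication::primaryScreen()) {
        const QPixmap px = screen->grabWindow(winId());
        px.save("File.png");
    }
}
```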

Related

Image Streaming using QGraphicsView

I am trying to display images from a CameraLink camera in a QGraphicsView.
I am using a ring buffer with get() and set() functions, and two threads that consume or produce the images in this ring buffer.
This part works, because I already had successful streaming using a QWidget. I want to use a QGraphicsView because I want to draw over the incoming images.
I have a display class which is promoted to QGraphicsView. Every time an image is produced, a signal is emitted from the consumer thread to the following function:
void display::drawImage(ImageBW* image)
{
    firstImage = image;
    QRect rect;
    rect.setHeight(1000);
    rect.setWidth(2000);
    QApplication::postEvent(this->viewport(), new QPaintEvent(rect));
}
I have reimplemented the paintEvent of the QGraphicsView class:
void display::paintEvent(QPaintEvent *event)
{
    if (firstImage)
        _drawImage();
    event->setAccepted(true);
}
and the drawing function is:
void display::_drawImage()
{
    int w, h;
    if (img)
    {
        delete img;
        img = NULL;
    }
    w = firstImage->getSizeX();
    h = firstImage->getSizeY();
    unsigned char *udata = (unsigned char *)firstImage->getData();
    img = new QImage(udata, w, h, QImage::Format_Grayscale8);
    pix_img = QPixmap::fromImage(*img);
    scene->clear();
    scene->addPixmap(pix_img);
    scene->setSceneRect(pix_img.rect());
    scene->update();
    update(); // I have tried setScene(scene) here as well
}
In the above code, firstImage is an ImageBW* variable; ImageBW is where the image data from the camera is stored.
I have also tried with QGraphicsItem and the addItem() function, but I get no image.
'scene' is initialized by the producer of this class, and the 'setScene(scene);' call is there as well.
The program gets stuck after a while, which probably means this code overflows something.
I have tried everything I can think of; any help would be greatly appreciated.
Thanks in advance.
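For reference, the usual way to hand an image from a worker thread to a view is a queued signal/slot connection rather than posting QPaintEvent objects by hand; a minimal sketch (the Consumer class and showFrame slot are assumptions, not code from the question) might look like:

```cpp
// Hypothetical sketch: let Qt deliver frames across threads with a
// queued connection instead of QApplication::postEvent(). QImage is
// implicitly shared, so passing it by const reference is cheap.
class Consumer : public QObject
{
    Q_OBJECT
signals:
    void frameReady(const QImage &frame); // emitted from the worker thread
};

// In setup code on the GUI thread (connection is queued automatically
// because the sender lives in another thread):
//   connect(consumer, &Consumer::frameReady,
//           view, &display::showFrame);
// display::showFrame would store the QImage and call viewport()->update(),
// which schedules a normal paint event; Qt coalesces pending updates.
```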

Qt showing live image, escape from massive signals

I have a QThread running, trying to decode images from a camera:
struct ImageQueue
{
    enum {NumItems = 5};
    tbb::concurrent_bounded_queue<DecodedImage> camera_queue_;  // decoded images
    tbb::concurrent_bounded_queue<DecodedImage> display_queue_; // widget display images
    ImageQueue(int camid)
    {
        camera_queue_.set_capacity(NumItems);
        display_queue_.set_capacity(NumItems);
    }
};
std::shared_ptr<ImageQueue> camera_queue;

void Worker::process()
{
    while (1)
    {
        if (quit_)
            break;
        DecodedImage tmp_img;
        camera_queue->camera_queue_.pop(tmp_img);
        camera_queue->display_queue_.push(tmp_img);
        emit imageReady();
    }
    emit finished();
}
And this thread is part of the Camera Class:
void Camera::Start()
{
    work_ = new Worker();
    work_->moveToThread(workerThread_);
    QObject::connect(workerThread_, &QThread::finished, work_, &QObject::deleteLater);
    QObject::connect(this, &Camera::operate, work_, &Worker::process);
    QObject::connect(this, &Camera::stopDecode, work_, &Worker::stop);
    QObject::connect(work_, &Worker::imageReady, this, &Camera::DisplayImage);
    workerThread_->start();
    emit operate();
}
On the widget display side:
class CVImageWidget : public QGraphicsView
{
    Q_OBJECT
public:
    void display(DecodedImage& tmp_img);
    ~CVImageWidget();
private:
    QGraphicsScene *scene_;
};
CVImageWidget widget;
void Camera::DisplayImage()
{
    if (camera_queue != nullptr)
    {
        DecodedImage tmp_img;
        camera_queue->display_queue_.pop(tmp_img);
        widget->display(tmp_img);
    }
}
void CVImageWidget::display(DecodedImage& tmp_img)
{
    if (!tmp_img.isNull())
    {
        scene_->clear();
        scene_->addPixmap(QPixmap::fromImage(tmp_img));
    }
}
My question is:
Is there a way to save myself from the flood of imageReady signals? I use this signal to display the image because the image has to be shown in the main thread, otherwise it will not be displayed. But the sheer number of signals makes the GUI less responsive.
Is there a way to fix that? Thanks.
You do not want to be altering a graphics scene each time an image arrives. You can just as well use a QLabel and its setPixmap method, or a simple custom widget.
There is no need for a display queue: you're only interested in the most recent frame. If the UI thread lags behind the camera thread, you want to drop obsolete frames.
You should rethink whether camera_queue needs to be a concurrent queue, since you access it from a single thread only, and whether you need that queue at all.
The typical image producer-consumer would work as follows: a Camera class interfaces with a camera and emits a hasFrame(const DecodedImage &) signal each time it got a new frame. No need for a queue: any listening object thread's event queues are concurrent queues already.
A display widget simply accepts the images to display:
class Display : public QWidget {
    Q_OBJECT
    DecodedImage m_image;
protected:
    void paintEvent(QPaintEvent *event) override {
        QPainter painter{this};
        painter.drawImage(0, 0, m_image);
    }
public:
    Display(QWidget *parent = nullptr) : QWidget{parent} {}
    QSize sizeHint() const override { return m_image.size(); }
    Q_SLOT void setImage(const QImage &image) {
        auto oldSize = m_image.size();
        m_image = image;
        if (oldSize != m_image.size())
            updateGeometry();
        update();
    }
};
You then connect the image source to the display widget and you're done. The paintEvent or other GUI thread operations can take as long as you wish, even if the camera produces images much faster than the display can consume them, you'll still show the most recent image each time the widget gets a chance to repaint itself. See this answer for a demonstration of that fact. See this answer for a similar demonstration that works on a fixed-pipeline OpenGL backend and uses the GPU to scale the image.
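The wiring described above might look like the following sketch (it assumes the hasFrame signal named earlier in the answer, and that DecodedImage is implicitly convertible to QImage and registered as a metatype for queued delivery):

```cpp
// Hypothetical wiring of the producer-consumer design. The connection
// is queued automatically when Camera's signal is emitted from a
// worker thread while Display lives in the GUI thread.
Camera *camera = new Camera;
Display *display = new Display;
QObject::connect(camera, &Camera::hasFrame,
                 display, &Display::setImage);
display->show();
```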
A mutex approach should be faster than emitting Qt signals. I once tried to skip Qt signals by using an STL condition_variable and it worked just fine for me.

Identify which QPixmapItem has been selected

I am adding QGraphicsPixmapItems to my scene, and I can see that when I click on an item it gets the white dashed selection rectangle, but I'm struggling to get any data from this selection. Here is how I'm adding it to the scene:
void MainWindow::drawImage(curTarget *newTarget){
    QGraphicsPixmapItem *tgt = new QGraphicsPixmapItem; // new pixmap
    tgt = scene->addPixmap(newTarget->myIcon);          // assign pixmap image
    tgt->setFlag(QGraphicsItem::ItemIsSelectable, true);
    scene->addItem(tgt);
}
Each PixmapItem that I add to the scene has struct data associated with it, and I need to be able to retrieve that data when I select on the QGraphicsPixmapItem inside of the QGraphicsScene. If the selection rectangle is showing up when the pixmapitem is selected, isn't there some easy way to return information to me based on that fact? A Pointer to what is selected perhaps?
I do have a mousePressEvent method implemented, but I'm struggling to get anything relevant out of it.
void MainWindow::mousePressEvent(QMouseEvent *event){
    qDebug() << "Clicked" << endl;
}
When I run the app, I see "Clicked" everywhere in my scene except when I click on my pixmap items.
I've tried every version of the mouse press events available, and the ones that actually do something only do so as long as I don't click on my pixmap items.
Thanks to the help I received in my comments, I will post what ultimately worked for me.
void MainWindow::drawImage(curTarget *newTarget)
{
    // addPixmap() creates the item and adds it to the scene already,
    // so no separate new or addItem() call is needed
    QGraphicsPixmapItem *tgt = scene->addPixmap(newTarget->myIcon);
    tgt->setFlag(QGraphicsItem::ItemIsSelectable, true);
}
With the new function added...
void MainWindow::whatIsSelected(){
    qDebug() << scene->selectedItems() << endl;
}
And then I made the connection of the scene to the window elsewhere...
QObject::connect(scene, SIGNAL(selectionChanged()), this, SLOT(whatIsSelected()));
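To get back to the struct data associated with each item, one option (a sketch, assuming the curTarget struct and scene member from the question; the key 0 is an arbitrary choice) is to store a pointer on the item with QGraphicsItem::setData and read it back from selectedItems():

```cpp
// Hypothetical sketch: attach the target to its item via setData(),
// then recover it from the current selection.
void MainWindow::drawImage(curTarget *newTarget)
{
    QGraphicsPixmapItem *tgt = scene->addPixmap(newTarget->myIcon);
    tgt->setFlag(QGraphicsItem::ItemIsSelectable, true);
    tgt->setData(0, QVariant::fromValue(static_cast<void*>(newTarget)));
}

void MainWindow::whatIsSelected()
{
    for (QGraphicsItem *item : scene->selectedItems()) {
        auto *target = static_cast<curTarget*>(item->data(0).value<void*>());
        // use target-> ... here to reach the struct data
    }
}
```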

Show two images in one QGraphicsView

It's my first post here, so I would like to say hello to everyone on Stack Overflow :)
Now to my problem: I have one QGraphicsView widget and I want to show two images in it with some opacity, but my code doesn't work and I don't know why:
QGraphicsScene *scenaWynikowa = new QGraphicsScene(ui->graphicsViewWynik);
ui->graphicsViewWynik->setScene(scenaWynikowa);
ui->graphicsViewWynik->fitInView(scenaWynikowa->itemsBoundingRect(), Qt::KeepAspectRatio);
// display image no. 1
QImage obraz1(s1);
obraz1.scaled(QSize(541, 541), Qt::IgnoreAspectRatio, Qt::FastTransformation);
update();
resize(541, 541);
QPixmap mapaPikseli1(n1);
QGraphicsPixmapItem *pixmapItem1 = scenaWynikowa->addPixmap(mapaPikseli1);
QGraphicsOpacityEffect poziomPrzezroczystosci1;
poziomPrzezroczystosci1.setOpacity(0.5);
pixmapItem1->setGraphicsEffect(&poziomPrzezroczystosci1);
// display image no. 2
QImage obraz2(s2);
obraz2.scaled(QSize(541, 541), Qt::IgnoreAspectRatio, Qt::FastTransformation);
update();
resize(541, 541);
QPixmap mapaPikseli2(n2);
QGraphicsPixmapItem *pixmapItem2 = scenaWynikowa->addPixmap(mapaPikseli2);
QGraphicsOpacityEffect poziomPrzezroczystosci2;
poziomPrzezroczystosci2.setOpacity(0.5);
pixmapItem2->setGraphicsEffect(&poziomPrzezroczystosci2);
pixmapItem2->moveBy(0, 0);
ui->graphicsViewWynik->show();
Sorry for the non-English variable names, but it's more convenient for me. If you want, I can explain what each variable means :)
Maybe someone can find the mistake in this code and explain to me where the problem is and how to fix it?
Edit: This is my new code. When I move the position of pix2 in the QGraphicsView I can see two images (pix2 under pix1) and it works fine, but the images should have an opacity level to create a blending effect. How should I do that?
The reason it doesn't work is that you are trying to use two different QGraphicsScenes for your QGraphicsView. A QGraphicsView can only have one scene.
What you should do, is create only one QGraphicsScene and add your pixmaps there.
QGraphicsScene *scene = new QGraphicsScene(this);
ui->graphicsScene->setScene(scene);
QPixmap pix1(n1);
QGraphicsPixmapItem *pixmapItem1 = scene->addPixmap(pix1);
QPixmap pix2(n2);
QGraphicsPixmapItem *pixmapItem2 = scene->addPixmap(pix2);
pixmapItem2->moveBy(0, pix1.height());
Also your QGraphicsOpacityEffect object is only valid inside the scope you created it in. One way to solve this problem is to allocate it with new.
QGraphicsOpacityEffect *opacity1 = new QGraphicsOpacityEffect;
QGraphicsOpacityEffect *opacity2 = new QGraphicsOpacityEffect;
opacity1->setOpacity(0.5);
opacity2->setOpacity(0.2);
pixmapItem1->setGraphicsEffect(opacity1);
pixmapItem2->setGraphicsEffect(opacity2);
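As an aside, if all that is needed is a uniform opacity per item, QGraphicsItem::setOpacity offers a simpler route than a graphics effect; a sketch, reusing the pixmapItem1/pixmapItem2 pointers from above:

```cpp
// Items have a built-in opacity property, so no QGraphicsOpacityEffect
// allocation is needed for a flat blend of the two pixmaps.
pixmapItem1->setOpacity(0.5);
pixmapItem2->setOpacity(0.5);
```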
OK, thanks @thuga for your help. The problem has been solved. What was wrong? I was setting the opacity through two separate stack-allocated effect variables, which was a big mistake. I now declare the opacity effect once and assign it to the items, just like the QGraphicsScene.
Latest version of code (it works fine):
QGraphicsScene *scenaWynikowa = new QGraphicsScene(ui->graphicsViewWynik);
ui->graphicsViewWynik->setScene(scenaWynikowa);
ui->graphicsViewWynik->fitInView(scenaWynikowa->itemsBoundingRect(), Qt::KeepAspectRatio);
QGraphicsOpacityEffect *poziomPrzezroczystosci = new QGraphicsOpacityEffect();
poziomPrzezroczystosci->setOpacity(0.5);
QImage obraz1(s1);
obraz1.scaled(QSize(ui->graphicsViewWynik->width(), ui->graphicsViewWynik->height()), Qt::IgnoreAspectRatio, Qt::FastTransformation);
update();
resize(ui->graphicsViewWynik->width(), ui->graphicsViewWynik->height());
QPixmap mapaPikseli1(n1);
QGraphicsPixmapItem *pixmapItem1 = scenaWynikowa->addPixmap(mapaPikseli1);
QImage obraz2(s2);
obraz2.scaled(QSize(ui->graphicsViewWynik->width(), ui->graphicsViewWynik->height()), Qt::IgnoreAspectRatio, Qt::FastTransformation);
update();
resize(ui->graphicsViewWynik->width(), ui->graphicsViewWynik->height());
QPixmap mapaPikseli2(n2);
QGraphicsPixmapItem *pixmapItem2 = scenaWynikowa->addPixmap(mapaPikseli2);
pixmapItem1->setGraphicsEffect(poziomPrzezroczystosci);
pixmapItem2->setGraphicsEffect(poziomPrzezroczystosci);
pixmapItem2->moveBy(0, 0);
ui->graphicsViewWynik->show();

QWidget paint event not being called even with non-zero width and height?

I am trying to make a scroll area hold a widget which will serve as a drawing area for a drag-and-drop editor I am trying to build. However, I can't seem to get it to draw.
Here is a picture: http://i.imgur.com/rTBjg.png
On the right, the black space is what is supposed to be my scroll area.
Here is the constructor for my window class (I am using Qt Creator):
ModelWindow::ModelWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::ModelWindow)
{
    ui->setupUi(this);
    editor = new ModelEditorWidget(this);
    ui->scrollArea->setWidget(editor);
}
The model editor widget looks like so:
//header
class ModelEditorWidget : public QWidget
{
    Q_OBJECT
public:
    explicit ModelEditorWidget(QWidget *parent = 0);
signals:
public slots:
protected:
    virtual void paintEvent(QPaintEvent *e);
};

//.cpp file:
ModelEditorWidget::ModelEditorWidget(QWidget *parent) :
    QWidget(parent)
{
    this->setAcceptDrops(true);
    this->resize(1000, 1000);
    cout << this->rect().x() << " " << this->rect().width() << endl;
    this->update();
}

void ModelEditorWidget::paintEvent(QPaintEvent *e)
{
    cout << "painting";
    QWidget::paintEvent(e);
    QPainter painter(this);
    painter.setBrush(QBrush(Qt::green));
    painter.setPen(QPen(Qt::red));
    painter.drawRect(400, 400, 50, 50);
    painter.fillRect(e->rect(), Qt::SolidPattern);
}
I would expect this to set the ModelEditorWidget's size to 1000x1000 and then draw a green and red rectangle on the widget. However, the lack of a "painting" message on the command line from the cout at the beginning of paintEvent shows that it isn't being executed. I first suspected the widget had zero width and height, but the cout in the constructor tells me that the widget is positioned at x = 0 with width = 1000, and since that matches my resize statement, I assume the height is also 1000 as specified.
EDIT: By calling cout.flush() I got the "painting" output. However, that only deepens the mystery, since the paint event doesn't appear to actually paint anything. I am now calling show() on both the scroll area and the widget as well.
Does anyone see what I could be doing wrong here? Perhaps I am not adding the ModelEditorWidget to the scrollarea properly?
Btw, I am very new to Qt and this is my first major GUI project using it. Most of my other GUI stuff was done using .NET stuff in C#, but since I want this to be cross platform I decided to stay away from C#.NET and mono and use Qt.
Documentation of QScrollArea::setWidget() says:

"If the scroll area is visible when the widget is added, you must show() it explicitly.
Note that you must add the layout of widget before you call this function; if you add it later, the widget will not be visible - regardless of when you show() the scroll area. In this case, you can also not show() the widget later."
Have you tried that?
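Applied to the constructor from the question, that advice would amount to something like this sketch (assuming the same ModelWindow members):

```cpp
// Hypothetical fix based on the QScrollArea::setWidget() docs: if the
// scroll area is already visible when the widget is set, the widget
// must be shown explicitly.
ModelWindow::ModelWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::ModelWindow)
{
    ui->setupUi(this);
    editor = new ModelEditorWidget(this);
    ui->scrollArea->setWidget(editor);
    editor->show(); // explicit show(), per the documentation note
}
```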