I'm making a custom widget in Qt and drawing an image as its background. The background image is exactly the same for all instances of the widget. I'd like to know if I'm doing it in an appropriate way.
Here's how I'm doing it now:
// QMyWidget.h
class QMyWidget : public QWidget
{
/* some stuff.. and then: */
protected:
static QImage *imgBackground;
};
// QMyWidget.cpp
QImage *QMyWidget::imgBackground = NULL;
QMyWidget::QMyWidget(QWidget *parent) : QWidget(parent)
{
if(imgBackground == NULL)
{
imgBackground = new QImage();
imgBackground->load(":/Images/background.png");
}
}
void QMyWidget::paintEvent(QPaintEvent *e)
{
QPainter painter(this);
painter.drawImage(QPoint(), *imgBackground);
}
The code works just fine, but is this considered a good way to do it?
That's one way of doing it, and it's a reasonably good way if you're only dealing with a single image. But if you ever decide to scale up and use a number of custom resources, the Qt Resource System is the better way to go: it saves you code (no need to load a QImage by hand each time), and it has a few other nice features such as resource compression.
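For reference, a minimal sketch of what that looks like. The file and prefix names below are assumptions (the :/Images/background.png path in the question suggests a .qrc file is already set up):
// images.qrc (assumed name), compiled into the binary by rcc:
//   <RCC>
//     <qresource prefix="/Images">
//       <file>background.png</file>
//     </qresource>
//   </RCC>

// Anywhere in the code, the image can then be loaded straight from the
// resource path, with no file-system access at run time:
QImage background(":/Images/background.png");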
I'm currently having a problem where I try to render an SVG file to a QLabel and it does not display correctly:
[screenshot: stop sign SVG displayed wrongly in main window]
This is how the SVG actually looks:
[image: stop sign SVG]
I want the SVG to be displayed without scaling (I mean, that's what SVG is all about, isn't it?) in its original square shape. The SVG specifies a size of 256x256, but I don't care about that as long as it fills the layout's cell and is displayed with its correct aspect ratio.
This is the meaningful part of the header:
class LoginUnavailableWindow : public QWidget {
Q_OBJECT
public:
explicit LoginUnavailableWindow(QWidget *parent = nullptr);
~LoginUnavailableWindow() override;
private:
QHBoxLayout errorLayout {this};
QLabel errorIconLabel {this};
QLabel errorTextLabel {this};
};
And this is the meaningful part of the main cpp:
LoginUnavailableWindow::LoginUnavailableWindow(QWidget *parent) : QWidget(parent) {
/* set overall layout */
errorLayout.addWidget(&errorIconLabel);
errorLayout.addWidget(&errorTextLabel);
errorLayout.setStretch(0, 1);
errorLayout.setStretch(1, 3);
/* apply size constraints */
setMinimumSize(800, 200);
setMaximumSize(800, 200);
errorTextLabel.setAlignment(Qt::AlignCenter);
errorIconLabel.setAlignment(Qt::AlignCenter);
/* set text */
QFont font = errorTextLabel.font();
font.setPointSize(48);
errorTextLabel.setFont(font);
errorTextLabel.setText("Login is currently\nunavailable");
/* render SVG */
QSvgRenderer renderer(ERROR_SVG_PATH);
QPixmap pm(errorIconLabel.size());
pm.fill(QColorConstants::Transparent);
QPainter painter(&pm);
renderer.render(&painter, pm.rect());
errorIconLabel.setPixmap(pm);
}
The actual SVG rendering code was taken from here. One of the problems I can see is that errorIconLabel.size() returns 100x30, which I find very confusing. How do I get the actual size of the layout's cell so that I can calculate the resolution at which to render the SVG?
A lot of the answers I found use setScaledContents(), which does cause the pixmap to be displayed at a more reasonable size, but then it's all blurry/pixelated since the SVG is still rendered at the wrong resolution. I would like to achieve this without introducing scaling artifacts.
Okay, I was able to solve this on my own. I realized that the QLabel only reports a size of 100x30 while the constructor is running. Right after the constructor finishes, a resizeEvent is issued, and inside that resizeEvent the label's size is reported correctly.
So my solution in the end was to implement void resizeEvent(QResizeEvent *) override in LoginUnavailableWindow and redraw the SVG whenever the size changes. In my case I force the window to a fixed size, but for anyone who doesn't, this also fixes the display problems that appear when the user resizes the window.
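A minimal sketch of what that override can look like, using the member names and ERROR_SVG_PATH from the question (the square-fitting via qMin is my own assumption, added to preserve the aspect ratio):
void LoginUnavailableWindow::resizeEvent(QResizeEvent *event)
{
    QWidget::resizeEvent(event);

    // By now the layout has assigned the label its real size
    const int side = qMin(errorIconLabel.width(), errorIconLabel.height());
    if (side <= 0)
        return;

    // Re-render the SVG at exactly the resolution it will be shown at
    QSvgRenderer renderer(ERROR_SVG_PATH);
    QPixmap pm(side, side);
    pm.fill(Qt::transparent);

    QPainter painter(&pm);
    renderer.render(&painter, pm.rect());
    painter.end();

    errorIconLabel.setPixmap(pm);
}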
I want to write an object-annotation tool using Qt5 with MinGW 32-bit, which can annotate objects in a video file and display the annotations during playback. So QGraphicsScene is subclassed to implement this functionality.
Something goes wrong when I change the QGraphicsScene's background frequently (e.g. 30 fps): most of the time it works as expected, but sometimes the background stops updating.
Here is my code:
void MyGraphicsScene::UpdateFrame(QImage image)
{
QPixmap pixmap = QPixmap::fromImage(image);
//fix the view's size and scene's size
views().first()->setFixedSize(pixmap.size());
setSceneRect(0,0, pixmap.width(), pixmap.height());
setBackgroundBrush(QBrush(pixmap));
}
...
//In another thread
void Thread::run()
{
...
myScene.UpdateFrame(newImage);
...
}
I have searched through the Qt documentation and found no answer.
However, there is something strange:
when the problem happens, the background actually continues to change, but the change doesn't show on the screen unless I move the app to another screen (I have two screens). However, once the app is moved, the QGraphicsScene's background changes just once and becomes static afterwards.
I guess the background has been changed but doesn't get repainted, so I used update(), but it didn't help.
BTW, I can't reproduce the problem reliably; sometimes it happens, sometimes not.
Do I need to reimplement any methods? Or am I calling the methods in the wrong way? Or is there an alternative approach that would work?
Many thanks for your help in advance.
You should not change QtGui elements from a different thread by calling a method directly.
Use the Qt signal-slot concept.
Calling update() is not necessary.
class MyGraphicsScene : public QGraphicsScene
{
Q_OBJECT
....
signals:
void signalFromThread(QImage image);
public:
//CTor
MyGraphicsScene()
{
//With the default Qt::AutoConnection this becomes a queued connection
//when the signal is emitted from another thread
connect(this, SIGNAL(signalFromThread(QImage)), this, SLOT(UpdateFrame(QImage)));
}
//Call this method from your thread
void updateForwarder(QImage image)
{
//Jump into gui thread
emit signalFromThread(image);
}
public slots:
void UpdateFrame(QImage image)
{
setBackgroundBrush(....);
}
};
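For completeness, a usage sketch of how the question's worker thread might call into this (assuming the thread holds a reference to the scene, as in the original code):
//In another thread
void Thread::run()
{
    ...
    // Emits signalFromThread(); Qt delivers it as a queued call in the GUI
    // thread, where UpdateFrame() can safely touch the scene
    myScene.updateForwarder(newImage);
    ...
}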
I am using the QGraphicsPixmapItem to show an image on the display. Now, I want to be able to update this image on the fly, but I seem to be running into some issues.
This is the header file:
class Enemy_View : public QGraphicsPixmapItem
{
public:
Enemy_View(QGraphicsScene &myScene);
void defeat();
private:
QGraphicsScene &scene;
QPixmap image;
};
And here is the cpp file
Enemy_View::Enemy_View(QGraphicsScene &myScene):
image{":/images/alive.png"}, scene(myScene)
{
QGraphicsPixmapItem *enemyImage = scene.addPixmap(image.scaledToWidth(20));
enemyImage->setPos(20, 20);
this->defeat();
}
void Enemy_View::defeat(void)
{
image.load(":/images/dead.png");
this->setPixmap(image);
this->update();
}
So the idea is that I want to be able to call the defeat method on my object, which then edits some attributes and eventually changes the image. However, what I am doing now does not work: the alive.png image does display, but it never gets updated to the dead.png one.
Update 1
As mentioned by Marek R, I appear to be replicating a lot of built-in functionality. I tried to clean this up, but now nothing is showing up on the scene anymore.
.h file
class Enemy_View : public QGraphicsPixmapItem
{
public:
Enemy_View(QGraphicsScene &myScene);
void defeat();
private:
QGraphicsScene &scene;
/* Extra vars */
};
.cpp file
Enemy_View::Enemy_View(QGraphicsScene &myScene):
scene(myScene)
{
/* This part would seem ideal but doesn't work */
this->setPixmap(QPixmap(":/images/alive.png").scaledToWidth(10));
this->setPos(10, 10);
scene.addItem(this);
/* This part does render the images */
auto *thisEl = scene.addPixmap(QPixmap(":/images/Jackskellington.png").scaledToWidth(10));
thisEl->setPos(10, 10);
scene.addItem(this);
this->defeat();
}
void Enemy_View::defeat(void)
{
this->setPixmap(QPixmap(":/images/dead.png"));
}
So I removed the QPixmap member, but I'm not sure whether I can remove the QGraphicsScene reference. In my cpp file you can see I now have two variants in the constructor. The first part, using this, seems like the ideal solution, but it does not display the image on the screen (even though it compiles without errors). The second variant with thisEl does render it. What am I doing wrong in the first variant?
Why, FGS, are you subclassing QGraphicsPixmapItem? QGraphicsPixmapItem already has all the functionality you need, and the new fields you added do nothing; they only try to replicate functionality that is already there (and with this implementation they do nothing).
It is supposed to be something like this:
QPixmap image(":/images/alive.png");
QGraphicsPixmapItem *enemyItem = scene.addPixmap(image.scaledToWidth(20));
enemyItem->setPos(20, 20);
// and after something dies
QPixmap dieImage(":/images/dead.png");
enemyItem->setPixmap(dieImage);
I'm trying to write a QML plugin that reads frames from a video (using a custom widget for that task, NOT QtMultimedia/Phonon); each frame is converted to a QImage in RGB888 format and then displayed on a QGLWidget (for performance reasons). Right now nothing is drawn and the screen stays white all the time.
It's important to state that I already have all of this working without QGLWidget, so I know the issue lies in setting up and drawing on the QGLWidget.
The plugin is being registered with:
qmlRegisterType<Video>(uri,1,0,"Video");
so Video is the main class of the plugin. In its constructor we have:
Video::Video(QDeclarativeItem* parent)
: QDeclarativeItem(parent), d_ptr(new VideoPrivate(this))
{
setFlag(QGraphicsItem::ItemHasNoContents, false);
Q_D(Video);
QDeclarativeView* view = new QDeclarativeView;
view->setViewport(&d->canvas()); // canvas() returns a reference to my custom OpenGL Widget
}
Before I jump to the canvas object, let me say that I overrode Video::paint() so that it calls canvas.paint(), passing the QImage as a parameter. I don't know if this is the right way to do it, so I would like some advice on this:
void Video::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* widget)
{
Q_UNUSED(painter);
Q_UNUSED(widget);
Q_UNUSED(option);
Q_D(Video);
// I know for sure at this point "d->image()" is valid, but I'm hiding the code for clarity
d->canvas().paint(painter, option, d->image());
}
The canvas object is declared as GLWidget canvas; and the header of this class is defined as:
class GLWidget : public QGLWidget
{
Q_OBJECT
public:
explicit GLWidget(QWidget* parent = NULL);
~GLWidget();
void paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QImage* image);
};
Seems pretty simple. Now, the implementation of GLWidget is the following:
GLWidget::GLWidget(QWidget* parent)
: QGLWidget(QGLFormat(QGL::SampleBuffers), parent)
{
// Should I do something here?
// Maybe setAutoFillBackground(false); ???
}
GLWidget::~GLWidget()
{
}
And finally:
void GLWidget::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QImage* image)
{
// I ignore painter because it comes from Video, so I create a new one:
QPainter gl_painter(this);
// Perform drawing as Qt::KeepAspectRatio
gl_painter.fillRect(QRectF(QPoint(0, 0), QSize(this->width(), this->height())), Qt::black);
QImage scaled_img = image->scaled(QSize(this->width(), this->height()), _ar, Qt::FastTransformation);
gl_painter.drawImage(qRound(this->width()/2) - qRound(scaled_img.size().width()/2),
qRound(this->height()/2) - qRound(scaled_img.size().height()/2),
scaled_img);
}
What am I missing?
I originally asked this question on Qt Forum but got no replies.
Solved. The problem was that I was trying to create a new GL context within my plugin when I should have been retrieving the GL context from the application that loaded it.
This code was very helpful for understanding how to accomplish that.
By the way, I discovered that the content was in fact being drawn inside view; I just needed to call view->show(). But that created another window, which was not what I was looking for. The link I shared above has the answer.
I think you have to draw on your GLWidget using the OpenGL functions.
One possible way is to override the paintGL() method in GLWidget
and then draw your image with glDrawPixels().
glClear(GL_COLOR_BUFFER_BIT);
glDrawPixels(buffer.width(), buffer.height(), GL_RGBA, GL_UNSIGNED_BYTE, buffer.bits());
Here buffer is a QImage object that needs to be converted first using the QGLWidget::convertToGLFormat() static method.
Look at this source for reference:
https://github.com/nayyden/ZVector/blob/master/src/GLDebugBufferWidget.cpp
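For illustration, here is a paintGL() sketch along those lines (untested; m_frame is an assumed member holding the current QImage):
void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);

    if (m_frame.isNull())
        return;

    // convertToGLFormat() flips the image vertically and converts it to the
    // RGBA layout that glDrawPixels() expects
    QImage buffer = QGLWidget::convertToGLFormat(m_frame);

    // With the default (identity) matrices this anchors the raster position
    // at the lower-left corner of the viewport
    glRasterPos2i(-1, -1);
    glDrawPixels(buffer.width(), buffer.height(), GL_RGBA, GL_UNSIGNED_BYTE, buffer.bits());
}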
So it appears that Qt4 doesn't let you draw on windows outside of a paint event. I have a lot of code that expects to be able to, in order to draw rubber-band lines (generic drawing code for a particular, proprietary interface that I then implement in the given UI). I've read about the pixmap method; it would be a lot of work and I don't think it's really what I want.
Is there a workaround that allows me to do what I want anyway? I just need to draw XOR bands on the screen.
I tried the WA_PaintOutsidePaintEvent flag, then saw the bit that says it doesn't work on Windows.
In modern compositing desktops, window painting needs to be synchronized by the window manager so that alpha blending and other effects can be applied, in order, to the correct back buffers; the result is then flipped onto the screen to allow tear-free window animations.
Invoking painting operations out-of-band of this process, while still supported for legacy reasons on the underlying platforms, would subvert it and force a number of very suboptimal code paths to be executed.
Basically, when you have painting to do on a window: call the invalidate function to schedule the painting soon, and paint during the paint event.
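To make that concrete, a small sketch of the pattern (the widget, member, and helper names here are hypothetical):
// Record what needs to be drawn, then schedule a repaint
void MyWidget::moveRubberBand(const QRect &band)   // hypothetical helper
{
    m_band = band;   // assumed member holding the current band rectangle
    update();        // invalidates the widget; actual painting happens in paintEvent()
}

void MyWidget::paintEvent(QPaintEvent *)
{
    QPainter p(this);
    QPen pen(Qt::white);
    pen.setStyle(Qt::DashLine);
    p.setPen(pen);
    p.drawRect(m_band);   // the rubber band is drawn here, inside the paint event
}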
Just paint to a QPixmap and copy it to the real widget in the paintEvent. This is the only standard way; you shouldn't try to work around it.
It seems like if you could get access to the HWND of the window in question, you could paint on that surface. Otherwise, I'm not sure. If by the pixmap method you mean something like this, I don't think it's a bad solution:
m_composed_image = QImage(size, QImage::Format_ARGB32);
m_composed_image.setDotsPerMeterX(dpm);
m_composed_image.setDotsPerMeterY(dpm);
m_composed_image.fill(Qt::transparent);
//paint all image data onto new image
QPainter painter(&m_composed_image);
painter.drawImage(QPoint(0, 0), m_alignment_image);
As mentioned in one of the answers, the best way to do it is to use a pixmap buffer. The painting work is done in the buffer and, when it's finished, a repaint() is scheduled. The paintEvent() function then just paints the widget by copying from the pixel buffer.
I was trying to draw a circle on a widget area after the user inputs values and pushes a button. This was my solution: connecting the drawCircle() slot to the clicked() signal.
class PaintHelper : public QWidget
{
Q_OBJECT
private:
QPixmap *buffer;
public:
explicit PaintHelper(QWidget *parent = 0) : QWidget(parent)
{
buffer = new QPixmap(350, 250); // 350x250 is the fixed size of this widget
buffer->fill(Qt::cyan);
}
signals:
public slots:
void drawCircle(int cx, int cy, int r){
QPainter painter(buffer);
painter.setBrush(QBrush(QColor(0,0,255)));
// Midpoint circle algorithm, drawing 1/8 of the circle
int x1 = 0, y1 = r;
int p = 1 - r;
while(y1 >= x1){
painter.drawPoint(x1+cx, y1+cy);
x1++;
if(p < 0){
p += 2*x1 + 1;
}
else{
y1--;
p += 2*(x1 - y1) + 1;
}
}
this->repaint();
}
// QWidget interface
protected:
void paintEvent(QPaintEvent *event)
{
QPainter painter(this);
painter.drawPixmap(0,0,*buffer);
}
};