I have a little question about how to implement some sort of graphics editor.
For drawing I use this method:
First I check if the left mouse button is clicked, then I draw one pixel at event->pos() on my QPixmap, and after that I call update() to redraw it. I also paint lines on the QPixmap between two dots if the mouse is moved with the button pressed (because without that you would just get scattered dots). It works pretty well, but I want to know if there's a more optimized way to do this. Here's some code (I've skipped the parts dealing with zooming, joining missing pixels between two points, etc.):
void Editor::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);   // the painter has to be created inside paintEvent()
    painter.drawPixmap(QRect(0, 0, image.width() * zoom, image.height() * zoom),
                       image);
}
void Editor::mousePressEvent(QMouseEvent *event)
{
    if(event->button() == Qt::LeftButton)
    {
        setImagePixel(event->pos());
    }
}
void Editor::mouseMoveEvent(QMouseEvent *event)
{
    if(event->buttons() & Qt::LeftButton)
    {
        setImagePixel(event->pos(), true);
    }
}
void Editor::setImagePixel(const QPoint &pos)
{
    // pos is still in widget coordinates here; the skipped zoom code maps it to image coordinates
    if(image.rect().contains(pos))
    {
        // the QPainter constructor already begins painting on the device,
        // so no extra begin()/end() pair is needed
        QPainter painter(&image);
        painter.setPen(primaryColor);
        painter.drawPoint(pos);
    }
}
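The overload with the extra flag (called from mouseMoveEvent above) is part of the code I skipped; simplified, and again without the zoom mapping, it is roughly this (lastPos is just a member holding the previous mouse position):
// Simplified sketch of the skipped two-argument overload: lastPos holds
// the previous mouse position; the zoom mapping is omitted here as well.
void Editor::setImagePixel(const QPoint &pos, bool joinWithLast)
{
    if(!image.rect().contains(pos))
        return;

    QPainter painter(&image);
    painter.setPen(primaryColor);
    if(joinWithLast)
        painter.drawLine(lastPos, pos);   // bridge the gap left by fast mouse moves
    else
        painter.drawPoint(pos);
    lastPos = pos;
    update();
}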
Yes, I would use QPainterPath and its API to draw freehand shapes. Look at its methods moveTo() and lineTo(), which will let you get rid of the drawing logic (missing pixels, etc.). It is also very easy to combine with mouse events.
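A minimal sketch of that idea, assuming a QPainterPath member called path on your editor widget (zoom handling left out):
void Editor::mousePressEvent(QMouseEvent *event)
{
    if(event->button() == Qt::LeftButton)
        path.moveTo(event->pos());   // start a new sub-path at the click
}
void Editor::mouseMoveEvent(QMouseEvent *event)
{
    if(event->buttons() & Qt::LeftButton)
    {
        path.lineTo(event->pos());   // extend the current stroke
        update();
    }
}
void Editor::mouseReleaseEvent(QMouseEvent *event)
{
    if(event->button() == Qt::LeftButton)
    {
        QPainter painter(&image);
        painter.setPen(primaryColor);
        painter.drawPath(path);      // commit the finished stroke to the pixmap
        path = QPainterPath();       // reset for the next stroke
        update();
    }
}
void Editor::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    painter.drawPixmap(0, 0, image);   // the committed pixels
    painter.setPen(primaryColor);
    painter.drawPath(path);            // the stroke currently being drawn
}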
Hope this helps.
I'm working on a project where I have to build an image editor similar to Paint. I would like to open an image, zoom in/out if necessary, and then draw on it.
For that, I used the Image Viewer and Scribble examples; the only thing I added was a QLabel subclass that is supposed to draw lines and other shapes with the mouse press/mouse release events. The problem is with the overridden paintEvent.
void imLabel::paintEvent(QPaintEvent *event){
    if(tipo != ""){ //draw mode
        QPainter painter(this);
        QRect dirtyRect = event->rect();
        painter.drawImage(dirtyRect, image, dirtyRect);
    }
    else QLabel::paintEvent(event);
}
I am able to zoom in normally at first, but when I start drawing the image goes back to its original size, while the rest of the label not occupied by the image stays completely white, because the label kept its zoomed-in size.
Is there a way that I can keep the image zoomed in while I draw, just like in Paint?
UPDATE
Here's the code of the draw function, maybe it'll help
void imLabel::draw(const QPoint &endPoint){
    QPainter painter(&image);
    painter.setPen(QPen(myPenColor, myPenWidth, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
    if(tipo == "maoLivre" || tipo == "reta") //draw line or free hand
        painter.drawLine(startPoint, endPoint);
    else{
        int x = startPoint.x(), y = startPoint.y(),
            largura = endPoint.x() - startPoint.x(),
            altura = endPoint.y() - startPoint.y();
        if(tipo == "retangulo") //draw rectangle
            painter.fillRect(x, y, largura, altura, Qt::green);
        else if(tipo == "circulo"){ //draw circle
            painter.setBrush(Qt::green);
            painter.drawEllipse(startPoint, altura, altura);
        }
    }
    modified = true;
    update();
    startPoint = endPoint;
}
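I suspect paintEvent needs to draw the image scaled by the current zoom factor instead of copying it 1:1, something along these lines (scaleFactor would be a new member tracking the zoom), but I'm not sure this is the right approach:
void imLabel::paintEvent(QPaintEvent *event){
    if(tipo != ""){ //draw mode
        QPainter painter(this);
        // paint the whole image scaled to the current zoom
        QRect target(0, 0,
                     int(image.width() * scaleFactor),
                     int(image.height() * scaleFactor));
        painter.drawImage(target, image);
    }
    else QLabel::paintEvent(event);
}
Mouse positions passed to draw() would then also have to be divided by the same factor so the strokes land on the right image pixels.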
I have a QAxWidget which embeds a Flash player and loads a SWF. I don't want to show the widget; instead I want to render it into a QImage and then use that image as a texture on my OpenSceneGraph node. Everything is OK so far. But since no widget is visible, I have to take the mouse move/press/release events from my osgView, calculate the corresponding position on my node, and send the events to the Flash player so that ActionScript mouse events work. The calculation part is tomorrow's problem, but so far I have not been able to send any custom mouse event to the player. Does anyone have any ideas?
Here is the example code:
BFlashWidget::BFlashWidget(QWidget *parent)
    : QAxWidget(parent)
{
    QSize imgSize(800,600);
    setMinimumSize(imgSize);
    setMouseTracking(true);
    mQtImg = QImage(imgSize, QImage::Format_RGBA8888);
    setControl(QString::fromUtf8("{d27cdb6e-ae6d-11cf-96b8-444553540000}"));
    dynamicCall("LoadMovie(long, string)", 0,
                QDir::currentPath() + "//deneme.swf");
    activateWindow();
}
BFlashWidget::~BFlashWidget()
{
}
QImage BFlashWidget::getImage()
{
    QPainter painter(&mQtImg);
    render(&painter);
    return mQtImg;
}
void BFlashWidget::simulateMouseMove(QMouseEvent *e)
{
    QApplication::sendEvent(this, e);
    QApplication::processEvents();
}
void BFlashWidget::simulateMousePress(QMouseEvent *e)
{
    QApplication::sendEvent(this, e);
    QApplication::processEvents();
}
void BFlashWidget::simulateMouseRelease(QMouseEvent *e)
{
    QApplication::sendEvent(this, e);
    QApplication::processEvents();
}
The player does nothing when I call those simulateMouseXXXXX methods.
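For completeness, this is roughly how I construct the events before passing them in (mFlashWidget is the BFlashWidget instance; the coordinates are placeholders until the mapping part is done):
// Position inside the 800x600 widget; for now just a hard-coded placeholder.
QPoint localPos(400, 300);
QMouseEvent move(QEvent::MouseMove, localPos,
                 Qt::NoButton, Qt::NoButton, Qt::NoModifier);
mFlashWidget->simulateMouseMove(&move);

QMouseEvent press(QEvent::MouseButtonPress, localPos,
                  Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
mFlashWidget->simulateMousePress(&press);

QMouseEvent release(QEvent::MouseButtonRelease, localPos,
                    Qt::LeftButton, Qt::NoButton, Qt::NoModifier);
mFlashWidget->simulateMouseRelease(&release);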
I'm facing an annoying issue using Qt QGraphicsView framework.
I'm writing a basic image viewer to display large B&W TIFF images.
Images are displayed using a QGraphicsPixmapItem and added to the scene.
I also need to draw a colored rectangle around the viewport.
void ImageView::drawForeground( QPainter * painter, const QRectF & rect )
{
    if( m_image ) {
        qDebug() << "drawForeground(), rect = " << rect;
        QBrush br(m_image->isValidated() ? Qt::green : Qt::red);
        qreal w = 40. / m_scaleFactor;
        QPen pen(br, w);
        painter->save();
        painter->setOpacity(0.5);
        painter->setPen(pen);
        painter->drawRect(rect);
        painter->restore();
    }
}
It works fine, at first glance.
But when I scroll the viewport content, things get ugly.
The drawForeground() method does get called, but it seems the content of the viewport is not erased beforehand, so the drawing ends up looking horrible on screen.
Is there a better way to achieve it?
EDIT
As leems mentioned, Qt internals don't allow me to achieve this with drawForeground().
I found a workaround using a QGraphicsRectItem which gets resized in viewportEvent().
It looks like:
bool ImageView::viewportEvent(QEvent *ev)
{
    QRect rect = viewport()->rect();
    if( m_image ) {
        QPolygonF r = mapToScene(rect);
        QBrush br2(m_image->isValidated() ? Qt::green : Qt::red);
        qreal w2 = 40. / m_scaleFactor;
        QPen pen2(br2, w2);
        m_rect->setPen(pen2);
        m_rect->setOpacity(0.5);
        m_rect->setRect(r.boundingRect());
    }
    return QGraphicsView::viewportEvent(ev);
}
The code is not finalized but will basically look like that.
It works fine, though it blinks a little when scrolling too fast...
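For completeness, m_rect is just a plain QGraphicsRectItem created once, roughly like this:
// One-time setup (e.g. in the view's constructor): create the overlay
// rectangle and keep it above the pixmap item; its geometry is then
// refreshed in viewportEvent() as shown above.
m_rect = new QGraphicsRectItem();
m_rect->setZValue(1);            // draw on top of the QGraphicsPixmapItem
m_rect->setBrush(Qt::NoBrush);   // outline only
scene()->addItem(m_rect);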
I have made a GUI in Qt that is basically a widget with a QGraphicsView on it. I have this function:
void GUI::mousePressEvent(QMouseEvent *event)
{
    if(event->button() == Qt::LeftButton)
    {
        QPointF mousePoint = ui->graphicsView->mapToScene(event->pos());
        qDebug() << mousePoint;
    }
}
which links to a public slot:
void mousePressEvent(QMouseEvent *event);
This shows me on the console the x,y coordinates of where I clicked; however, it currently works over the entire widget, and I would ideally like (0,0) to be the top left of the QGraphicsView instead of the top left of the entire widget. Does anyone have any idea how to make this work? I thought from my code that this is what it was doing, but it turns out it is not. I've had a look around for a while now, but I'm coming up with nothing.
Any help would be really appreciated, thanks.
Reimplementing mousePressEvent(QMouseEvent *event) on the QGraphicsView rather than on your widget would be the 'proper' way of doing it (a sketch of that follows the snippet below). Otherwise you can hack it with:
// Detect if the click is in the view.
QPoint remapped = ui->graphicsView->mapFromParent( event->pos() );
if ( ui->graphicsView->rect().contains( remapped ) )
{
    QPointF mousePoint = ui->graphicsView->mapToScene( remapped );
}
This assumes that the widget is the parent of the QGraphicsView.
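The subclass version mentioned at the top would look roughly like this (the class name is just an example):
class MyGraphicsView : public QGraphicsView
{
protected:
    void mousePressEvent(QMouseEvent *event)
    {
        if(event->button() == Qt::LeftButton)
        {
            // event->pos() is already in view coordinates here, so (0,0)
            // is the top-left corner of the QGraphicsView itself.
            QPointF mousePoint = mapToScene(event->pos());
            qDebug() << mousePoint;
        }
        QGraphicsView::mousePressEvent(event);
    }
};
You would then promote the graphicsView in Designer to this class (or create it in code), and the printed coordinates are relative to the view.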
I'm trying to make a Paint application in C++ with Qt. Every time I click or click & drag the mouse, the program draws something on a pixmap. After that, it updates the window, which triggers paintEvent() and draws the pixmap onto the window.
void QPaintArea::mousePressEvent(QMouseEvent *event){
    startpoint = event->pos();
    drawPoint(startpoint);
    is_pressed = true;
}
void QPaintArea::mouseReleaseEvent(QMouseEvent *event){
    is_pressed = false;
}
void QPaintArea::mouseMoveEvent(QMouseEvent *event){
    if(is_pressed == true){
        endpoint = event->pos();
        drawLine(startpoint, endpoint);
        startpoint = endpoint;
    }
    else{
        return;
    }
}
void QPaintArea::paintEvent(QPaintEvent *event){
    QDesktopWidget *desktop = QApplication::desktop();
    int x = (desktop->width() - 800) / 2;
    int y = (desktop->height() - 600) / 2;
    QPainter painter(this);
    QRect target(QPoint(x, y - 35), QSize(800, 600));
    QRect dirtyrect(QPoint(0,0), QSize(800, 600));
    painter.drawPixmap(target, *pixmap, dirtyrect);
}
The problem is that, the program is not printing the pixmap onto the window as expected. For example, I press the mouse at x: 17, y: 82 trying to draw something. The program will print what I drew but at an offset location, like x + 20, y.
Maybe I don't fully understand how QRect or drawPixmap works, but the pixmap is 800x600. "dirtyrect" is supposed to take the entire pixmap (starting at x: 0, y: 0, with size 800x600).
drawPixmap(target, pixmap, source) paints the source part of the pixmap onto the target rect of the painter's device (QPaintArea in this case). So you paint the whole pixmap (0,0,800,600) into some (x, y-35, 800, 600) rect of QPaintArea. If you want to paint the whole pixmap over the whole QPaintArea, just use drawPixmap(QPoint(0,0), *pixmap).
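To illustrate the three-rect form with made-up numbers (painter being the QPainter on the widget):
// Take only the 100x100 top-left corner of the pixmap and show it
// stretched to 200x200 at position (10,10) on the widget.
QRect target(10, 10, 200, 200);   // where it lands on the widget
QRect source(0, 0, 100, 100);     // which part of the pixmap is taken
painter.drawPixmap(target, *pixmap, source);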
// EDIT
But if you expected the pixmap to be painted with some offset from the QPaintArea's top-left corner, then your calculations are wrong, and if you don't explain what you want to achieve we won't be able to help you. Explain your calculations of x and y (and the magic -35 for y), and maybe we will be able to figure something out.
// EDIT
You don't have to use window offsets like -35 if you're painting on a widget. (0,0) of the widget is not the top-left corner of the window frame, but of the widget's contents. How do you expect it to behave on other platforms?
If you want to paint it in the middle of your window simply use:
void QPaintArea::paintEvent(QPaintEvent *event){
    QPoint middle = rect().center();   // center of the widget's own contents
    int x = middle.x() - 800/2;        // probably best would be pixmap->width()/2
    int y = middle.y() - 600/2;        // probably best would be pixmap->height()/2
    QPainter painter(this);
    painter.drawPixmap(QPoint(x,y), *pixmap);
}