How to render QGraphicsScene animations out to a movie file? - c++

Does anybody know how to render an animation from QGraphicsView / QGraphicsScene out to a movie file format (AVI, MPG, MP4, MOV, something) on disk? I've seen examples for rendering the scene out to an image file with QPrinter. Any ideas on whether rendering animations to movies is possible?
I suppose the worst case would be to step frame-by-frame and save images, then combine the images into a movie with some external tool... Any better suggestions? Is it possible to do this from Qt directly somehow?
Thanks!
- Brian

I think one way is to use QPixmap to capture each frame:
QPixmap pixMap = QPixmap::grabWidget(view);
Use QPixmap::toImage() to convert it to a QImage.
Then use QtFFmpegWrapper to create the video directly from the QImages.
You do not need to use an external tool. Just an external library.
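A rough sketch of that loop follows. Hedged: the QVideoEncoder names (createFile/encodeImage/close) follow QtFFmpegWrapper's bundled demo as I recall it and may differ in your version, and advanceAnimationOneFrame() is a placeholder for however you step your scene:
#include <QGraphicsView>
#include <QPixmap>
#include "QVideoEncoder.h"   // from QtFFmpegWrapper

void advanceAnimationOneFrame();   // placeholder: step your scene/timeline

void recordAnimation(QGraphicsView *view, int frameCount)
{
    // createFile(filename, width, height, bitrate, gop), per the wrapper's demo
    QVideoEncoder encoder;
    encoder.createFile("animation.avi", view->width(), view->height(),
                       1000000, 20);
    for (int i = 0; i < frameCount; ++i) {
        advanceAnimationOneFrame();
        // Qt 4 API; in Qt 5 use view->grab() instead
        QPixmap frame = QPixmap::grabWidget(view);
        encoder.encodeImage(frame.toImage());
    }
    encoder.close();
}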

Related

Qt : screenshot with QOpenGLWidget

Hello,
I have an application based on Qt using QOpenGLWidget.
In the OpenGL widget (named oglwidget in my code) I draw meshes and lines using OpenGL functions. Then I use QPainter to draw scales and text in the same widget.
Now when I use :
const QRect rect(0,0,oglwidget.width(),oglwidget.height());
QPixmap pixmap = oglwidget.grab(rect);
to save the pixmap in a file with :
pixmap.save(...);
only the objects drawn with OpenGL functions are saved.
What am I missing? Is there any solution to save the entire scene?
Would you please help?
Thanks and regards.
If you use the QOpenGLWidget grab method, it will actually read the pixels from the framebuffer rendered by OpenGL. If you want to capture a screenshot of the whole widget, you should probably have a look at this tutorial.
To be more specific, you probably want the grab method from QScreen rather than grab on the OpenGL context (the QOpenGLWidget method).
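For example (a minimal sketch, assuming oglwidget is visible on the primary screen):
#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>

// Grab what is actually on screen (the GL content plus anything QPainter
// drew on top) instead of reading back only the GL framebuffer.
QScreen *screen = QGuiApplication::primaryScreen();
QPixmap shot = screen->grabWindow(oglwidget.winId());
shot.save("scene.png");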

convert Qt qml file to bit map image

I want to take a QML file I created in Qt Creator and transform it into a bmp image.
What I want in the bmp image is just the items I created.
I read about grabWindow, but this is not really what I want because it takes a snapshot of the screen rather than creating a bmp image with only the items in the QML file.
Thanks
From QML:
youritem.grabToImage(function(result) {
result.saveToFile("something.bmp");
});
Although a bmp image is a bad idea: those things are huge. PNG is the best candidate for QML stuff, which is usually more "vector-y".
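If you'd rather trigger the grab from C++, QQuickItem exposes the same facility (a sketch; item is assumed to be a valid QQuickItem pointer you already hold):
#include <QQuickItem>
#include <QQuickItemGrabResult>

// Capture the shared result in the lambda to keep it alive until ready.
QSharedPointer<QQuickItemGrabResult> grab = item->grabToImage();
QObject::connect(grab.data(), &QQuickItemGrabResult::ready, [grab]() {
    grab->image().save("something.png");   // PNG, per the advice above
});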

Drawing items on VLC object

Recently I have tried to do some graphics on top of VLC video using vlc-qt (which provides a video widget). The approach was to draw directly on the widget, but it failed due to the fact that vlc-qt's widget uses an internal widget to render video. (See more details here)
Now I'm trying to do something different. I want to try drawing text (or some rectangles) on the VLC media itself (not the widget). I suppose that's how VLC media player renders subtitles (isn't it?)
So the question is this: having a vlc-qt interface, how can I access the underlying VLC object and draw something on it [using the libVLC API]?
I'm afraid the only way to do it with libVLC is to use libvlc_video_set_callbacks + libvlc_video_set_format_callbacks. It will decode the media stream's frames to memory, which you can then use as you wish.
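A minimal sketch of that setup, using the simpler fixed-format variant libvlc_video_set_format instead of the format callbacks (the frame size and the overlay drawing are placeholders):
#include <vlc/vlc.h>

static const unsigned W = 1280, H = 720;      // assumed output size
static unsigned char framebuf[W * H * 4];     // "RV32" = 4 bytes per pixel

// lock: give libVLC a buffer to decode the next frame into
static void *lock_cb(void *opaque, void **planes)
{
    planes[0] = framebuf;
    return NULL;                              // picture id, unused here
}

// unlock: the frame is decoded; draw your text/rectangles into the
// raw pixels here, before the frame is displayed
static void unlock_cb(void *opaque, void *picture, void *const *planes)
{
}

// display: hand the finished frame to whatever shows it (a Qt widget, ...)
static void display_cb(void *opaque, void *picture)
{
}

void attachOverlay(libvlc_media_player_t *mp)
{
    libvlc_video_set_format(mp, "RV32", W, H, W * 4);
    libvlc_video_set_callbacks(mp, lock_cb, unlock_cb, display_cb, NULL);
}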

How to run .gif file in cocos2d

Is it possible to run a .gif file in cocos2d for iPhone?
CCSpriteSheet works really well for animating with sprite sheets; you'll need a way to generate the sprite sheet from the animated GIF.
Read this.
I assume you mean an animated GIF. If so, no, not directly, but you could extract the frames and run them using any of the usual animation classes.

What is the most efficient way to display decoded video frames in Qt?

What is the fastest way to display images in a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I get the frame data into a QPixmap by constructing a QImage from a memory buffer and converting it to QPixmap with QPixmap::fromImage().
I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView.
I am doing any required video scaling or colorspace conversions with libswscale so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed.
Thanks.
Thanks for the answers, but I finally revisited this problem and came up with a rather simple solution that gives good performance. It involves deriving from QGLWidget and overriding the paintEvent() function. Inside the paintEvent() function, you can call QPainter::drawImage(...) and it will perform the scaling to a specified rectangle for you using hardware if available. So it looks something like this:
#include <QGLWidget>
#include <QPainter>
#include <QImage>

class QGLCanvas : public QGLWidget
{
public:
    QGLCanvas(QWidget* parent = NULL);
    void setImage(const QImage& image);
protected:
    void paintEvent(QPaintEvent*);
private:
    QImage img;
};

QGLCanvas::QGLCanvas(QWidget* parent)
    : QGLWidget(parent)
{
}

void QGLCanvas::setImage(const QImage& image)
{
    img = image;
}

void QGLCanvas::paintEvent(QPaintEvent*)
{
    QPainter p(this);

    // Set the painter to use a smooth scaling algorithm.
    p.setRenderHint(QPainter::SmoothPixmapTransform, true);

    // drawImage() scales img to the widget's rect, in hardware if available.
    p.drawImage(this->rect(), img);
}
With this, I still have to convert the YUV 420P to RGB32, but ffmpeg has a very fast implementation of that conversion in libswscale. The major gains come from two things:
1. No need for software scaling: scaling is done on the video card (if available).
2. The conversion from QImage to QPixmap, which happens in the QPainter::drawImage() function, is performed at the original image resolution as opposed to the upscaled fullscreen resolution.
I was pegging my processor on just the display (decoding was being done in another thread) with my previous method. Now my display thread only uses about 8-9% of a core for fullscreen 1920x1200 30fps playback. I'm sure it could probably get even better if I could send the YUV data straight to the video card, but this is plenty good enough for now.
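For reference, the YUV 420P to RGB32 conversion mentioned above looks roughly like this with libswscale (a sketch: frame is assumed to be a decoded AVFrame, and rgbBuffer a preallocated w*h*4 byte buffer):
extern "C" {
#include <libswscale/swscale.h>
}

// Created once; reused for every frame of the same size and format.
SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                 w, h, AV_PIX_FMT_RGB32,
                                 SWS_BILINEAR, NULL, NULL, NULL);

// Per frame: convert the planar YUV data into the packed RGB buffer,
// then wrap rgbBuffer in a QImage and hand it to setImage().
uint8_t *dst[1]       = { rgbBuffer };
int      dstStride[1] = { 4 * w };
sws_scale(sws, frame->data, frame->linesize, 0, h, dst, dstStride);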
I had the same problem with gtkmm (the gtk+ C++ wrapper). The best solution, besides using an SDL overlay, was to update the widget's image buffer directly and then request a redraw. But I don't know if that is feasible with Qt...
my 2 cents
Depending on your OpenGL/shading skills you could try to copy the video frames to a texture, map the texture to a rectangle (or anything else... fun!) and display it in an OpenGL scene. Not the most straightforward approach, but it's fast, because you're writing directly into graphics memory (like SDL). I would also recommend using YCbCr only, since that format is subsampled (Y is at full resolution; Cb and Cr are each a quarter of the frame), so less memory and less copying is needed to display a frame. I'm not using Qt's GL directly but indirectly, using GL in Qt (via OSG), and I can display about 7-11 full HD (1440 x 1080) videos in realtime.
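The per-frame texture update that answer describes looks roughly like this in plain OpenGL (a sketch using RGBA rather than the YCbCr shader trick recommended above; frameData, frameWidth and frameHeight are placeholders for your decoded frame):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Allocate storage once at the frame size...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// ...then, for each decoded frame, overwrite just the pixels (cheaper
// than reallocating the texture) and draw a textured quad with it.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, frameData);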