I am currently reading images from a decoder and painting each frame in a widget.
This is what I am doing now:
paintEvent(...) {
    ...
    painter.setRenderHint(QPainter::Antialiasing, false);
    painter.setRenderHint(QPainter::HighQualityAntialiasing, false);
    QPixmap pmap = QPixmap::fromImage(glImage).scaledToWidth(width());
    painter.drawPixmap(0, (height() - pmap.height()) / 2, pmap);
    ...
}
However, I found it to be computationally expensive...
Is there any solution to this without using OpenGL in Qt?
You could try to use QPainter::drawImage instead of doing the manual conversion between image representations (QImage -> QPixmap). Referring to the documentation, it should still provide a way to scale the image: "Note: The image is scaled to fit the rectangle, if both the image and rectangle size disagree."
First of all, there is no need to scale your pixmap before painting. You can pass the desired width and height as arguments to painter.drawPixmap; this scales the image while painting, which is (probably) faster.
QPixmap pmap = QPixmap::fromImage(glImage);
int w = width();
// "scaledToWidth"
int h = w * pmap.height() / (double)pmap.width();
painter.drawPixmap(0, (height() - h) / 2, w, h, pmap);
Then, you could try to draw the image directly. Depending on which operating system you are using, this might be slower or faster.
On Windows, for example, QPixmap is internally represented by a QImage anyway, so QPixmap::fromImage will basically create a (possibly unnecessary) copy of that image.
int w = width();
int h = w * glImage.height() / (double)glImage.width();
painter.drawImage(QRect(0, (height() - h) / 2, w, h), glImage);
If you draw the image directly, alpha blending can become quite expensive. So if possible, use a pixel format without an alpha channel or with premultiplied alpha. (In the premultiplied format, the red, green, and blue channels are multiplied by the alpha component divided by 255.) (See also: QImage::Format_ARGB32_Premultiplied is your friend.)
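For example, a minimal sketch (reusing w and h from the snippet above, and assuming glImage may carry an alpha channel):
// Convert once per frame to premultiplied alpha so blending during drawImage is cheaper.
if (glImage.format() != QImage::Format_ARGB32_Premultiplied)
    glImage = glImage.convertToFormat(QImage::Format_ARGB32_Premultiplied);
painter.drawImage(QRect(0, (height() - h) / 2, w, h), glImage);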
Bonus fact: that's basically what QPixmap::fromImage does on Windows. If you pass a QImage with an alpha channel to that function, the internal QImage will be stored with premultiplied alpha to optimize render performance. See the source code.
I am trying to draw an arrow with OpenCV 3.2:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main()
{
    Mat image(480, 640, CV_8UC3, Scalar(255, 255, 255)); // White background
    Point from(320, 240);                                // Middle
    Point to(639, 240);                                  // Right border
    arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_AA, 0, 0.1);
    imshow("Arrow", image);
    waitKey(0);
    return 0;
}
An arrow is drawn, but at the tip some pixels are missing:
To be more precise, two columns of pixels are not colored correctly (zoomed):
If I disable antialiasing, i.e., if I use
arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_8, 0, 0.1);
instead (note the LINE_8 instead of LINE_AA), the pixels are there, albeit without antialiasing:
I am aware that antialiasing might rely on neighboring pixels, but it seems strange that pixels are not drawn at all at the borders instead of being drawn without antialiasing. Is there a workaround for this issue?
Increasing the X coordinate (e.g., to 640 or 641) makes the problem worse, i.e., more of the arrowhead pixels disappear, while the tip still lacks nearly two complete pixel columns.
Extending and cropping the image would solve the neighboring-pixels issue, but in my original use case, where the problem appeared, I cannot enlarge the image; its size must remain constant.
After a quick review, I've found that OpenCV draws AA lines using a Gaussian filter, which contracts the final image.
As I've suggested in the comments, you can implement your own function for the AA mode (you can call the original one if AA is disabled) and extend the points manually (see the code below to get the idea).
Another option may be to increase the line width when using AA.
You may also simulate the AA effect of OpenCV but on the final image (may be slower but helpful if you have many arrows). I'm not an OpenCV expert so I'll write a general scheme:
// Filter radius, the higher the stronger
const int kRadius = 3;
// The image is extended so that the blur has room to work near the borders
Mat blurred(480 + kRadius * 2, 640 + kRadius * 2, CV_8UC3, Scalar(255, 255, 255));
// Points shifted according to the filter radius (needs testing, but that is the idea)
Point from(320 + kRadius, 240 + kRadius);
Point to(639 + kRadius, 240 + kRadius);
// Extended non-AA arrow
arrowedLine(blurred, ..., LINE_8, ...);
// Simulate AA
GaussianBlur(blurred, blurred, Size(kRadius, kRadius), ...);
// Crop image (be careful, it doesn't copy data)
Mat image = blurred(Rect(kRadius, kRadius, 640, 480));
Another option may be to draw the arrow in an image twice as large and then scale it down with a good smoothing filter.
Obviously, the last two options will work only if you don't have any previous data on the image. If you do, use a transparent image for the temporary drawing and overlay it at the end.
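For the "twice as large" option, a rough sketch (reusing the sizes from the question) might be:
// Draw the AA arrow at 2x resolution, then downscale with a smoothing interpolation.
Mat big(480 * 2, 640 * 2, CV_8UC3, Scalar(255, 255, 255));
arrowedLine(big, Point(320 * 2, 240 * 2), Point(639 * 2, 240 * 2),
            Vec3b(0, 0, 0), 2, LINE_AA, 0, 0.1);
Mat scaled;
resize(big, scaled, Size(640, 480), 0, 0, INTER_AREA); // INTER_AREA smooths nicely when shrinking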
I investigated and stripped down my previous question (Is there a way to avoid conversion from YUV to BGR?). I want to overlay a few images (the format is YUV) on a bigger resulting image (think of it as a canvas) and send it forward via a network library (OPAL) without converting it to BGR.
Here is the code:
Mat tYUV;
Mat tClonedYUV;
Mat tBGR;
Mat tMergedFrame;
int tMergedFrameWidth = 1000;
int tMergedFrameHeight = 800;
int tMergedFrameHalfWidth = tMergedFrameWidth / 2;
tYUV = Mat(tHeader->height * 1.5f, tHeader->width, CV_8UC1, OPAL_VIDEO_FRAME_DATA_PTR(tHeader));
tClonedYUV = tYUV.clone();
tMergedFrame = Mat(Size(tMergedFrameWidth, tMergedFrameHeight), tYUV.type(), cv::Scalar(0, 0, 0));
tYUV.copyTo(tMergedFrame(cv::Rect(0, 0,
    tYUV.cols > tMergedFrameWidth ? tMergedFrameWidth : tYUV.cols,
    tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));
tClonedYUV.copyTo(tMergedFrame(cv::Rect(tMergedFrameHalfWidth, 0,
    tYUV.cols > tMergedFrameHalfWidth ? tMergedFrameHalfWidth : tYUV.cols,
    tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));
namedWindow("merged frame", 1);
imshow("merged frame", tMergedFrame);
waitKey(10);
The result of the above code looks like this:
I guess the image is not being interpreted correctly, so the pictures stay black and white (the Y component), and below them we can see the U and V components. There are images which describe the problem well (http://en.wikipedia.org/wiki/YUV):
and: http://upload.wikimedia.org/wikipedia/en/0/0d/Yuv420.svg
Is there a way for these values to be read correctly? I guess I should not copy the whole images (their Y, U, and V components) straight to the calculated positions. The U and V components should be below them and in the proper order, am I right?
First, there are several YUV formats, so you need to be clear about which one you are using.
According to your image, it seems your YUV format is Y'UV420p.
Regardless, it is a lot simpler to convert to BGR, work there, and then convert back.
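If that route is acceptable, a rough sketch (assuming the frames really are I420, i.e. the height * 1.5 single-channel layout from your code, that the frame fits on the canvas, and that the canvas dimensions are even) could be:
// Convert each incoming frame to BGR, compose on a BGR canvas, then convert
// the merged canvas back to I420 before sending it out.
Mat tFrameBGR;
cvtColor(tYUV, tFrameBGR, COLOR_YUV2BGR_I420);        // (1.5*h x w, 1ch) -> (h x w, 3ch)
Mat tCanvasBGR(800, 1000, CV_8UC3, Scalar(0, 0, 0));
tFrameBGR.copyTo(tCanvasBGR(Rect(0, 0, tFrameBGR.cols, tFrameBGR.rows)));
Mat tCanvasYUV;
cvtColor(tCanvasBGR, tCanvasYUV, COLOR_BGR2YUV_I420); // back to a single-channel I420 buffer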
If that is not an option, you pretty much have to manage the ROIs yourself. YUV is commonly a planar format, where the channels are not (completely) multiplexed - and some are of different sizes and depths. If you do not use the internal color conversions, then you will have to know the exact YUV format and manage the pixel-copying ROIs yourself.
With a YUV image, the CV_8UC* format specifier does not mean much beyond the actual memory requirements. It certainly does not specify the pixel/channel muxing.
For example, if you wanted to use only the Y component, then the Y is often the first plane in the image, so the first "half" of the whole image can just be treated as a monochrome 8UC1 image. In this case using ROIs is easy.
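A sketch of that, reusing the names from the question (again assuming a planar layout with the Y plane first):
// The first tHeader->height rows of the 1.5*height buffer are the Y plane.
Mat tY = tYUV(cv::Rect(0, 0, tHeader->width, tHeader->height)); // view, no copy
imshow("Y plane", tY); // behaves like any ordinary 8UC1 grayscale image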
I want to use the CImg library (http://cimg.sourceforge.net/) to rotate an image by an arbitrary angle (the image is read by Qt, which should not perform the rotation):
QImage img("sample_with_alpha.png");
img = img.convertToFormat(QImage::Format_ARGB32);
float angle = 45;
cimg_library::CImg<uint8_t> src(img.bits(), img.width(), img.height(), 1, 4);
cimg_library::CImg<uint8_t> out = src.get_rotate(angle);
// Further processing:
// Data: out.data(), out.width(), out.height(), Stride: out.width() * 4
The final data in "out.data()" is OK when the angle is set to 0, but for other angles the output data is distorted. I assume that the CImg library changes the output format and/or stride during rotation?
CImg does not store the pixel buffer of an image in interleaved mode, such as RGBARGBARGBA..., but uses a channel-by-channel structure: RRRRRRRR.....GGGGGGGGG.......BBBBBBBBB.....AAAAAAAAA.
I assume your img.bits() pointer points to pixels with interleaved channels, so if you want to pass this to CImg, you'll need to permute the buffer structure before you can apply any of the CImg methods.
Try this:
cimg_library::CImg<uint8_t> src(img.bits(), 4, img.width(), img.height(), 1);
src.permute_axes("yzcx");
cimg_library::CImg<uint8_t> out = src.get_rotate(angle);
// Here, the out image should be OK, try displaying it with out.display();
// But you still need to go back to an interleaved image pointer if you want to
// get it back in Qt.
out.permute_axes("cxyz"); // Do the inverse permutation.
const uint8_t *p_out = out.data(); // Interleaved result.
I guess this should work as expected.
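For the way back to Qt, a sketch of the last step could be (this assumes the per-pixel byte order is unchanged by the round trip, and copies the buffer because QImage does not take ownership of external data):
QImage rotated(p_out, out.width(), out.height(),
               out.width() * 4, QImage::Format_ARGB32);
QImage result = rotated.copy(); // detach from the CImg-owned buffer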
I have *.png files and I want to take different 8x8 px parts from the textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I'm rendering without a bitmap, i.e. I take each texture and draw the part directly on the screen every frame, and it's too slow. I guess I need to load each *.png into a separate bitmap, compose them once in video memory, and then draw just one big bitmap, but maybe I'm wrong. I need the fastest way of doing this, and I need example code (SDL 2, not SDL 1.3).
Or maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png files into int arrays somehow, use them just like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? It seems this is the best way, but how do I write this?
Update 2:
The colors of the pixels in each block are not the same as presented in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Textures, composing them into a different texture is done via SDL_SetRenderTarget.
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
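One detail worth noting: the target texture has to be created with the SDL_TEXTUREACCESS_TARGET access flag (and the renderer must support render targets), otherwise SDL_SetRenderTarget will fail. A rough sketch, with illustrative sizes and the texture1 name from above:
// Create a texture that can be used as a render target and compose into it once.
SDL_Texture *target_texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                                SDL_TEXTUREACCESS_TARGET, 640, 480);
SDL_SetRenderTarget(renderer, target_texture);
SDL_Rect src = { 0, 0, 8, 8 };  // an 8x8 block inside the source texture
SDL_Rect dst = { 0, 0, 8, 8 };  // where to place it on the target
SDL_RenderCopy(renderer, texture1, &src, &dst);
SDL_SetRenderTarget(renderer, NULL);
// From then on, just SDL_RenderCopy(renderer, target_texture, NULL, NULL) each frame.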
OK, so when I asked about "solid colour", I meant: "in that 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same identical RGB value?" It looks that way in your diagram, so how about this:
How about creating an SDL_Surface and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png? Then, when you're done, convert that surface to an SDL_Texture and render that.
You would avoid all the SDL_UpdateTexture() calls.
Anyway, here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight(SDL_Surface *pDest, Uint8 r, Uint8 g, Uint8 b) :
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b) {}

    void BlitToSurface(int column, int row);

private:
    SDL_Surface *m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB value corresponds to the one taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface's pixel memory with this RGB value.
So now, when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and a row value. For example, if you divided the SDL_Surface into 8x8 pixel squares, BlitToSurface(3, 5) means: paint the square at the 4th column and the 6th row with the RGB value that I set on construction.
BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // Point at the first pixel of the requested 8x8 square. The pitch is in
    // bytes and each pixel is 4 bytes wide, hence the division by 4.
    Uint32 *pixel = (Uint32 *)m_pSurface->pixels
                  + row * 8 * (m_pSurface->pitch / 4) + column * 8;
    for (int y = 0; y < 8; y++)
    {
        // paint a row of 8 pixels
        for (int i = 0; i < 8; i++)
        {
            *pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
        }
        // advance the pointer by (pitch in pixels) - 8 to reach the start of the next row
        pixel += (m_pSurface->pitch / 4) - 8;
    }
}
I'm sure you could speed things up further by pre-calculating the mapped pixel value on construction. Or, if you're reading a pixel value from the texture, you could probably dispense with the SDL_MapRGB() call (it's just there in case the Surface has a different pixel format to the .png).
A memcpy is probably faster than 8 individual assignments of the RGB value, but I just want to demonstrate the technique. You could experiment.
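For example, a hedged sketch of the pre-calculation idea, where m_mapped would be a new Uint32 member initialised once in the constructor with SDL_MapRGB(pDest->format, r, g, b):
void EightByEight::BlitToSurface(int column, int row)
{
    Uint32 *pixel = (Uint32 *)m_pSurface->pixels
                  + row * 8 * (m_pSurface->pitch / 4) + column * 8;
    for (int y = 0; y < 8; y++)
    {
        for (int i = 0; i < 8; i++)
            pixel[i] = m_mapped;            // reuse the value mapped in the constructor
        pixel += m_pSurface->pitch / 4;     // jump to the same column in the next row
    }
}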
So, all the EightByEight objects you create point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
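A sketch of that final step (assuming surface is the composed SDL_Surface):
SDL_Texture *composed = SDL_CreateTextureFromSurface(renderer, surface);
SDL_RenderCopy(renderer, composed, NULL, NULL); // draw the whole composed image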
Thanks to everyone who took part, but my friends and I solved it. Here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
    for (/*Conditions*/)
        memcpy(/*Params*/);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
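A slightly more concrete (untested) sketch of the same idea, assuming a SDL_TEXTUREACCESS_STREAMING texture and a CPU-side buffer srcPixels with srcPitch bytes per row (both names are placeholders):
void *dst;
int dstPitch;
if (SDL_LockTexture(texture, NULL, &dst, &dstPitch) == 0)
{
    // copy one 8x8 block of 4-byte pixels, row by row
    for (int y = 0; y < 8; ++y)
        memcpy((Uint8 *)dst + y * dstPitch, srcPixels + y * srcPitch, 8 * 4);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, NULL, NULL);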
I want to implement a method which reduces the alpha of every pixel in a QPixmap (Qt 4.8) by 1 every time it is called. In between calls, new lines might be added to the image (with an alpha of 255). Additionally, I'd like to have a lower alpha threshold of, say, 15. Pixels which have an initial alpha of 0 keep that alpha. In pseudo-code:
if alpha == 0:
newAlpha = 0
else:
newAlpha = max(15, alpha - 1)
Right now I have two methods in mind. The first one is conversion to QImage and pixel-by-pixel reduction of the alpha. However, this has two drawbacks: performance, and color artefacts (some pixels' colors change wildly). The artefacts appear when QPainting the resulting QPixmap onto another QPixmap filled with a single color (with QPainter::CompositionMode_SourceOver). This is likely due to dithering?! I tried the two available dither flags; both produce these kinds of artefacts.
QImage image = pixmap.toImage();
for (int y = 0; y < image.height(); ++y) {
    for (int x = 0; x < image.width(); ++x) {
        QRgb col = image.pixel(x, y);
        int alpha = qAlpha(col);
        if (alpha > 15) {
            alpha -= 1;
            QRgb newCol = qRgba(qRed(col), qGreen(col), qBlue(col), alpha);
            image.setPixel(x, y, newCol);
        }
    }
}
pixmap = QPixmap::fromImage(image, Qt::DiffuseAlphaDither | Qt::NoOpaqueDetection);
The artefacts appear with this:
QPixmap screen;
...
screen.fill(Qt::transparent);
QPainter painter( &screen );
// remove anti-aliasing, which (with current composition mode) results in even stronger artefacts
painter.setRenderHints(0);
background.fill(someRandomColor);
painter.drawPixmap(0, 0, w, h, background);
painter.drawPixmap(0, 0, w, h, pixmap);
painter.end();
Alternatively, I tried to map the above pseudo-code to QPixmap drawing operations. For instance, QPainter's composition mode QPainter::CompositionMode_DestinationIn is useful for reducing the alpha. But I don't know how to handle the thresholding while simultaneously keeping the 0 alpha values.
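To make that concrete, this is the kind of thing I mean (a rough, untested sketch; it scales every alpha multiplicatively, so it handles neither the threshold of 15 nor the exact subtract-by-1 behaviour):
// Multiply all alpha values by 254/255 in one pass.
QPainter p(&pixmap);
p.setCompositionMode(QPainter::CompositionMode_DestinationIn);
p.fillRect(pixmap.rect(), QColor(0, 0, 0, 254));
p.end();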
So now there are actually three questions:
Can I avoid the color artefacts with the QImage detour?
Or can I map the above pseudo-code to pure QPixmap/QPainter operations?
Is there a totally different idea for this?
EDIT:
QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
This does seem to remove the artefacts. Before, toImage() would have converted to QImage::Format_ARGB32_Premultiplied, hence the artefacts. But now it is even less performant.
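One idea to mitigate the performance hit (an untested sketch): keep the Format_ARGB32 conversion but walk the scanlines directly instead of calling pixel()/setPixel() per pixel:
QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
for (int y = 0; y < image.height(); ++y) {
    QRgb *line = reinterpret_cast<QRgb *>(image.scanLine(y));
    for (int x = 0; x < image.width(); ++x) {
        const int alpha = qAlpha(line[x]);
        if (alpha > 15)
            line[x] = qRgba(qRed(line[x]), qGreen(line[x]), qBlue(line[x]), alpha - 1);
    }
}
pixmap = QPixmap::fromImage(image, Qt::DiffuseAlphaDither | Qt::NoOpaqueDetection);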