QPixmap / QImage alpha reduction with minimum ensured alpha - c++

I want to implement a method which reduces the alpha of every pixel in a QPixmap (Qt 4.8) by 1 every time it is called. In between calls, new lines might be added to the image (with an alpha of 255). Additionally, I'd like to have a lower alpha threshold of, say, 15. Pixels with an initial alpha of 0 will keep that alpha. In pseudo-code:
if alpha == 0:
    newAlpha = 0
else:
    newAlpha = max(15, alpha - 1)
Right now I have two methods in mind. The first one is conversion to QImage and pixel-by-pixel reduction of alpha. However, this has two drawbacks: performance, and color artefacts: some pixels' colors change wildly. The artefacts appear when QPainting the resulting QPixmap onto another QPixmap filled with a single color (with QPainter::CompositionMode_SourceOver). This is likely due to dithering? I tried both available dither flags; both produce these kinds of artefacts.
QImage image = pixmap.toImage();
for (int y = 0; y < image.height(); ++y) {
    for (int x = 0; x < image.width(); ++x) {
        QRgb col = image.pixel(x, y);
        int alpha = qAlpha(col);
        if (alpha > 15) {
            alpha -= 1;
            QRgb newCol = qRgba(qRed(col), qGreen(col), qBlue(col), alpha);
            image.setPixel(x, y, newCol);
        }
    }
}
pixmap = QPixmap::fromImage(image, Qt::DiffuseAlphaDither | Qt::NoOpaqueDetection);
The artefacts appear with this:
QPixmap screen;
...
screen.fill(Qt::transparent);
QPainter painter( &screen );
// remove anti-aliasing, which (with current composition mode) results in even stronger artefacts
painter.setRenderHints(0);
background.fill(someRandomColor);
painter.drawPixmap(0, 0, w, h, background);
painter.drawPixmap(0, 0, w, h, pixmap);
painter.end();
Alternatively, I tried to map the above pseudo-code to pure QPixmap drawing operations. For instance, QPainter's composition mode QPainter::CompositionMode_DestinationIn is useful for reducing the alpha. But I don't know how to handle the thresholding while simultaneously keeping the 0 alpha values.
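For reference, a minimal sketch of how CompositionMode_DestinationIn reduces alpha (my illustration, not a full solution: it multiplies alpha uniformly and cannot express the max(15, alpha - 1) clamp on its own):

// Filling with a uniform alpha multiplies every destination pixel's
// alpha by fillAlpha/255; repeated calls fade the pixmap out.
QPainter p(&pixmap);
p.setCompositionMode(QPainter::CompositionMode_DestinationIn);
p.fillRect(pixmap.rect(), QColor(0, 0, 0, 254)); // alpha *= 254/255
p.end();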
So now there are actually three questions:
1. Can I avoid the color artefacts with the QImage detour?
2. Or can I map the above pseudo-code to pure QPixmap/QPainter operations?
3. Is there a totally different approach to this?
EDIT:
QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
This does seem to remove the artefacts. Before, the conversion went to QImage::Format_ARGB32_Premultiplied, hence the artefacts. But now it is even less performant.
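For what it's worth, a sketch of the same loop using scanLine() instead of pixel()/setPixel(), which should recover some of the lost performance (my addition; it assumes the Format_ARGB32 conversion above):

QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
for (int y = 0; y < image.height(); ++y) {
    // Each scan line of an ARGB32 image is an array of QRgb values.
    QRgb* line = reinterpret_cast<QRgb*>(image.scanLine(y));
    for (int x = 0; x < image.width(); ++x) {
        const int alpha = qAlpha(line[x]);
        if (alpha > 15)
            line[x] = qRgba(qRed(line[x]), qGreen(line[x]), qBlue(line[x]), alpha - 1);
    }
}
pixmap = QPixmap::fromImage(image);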

Related

Pixels Not Set Properly In OpenCV

I am trying to make an image that is completely black except for a white rectangle at the centre of the image. However, on my first attempt, I got a weird result so I changed my code to nail down the problem.
So with for loops, I tried to set all the horizontal pixels at the centre to white to draw a white line across the image. Below is my code.
//--Block Mask--//
block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);
int img_height = block_mask.rows;
int img_width = block_mask.cols;
for (int row = (img_height / 2); row < ((img_height / 2) + 1); row++)
{
    for (int column = 0; column < img_width; column++)
    {
        block_mask.at<uchar>(row, column) = 255;
    }
}
cv::namedWindow("Block Mask", CV_WINDOW_AUTOSIZE);
cv::imshow("Block Mask", block_mask);
img_height = 1080
img_width = 1920
image_height and image_width are defined from another image.
With this code I expected to see a white line drawn across the entire image, however, the white line extends only part way across the image. See the image below.
To troubleshoot I made a variable to count the iterations of the inner for loop and it counted up to 1920 as I expected it to. This leaves me wondering if it is something to do with the image being displayed? When simply setting individual pixels (not in loops) to white past where the line comes to, no results can be seen either.
I am at a loss as to what is going on here so any help, or perhaps a better way of achieving this, would be greatly appreciated.
Solved: The image block_mask is a three-channel BGR image, as it was created with the type CV_8UC3. However, when setting the pixel values to white, the type uchar was used, and it was set to a single integer value of 255.
To properly set the colour of each pixel, all three channels must be set. This can be achieved using a cv::Vec3b variable that contains a value for each channel, each of which can be set individually:
cv::Vec3b new_pixel_colour;
new_pixel_colour[0] = 255; //Blue channel
new_pixel_colour[1] = 255; //Green channel
new_pixel_colour[2] = 255; //Red channel
From here, pixels can be assigned with this variable to change their colour, making sure to change the type in the .at operator to cv::Vec3b also. The corrected code is below.
//--Block Mask--//
block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);
cv::Vec3b new_pixel_colour;
new_pixel_colour[0] = 255; //Blue channel
new_pixel_colour[1] = 255; //Green channel
new_pixel_colour[2] = 255; //Red channel
int img_height = block_mask.rows;
int img_width = block_mask.cols;
for (int row = (img_height / 2); row < ((img_height / 2) + 1); row++)
{
    for (int column = 0; column < img_width; column++)
    {
        block_mask.at<cv::Vec3b>(row, column) = new_pixel_colour;
    }
}
cv::namedWindow("Block Mask", CV_WINDOW_AUTOSIZE);
cv::imshow("Block Mask", block_mask);
An alternative solution for drawing is using the in-buit drawing functions of OpenCV. Specifically, for drawing a rectangle the OpenCV function cv::rectangle() can be used. A tutorial on basic drawing in OpenCV can be found here: https://docs.opencv.org/master/d3/d96/tutorial_basic_geometric_drawing.html
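For instance, a sketch of the original goal (a white rectangle centred in a black image) using cv::rectangle(); rect_width and rect_height are assumed dimensions:

// Draw a filled white rectangle centred in the black mask,
// without touching individual pixels (CV_FILLED = solid).
cv::Mat block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);
int rect_width = 400, rect_height = 200; // hypothetical rectangle size
cv::Point top_left((image_width - rect_width) / 2, (image_height - rect_height) / 2);
cv::Point bottom_right(top_left.x + rect_width, top_left.y + rect_height);
cv::rectangle(block_mask, top_left, bottom_right, cv::Scalar(255, 255, 255), CV_FILLED);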

Animated transition/wipe using SDL2 and black/white mask?

I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game, it produces a clockwise transition to black effect. I have been trying to recreate this effect in SDL(2) but to no avail. I know it's got something to do with masking but I've no idea how to do that in code.
The closest I could get was by using "SDL_SetColorKey" and incrementing the RGB values so it would not draw the "wiping" part of the animation.
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
                              0xFF - counter,
                              0xFF - counter,
                              0xFF - counter,
                              0);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer for my own curiosity- and sanity! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!
I eventually came up with a solution. It's expensive, but it works: iterate through every pixel in the image and map the colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempAlpha);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current color of the pixel. fmt is the SDL_PixelFormat of the image. This is for fading to black, the following is for fading in from black:
if ((255 - counter) > origColor)
    continue;
int tempAlpha = alpha - speed * 5;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)tempAlpha);
Where origColor is the color of the pixel in the original grayscale image.
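For context, a rough sketch of the loop those snippets live in (my reconstruction, assuming a 32-bit surface and a grayscale wipe image where all three channels are equal; speed comes from the snippets above):

SDL_LockSurface(blitSurf);
SDL_PixelFormat* fmt = blitSurf->format;
for (int y = 0; y < blitSurf->h; ++y) {
    // pitch is the byte width of a row, which may exceed w * 4
    Uint32* row = (Uint32*)((Uint8*)blitSurf->pixels + y * blitSurf->pitch);
    for (int x = 0; x < blitSurf->w; ++x) {
        Uint32* pixel = &row[x];
        Uint8 r, g, b, alpha;
        SDL_GetRGBA(*pixel, fmt, &r, &g, &b, &alpha);
        Uint8 color = r; // grayscale: all channels are equal
        int tempAlpha = (int)alpha + (speed * 5) - (int)color;
        int tempColor = (int)color - speed;
        *pixel = SDL_MapRGBA(fmt, (Uint8)tempColor, (Uint8)tempColor,
                             (Uint8)tempColor, (Uint8)tempAlpha);
    }
}
SDL_UnlockSurface(blitSurf);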
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes

Create mask from color Image in C++ (Superimposing a colored image mask)

I've written code which detects (white) squares in realtime and draws a frame around each one. Each side of length l of a square is divided into 7 parts. Then I draw a line of length h=l/7 at each of the six resulting points, perpendicular to the side of the square (blue). The corners are marked in red. It then looks something like this:
For the drawing of the blue lines and circles I have a 3-channel (CV_8UC3) matrix drawing, which is zero everywhere except at the positions of the red, blue and white lines. To lay this matrix over my webcam image, I use the addWeighted function of OpenCV:
addWeighted(drawing, 1, webcam_img, 1, 0.0, dst); (Description for addWeighted here).
But then, as you can see, the colors for my dashes and circles are wrong outside the black area (probably also not correct inside the black area, but better there). It makes total sense why this happens, as the function just adds the matrices with a weight.
I'd like to have the matrix drawing with the correct colors over my image. The problem is, I don't know how to fix it. I somehow need a mask drawing_mask where my dashes are, sort of, superimposed onto my camera image. In Matlab it would be something like: dst = webcam_img; dst(drawing>0) = drawing(drawing>0);
Anyone an idea how to do this in C++?
1. Custom version
I would write it explicitly:
const int cols = drawing.cols;
const int rows = drawing.rows;
for (int j = 0; j < rows; j++) {
    const uint8_t* p_draw = drawing.ptr(j); // pointer to j-th row of the image to be drawn
    uint8_t* p_dest = webcam_img.ptr(j);    // pointer to j-th row of the destination image
    for (int i = 0; i < cols; i++) {
        // Check all three channels (BGR)
        if (p_draw[0] | p_draw[1] | p_draw[2]) { // binary OR should ease the compiler's optimization work
            p_dest[0] = p_draw[0]; // if the pixel is not zero,
            p_dest[1] = p_draw[1]; // copy it (overwrite) into the destination image
            p_dest[2] = p_draw[2];
        }
        p_dest += 3; // move to the next pixel
        p_draw += 3;
    }
}
Of course you can move this code in a function with arguments (const cv::Mat& drawing, cv::Mat& webcam_img).
2. OpenCV "purist" version
But the pure OpenCV way would be the following:
cv::Mat mask;
//Create a single channel image where each pixel != 0 if it is colored in your "drawing" image
cv::cvtColor(drawing, mask, CV_BGR2GRAY);
//Copy to destination image only pixels that are != 0 in the mask
drawing.copyTo(webcam_img, mask);
Less efficient (the color conversion to create the mask is somewhat expensive), but certainly more compact. Small note: it won't work if you have a very dark color, like (0,0,1), which in grayscale will be converted to 0.
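A possible workaround for that caveat (my suggestion, not part of the original answer) is to build the mask from the three channels directly, so any nonzero channel keeps the pixel:

// Nonzero wherever at least one of the B, G, R channels is nonzero,
// so even a very dark color like (0,0,1) makes it into the mask.
std::vector<cv::Mat> channels;
cv::split(drawing, channels);
cv::Mat mask = channels[0] | channels[1] | channels[2];
drawing.copyTo(webcam_img, mask);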
Also note that it might be less expensive to redraw the same overlays (lines, circles) in your destination image, basically calling the same draw operations that you made to create your drawing image.

Qt: optimize the paint event

I am currently reading image from a decoder and paint each frame of it in a widget.
This is what I am doing now:
paintEvent(...) {
    ...
    painter.setRenderHint(QPainter::Antialiasing, false);
    painter.setRenderHint(QPainter::HighQualityAntialiasing, false);
    QPixmap pmap = QPixmap::fromImage(glImage).scaledToWidth(width());
    painter.drawPixmap(0, (height() - pmap.height()) / 2, pmap);
    ...
}
However, I found it to be computationally expensive...
Is there any solution to this without using the OpenGL in Qt?
You could try to use QPainter::drawImage instead of doing a manual conversion between image representations (QImage -> QPixmap). Referring to the documentation, it should still provide a way to scale the image: "Note: The image is scaled to fit the rectangle, if both the image and rectangle size disagree."
First of all, there is no need to scale your pixmap before painting. You can pass the desired width and height as an argument to painter.drawPixmap. This will scale the image while painting which is (probably) faster.
QPixmap pmap = QPixmap::fromImage(glImage);
int w = width();
// "scaledToWidth"
int h = w * pmap.height() / (double)pmap.width();
painter.drawPixmap(0, (height() - h) / 2, w, h, pmap);
Then, you could try to draw the image directly. Depending on which operating system you are using, this might be slower or faster.
On Windows, for example, QPixmap is internally represented by a QImage anyway. Therefore, QPixmap::fromImage will basically create a (possibly unnecessary) copy of that image.
int w = width();
int h = w * glImage.height() / (double)glImage.width();
// Note: unlike drawPixmap, drawImage has no (x, y, w, h, image) overload,
// so pass the target rectangle explicitly to scale while drawing.
painter.drawImage(QRect(0, (height() - h) / 2, w, h), glImage);
If you draw the image directly, alpha blending can become quite expensive. So if possible, use a pixel format without alpha channel or with premultiplied alpha. (In the premultiplied format the red, green, and blue channels are multiplied by the alpha component divided by 255.) (See also: QImage::Format_ARGB32_Premultiplied is your friend).
Bonus fact: That's basically what QPixmap::fromImage on Windows does. If you pass a QImage with alpha channel to that function, the internal QImage will be stored with premultiplied alpha to optimize render performance. See source code.
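A minimal sketch of that conversion (my addition, reusing w and h from the snippet above and assuming the decoder delivers plain ARGB32 frames):

// Convert the frame to premultiplied ARGB once, then draw it scaled;
// the raster paint engine blends premultiplied pixels without a
// per-pixel division by alpha.
QImage premultiplied = glImage.convertToFormat(QImage::Format_ARGB32_Premultiplied);
painter.drawImage(QRect(0, (height() - h) / 2, w, h), premultiplied);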

Thresholding a range of colors from an image

The plan
My project is able to capture the bitmap of a target window and convert it into an IplImage, and then display that image in a cvNamedWindow, where further processing can take place.
For the sake of testing, I've loaded an image into MSPaint like so:
The user is then allowed to click and drag the mouse over any number of pixels within the image to create a vector<cv::Scalar_<BYTE>> containing these RGB color values.
Then, with the help of ColorRGBToHLS(), this array is then sorted from left to right by hue, like so:
// PixelColor is just a cv::Scalar_<BYTE>
bool comparePixelColors(PixelColor& pc1, PixelColor& pc2) {
    WORD h1 = 0, h2 = 0;
    WORD s1 = 0, s2 = 0;
    WORD l1 = 0, l2 = 0;
    ColorRGBToHLS(RGB(pc1.val[2], pc1.val[1], pc1.val[0]), &h1, &l1, &s1);
    ColorRGBToHLS(RGB(pc2.val[2], pc2.val[1], pc2.val[0]), &h2, &l2, &s2);
    return (h1 < h2);
}

//..(elsewhere in code)
std::sort(m_colorRange.begin(), m_colorRange.end(), comparePixelColors);
...and then shown in a new cvNamedWindow, which looks something like:
The problem
Now, the idea here is to create a binary threshold image (or "mask") where this selected range of colors become white, and the rest of the source image becomes black... similar to the way the "Select By Color" tool operates in GIMP, or the "magic wand" tool works in Photoshop... except instead of limiting ourselves to a specific contoured selection, we are literally operating on the image as a whole.
I've read into cvInRangeS, and it sounds like it's precisely what I need.
However, and for whatever reason, the thresholded image always ends up being totally black...
VOID ShowThreshedImage(const IplImage* src, const PixelColor& min, const PixelColor& max)
{
    IplImage* imgHSV = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
    cvCvtColor(src, imgHSV, CV_RGB2HLS);

    cvNamedWindow("T1");
    cvShowImage("T1", imgHSV); // <-- Shows up like the image below

    IplImage* imgThreshed = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    cvInRangeS(imgHSV, min, max, imgThreshed);

    cvNamedWindow("T2");
    cvShowImage("T2", imgThreshed); // <-- SHOWS UP PITCH BLACK!
}
This is what the "T1" window ends up looking like (which I suppose is correct?):
Bearing in mind that the color range vector is stored as RGB (and that OpenCV internally reverses this order into BGR), I have converted the min/max values into HLS before passing them into ShowThreshedImage(), like so:
CvScalar rgbPixelToHSV(const PixelColor& pixelColor)
{
    WORD h = 0, s = 0, l = 0;
    ColorRGBToHLS(RGB(pixelColor.val[2], pixelColor.val[1], pixelColor.val[0]), &h, &l, &s);
    return PixelColor(h, s, l);
}

//...(elsewhere in code)
if (m_colorRange.size() > 0)
    m_minHSV = rgbPixelToHSV(m_colorRange[0]);
if (m_colorRange.size() > 1)
    m_maxHSV = rgbPixelToHSV(m_colorRange[m_colorRange.size() - 1]);

ShowThreshedImage(m_imgSrc, m_minHSV, m_maxHSV);
...But even without this conversion, simply passing RGB values instead, the result is still an entirely black image. I've even tried manually plugging in certain min/max values, and the best result I got was a few lit pixels (albeit the incorrect ones).
The question:
What am I doing wrong here?
Is there something that I don't understand about the cvInRangeS method?
Do I need to step through each and every single color in order to properly threshold the selected range out of the source image?
Are there any other ways of accomplishing this?
Thank you for your time.
Update:
I have discovered that cvInRangeS expects all values for min to be lower than those of max. But when a range of colors is selected, there doesn't appear to be any guarantee that this will be the case, often resulting in a black thresholded image.
And swapping values to enforce this rule may result in unwanted colors within the new range (in some cases, this could include all colors instead of just the desired ones).
So I suppose the real question here would be:
"How do you segment an array of RGB colors, and use them to threshold an image?"
Your problem might be caused by the simple fact that OpenCV maintains a different range of values than, for instance, MSPaint. For instance, the HSV color space in Paint is 360,100,100 while in OpenCV it is 180,255,255. Check your input values in OpenCV by outputting the pixel value when clicking on a certain pixel. inRangeS should be the correct tool for the job. That said, in RGB it should work just as well because the range is the same as in Paint.
cvSetMouseCallback("MyWindow", mouseEvent, (void*) &myImage);

void mouseEvent(int evt, int x, int y, int flags, void* param) {
    if (evt == CV_EVENT_LBUTTONDOWN) {
        printf("%d %d\n", x, y);
        IplImage* imageSource = (IplImage*) param;
        Mat image(imageSource);
        cout << "Image cols " << image.cols << " rows " << image.rows << endl;
        Mat imageHSV;
        cvtColor(image, imageHSV, CV_BGR2HSV);
        Vec3b p = imageHSV.at<Vec3b>(y, x);
        char text[20];
        sprintf(text, "H=%d, S=%d, V=%d", p[0], p[1], p[2]);
        cout << text << endl;
    }
}
Once you have an idea of the HSV values via this callback, use them as lower and upper bounds for the in-range method after converting the image to HSV using cvtColor(image, imageHSV, CV_BGR2HSV). That should enable you to get the desired result.
It is not going to be too inefficient to iterate through every pixel. That is exactly what cvInRangeS would do - see this: http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way (I do this all the time and it is instantaneous for reasonable size images).
I would treat the colors in the array as points in 3D RGB space. Find two color points that specify a rectangular prism (a bounding box) containing all the other color points; that is just finding the min and max of all r, g, and b values. If this idea is not OK for your use case, then you might have to check every image pixel against every pixel in the vector.
Then for each pixel in the image: result is black if (pixel.r < min.r) || (pixel.r > max.r) || (pixel.g < min.g) || (pixel.g > max.g) || (pixel.b < min.b) || (pixel.b > max.b), result is the pixel value otherwise.
This all should be very easy, so long as it is actually what you want.
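A sketch of that bounding-box approach (my reconstruction with a hypothetical helper name, using the C++ cv::Mat API; cv::inRange performs exactly the per-pixel comparison described above):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Returns a CV_8UC1 mask: 255 where a pixel lies inside the RGB
// bounding box of the selected colors, 0 everywhere else.
cv::Mat maskFromColorRange(const cv::Mat& srcBGR,
                           const std::vector<cv::Vec3b>& colors)
{
    cv::Vec3b mn(255, 255, 255), mx(0, 0, 0);
    for (const cv::Vec3b& c : colors) {      // per-channel min and max
        for (int i = 0; i < 3; ++i) {
            mn[i] = std::min(mn[i], c[i]);
            mx[i] = std::max(mx[i], c[i]);
        }
    }
    cv::Mat mask;
    cv::inRange(srcBGR, cv::Scalar(mn[0], mn[1], mn[2]),
                cv::Scalar(mx[0], mx[1], mx[2]), mask);
    return mask;
}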