Animated transition/wipe using SDL2 and black/white mask? - c++

I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game it produces a clockwise wipe-to-black transition. I've been trying to recreate the effect in SDL2, but to no avail. I know it has something to do with masking, but I have no idea how to express that in code.
The closest I could get was by using SDL_SetColorKey and incrementing the RGB values each frame so that the "wiping" part of the animation is not drawn:
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
                              0xFF - counter,
                              0xFF - counter,
                              0xFF - counter,
                              0);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer, for my own curiosity (and sanity)! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!

Eventually came up with a solution. It's expensive, but it works: iterate through every pixel in the image and remap its colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempAlpha);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current color of the pixel. fmt is the SDL_PixelFormat of the image. This is for fading to black; the following is for fading in from black:
if ((255 - counter) > origColor)
    continue;
int tempAlpha = alpha - speed * 5;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)tempAlpha);
Where origColor is the color of the pixel in the original grayscale image.
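For context, here is a minimal sketch of how the fade-to-black remapping above might be applied to the whole mask surface each frame. The function name, the clamping and the surface locking are my own additions (not taken from the linked API), so treat it as an illustration only:

#include <SDL.h>

// Hypothetical per-frame pass over the 32-bit grayscale mask surface.
void applyFadeToBlack(SDL_Surface* surf, int speed)
{
    if (SDL_MUSTLOCK(surf)) SDL_LockSurface(surf);

    SDL_PixelFormat* fmt = surf->format;
    Uint32* pixels = static_cast<Uint32*>(surf->pixels);
    const int count = (surf->pitch / 4) * surf->h;

    for (int i = 0; i < count; ++i) {
        Uint32* pixel = &pixels[i];
        Uint8 color, g, b, alpha;
        SDL_GetRGBA(*pixel, fmt, &color, &g, &b, &alpha); // grayscale: r == g == b

        int tempAlpha = (int)alpha + (speed * 5) - (int)color;
        int tempColor = (int)color - speed;
        if (tempAlpha > 255) tempAlpha = 255;
        if (tempColor < 0)   tempColor = 0;

        *pixel = SDL_MapRGBA(fmt,
                             (Uint8)tempColor,
                             (Uint8)tempColor,
                             (Uint8)tempColor,
                             (Uint8)tempAlpha);
    }

    if (SDL_MUSTLOCK(surf)) SDL_UnlockSurface(surf);
}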
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes

Related

Choose luminosity (exposure) from HDR image

I'm currently stuck on a video project built from rendered pictures.
Problem:
I'm extracting pictures from UE4; due to a bug, not all lights are taken into account when the screenshot is rendered.
The output is a set of HDR images. I want better brightness, because the exported pictures are very dark, like the first exposure.
Using the "exposure bias" parameter in UE4 I'm able to get really good luminosity in my scene, but I cannot apply this parameter to the screenshot rendering:
Tries:
Using a tonemapping algorithm (specifically cv::TonemapDrago) I'm able to get a better image:
The main problem with the tonemap algorithm, in my case, is that the global luminosity changes depending on the luminosity of individual areas: in the second image the window adds a lot of light, so the algorithm lowers the overall luminosity to adjust the mean.
In the rendered video, the light change is very abrupt.
I've tried changing brightness and saturation without success.
I've modified the code of TonemapDrago, trying to use constants for some steps of the algorithm.
Question:
I would like to "choose the exposure time" for an HDR image. Tonemapping is based on several exposure times of the same image, which is not interesting in my case.
Any other idea is welcome.
EDIT:
cv::Mat depth is 5 (CV_32F), type is CV_32FC3.
cout << mat.step gives me 19200.
Here are two samples I'm using to try to solve my problem:
First Image
Image with light window
Edit 2:
I cannot open the .HDR picture with GIMP, even with the "exposure blend" plugin.
I'm able to get a good enough result using Photoshop. Any idea what algorithm is behind that? None of the six tonemap algorithms in OpenCV seems to allow choosing an exposure correction.
EDIT 3:
I've followed the algorithm explained in this tutorial for OpenGL, which gives me this C++ code:
cv::Mat exposureTonemap (cv::Mat m, float gamma = 2.2, float exposure = 1)
{
    // Exposure tone mapping
    cv::Mat exp;
    cv::exp( (-m) * exposure, exp );
    cv::Mat mapped = cv::Vec3f(1.0) - exp;

    // Gamma correction
    cv::pow(exp, 1.0f / gamma, exp);

    cv::imshow("exposure tonemap", exp );
    cv::waitKey();

    return exp;
}
Applying this algorithm to my .HDR picture, I get a very bright result even with values of 1 and 1 for gamma and exposure:
Reading the code, there is something wrong, because 1 and 1 as arguments should not modify the picture.
Fixed that; the answer is posted below. Thanks a lot to @user3896254 (he saw it too in a comment).
Consider using Retinex. It uses a single image as input and is included in GIMP, so it's easy to play around with; besides, you can get its source code (or roll your own, which is pretty simple anyway). Since you have renders instead of photos, there's no noise, and in theory you can adjust the colours to your needs.
But as @mark-ransom has already said, you may have trouble recovering information from your rendered output. You said you have HDR images as render output, but I am not sure what you mean: is it a single RGB image? What is the colour depth of each channel? I have tried to apply Retinex to your sample, but obviously it doesn't look good because of the compression and limited range that were applied before saving. If your output has high range and is not compressed, you'll get better results.
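If you do decide to roll your own, a rough single-scale Retinex sketch in OpenCV might look like the following. The sigma, the epsilon and the final min-max normalisation are assumptions you would have to tune; this is not the GIMP implementation:

#include <opencv2/opencv.hpp>

// Rough single-scale Retinex: log(image) - log(Gaussian-blurred image).
// Input is assumed to be CV_32FC3 with non-negative values.
cv::Mat singleScaleRetinex(const cv::Mat& img, double sigma = 80.0)
{
    cv::Mat blurred, logImg, logBlur;
    cv::GaussianBlur(img, blurred, cv::Size(0, 0), sigma);

    cv::Mat eps(img.size(), img.type(), cv::Scalar::all(1e-6)); // avoid log(0)
    cv::log(img + eps, logImg);
    cv::log(blurred + eps, logBlur);

    cv::Mat retinex = logImg - logBlur;

    // Stretch the result back into a displayable [0, 1] range.
    cv::normalize(retinex, retinex, 0.0, 1.0, cv::NORM_MINMAX);
    return retinex;
}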
EDIT: I have tried Retinex on your input, and it turned out not very good: the bright parts of the image (lamps/windows) introduce ugly dark halos around them.
In this case simple tonemapping and gamma correction look a lot better. Your code was almost fine; you just had a little typo:
instead of cv::pow(exp, 1.0f / gamma, exp); you should have had cv::pow(mapped, 1.0f / gamma, exp);
I have messed around with your code and noticed that this tonemapping seems to degrade colour saturation. To overcome this, I perform it only on the V channel of the HSV image. Compare the results yourself (left: full-space tonemapping, right: V only):
Note the floor colour, the sky in the window and the yellowish light colour that are preserved with this approach.
Here is the full code for the sake of completeness:
#include <opencv2/opencv.hpp>

using namespace cv;

Mat1f exposureTonemap (Mat1f m, float gamma = 2.2, float exposure = 1) {
    // Exposure tone mapping
    Mat1f exp;
    cv::exp( (-m) * exposure, exp );
    Mat1f mapped = 1.0f - exp;

    // Gamma correction
    cv::pow(mapped, 1.0f / gamma, mapped);

    return mapped;
}

Mat3f hsvExposureTonemap(Mat &a) {
    Mat3f hsvComb;
    cvtColor(a, hsvComb, COLOR_RGB2HSV);

    Mat1f hsv[3];
    split(hsvComb, hsv);

    hsv[2] = exposureTonemap(hsv[2], 2.2, 10);

    merge(hsv, 3, hsvComb);

    Mat rgb;
    cvtColor(hsvComb, rgb, COLOR_HSV2RGB);

    return rgb;
}

int main() {
    Mat a = imread("first.HDR", -1);
    Mat b = imread("withwindow.HDR", -1);

    imshow("a", hsvExposureTonemap(a));
    imshow("b", hsvExposureTonemap(b));

    waitKey();
    return 0;
}
What kind of scene lighting are you currently using? It looks like you are using point lights where the lightbulbs would be, but they aren't bright enough. In your unrendered scene, everything is at full brightness. In your rendered scene, you'll get darkness.
I would recommend at least a minimal sky light so that you always have some light across your scene (unless you have areas of actual darkness).
cv::Mat exposureTonemap (cv::Mat m, float gamma = 2.2, float exposure = 1)
{
    // Exposure tone mapping
    cv::Mat exp;
    cv::exp( (-m) * exposure, exp );
    cv::Mat mapped = cv::Scalar(1.0f, 1.0f, 1.0f) - exp;

    // Gamma correction
    cv::pow(mapped, 1.0f / gamma, mapped);

    /*
    cv::imshow("exposure tonemap", mapped );
    cv::waitKey();
    */

    return mapped;
}
This algorithm is a tonemapper that simulates exposure bias on an HDR image.
If you want to use it with OpenCV 3.0, don't forget to open the file with -1 as the last argument of imread: cv::Mat img = cv::imread("mypicture.HDR", -1);

Vignettation with white color in opencv

I'm working on a vignette filter in OpenCV and I tried the code in this question (Creating vignette filter in opencv?), and it works perfectly.
But now I'm trying to modify it to create a white vignetting filter, and I can't find a way to make it produce a white vignette instead of a black one.
ADDITION TO THE ANSWER
After modifying the code, there are some points I'd like to make clear for any future programmers/developers or people interested in the problem.
What the answer says is basically to do a weighted addition of pixels. Simple weighted addition can easily be done with OpenCV's addWeighted. This can be used to blend with any colour, not just black or white. However, this is not a plain addition, since we do not have the same blending level everywhere; instead, the level of blending is given by the gradient.
The pseudocode looks like this (an OpenCV sketch follows after it):
pixel[][] originalImage; //3 channel image
pixel[][] result;        //3 channel image
pixel[][] gradient;      //1 channel image
pixel color;             //the color to blend with

generateGradient(gradient); //generates the gradient as a one channel image

for( x from 0 to originalImage.cols )
{
    for( y from 0 to originalImage.rows )
    {
        pixel blendLevel = gradient[x][y];
        pixel pixelImage = originalImage[x][y];
        pixel blendColor = color;
        //this operation is called weighted addition
        //you have to multiply the whole pixel (every channel value of the pixel)
        //by the blendLevel, not just one channel
        result[x][y] = (blendLevel * pixelImage) + ( ( 1 - blendLevel ) * blendColor );
    }
}
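In actual OpenCV terms, a sketch of the same idea might look like this. The helper name is mine, and it assumes the gradient is a single-channel float mask in [0, 1] of the same size as the image, with 1 meaning "keep the original pixel" and 0 meaning "use the blend colour":

#include <opencv2/opencv.hpp>

// Blend an 8-bit BGR image toward a solid colour using a per-pixel mask.
cv::Mat blendWithColor(const cv::Mat& image, const cv::Mat& gradient, cv::Vec3b color)
{
    cv::Mat imgF;
    image.convertTo(imgF, CV_32FC3, 1.0 / 255.0);

    // Replicate the one-channel mask to three channels.
    cv::Mat mask3;
    cv::Mat channels[] = { gradient, gradient, gradient };
    cv::merge(channels, 3, mask3);

    cv::Mat colorF(image.size(), CV_32FC3,
                   cv::Scalar(color[0] / 255.0, color[1] / 255.0, color[2] / 255.0));

    // result = mask * image + (1 - mask) * colour, per pixel and per channel.
    cv::Mat invMask(mask3.size(), mask3.type(), cv::Scalar::all(1.0));
    invMask -= mask3;
    cv::Mat result = mask3.mul(imgF) + invMask.mul(colorF);

    result.convertTo(result, CV_8UC3, 255.0);
    return result;
}

// White vignette: blendWithColor(img, gradient, cv::Vec3b(255, 255, 255));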
Say you darken your foreground colour fore by a factor of x. Then, to blend it with a different background colour back, you take x * fore + (1 - x) * back. I don't remember the exact OpenCV syntax; looking at your link, I would write something like this:
cv::Mat result = maskImage * img + (1.0 - maskImage) * white;
If you convert your image to the CIE Lab colour space (as in the vignette code), which would be a good idea, don't forget to do the same for white.

How to create one bitmap from parts of many textures (C++, SDL 2)?

I have *.png files, and I want to take different 8x8 px parts from those textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I'm rendering without a bitmap, i.e. I take each texture and draw each part directly on screen every frame, and it's too slow. I guess I need to load each *.png into a separate bitmap, work on them outside video memory, and then draw just one big bitmap, but maybe I'm wrong. I need the fastest way of doing this, ideally with code (SDL 2, not SDL 1.3).
Or maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png's into int arrays somehow, use them just like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? That seems like the best way, but how would I write it?
Update 2:
The colors of the pixels in each block are not all the same as presented in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Textures, composing them into a different texture is done via SDL_SetRenderTarget.
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
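One detail worth noting: the destination texture has to be created with the SDL_TEXTUREACCESS_TARGET access flag, and the renderer must support render targets. A minimal sketch (width and height are placeholders for the size of the composed bitmap):

// The composed texture must be created as a render target.
SDL_Texture* target_texture = SDL_CreateTexture(renderer,
                                                SDL_PIXELFORMAT_RGBA8888,
                                                SDL_TEXTUREACCESS_TARGET,
                                                width, height);

// Optional sanity check: not every renderer supports render targets.
if (!SDL_RenderTargetSupported(renderer)) {
    // fall back to composing on an SDL_Surface instead
}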
OK, so when I asked about "solid colour", I meant: "in that 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same RGB value?" It looks that way in your diagram, so how about this:
How about creating an SDL_Surface and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png?
And then when you're done, convert that surface to an SDL_Texture and render that?
You would avoid all the SDL_UpdateTexture() calls.
Anyway here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b) {}

    void BlitToSurface( int column, int row );

private:
    SDL_Surface * m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface pixels with this RGB value.
So now, when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divided the SDL_Surface into 8x8 pixel squares, BlitToSurface(3, 5) means: paint the square at column 3, row 5 with the RGB value that was set on construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // Point at the first pixel of the requested 8x8 square. The pitch is in
    // bytes, so divide by 4 to step in whole Uint32 pixels.
    Uint32 * pixel = (Uint32*)m_pSurface->pixels
                     + (row * 8) * (m_pSurface->pitch / 4)
                     + (column * 8);

    // Now paint 8 rows of 8 pixels, but be careful: after each row you need
    // to advance by a full surface row (pitch/4 pixels) minus the 8 pixels
    // already written.
    for(int y = 0; y < 8; y++)
    {
        // paint a row
        for(int i = 0; i < 8; i++)
        {
            *pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
        }
        // advance the pixel pointer to the next row of this square
        pixel += (m_pSurface->pitch / 4) - 8;
    }
}
I'm sure you could speed things up further by pre-calculating the mapped pixel value on construction (a sketch of that idea follows below). Or, if you're reading a pixel from the texture, you could probably dispense with the SDL_MapRGB() call (it's just there in case the Surface has a different pixel format from the .png).
memcpy is probably faster than 8 individual assignments of the RGB value, but I just want to demonstrate the technique. You could experiment.
So, all the EightByEight objects you create, all point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
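As a sketch of the pre-calculation idea mentioned above (my own hypothetical variant, assuming the surface's pixel format does not change after construction):

// Variant of EightByEight that maps the colour once, in the constructor,
// instead of calling SDL_MapRGB for every pixel in BlitToSurface().
class EightByEightFast
{
public:
    EightByEightFast( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_mappedColor(SDL_MapRGB(pDest->format, r, g, b)) {}

    void BlitToSurface( int column, int row )
    {
        Uint32 * pixel = (Uint32*)m_pSurface->pixels
                         + (row * 8) * (m_pSurface->pitch / 4)
                         + (column * 8);
        for(int y = 0; y < 8; y++)
        {
            for(int i = 0; i < 8; i++)
                *pixel++ = m_mappedColor;              // no per-pixel SDL_MapRGB
            pixel += (m_pSurface->pitch / 4) - 8;      // next row of this square
        }
    }

private:
    SDL_Surface * m_pSurface;
    Uint32 m_mappedColor;
};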
Thanks to everyone who took part, but my friends and I solved it ourselves. So here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea; a fleshed-out sketch follows the snippet):
int pitch, *pixels;
SDL_Texture *texture;
...
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
    for (/*Conditions*/)
        memcpy(/*Params*/);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
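For a slightly more concrete picture of that idea, here is a hedged sketch. The block size, the source buffer and the helper name are assumptions of mine; the only SDL facts relied on are that SDL_LockTexture returns 0 on success and that the texture must be created with SDL_TEXTUREACCESS_STREAMING:

#include <SDL.h>
#include <cstring>

// Copy one 8x8 block of 32-bit pixels from 'srcBlock' into a streaming
// texture at block coordinates (blockX, blockY), row by row, with memcpy.
void copyBlockToTexture(SDL_Texture *texture, const Uint32 srcBlock[8][8],
                        int blockX, int blockY)
{
    void *pixels = nullptr;
    int pitch = 0;

    if (SDL_LockTexture(texture, nullptr, &pixels, &pitch) == 0)
    {
        Uint32 *dst = static_cast<Uint32*>(pixels);
        for (int row = 0; row < 8; ++row)
        {
            memcpy(dst + (blockY * 8 + row) * (pitch / 4) + blockX * 8,
                   srcBlock[row],
                   8 * sizeof(Uint32));
        }
        SDL_UnlockTexture(texture);
    }
}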

QPixmap / QImage alpha reduction with minimum ensured alpha

I want to implement a method which reduces the alpha of every pixel in a QPixmap (Qt 4.8) by 1 every time it is called. In between calls, new lines might be added to the image (with an alpha of 255). Additionally, I'd like to have a lower alpha threshold of, say, 15. Pixels which have an initial alpha of 0 keep that alpha. In pseudo-code:
if alpha == 0:
    newAlpha = 0
else:
    newAlpha = max(15, alpha - 1)
Right now I have two methods in mind. The first one is conversion to a QImage and a pixel-by-pixel reduction of the alpha. However, this has two drawbacks: performance, and color artefacts in which some pixels' colors change wildly. The artefacts appear when painting the resulting QPixmap onto another QPixmap filled with one color (with QPainter::CompositionMode_SourceOver). This is likely due to the dithering?! I tried the two available ones; both produce these kinds of artefacts.
QImage image = pixmap.toImage();
for (int y = 0; y < image.height(); ++y) {
    for (int x = 0; x < image.width(); ++x) {
        QRgb col = image.pixel(x, y);
        int alpha = qAlpha(col);
        if (alpha > 15) {
            alpha -= 1;
            QRgb newCol = qRgba(qRed(col), qGreen(col), qBlue(col), alpha);
            image.setPixel(x, y, newCol);
        }
    }
}
pixmap = QPixmap::fromImage(image, Qt::DiffuseAlphaDither | Qt::NoOpaqueDetection);
The artefacts appear with this:
QPixmap screen;
...
screen.fill(Qt::transparent);
QPainter painter( &screen );
// remove anti-aliasing, which (with current composition mode) results in even stronger artefacts
painter.setRenderHints(0);
background.fill(someRandomColor);
painter.drawPixmap(0, 0, w, h, background);
painter.drawPixmap(0, 0, w, h, pixmap);
painter.end();
Alternatively, I tried to map the above pseudo-code to QPixmap drawing operations. For instance, QPainter's composition mode QPainter::CompositionMode_DestinationIn is useful for reducing the alpha (see the sketch below). But I don't know how to handle the thresholding while simultaneously keeping the 0 alpha values.
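For reference, a minimal sketch of the uniform alpha reduction with CompositionMode_DestinationIn. It does not solve the thresholding part, it only shows the mechanism, and Porter-Duff modes other than SourceOver may require the raster paint engine or a QImage depending on the platform:

#include <QColor>
#include <QPainter>
#include <QPixmap>

// Multiplies every pixel's alpha by 254/255 in one pass, i.e. roughly
// "alpha -= 1" for fully opaque pixels; transparent pixels stay transparent.
void reduceAlphaUniformly(QPixmap &pixmap)
{
    QPainter p(&pixmap);
    p.setCompositionMode(QPainter::CompositionMode_DestinationIn);
    p.fillRect(pixmap.rect(), QColor(0, 0, 0, 254)); // keeps RGB, scales alpha
}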
So now there are actually three questions:
Can I avoid the color artefacts with the QImage detour?
Or can I map the above pseudo-code to pure QPixmap/QPainter operations?
Is there a totally different approach to this?
EDIT:
QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
This does seem to remove the artefacts. Before, it would have converted to QImage::Format_ARGB32_Premultiplied, hence the artefacts. But now it is even less performant.
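If the remaining cost is the per-pixel pixel()/setPixel() calls, one common alternative (a sketch of the same algorithm, not a tested drop-in) is to walk each scan line directly:

#include <QImage>
#include <QPixmap>

// Same alpha reduction as above, but using scanLine() instead of
// pixel()/setPixel(); assumes the image is in Format_ARGB32.
void reduceAlpha(QPixmap &pixmap)
{
    QImage image = pixmap.toImage().convertToFormat(QImage::Format_ARGB32);
    for (int y = 0; y < image.height(); ++y) {
        QRgb *line = reinterpret_cast<QRgb*>(image.scanLine(y));
        for (int x = 0; x < image.width(); ++x) {
            const int alpha = qAlpha(line[x]);
            if (alpha > 15)
                line[x] = qRgba(qRed(line[x]), qGreen(line[x]), qBlue(line[x]), alpha - 1);
        }
    }
    pixmap = QPixmap::fromImage(image);
}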

How to apply overlay transparency to RGBA image

Here's my dilemma: I have two RGBA RAW images, a master image (the first) and a subtitle track (the second), and I want to overlay them based on the alpha channel of the second image: if it's zero, take the pixels from the second image; if it's 0xFF, take the pixels from the first image; otherwise create an overlay of the second image on the first one. Here's the code used for this:
if(frame->bytes[pc + 3] == 0xFF) /* this is NO transparency in the overlay image, meaning: take over the overlay 100% */
{
    pFrameRGB->data[0][pc]   = frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] = frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] = frame->bytes[pc+2]; // Blue
}
else if(frame->bytes[pc + 3] != 0) /* this is full transparency in the overlay image, meaning: take over the image 100% */
{
    pFrameRGB->data[0][pc]   |= frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] |= frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] |= frame->bytes[pc+2]; // Blue
    pFrameRGB->data[0][pc+3]  = frame->bytes[pc+3]; // Alpha
}
In the code above, pFrameRGB is the target RGBA image, which already contains some image, and frame->bytes is the "overlay/subtitle" image. And here comes my question: with some colourful overlay/subtitle images the destination becomes far too colourful, so instead of the subtitle image being overlaid (the effect I want), you see a whole range of colors. (For example: I have a red/green overlay image with increasing alpha, and I would like it to look like a "pale" red/green overlay with the image visible below it; however, with the approach above I get a lot of colourful stray pixels on the image below.) Do you have a better approach to this?
Thanks,
fritzone
The equation for alpha blending is more complex than a bitwise OR. Assuming a linear response model for RGB, a quite common implementation is (a C++ sketch follows the equations):
dst_R = (src_R*src_A + dst_R*(255 - src_A)) / 255;
dst_G = (src_G*src_A + dst_G*(255 - src_A)) / 255;
dst_B = (src_B*src_A + dst_B*(255 - src_A)) / 255;
dst_A = min(src_A + dst_A, 255);
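Applied to the raw RGBA byte layout in the question, those equations might be implemented like this. The helper name is mine, and straight (non-premultiplied) alpha is assumed:

#include <cstdint>
#include <algorithm>

// Blend one RGBA overlay pixel (src) onto a destination RGBA pixel (dst)
// in place, using straight alpha, per the equations above.
inline void blendPixel(uint8_t *dst, const uint8_t *src)
{
    const int a = src[3]; // overlay alpha
    dst[0] = (src[0] * a + dst[0] * (255 - a)) / 255; // R
    dst[1] = (src[1] * a + dst[1] * (255 - a)) / 255; // G
    dst[2] = (src[2] * a + dst[2] * (255 - a)) / 255; // B
    dst[3] = std::min(src[3] + dst[3], 255);          // A
}

Looping over the frame with pc stepping by 4 and calling blendPixel(&pFrameRGB->data[0][pc], &frame->bytes[pc]) would then replace the OR-based branch in the question; note that when the overlay alpha is 0xFF this reduces to copying the overlay pixel, so the first branch is covered as well.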