Vignetting with white color in OpenCV - C++

I'm working on a vignette filter in OpenCV and I tried the code in this question ( Creating vignette filter in opencv? ), and it works perfectly.
But now I'm trying to modify it to create a white vignetting filter, and I can't find a way to make it show a white vignette instead of a black one.
ADDITION TO THE ANSWER
After modifying the code there are some points I'd like to make clear for any future programmers/developers or people interested in the problem.
What the answer describes is basically a weighted addition of pixels. A simple weighted addition can easily be done using OpenCV's addWeighted, and it can blend with any color, not just black or white. However, this is not a simple addition, since the blending level is not the same everywhere; instead, the level of blending at each pixel is given by the gradient.
The pseudocode looks like this:
pixel[][] originalImage; // 3-channel image
pixel[][] result;        // 3-channel image
pixel[][] gradient;      // 1-channel image
pixel color;             // the color to blend with

generateGradient(gradient); // generates the gradient as a one-channel image

for( x from 0 to originalImage.cols )
{
    for( y from 0 to originalImage.rows )
    {
        pixel blendLevel = gradient[x][y];
        pixel pixelImage = originalImage[x][y];
        pixel blendColor = color;
        // This operation is called weighted addition.
        // You have to multiply the whole pixel (every channel value of the pixel)
        // by blendLevel, not just one channel.
        result[x][y] = ( blendLevel * pixelImage ) + ( ( 1 - blendLevel ) * blendColor );
    }
}

Say you darken your colour fore by a factor of x. Then, to blend it with a different colour back, you take x * fore + (1 - x) * back. I don't remember the exact OpenCV syntax; looking at your link, I would write something like this:
cv::Mat result = maskImage * img + (1.0 - maskImage) * white;
If you convert your image to the CIE Lab colour space (as in the vignette code), which would be a good idea, don't forget to do the same for white.
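For completeness, here is a minimal sketch of that blending in OpenCV (my own illustration, not code from the original answer); it assumes an 8-bit BGR image and a one-channel float mask in [0, 1], where 1 keeps the original pixel and 0 gives pure white:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat whiteVignette(const cv::Mat& bgr8u, const cv::Mat& mask1f)
{
    cv::Mat img;
    bgr8u.convertTo(img, CV_32FC3, 1.0 / 255.0); // work in float [0, 1]

    // Replicate the one-channel mask to 3 channels so the whole pixel
    // (every channel) is multiplied by the blend level.
    cv::Mat mask3;
    cv::merge(std::vector<cv::Mat>{mask1f, mask1f, mask1f}, mask3);

    cv::Mat white(img.size(), CV_32FC3, cv::Scalar::all(1.0));
    cv::Mat inv;
    cv::subtract(cv::Scalar::all(1.0), mask3, inv); // 1 - mask

    // Weighted addition: mask * image + (1 - mask) * white
    cv::Mat result = img.mul(mask3) + white.mul(inv);

    result.convertTo(result, CV_8UC3, 255.0);
    return result;
}

Swapping white for a matrix filled with any other cv::Scalar blends toward that colour instead.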

Related

Animated transition/wipe using SDL2 and black/white mask?

I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game, it produces a clockwise transition-to-black effect. I have been trying to recreate this effect in SDL(2), but to no avail. I know it's got something to do with masking, but I've no idea how to do that in code.
The closest I could get was by using "SDL_SetColorKey" and incrementing the RGB values so it would not draw the "wiping" part of the animation.
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
                              0xFF - counter,
                              0xFF - counter,
                              0xFF - counter,
                              0);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer for my own curiosity- and sanity! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!
I eventually came up with a solution. It's expensive, but it works: iterate through every pixel in the image and remap its colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempAlpha);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current color of the pixel; fmt is the SDL_PixelFormat of the image. This is for fading to black; the following is for fading in from black:
if ((255 - counter) > origColor)
    continue;
int tempAlpha = alpha - speed * 5;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)tempAlpha);
Where origColor is the color of the pixel in the original grayscale image.
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes
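For context, here is a hedged sketch (my reconstruction, not code from the linked repository) of the per-pixel loop those snippets live in, assuming a 32-bit RGBA surface whose mask is greyscale (all colour channels equal):

#include <SDL2/SDL.h>

void fadeToBlackStep(SDL_Surface* surf, int speed)
{
    SDL_LockSurface(surf); // required before touching surf->pixels
    SDL_PixelFormat* fmt = surf->format;

    for (int y = 0; y < surf->h; ++y)
    {
        // Respect the surface pitch; rows may be padded.
        Uint32* row = (Uint32*)((Uint8*)surf->pixels + y * surf->pitch);
        for (int x = 0; x < surf->w; ++x)
        {
            Uint32* pixel = &row[x];
            Uint8 r, g, b, alpha;
            SDL_GetRGBA(*pixel, fmt, &r, &g, &b, &alpha);
            Uint8 color = r; // greyscale mask: all channels are equal

            int tempAlpha = (int)alpha + (speed * 5) - (int)color;
            int tempColor = (int)color - speed;
            // (Clamping to 0..255 is left out, as in the original snippet.)
            *pixel = SDL_MapRGBA(fmt,
                                 (Uint8)tempColor,
                                 (Uint8)tempColor,
                                 (Uint8)tempColor,
                                 (Uint8)tempAlpha);
        }
    }
    SDL_UnlockSurface(surf);
}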

Choose luminosity (exposure) from HDR image

I'm currently stuck on a video project built from pictures.
Problem:
I'm extracting pictures from UE4; due to a bug, not all lights are taken into account during the rendering of the screenshots.
The outputs are HDR images. I want to get better brightness, because the exported pictures are very dark, like the first exposure.
Using the "exposure bias" parameter in UE4 I'm able to get really good luminosity in my scene, but I cannot apply this parameter to the screenshot rendering:
Attempts:
Using a tonemapper algorithm (specifically cv::TonemapDrago) I'm able to get a better image result:
The main problem with the tonemap algorithm, in my case, is that the global luminosity changes depending on the luminosity of individual areas: in the second image, the window adds a lot of light, so the algorithm lowers the overall luminosity to adjust the mean.
In the rendered video, the light change is very abrupt.
I've tried to change brightness and saturation without success.
I've modified the code of TonemapDrago, trying to use constants for some steps of the algorithm.
Question :
I would like to "choose the exposure time" from an HDR image. Tonemapping is based on several exposure times of the same image, which is not what I need in my case.
Any other idea is welcome.
EDIT:
The cv::Mat depth is 5, the type is CV_32FC3.
cout << mat.step gives me 19200.
Here are 2 samples i use to try solving my problem :
First Image
Image with light window
Edit 2 :
I cannot open the .HDR picture with GIMP, even with the "exposure blend" plugin.
I'm able to get a good enough result using Photoshop. Any idea of the algorithm behind that? None of the 6 tonemap algorithms in OpenCV allows choosing an exposure correction.
EDIT 3:
I've followed the algorithm explained in this tutorial for OpenGL, which gives me this C++ code:
cv::Mat exposureTonemap (cv::Mat m, float gamma = 2.2, float exposure = 1)
{
    // Exposure tone mapping
    cv::Mat exp;
    cv::exp( (-m) * exposure, exp );
    cv::Mat mapped = cv::Vec3f(1.0) - exp;
    // Gamma correction
    cv::pow(exp, 1.0f / gamma, exp);
    cv::imshow("exposure tonemap", exp );
    cv::waitKey();
    return exp;
}
Applying this algorithm to my .HDR picture I get a very bright result even with a correction of 1 and 1 for gamma and exposure:
Reading the code, something is wrong, because 1 and 1 as arguments should not modify the picture.
Fixed that; the answer is posted below. Thanks a lot to @user3896254 (he saw it too in a comment).
Consider using Retinex. It uses a single image as input and is included in GIMP, so it is easy to toy around with; besides, you can get its source code (or roll your own, which is pretty simple anyway). Since you have renders instead of photos there's no noise, and you should theoretically be able to adjust the colours to your needs.
But as @mark-ransom has already said, you may have trouble recovering information from your rendered output. You said you have HDR images as render output, but I am not sure what you mean. Is it a single RGB image? What is the colour depth of each channel? I have tried to apply Retinex to your sample, but obviously it doesn't look good, because of the compression and the limited range that were applied before saving. If your output has high range and is not compressed, you'll get better results.
EDIT: I have tried Retinex on your input and it turned out not very good - the bright parts of the image (lamps/windows) introduced ugly dark halos around them.
In this case simple tonemapping and gamma correction look a lot better. Your code was almost fine; you just had a little typo:
instead of cv::pow(exp, 1.0f / gamma, exp); you should have had cv::pow(mapped, 1.0f / gamma, exp);
I have messed around with your code and noticed that this tonemapping seems to degrade colour saturation. To overcome this, I perform it only on the V channel of the HSV image. Compare the results yourself (left: full-space tonemapping, right: V channel only):
Note the floor colour, the sky in the window, and the yellowish light colour that got preserved with this approach.
Here is the full code for the sake of completeness:
#include <opencv2/opencv.hpp>

using namespace cv;

Mat1f exposureTonemap (Mat1f m, float gamma = 2.2, float exposure = 1) {
    // Exposure tone mapping
    Mat1f exp;
    cv::exp( (-m) * exposure, exp );
    Mat1f mapped = 1.0f - exp;
    // Gamma correction
    cv::pow(mapped, 1.0f / gamma, mapped);
    return mapped;
}

Mat3f hsvExposureTonemap(Mat &a) {
    Mat3f hsvComb;
    cvtColor(a, hsvComb, COLOR_RGB2HSV);
    Mat1f hsv[3];
    split(hsvComb, hsv);
    hsv[2] = exposureTonemap(hsv[2], 2.2, 10);
    merge(hsv, 3, hsvComb);
    Mat rgb;
    cvtColor(hsvComb, rgb, COLOR_HSV2RGB);
    return rgb;
}

int main() {
    Mat a = imread("first.HDR", -1);
    Mat b = imread("withwindow.HDR", -1);
    imshow("a", hsvExposureTonemap(a));
    imshow("b", hsvExposureTonemap(b));
    waitKey();
    return 0;
}
What kind of scene lighting are you currently using? It looks like you are using point lights where the lightbulbs would be, but they aren't bright enough. Your unrendered scene is going to be at full brightness, while in your rendered scene you'll get darkness.
I would recommend at least a minimal sky light so that you always have some light across your scene (unless you have areas of actual darkness).
cv::Mat exposureTonemap (cv::Mat m, float gamma = 2.2, float exposure = 1)
{
    // Exposure tone mapping
    cv::Mat exp;
    cv::exp( (-m) * exposure, exp );
    cv::Mat mapped = cv::Scalar(1.0f, 1.0f, 1.0f) - exp;
    // Gamma correction
    cv::pow(mapped, 1.0f / gamma, mapped);
    /*
    cv::imshow("exposure tonemap", mapped );
    cv::waitKey();
    */
    return mapped;
}
This algorithm is a tonemapper that tries to simulate exposure bias in an HDR image.
If you want to use it in OpenCV 3.0, don't forget to open the file with -1 as the last argument of imread: cv::Mat img = cv::imread("mypicture.HDR", -1);
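A short usage sketch (my own addition; the file name is hypothetical) showing how the exposure parameter plays the role of UE4's exposure bias: doubling it brightens the result by roughly one stop before the curve saturates:

cv::Mat img = cv::imread("mypicture.HDR", -1); // -1: load the float HDR data unchanged
cv::imshow("bias 0", exposureTonemap(img, 2.2f, 1.0f));
cv::imshow("bias +1", exposureTonemap(img, 2.2f, 2.0f)); // roughly one stop brighter
cv::imshow("bias +2", exposureTonemap(img, 2.2f, 4.0f));
cv::waitKey();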

Create mask from color Image in C++ (Superimposing a colored image mask)

I've written code which detects (white) squares in real time and draws a frame around them. Each side of length l of a square is divided into 7 parts. Then I draw a line of length h = l/7 at each of the six resulting points, perpendicular to the side of the square (blue). The corners are marked in red. It then looks something like this:
For drawing the blue lines and circles I have a 3-channel (CV_8UC3) matrix drawing, which is zero everywhere except at the positions of the red, blue and white lines. What I then do to lay this matrix over my webcam image is use OpenCV's addWeighted function:
addWeighted( drawing, 1, webcam_img, 1, 0.0, dst); (Description for addWeighted here).
But then, as you can see, the colors of my dashes and circles come out wrong outside the black area (probably also not correct inside the black area, but it's better there). It makes total sense why this happens, as the function just adds the matrices with a weight.
I'd like to have the matrix drawing with the correct colors over my image. The problem is, I don't know how to fix it. I somehow need a mask drawing_mask with which my dashes are, sort of, superimposed onto my camera image. In Matlab, something like dst = webcam_img; dst(drawing>0) = drawing(drawing>0);
Does anyone have an idea how to do this in C++?
1. Custom version
I would write it explicitly:
const int cols = drawing.cols;
const int rows = drawing.rows;
for (int j = 0; j < rows; j++) {
    const uint8_t* p_draw = drawing.ptr(j); // Pointer to the j-th row of the image to be drawn
    uint8_t* p_dest = webcam_img.ptr(j);    // Pointer to the j-th row of the destination image
    for (int i = 0; i < cols; i++) {
        // Check all three channels (BGR)
        if (p_draw[0] | p_draw[1] | p_draw[2]) { // Using binary OR should ease the optimization work for the compiler
            p_dest[0] = p_draw[0]; // If the pixel is not zero,
            p_dest[1] = p_draw[1]; // copy it (overwrite) into the destination image
            p_dest[2] = p_draw[2];
        }
        p_dest += 3; // Move to the next pixel
        p_draw += 3;
    }
}
Of course, you can move this code into a function with arguments (const cv::Mat& drawing, cv::Mat& webcam_img).
2. OpenCV "purist" version
But the pure OpenCV way would be the following:
cv::Mat mask;
//Create a single channel image where each pixel != 0 if it is colored in your "drawing" image
cv::cvtColor(drawing, mask, CV_BGR2GRAY);
//Copy to destination image only pixels that are != 0 in the mask
drawing.copyTo(webcam_img, mask);
Less efficient (the color conversion to create the mask is somewhat expensive), but certainly more compact. A small note: it won't work if you have a very dark color, like (0,0,1), which in grayscale will be converted to 0.
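If that edge case matters, one workaround (my suggestion, not part of the original answer) is to build the mask by comparing against pure black instead of converting to grayscale:

cv::Mat mask;
// 255 where the pixel is exactly (0,0,0), 0 elsewhere
cv::inRange(drawing, cv::Scalar(0, 0, 0), cv::Scalar(0, 0, 0), mask);
cv::bitwise_not(mask, mask); // invert: 255 where the pixel is colored
drawing.copyTo(webcam_img, mask);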
Also note that it might be less expensive to simply redraw the same overlays (lines, circles) on your destination image, basically repeating the same draw operations that you used to create your drawing image.

Image processing: how to apply the gradient [-1 | 0 | 1] to an RGB image

I need to apply a gradient operator to an RGB bitmap image. It works for an 8-bit image, but I'm having difficulty implementing the same for a 24-bit image. Here is my code. Can anyone see how to correct the horizontal gradient operation for an RGB image?
if (iBitPerPixel == 24) // RGB 24-bit image
{
    for(int i=0; i<iHeight; i++)
        for(int j=1; j<iWidth-4; j++)
        {
            //pImg_Gradient[i*Wp+j] = pImg[i*Wp+j+1] - pImg[i*Wp+j-1];
            int level = pImg[i*Wp+j*3+1] - pImg[i*Wp+j*3-1];
            pImg_Gradient[i*Wp+j*3] = level;
            // pImg_Gradient[i*Wp+j*3] = level;
            // pImg_Gradient[i*Wp+j*3+1] = level;
            // pImg_Gradient[i*Wp+j*3+2] = level;
        }
    for(int i=0; i<iHeight; i++)
        for(int j=0; j<iWidth; j++)
        {
            // Copy the converted values to the original image.
            pImg[i*Wp+j] = (BYTE) pImg_Gradient[i*Wp+j];
        }
    //delete pImg_Gradient;
}
Unfortunately, it is not clear how to define a gradient of an RGB image. The best way to go is to transform the image into a color space that separates intensity from color, such as HSV, and compute the gradient of the intensity component. Alternatively, you can compute the gradient of each color channel separately, and then combine the results in some way, such as taking the average.
Also see Edge detectors for RGB images?
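For instance, a brief sketch (my own, not from the answer; the file name is made up) of the intensity-gradient route via HSV in OpenCV:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("input.png"); // hypothetical input
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch); // ch[2] is the V (intensity) channel

    // Horizontal derivative of the intensity component only.
    cv::Mat gx;
    cv::Sobel(ch[2], gx, CV_32F, 1, 0, 3);

    cv::Mat vis;
    cv::convertScaleAbs(gx, vis); // |gradient|, scaled to 8 bits for display
    cv::imshow("intensity gradient", vis);
    cv::waitKey();
    return 0;
}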
In order to calculate the gradient of an image (which is a vector) you need to calculate both the horizontal and vertical derivatives of the image.
Since we're dealing with a discrete image, we should use finite difference approximations of the derivative.
There are many ways to approximate it; many of them are listed on these Wikipedia pages:
http://en.wikipedia.org/wiki/Finite_difference
http://en.wikipedia.org/wiki/Finite_difference_method
http://en.wikipedia.org/wiki/Finite_difference_coefficients
Basically those are spatial coefficients, hence you can define a filter using them and just filter the image.
This would be the most efficient way to calculate the gradient.
So all you need is to find a library (such as OpenCV) which supports filtering images, and you're done.
For color images, you usually just calculate the gradient per color channel.
Good luck.
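As an illustration of that approach, a minimal sketch using OpenCV's filter2D with the [-1 0 1] kernel (my own example; the file name is made up):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png"); // 3-channel BGR, hypothetical input

    // Horizontal central-difference kernel [-1 0 1].
    cv::Mat kernel = (cv::Mat_<float>(1, 3) << -1.f, 0.f, 1.f);

    // filter2D applies the kernel to each channel independently,
    // which is exactly the "gradient per color channel" approach.
    cv::Mat grad;
    cv::filter2D(img, grad, CV_32F, kernel);

    // The gradient is signed; take its absolute value for display.
    cv::Mat vis;
    cv::convertScaleAbs(grad, vis);
    cv::imshow("horizontal gradient", vis);
    cv::waitKey();
    return 0;
}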
From your code, you are trying to calculate the gradient from RGB, but there is nothing to indicate how RGB is stored in your image. A complete guess is that your image stores BGRBGRBGR... etc.
In that case your code is getting the gradient from the green channel and then storing it in the red channel of the gradient image. You don't show the gradient image being cleared to 0; if you don't do this, it will probably be full of junk.
My suggestion is to convert to a greyscale image first; then you can use your original code.
Or calculate a gradient for each colour channel, as sketched below.
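A hedged sketch of that per-channel variant on the raw buffer (my reconstruction of the question's code; it assumes BGR byte order, Wp as the row stride in bytes, and a separate gradient buffer of the same size):

// Horizontal central difference per color channel on a 24-bit image.
for (int i = 0; i < iHeight; i++)
{
    for (int j = 1; j < iWidth - 1; j++)
    {
        for (int c = 0; c < 3; c++) // B, G, R
        {
            int level = pImg[i*Wp + (j+1)*3 + c] - pImg[i*Wp + (j-1)*3 + c];
            // The result is signed (-255..255); remap it to 0..255 for display.
            pImg_Gradient[i*Wp + j*3 + c] = (BYTE)((level + 255) / 2);
        }
    }
}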

How to apply overlay transparency to RGBA image

Here's my dilemma: I have two RGBA RAW images: a master image (the first) and a subtitle track (the second), and I want to overlay them based on the alpha channel of the second image: if it's zero, take the pixels from the first image; if it's 0xFF, take the pixels from the second image; otherwise create an overlay of the second image on the first one. Here's the code used for this:
if (frame->bytes[pc + 3] == 0xFF) /* no transparency in the overlay image, meaning: take over the overlay 100% */
{
    pFrameRGB->data[0][pc]   = frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] = frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] = frame->bytes[pc+2]; // Blue
}
else if (frame->bytes[pc + 3] != 0) /* partial transparency in the overlay image: mix the two */
{
    pFrameRGB->data[0][pc]   |= frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] |= frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] |= frame->bytes[pc+2]; // Blue
    pFrameRGB->data[0][pc+3]  = frame->bytes[pc+3]; // Alpha
}
In the code above, pFrameRGB is the target RGBA image, which already contains an image, and frame->bytes is the "overlay/subtitle" image... And here comes my question: with some colourful overlay/subtitle images the destination becomes too colourful, so I don't get the effect of the subtitle image being overlaid; instead you can see a whole range of colours. (For example: I have a red/green overlay image with increasing alpha, and I would like the overlay to look like a "pale" red/green overlay with the image below it; however, with the approach above I get a lot of colourful pixels on the image below.) Do you have a better approach to this?
Thanks,
fritzone
The equation for alpha blending is more complex than just a bitwise OR. Assuming a linear response model for RGB, a quite common implementation is:
dst_R = (src_R*src_A + dst_R*(255 - src_A)) / 255;
dst_G = (src_G*src_A + dst_G*(255 - src_A)) / 255;
dst_B = (src_B*src_A + dst_B*(255 - src_A)) / 255;
dst_A = min(src_A + dst_A, 255);
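Applied to the question's buffer layout, a minimal sketch (my illustration; the variable names follow the question's snippet, so the surrounding loop over pc is assumed):

unsigned src_A = frame->bytes[pc + 3];
unsigned dst_A = pFrameRGB->data[0][pc + 3];

for (int c = 0; c < 3; c++) // R, G, B
{
    unsigned src = frame->bytes[pc + c];
    unsigned dst = pFrameRGB->data[0][pc + c];
    // Weighted blend replacing the bitwise OR in the original code.
    pFrameRGB->data[0][pc + c] =
        (unsigned char)((src * src_A + dst * (255 - src_A)) / 255);
}
pFrameRGB->data[0][pc + 3] =
    (unsigned char)(src_A + dst_A > 255 ? 255 : src_A + dst_A);

Note that the two special cases in the original code fall out automatically: with src_A == 0 the destination pixel is left untouched, and with src_A == 0xFF the overlay replaces it completely.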