Here's my dilemma: I have two RGBA RAW images: a master image (the first) and a subtitle track (the second), and I want to overlay them based on the alpha channel of the second image: if it's zero, keep the pixels from the first image; if it's 0xFF, take the pixels from the second image; otherwise blend the second image onto the first one. Here's the code used for this:
if(frame->bytes[pc + 3] == 0xFF) /* no transparency in the overlay pixel, meaning: take over the overlay 100% */
{
    pFrameRGB->data[0][pc]   = frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] = frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] = frame->bytes[pc+2]; // Blue
}
else if(frame->bytes[pc + 3] != 0) /* partial transparency in the overlay pixel, meaning: blend the overlay onto the image */
{
    pFrameRGB->data[0][pc]   |= frame->bytes[pc];   // Red
    pFrameRGB->data[0][pc+1] |= frame->bytes[pc+1]; // Green
    pFrameRGB->data[0][pc+2] |= frame->bytes[pc+2]; // Blue
    pFrameRGB->data[0][pc+3]  = frame->bytes[pc+3]; // Alpha
}
In the code above, pFrameRGB is the target RGBA image, which already contains an image, and frame->bytes is the "overlay/subtitle" image. And here comes my question: with some colourful overlay/subtitle images the destination becomes too colourful... so instead of the subtitle image being overlaid, which is the effect I want, you can see a whole range of colours. (For example: I have a red/green overlay image with an increasing alpha, and I would like it to look like a "pale" red/green overlay with the image visible below it; with the approach above I instead get a lot of colourful pixels on the image below.) Do you have a somewhat better approach to this?
Thanks,
fritzone
The equation for alpha blending is more complex than just a bitwise OR. Assuming a linear response model for RGB, a quite common implementation is:
dst_R = (src_R*src_A + dst_R*(255 - src_A)) / 255;
dst_G = (src_G*src_A + dst_G*(255 - src_A)) / 255;
dst_B = (src_B*src_A + dst_B*(255 - src_A)) / 255;
dst_A = min(src_A + dst_A, 255);
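Applied to the buffers from the question, a minimal sketch of that formula might look like the following (assuming, as in the original code, that pc steps through both images in 4-byte RGBA strides; the variable names are taken from the question):

unsigned int srcA = frame->bytes[pc + 3];
for(int c = 0; c < 3; ++c) // Red, Green, Blue
{
    unsigned int src = frame->bytes[pc + c];
    unsigned int dst = pFrameRGB->data[0][pc + c];
    // weighted average of overlay and base, weighted by the overlay's alpha
    pFrameRGB->data[0][pc + c] = (unsigned char)((src * srcA + dst * (255 - srcA)) / 255);
}
unsigned int sumA = srcA + pFrameRGB->data[0][pc + 3];
pFrameRGB->data[0][pc + 3] = (unsigned char)(sumA > 255 ? 255 : sumA);

Note that this handles every alpha value uniformly: src_A == 0 leaves the base pixel untouched and src_A == 255 copies the overlay pixel, so the two special-case branches become unnecessary.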
I am trying to make my watermark transparent with low opacity, but it seems to just set the colors to white:
This is the code I'm using, which BTW I found on some website:
/////////////////// Blending Images (Making Alpha) ////////////////////////
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat img, img_bgra;
    string img_path = "res/test.png";
    img = imread(img_path);
    if (img.empty())
    {
        cout << "Image is not loaded!" << endl;
        return -1;
    }
    cvtColor(img, img_bgra, ColorConversionCodes::COLOR_BGR2BGRA);
    vector<Mat> channels(4);
    split(img_bgra, channels);
    channels[3] = channels[3] * 0.1; // scale the alpha channel down to 10%
    merge(channels.data(), 4, img_bgra);
    imwrite("res/transparent.png", img_bgra);
    imshow("Image", img_bgra);
    waitKey(0);
    return 0;
}
I want the watermark to be displayed like this:
How can I achieve that?
I'm no good with C++, so I will try to explain with a Python example; hopefully this will be readable enough to help.
import cv2
import numpy as np

alpha = 0.1  # maximum watermark opacity

imageSource = cv2.imread("res/test.png")  # assuming BGR, uint8
imageWatermark = cv2.imread("res/transparent.png", cv2.IMREAD_UNCHANGED)  # keep the alpha channel: BGRA, uint8

maskWatermark = imageWatermark[:, :, 3]  # copy the alpha (transparency) channel, uint8
maskWatermark = np.float32(maskWatermark) * (1 / 255) * alpha  # convert to float, normalize, apply opacity mul
maskWatermark = maskWatermark[:, :, np.newaxis]  # add a channel axis so the mask broadcasts over BGR
maskSource = 1 - maskWatermark  # float32, the complementary weight: what we keep of the source

imageWatermark = cv2.cvtColor(imageWatermark, cv2.COLOR_BGRA2BGR)  # convert to the same colorspace as source (3 channels), uint8
imageResult = np.uint8(np.float32(imageSource) * maskSource
                       + np.float32(imageWatermark) * maskWatermark)  # blend, convert back to uint8

cv2.imshow('result', imageResult)
cv2.waitKey(0)
Key points here are:
- some sort of mask is needed to tell which pixels of the watermark are going to affect the resulting image
- blending is like interpolation between two color vectors, where opacity acts like the t-coordinate; this is done for each corresponding pixel pair of the two images
- carefully watch data types to avoid overflow
- images must be of the same dimensions; if they're not, you should shrink or extend them in some way. I think the watermark is most likely much smaller than the image. In this case you may want to copy the part of the image that matches the watermark dimensions, apply the watermark to it, and then copy the watermarked fragment back (a sketch of this follows below)
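For that last point, a minimal C++ sketch of the copy/apply/copy-back idea (hypothetical cv::Mat variables img and wm, both assumed to be 8-bit BGR, with the watermark placed in the bottom-right corner) might look like:

// Hedged sketch: blend a smaller watermark into the matching fragment of the image.
cv::Rect roi(img.cols - wm.cols, img.rows - wm.rows, wm.cols, wm.rows);
cv::Mat fragment = img(roi).clone();                    // copy the part that matches the watermark size
cv::addWeighted(fragment, 0.9, wm, 0.1, 0.0, fragment); // blend the watermark at a uniform 10% opacity
fragment.copyTo(img(roi));                              // copy the watermarked fragment back

This uses a uniform opacity rather than the per-pixel alpha mask from the code above, but the ROI mechanics are the same.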
I have a grayscale image converted into a numpy array.
I am trying to render this image on the SDL2 window surface.
import sdl2
import sdl2.ext

sdl2.ext.init()
self.window = sdl2.ext.Window("Hello World!", size=(W, H))
self.window.show()

self.events = sdl2.ext.get_events()
for event in self.events:
    if event.type == sdl2.SDL_QUIT:
        exit(0)

self.windowsurface = sdl2.SDL_GetWindowSurface(self.window.window)
self.windowArray = sdl2.ext.pixels2d(self.windowsurface.contents)
self.windowArray[:] = frame[:, :, 1].swapaxes(0, 1)  # frame is the grayscale numpy array
self.window.refresh()
Right now I see the image rendered in blue. I want to render it as a grayscale image. I have also tried to explore sdl2.ext.colorpalettes, but with no success.
How can I display the grayscale numpy array on the SDL2 window surface?
I've been playing around with this today, and from what I can tell the reason is a difference in dtypes: the surface is numpy.uint32, while an array loaded from a grayscale image is only numpy.uint8. So full white in uint8 is 0xFF; stored as a uint32 it becomes 0x000000FF, which is blue.
My dummy approach for testing is some numpy bit shifting:
self.windowArray[:] = self.windowArray[:] + (self.windowArray[:] << 8) + (self.windowArray[:] << 16)
I'm sure there is a better approach, but at least it identifies the problem.
I have an image in two different formats: one is BGR and the other is black and white (there is only black and white, no gray pixels). It's the exact same image (same size and pixels). I want to find all the white pixels in the black and white image, mark them down, and then find the exact same pixels in the BGR image (obviously they are colored there) and color them black.
I tried it, but the thing is the black and white image has 1 channel and the BGR one has 3 channels, so I failed...
I am using OpenCV in C++.
Thanks for your help! :)
for(int y = 0; y < inputImage.rows; y++){
    for(int x = 0; x < inputImage.cols; x++){
        Vec3b color = inputImage.at<Vec3b>(Point(x, y));
        if(blackWhite.at<uchar>(y, x) == 255){
            //cout << "found white pixel\n";
            color[0] = 0;
            color[1] = 0;
            color[2] = 0;
            inputImage.at<Vec3b>(Point(x, y)) = color;
        }
    }
}
inputImage is my BGR image and blackWhite is an image of the same size with black and white pixels. Both are Mat objects.
I want to go through the blackWhite image and, whenever I find a white pixel, color the corresponding pixel of the inputImage black.
My strategy is to construct an array similar to your blackWhite image. This array will be zeros and ones and we'll multiply it with inputImage to get the desired output.
Currently, your blackWhite image looks something like (for example)
blackWhite = 255 255 0 0 ....
             ....
in pseudocode. If we transform this image to become
newArray = 0 0 1 1 ....
           ....
You could then use cv::multiply(newArray, inputImage, outputImage) to get the desired output.
One way to directly transform your existing blackWhite image into newArray is to perform y = (-1/255)*x + 1 on every pixel of blackWhite. You can accomplish this with cv::Mat::convertTo, e.g. blackWhite.convertTo(newArray, CV_8U, -1.0/255.0, 1.0).
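Put together, a minimal sketch of this strategy (reusing the question's variable names; mask3 and output are intermediates introduced here, and cv::multiply needs both operands to have the same number of channels, so the mask is replicated to 3 channels first) could be:

cv::Mat newArray;
blackWhite.convertTo(newArray, CV_8U, -1.0 / 255.0, 1.0); // 255 -> 0, 0 -> 1
cv::Mat mask3;
cv::cvtColor(newArray, mask3, cv::COLOR_GRAY2BGR);        // replicate the mask to 3 channels
cv::Mat output;
cv::multiply(mask3, inputImage, output);                  // pixels that were white in blackWhite become black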
I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game, it produces a clockwise transition to black effect. I have been trying to recreate this effect in SDL(2) but to no avail. I know it's got something to do with masking but I've no idea how to do that in code.
The closest I could get was by using "SDL_SetColorKey" and incrementing the RGB values so it would not draw the "wiping" part of the animation.
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
                              0xFF - counter,
                              0xFF - counter,
                              0xFF - counter,
                              0);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer, for my own curiosity - and sanity! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!
I arbitrarily came up with a solution. It's expensive, but it works, by iterating through every pixel in the image and mapping the colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempColor,
                     (Uint8)tempAlpha);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current colour of the pixel; fmt is the SDL_PixelFormat of the image. This is for fading to black; the following is for fading in from black:
if ((255 - counter) > origColor)
    continue;
int tempAlpha = alpha - speed * 5;
*pixel = SDL_MapRGBA(fmt,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)0,
                     (Uint8)tempAlpha);
Where origColor is the color of the pixel in the original grayscale image.
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes
I'm working on a vignette filter in OpenCV. I tried the code in this question (Creating vignette filter in opencv?) and it works perfectly.
But now I'm trying to modify it to create a white vignette filter, and I can't find a way to change it so that it shows a white vignette instead of a black one.
ADDITIONALLY TO THE ANSWER
After modifying the code there are some points I'd like to make clear for any future programmers/developers or people interested in the problem.
What is said in the answer is basically to do a weighted addition of pixels. Simple addition can easily be done using OpenCV's addWeighted. This can be used for blending with any color, not just black or white. However, this is not simple addition, since we do not have the same blending level everywhere; instead, the level of blending is given by the gradient.
pseudocode looks like:
pixel[][] originalImage; // 3-channel image
pixel[][] result;        // 3-channel image
pixel[][] gradient;      // 1-channel image
pixel color;             // definition of the color to blend with

generateGradient(gradient); // generates the gradient as a one-channel image

for( x from 0 to originalImage.cols )
{
    for( y from 0 to originalImage.rows )
    {
        pixel blendLevel = gradient[x][y];
        pixel pixelImage = originalImage[x][y];
        pixel blendColor = color;
        // this operation is called weighted addition
        // you have to multiply the whole pixel (every channel value of the pixel)
        // by the blendLevel, not just one channel
        result[x][y] = ( blendLevel * pixelImage ) + ( ( 1 - blendLevel ) * blendColor );
    }
}
Say you darken your colour fore by a factor x. Then, to blend it with a different colour back, you take x * fore + (1 - x) * back. I don't remember the exact OpenCV syntax; looking at your link, I would write something like this:
cv::Mat result = maskImage * img + (1.0 - maskImage) * white;
If you convert your image to the CIE Lab colour space (as in the vignette code), which would be a good idea, don't forget to do the same for white.
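A caveat worth noting: for cv::Mat operands, the * operator performs matrix multiplication rather than per-element scaling, so the line above is closer to pseudocode. A hedged per-element sketch of the same weighted addition (assuming img is CV_32FC3 and maskImage is a single-channel CV_32F gradient normalized to [0, 1]) might look like:

cv::Mat mask3;
cv::cvtColor(maskImage, mask3, cv::COLOR_GRAY2BGR);        // replicate the mask to 3 channels
cv::Mat white(img.size(), CV_32FC3, cv::Scalar::all(1.0)); // white, in normalized floats
cv::Mat invMask = cv::Scalar::all(1.0) - mask3;            // weight for the vignette colour
cv::Mat result = mask3.mul(img) + invMask.mul(white);      // per-element weighted addition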