Alpha transparency in Cairo - c++

I have a problem with displaying alpha transparency using GTK and Cairo while trying to display an image that contains both a shadow and a glow effect.
If I do the alpha blending myself, everything works.
If I pass the alpha values directly to Cairo, the shadow seems to render fine, but the glow effect is corrupted.
Is this a bug in Cairo 1.14.2, or am I missing something?
// Need deprecated API to get the background color
GdkColor color = gtk_widget_get_style(widget)->bg[GTK_STATE_NORMAL];
Pixel color_blend
{
    uint8_t(255*color.red/65535.0f)
    ,uint8_t(255*color.green/65535.0f)
    ,uint8_t(255*color.blue/65535.0f)
    ,255
};
while(ptr!=ptr_end)
{
    // TODO: Interpolate
    auto row_src=size_t(row*factor);
    auto col_src=size_t(col*factor);
    auto alpha=ptr_src[row_src*width_in + col_src].v3/255.0f;
    *ptr=
    {
        // Using manual alpha blend works
        uint8_t(alpha*ptr_src[row_src*width_in + col_src].v2 + (1-alpha)*color_blend.v2)
        ,uint8_t(alpha*ptr_src[row_src*width_in + col_src].v1 + (1-alpha)*color_blend.v1)
        ,uint8_t(alpha*ptr_src[row_src*width_in + col_src].v0 + (1-alpha)*color_blend.v0)
        ,255
        /* This appears to be broken
        ptr_src[row_src*width_in + col_src].v2
        ,ptr_src[row_src*width_in + col_src].v1
        ,ptr_src[row_src*width_in + col_src].v0
        ,ptr_src[row_src*width_in + col_src].v3*/
    };
    ++col;
    if(col==width_out)
    {
        col=0;
        ++row;
    }
    ++ptr;
}
I push the pixels using
auto surface=cairo_image_surface_create_for_data((uint8_t*)pixels.begin(),CAIRO_FORMAT_ARGB32,width_out,height_out,width_out*sizeof(Pixel));
cairo_set_source_surface(cr, surface, 0.5*(width-width_out), 0.0);
cairo_paint(cr);
cairo_surface_destroy(surface);
Explicitly setting the operator to CAIRO_OPERATOR_OVER does not help, the result is still the same.

As you mention in your comment above, your pixel values are wrong. You need to use pre-multiplied alpha. Coming back to my example from the question (and ignoring endianness), fully red with 50% transparency is 0x7f << 24 | 0x7f << 16 in Cairo. Pixels with invalid values (some color component larger than the alpha value) produce undefined results, and your 0x7f << 24 | 0xff << 16 falls into this category.
See http://www.cairographics.org/manual/cairo-Image-Surfaces.html#cairo-format-t:
Pre-multiplied alpha is used. (That is, 50% transparent red is
0x80800000, not 0x80ff0000.)
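In code, the fix is to multiply each color channel by the alpha value before packing the pixel. A minimal sketch (the premultiply helper name is mine, assuming 8-bit straight-alpha inputs):

#include <cstdint>

// Convert one straight-alpha pixel to Cairo's pre-multiplied ARGB32.
uint32_t premultiply(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    auto mul = [a](uint8_t c) { return uint32_t(c) * a / 255; };
    return (uint32_t(a) << 24) | (mul(r) << 16) | (mul(g) << 8) | mul(b);
}

With this, 50% transparent red, premultiply(0xff, 0, 0, 0x80), comes out as 0x80800000, matching the quote above.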
P.S.: In my opinion, the correct way to access pixel data is via a uint32_t and shifting, e.g. uint32_t pixel = (a << 24) | (r << 16) | (g << 8) | b; for ARGB32. That way you don't have to worry about endianness at all.
P.P.S.: For OVER and a fully-opaque target, the formula Cairo uses simplifies to source_color + target_color * (1 - source_alpha) while your code uses source_color * source_alpha + target_color * (1 - source_alpha). See http://www.cairographics.org/operators/. These two formulas clearly are not equivalent.
Edit: Ok, they are equivalent when using pre-multiplied alpha: the pre-multiplied source_color is already source_alpha times the straight color, so source_color + target_color * (1 - source_alpha) expands to exactly your formula. Sorry for the confusion there.

Related

How can I make the faces of my cube smoothly transition between all colors of the rainbow?

I have a program in Visual Studio that is correctly rendering a 3D cube that is slowly spinning. I have a working FillTriangle() function that fills in the faces of the cube with any color whose hex code I enter as a parameter (for example, 0x00ae00ff for purple). I have set the color of each face to start at red (0xFF000000), and I have a while loop in main() that updates the scene and draws new pixels every frame. I also have a Timer class that handles all sorts of time-related things, including the Update() method that updates things every frame. I want to make it so that the colors of the faces smoothly transition from one color to the next, through every color of the rainbow, and I want it to loop and do that for as long as the program is running. Right now, it smoothly transitions between a few colors before suddenly jumping to another color. For example, it might smoothly transition from yellow to orange to red, but then suddenly jump to green. Here is the code that is doing that right now:
...
main()
{
    ...
    float d = 0.0f; //float for the timer to increment
    //screenPixels is the array of all pixels on the screen, numOfPixels is the number of pixels being displayed
    while(Update(screenPixels, numOfPixels))
    {
        ...
        timer.Signal(); //change in time between the last 2 signals
        d += timer.Delta(); //timer.Delta() is the average current time
        if(d > (1.0f/30.0f)) // 1 divided by number of frames (note: integer 1/30 would be 0)
        {
            //Reset timer
            d = 0.0f;
            //Add to current pixel color being displayed
            pixelColor += 0x010101FF;
        }
        ...
    }
    ...
}
Is there a better way to approach this? Adding to the current pixel color was the first thing that came to my mind, and it's kind of working, but it keeps skipping colors for some reason.
That constant is going to overflow with each addition. Not just as a whole number, but across each component of the color spectrum: R, G, and B.
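To see it concretely: in RGBA layout, 0x0000FFFF (opaque blue) plus 0x010101FF is 0x010201FE. The blue byte wraps from 0xFF to 0x01 and its carry bumps green to 0x02 instead of 0x01, while alpha slips from 0xFF to 0xFE. Each wrap like that is one of the sudden color jumps you are seeing.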
You need to break your pixelColor into separate Red, Green, and Blue colors and do math on each byte independently. And leave Alpha fixed at 255 (fully opaque). And check for overflow/underflow along the way. When you reach an overflow or underflow moment, just change direction from incrementing to decrementing.
Also, I wouldn't increment each component by the same value (1) on each step. With the same increment on R,G, and B, you'd just be adding "more white" to the color. If you want a more natural rainbow loop, we can do something like the following:
Change this:
pixelColor += 0x010101FF;
To this:
// I'm assuming pixelColor is RGBA
int r = (pixelColor >> 24) & 0x0ff;
int g = (pixelColor >> 16) & 0x0ff;
int b = (pixelColor >> 8) & 0x0ff;
r = Increment(r, &redInc);
g = Increment(g, &greenInc);
b = Increment(b, &blueInc);
pixelColor = (r << 24) | (g << 16) | (b << 8) | 0x0ff;
Where redInc, greenInc, and blueInc are defined and initialized outside your main while loop as follows:
int redInc = -1;
int greenInc = 2;
int blueInc = 4;
And the increment function is something like this:
int Increment(int color, int* increment) {
    color += *increment;
    if (color < 0) {
        color = 0;
        *increment = (rand() % 4 + 1);
    } else if (color > 255) {
        color = 255;
        *increment = -(rand() % 4 + 1);
    }
    return color;
}
That should cycle through the colors in a more natural fashion (from darker to brighter to darker again) with a bit of randomness so it's never the same pattern twice. You can play with the randomness by adjusting the initial colorInc constants at initialization time as well as how the *increment value gets updated in the Increment function.
If you see any weird color flickering, it's quite possible that you have the alpha byte in the wrong position. It might be the high byte, not the low byte. Similarly, some systems order the colors in the integer as RGBA, others do ARGB, and quite possibly RGB is flipped to BGR.

Havok - Can you change color of objects during runtime?

To anybody who has some experience with the Havok Physics Engine:
Is there a way to change the color of meshes/objects during runtime? I am working with the demo framework and I want to change the color of all the meshes/objects (in a demo) that are in motion (velocity > 0). It's my first time using Havok, and I can't find anything about it in the documentation I have.
Thanks!
On a sidenote: I have noticed that there are very few questions about Havok on Stack Overflow, and when I search for questions about Havok online I can't seem to find anything. Where do all the Havok devs go to chat? Do they have a forum or something?
The solution using HVD - Havok Visual Debugger:
// Needed for calling color change macro
#include <Common/Visualize/hkDebugDisplay.h>
// You'll of course need any other headers for any other physics stuff
// you're doing in your file
void SetColorForPhysicsDebugger( unsigned int Red, unsigned int Green,
                                 unsigned int Blue, unsigned int Alpha,
                                 const hkpCollidable* pCollidable )
{
    // Havok takes an unsigned int (32-bit), allowing 8 bits for
    // each channel (alpha, red, green, and blue, in that
    // order).
    // Because we only need 8 bits from each of the 32-bit ints
    // passed into this function, we mask off the upper 24 bits.
    Red &= 0x000000FF;
    Green &= 0x000000FF;
    Blue &= 0x000000FF;
    Alpha &= 0x000000FF;
    // Now we pack the four channels into a single int
    const uint32_t color = (Alpha << 24) | (Red << 16) | (Green << 8) | Blue;
    // We use the macro provided by Havok
    HK_SET_OBJECT_COLOR( reinterpret_cast<hkulong>( pCollidable ), color );
}
For more information about HVD: HVD and camera, Setting mesh color

How to most efficiently modify R / G / B values?

So I wanted to implement lighting in my pixel-based rendering system. I googled and found out that to display R/G/B values lighter or darker, I have to multiply each red, green, and blue value by a number < 1 to display it darker and by a number > 1 to display it lighter.
So I implemented it like this, but it's really dragging down my performance, since I have to do this for each pixel:
void PixelRenderer::applyLight(Uint32& color){
    Uint32 alpha = color >> 24;
    alpha = alpha << 24;
    alpha = alpha >> 24;
    Uint32 red = color >> 16;
    red = red << 24;
    red = red >> 24;
    Uint32 green = color >> 8;
    green = green << 24;
    green = green >> 24;
    Uint32 blue = color;
    blue = blue << 24;
    blue = blue >> 24;
    red = red * 0.5;
    green = green * 0.5;
    blue = blue * 0.5;
    color = alpha << 24 | red << 16 | green << 8 | blue;
}
Any ideas or examples on how to improve the speed?
Try this: (EDIT: as it turns out, this is only a readability improvement, but read on for more insights.)
void PixelRenderer::applyLight(Uint32& color)
{
    Uint32 alpha = color >> 24;
    Uint32 red = (color >> 16) & 0xff;
    Uint32 green = (color >> 8) & 0xff;
    Uint32 blue = color & 0xff;
    red = red * 0.5;
    green = green * 0.5;
    blue = blue * 0.5;
    color = alpha << 24 | red << 16 | green << 8 | blue;
}
That having been said, you should understand that performing operations of that sort using a general-purpose processor such as the CPU of your computer is bound to be extremely slow. That's why hardware-accelerated graphics cards were invented.
EDIT
If you insist on operating this way, then you will probably have to resort to hacks in order to improve efficiency. One type of hack which is very often used when dealing with 8-bit channel values is lookup tables. With a lookup table, instead of multiplying each individual channel value by a float, you precompute an array of 256 values where the index into the array is a channel value, and the value in that index is the precomputed result of multiplying the channel value by that float. Then, when converting your image, you just use channel values to lookup entries of the array instead of performing actual float multiplication. This is much, much faster. (But still not nearly as fast as programming dedicated, massively parallel hardware do that stuff for you.)
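A minimal sketch of that idea (the global table and the names are illustrative, not from your code):

#include <algorithm>
#include <cstdint>

// One table per lighting factor; build it once, then each channel is a lookup.
uint8_t lut[256];

void buildLut(float factor)
{
    for (int i = 0; i < 256; ++i)
        lut[i] = uint8_t(std::min(int(i * factor), 255)); // clamp in case factor > 1
}

uint32_t applyLut(uint32_t color)
{
    return (color & 0xff000000)                       // alpha unchanged
        | (uint32_t(lut[(color >> 16) & 0xff]) << 16) // red
        | (uint32_t(lut[(color >> 8) & 0xff]) << 8)   // green
        | lut[color & 0xff];                          // blue
}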
EDIT
As others have already pointed out, if you are not planning to operate on the alpha channel, then you do not need to extract it and then later apply it, you can just leave it unaltered. So, you can just do color = (color & 0xff000000) | red << 16 | green << 8 | blue;
Shifts and masks like this are generally very fast on a modern processor. I might look at a few other things:
Follow the first rule of optimisation - profile your code. You can do this simply by calling the method millions of times and timing it. Are your calculations slow, or is it something else? What is slow? Try omitting part of the method - do things speed up?
Make sure that this function is declared inline (and make sure it has actually been inlined). The function call overhead will massively outweigh the pixel manipulations (particularly if it is virtual).
Consider declaring your method Uint32 PixelRenderer::applyLight(Uint32 color) and returning the modified value, that may help avoid some dereferences and give the compiler some additional optimisation opportunities.
Avoid fp to integer conversions, they can be very expensive. If a plain integer divide is insufficient, look at using fixed-point math.
Finally, look at the assembler to see what the compiler has generated (with optimisations on). Are there any branches or conversions? Has your method actually been inlined?
To preserve the alpha value in the front use:
(color>>1)&0x7F7F7F | (color&0xFF000000)
(A tweak on what Wimmel offered in the comments).
I think the 'learning curve' here is that you were using shift and shift back to mask out bits. You should use & with a masking value.
For a more general solution (where 0.0<=factor<=1.0) :
void PixelRenderer::applyLight(Uint32& color, double factor){
    Uint32 alpha=color&0xFF000000;
    Uint32 red= (color&0x00FF0000)*factor;
    Uint32 green= (color&0x0000FF00)*factor;
    Uint32 blue=(color&0x000000FF)*factor;
    color=alpha|(red&0x00FF0000)|(green&0x0000FF00)|(blue&0x000000FF);
}
Notice there is no need to shift the components down to the low order bits before performing the multiplication.
Ultimately you may find that the bottleneck is floating point conversions and arithmetic.
To reduce that you should consider either:
Reduce it to an integer scaling factor, for example in the range 0-256.
Precompute factor*component as a 256-element array and 'pick' the components out of it.
I'm proposing the range 0-256 (257 values rather than 256) because a factor of 256 combined with a shift right by 8 reproduces the component exactly, giving you an exact 1.0.
For a more general solution (where 0 <= factor <= 256):
void PixelRenderer::applyLight(Uint32& color, Uint32 factor){
    Uint32 alpha=color&0xFF000000;
    Uint32 red= ((color&0x00FF0000)*factor)>>8;
    Uint32 green= ((color&0x0000FF00)*factor)>>8;
    Uint32 blue=((color&0x000000FF)*factor)>>8;
    color=alpha|(red&0x00FF0000)|(green&0x0000FF00)|(blue&0x000000FF);
}
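For example, applyLight(color, 128) halves every color component, while applyLight(color, 256) leaves the color unchanged.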
Here's a runnable program illustrating the first example:
#include <stdio.h>
#include <inttypes.h>
typedef uint32_t Uint32;

Uint32 make(Uint32 alpha,Uint32 red,Uint32 green,Uint32 blue){
    return (alpha<<24)|(red<<16)|(green<<8)|blue;
}

void output(Uint32 color){
    printf("alpha=%" PRIu32 " red=%" PRIu32 " green=%" PRIu32 " blue=%" PRIu32 "\n",
           (color>>24),(color&0xFF0000)>>16,(color&0xFF00)>>8,color&0xFF);
}

Uint32 applyLight(Uint32 color, double factor){
    Uint32 alpha=color&0xFF000000;
    Uint32 red= (color&0x00FF0000)*factor;
    Uint32 green= (color&0x0000FF00)*factor;
    Uint32 blue=(color&0x000000FF)*factor;
    return alpha|(red&0x00FF0000)|(green&0x0000FF00)|(blue&0x000000FF);
}

int main(void) {
    Uint32 color1=make(156,100,50,20);
    Uint32 result1=applyLight(color1,0.9);
    output(result1);
    Uint32 color2=make(255,255,255,255);
    Uint32 result2=applyLight(color2,0.1);
    output(result2);
    Uint32 color3=make(78,220,200,100);
    Uint32 result3=applyLight(color3,0.05);
    output(result3);
    return 0;
}
Expected Output is:
alpha=156 red=90 green=45 blue=18
alpha=255 red=25 green=25 blue=25
alpha=78 red=11 green=10 blue=5
One thing that I don't see anyone else mentioning is parallelizing your code. There are at least 2 ways to do this: SIMD instructions, and multiple threads.
SIMD instructions (like SSE, AVX, etc.) perform the same math on multiple pieces of data at the same time. So you could, for example, multiply the red, green, blue, and alpha of a pixel by the same values in 1 instruction, like this:
vec4 lightValue = vec4(0.5, 0.5, 0.5, 1.0);
vec4 result = vec_Mult(inputPixel, lightValue);
That's the equivalent of:
lightValue.red = 0.5;
lightValue.green = 0.5;
lightValue.blue = 0.5;
lightValue.alpha = 1.0;
result.red = inputPixel.red * lightValue.red;
result.green = inputPixel.green * lightValue.green;
result.blue = inputPixel.blue * lightValue.blue;
result.alpha = inputPixel.alpha * lightValue.alpha;
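As a concrete illustration (a sketch assuming packed ARGB32 pixels, not the author's code), here is SSE2 code that halves the R, G, and B of four pixels at once while leaving alpha untouched:

#include <emmintrin.h> // SSE2
#include <cstdint>

// Halve R, G, B of four packed ARGB32 pixels in one pass; alpha is preserved.
void halfBright4(uint32_t* px)
{
    __m128i p     = _mm_loadu_si128(reinterpret_cast<__m128i*>(px));
    __m128i alpha = _mm_and_si128(p, _mm_set1_epi32(int(0xFF000000)));
    // Shift each 32-bit lane right by one, then mask off the bits that
    // leaked across channel boundaries (and the stale alpha bits).
    __m128i rgb   = _mm_and_si128(_mm_srli_epi32(p, 1), _mm_set1_epi32(0x007F7F7F));
    _mm_storeu_si128(reinterpret_cast<__m128i*>(px), _mm_or_si128(alpha, rgb));
}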
You can also cut your image into tiles and perform the lightening operation on several tiles at once using threads run on multiple cores. If you're using C++11, you can use std::thread to start multiple threads. Otherwise your OS probably has functionality for threading, such as WinThreads, Grand Central Dispatch, pthreads, boost threads, Threading Building Blocks, etc.
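Here is a minimal C++11 sketch of that tiling idea, splitting the pixel buffer into contiguous ranges (applyLightRange stands in for whatever per-pixel loop you actually use):

#include <cstdint>
#include <thread>
#include <vector>

// Stand-in per-pixel loop: halve brightness, keep alpha.
void applyLightRange(uint32_t* px, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        px[i] = (px[i] & 0xFF000000) | ((px[i] >> 1) & 0x007F7F7F);
}

// Split the buffer into one chunk per thread (nThreads >= 1 assumed).
void applyLightParallel(uint32_t* px, size_t count, unsigned nThreads)
{
    std::vector<std::thread> workers;
    size_t chunk = count / nThreads;
    for (unsigned i = 0; i < nThreads; ++i) {
        size_t begin = i * chunk;
        size_t len = (i == nThreads - 1) ? count - begin : chunk;
        workers.emplace_back(applyLightRange, px + begin, len);
    }
    for (auto& w : workers) w.join();
}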
You can combine both of the above and have multithreaded code that operates on whole pixels at a time.
If you want to take it even further, you can do your processing on the GPU of your machine using OpenGL, OpenCL, DirectX, Metal, Mantle, CUDA, or one of the other GPGPU technologies. GPUs are generally hundreds of cores that can very quickly process many tiles in parallel, each of which processes whole pixels (rather than just channels) at a time.
But an even better option may be not to write any code at all. It's extremely likely that someone has already done this work and you can leverage it. For example, on MacOS there's CoreImage and the Accelerate framework. On iOS you also have CoreImage, and there's also GPUImage. I'm sure there are similar libraries on Windows, Linux, and other OSes you might be working with.
Another solution that avoids bit shifting is to convert your 32-bit uint into a struct.
Try to keep your implementation in the .h include file, so that it can be inlined
If you don't want to have the implementation inlined (see above), modify your applyLight method to accept an array of pixels. Method call overhead can be significant for such a small method
Enable "loop unroll" optimisation on your compiler, which will enable the usage of SIMD instructions
Implementation:
#include <algorithm>
#include <cstdint>

class brightness {
private:
    struct pixel { uint8_t b, g, r, a; };
    float factor;
    static inline void apply(uint8_t& p, float f) {
        p = std::max(std::min(int(p * f), 255), 0);
    }
public:
    brightness(float factor) : factor(factor) { }
    void apply(uint32_t& color){
        pixel& p = (pixel&)color;
        apply(p.b, factor);
        apply(p.g, factor);
        apply(p.r, factor);
    }
};
Implementation with a lookup table (slower when you use "loop unroll"):
class brightness {
    struct pixel { uint8_t b, g, r, a; };
    uint8_t table[256];
public:
    brightness(float factor) {
        for(int i = 0; i < 256; i++)
            table[i] = std::max(std::min(int(i * factor), 255), 0);
    }
    void apply(uint32_t& color){
        pixel& p = (pixel&)color;
        p.b = table[p.b];
        p.g = table[p.g];
        p.r = table[p.r];
    }
};
// usage
brightness half_bright(0.5);
uint32_t pixel = 0xffffffff;
half_bright.apply(pixel);

Drawing anti-aliased lines without color change due to background?

All the anti-aliased line drawing algorithms I've come across simply say that the "intensity" of the pixels needs to be a function of how much of the line passes through it. This works fine on constant backgrounds (ie white), but I want to be able to draw on a background of arbitrary complexity, which means replacing intensity with transparency and alpha blending the line with the background.
Doing this necessarily changes the color of the line depending on what the background is, since for a 1px line it rarely passes exactly through a single pixel, giving it full opacity. I'm curious if there's a technique for drawing these blended lines while maintaining the appearance of the original color.
Here's an example of my rendering attempt on a colorful background. You'll note the vertical/horizontal lines are drawn as a special case with the real color, and the anti-aliased diagonal lines have a blue tint to them.
Is there a proper way to blend anti-aliased lines into the background while maintaining the appearance of the proper line color?
Edit: and code for actually plotting points:
// Plot pixel at (x,y) with color at alpha in [0,255] (255 = opaque)
static inline void plot(pixel_t *pixels, uint16_t stride, const pixel_t &color, uint16_t x, uint16_t y, uint8_t alpha) {
    pixel_t pix = pixels[y*stride+x];
    pixels[y*stride+x].r = (uint16_t)color.r * alpha/255 + pix.r * (255 - alpha) / 255;
    pixels[y*stride+x].g = (uint16_t)color.g * alpha/255 + pix.g * (255 - alpha) / 255;
    pixels[y*stride+x].b = (uint16_t)color.b * alpha/255 + pix.g * (255 - alpha) / 255; // pix.g here should be pix.b -- the bug the edit below refers to
}
Edit: For future generations, blending green and blue can give your lines a blue-ish tint.
I'm glad you spotted the bug in your code.
Another problem to watch out for is gamma correction. Anti-aliasing must be applied in a linear color space to look correct, but most of the time to save some processing steps it is applied in a gamma-corrected color space instead. The effects are much more subtle than your example.
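For illustration, here is a sketch of gamma-correct blending for a single channel, approximating the sRGB transfer curve with a plain 2.2 power (the exact sRGB formula has a small linear segment near black):

#include <cmath>
#include <cstdint>

// Blend src over dst in linear light, then re-encode to 8-bit "sRGB".
uint8_t blendGammaCorrect(uint8_t src, uint8_t dst, float alpha) // alpha in [0,1]
{
    float s = std::pow(src / 255.0f, 2.2f);     // decode to linear
    float d = std::pow(dst / 255.0f, 2.2f);
    float out = s * alpha + d * (1.0f - alpha); // blend where light is additive
    return uint8_t(std::pow(out, 1.0f / 2.2f) * 255.0f + 0.5f); // re-encode
}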

adjust bitmap image brightness/contrast using c++

Adjust image brightness/contrast using C++, without using any other 3rd-party library or dependency.
Image brightness is here - use the mean of the RGB values and shift them.
Contrast is here, with solutions in other languages available as well.
Edit in case the above links die:
The answer given by Jerry Coffin below covers the same topic and has links that still live.
But, to adjust brightness, you add a constant value to each of the R, G, B fields of an image. Make sure to use saturated math - don't allow values to go below 0 or above the maximum allowed by your bit depth (8 bits per channel for 24-bit color).
RGB_struct color = GetPixelColor(x, y);
size_t newRed = truncate(color.red + brightAdjust);
size_t newGreen = truncate(color.green + brightAdjust);
size_t newBlue = truncate(color.blue + brightAdjust);
For contrast, I have taken and slightly modified code from this website:
float factor = (259.0 * (contrast + 255.0)) / (255.0 * (259.0 - contrast));
RGB_struct color = GetPixelColor(x, y);
size_t newRed = truncate((int)(factor * (color.red - 128) + 128));
size_t newGreen = truncate((int)(factor * (color.green - 128) + 128));
size_t newBlue = truncate((int)(factor * (color.blue - 128) + 128));
Where truncate(int value) makes sure the value stays between 0 and 255 for 8-bit color. Note that many CPUs have intrinsic functions to do this in a single cycle.
int truncate(int value)
{
    if(value < 0) return 0;
    if(value > 255) return 255;
    return value;
}
Read in the image with a library such as the Independent JPEG Group's library. When you have raw data, you can convert it from RGB to HSL or (preferably) CIE L*a*b*. Both contrast and brightness will basically just involve adjustments to the L channel -- to adjust brightness, just adjust all the L values up or down by an appropriate amount. To adjust contrast, you basically adjust the difference between a particular value and the center value. You'll generally want to do this non-linearly, so values near the middle of the range are adjusted quite a bit, but values close to the ends of the range aren't affected nearly as much (and any that are at the very ends aren't changed at all).
Once you've done that, you can convert back to RGB, and then back to a normal format such as JPEG.
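As a sketch of that non-linear contrast adjustment, here is one possible S-curve on a normalized L value in [0, 1]; the tanh shape is my choice for illustration, not something prescribed above:

#include <cmath>

// S-curve centred at 0.5: values near the middle move the most,
// while 0.0 and 1.0 map exactly to themselves.
double contrastCurve(double L, double strength) // strength > 0 increases contrast
{
    return 0.5 + 0.5 * std::tanh(strength * (L - 0.5)) / std::tanh(strength * 0.5);
}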