How to treat alpha transparency from PNG directly in memory? - c++

I would appreciate the help of you low level programmers... My problem is this:
I want to write a bitmap in ARGB8888 format directly into video memory. The layout of the bitmap is fine; the problem is the alpha channel, which I can't figure out how to use. I've seen code on Wikipedia that overlays the pixels like this:
CompositedPixelColor = Alpha * ForegroundPixelColor + (1 - Alpha) * BackgroundPixelColor
where each color value varies from 0 to 1, and this is done for each of the R, G, B channels.
What I'm doing is copying each byte of each color of each pixel of my bitmap directly to video memory using the formula above, but I'm missing something because the colors don't come out right.
I'm trying to do something like the code posted in this thread:
http://www.badadev.com/create-a-photo-editing-app/
But there they don't handle transparency, and that is my problem. Thanks!

In the code you posted, alpha is treated as a value between 0 and 1, which of course doesn't work if you store your alpha channel as an unsigned char. Use the following if you want to do it in integer space:
unsigned short background = 0x40;
unsigned short foreground = 0xe0;
unsigned short alpha = 0xc0;
// >> 8 divides by 256 rather than 255, a common fast approximation
// that is at most one step darker than the exact result.
unsigned short compositedcolor = (alpha * foreground + (0xff - alpha) * background) >> 8;
Note that while these are shorts, the values should all be 0-255; the short is just needed as computational space for the char * char products. You could also use intermediate casts, but I used unsigned short for everything to make it more readable.
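For the original question (blending a whole ARGB8888 pixel rather than one channel), here is a minimal sketch along the same lines; the function name blendPixel and the 0xAARRGGBB byte layout are my assumptions, not something stated in the thread:

#include <cstdint>

// Blend a foreground ARGB8888 pixel over an opaque background pixel.
// Assumed layout: 0xAARRGGBB. The >> 8 is the same divide-by-256
// approximation as above.
uint32_t blendPixel(uint32_t fg, uint32_t bg)
{
    const uint32_t a  = fg >> 24;   // foreground alpha, 0..255
    const uint32_t na = 0xff - a;   // integer form of (1 - alpha)
    const uint32_t r = (((fg >> 16) & 0xff) * a + ((bg >> 16) & 0xff) * na) >> 8;
    const uint32_t g = (((fg >> 8) & 0xff) * a + ((bg >> 8) & 0xff) * na) >> 8;
    const uint32_t b = ((fg & 0xff) * a + (bg & 0xff) * na) >> 8;
    return 0xff000000u | (r << 16) | (g << 8) | b;  // result is opaque
}

Applied per pixel while copying the bitmap into video memory, this blends each source pixel over whatever is already there.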

Related

Read SDL2 texture without duplication

I tried to create a heightmap from a PNG or JPG file. It works to about 75%, but I can't solve the last 25...
Here is a picture of the map as png
And this is the resulting heightmap/terrain
As you can see, the symbols start to repeat, and I have no clue why.
The code:
auto image = IMG_Load(path.c_str());
int lineOffSet = i*(image->pitch/4);
uint32 pixel = static_cast<uint32*>(image->pixels)[lineOffSet + j];
uint8 r, g ,b;
SDL_GetRGB(pixel,image->format,&r, &g, &b);
What I tried:
The number of vertices is correct (256x256).
int lineOffSet = i*(image->pitch/4);
The 4 represents the bytes per pixel, which in this case should be 3, but then I get a completely different terrain (the pitch is 768). The range of i and j goes from 0-255.
I hope someone has a hint to solve this thing
I think you calculate the address of the desired pixel wrong. You assume that one pixel is 4 bytes in size. It's usually more reliable to directly calculate the address in bytes and then cast to uint32. Try this:
uint32 pixel = *reinterpret_cast<uint32*>(
    static_cast<unsigned char*>(image->pixels) + // address in bytes
    image->pitch * i +
    image->format->BytesPerPixel * j);
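As a self-contained sketch of that addressing (the helper name getPixel is mine; this assumes SDL2), copying only BytesPerPixel bytes also avoids reading past the end of a row in a 24-bit surface:

#include <SDL.h>

// Read one pixel from a surface of any byte depth (row i, column j).
Uint32 getPixel(const SDL_Surface* image, int i, int j)
{
    const Uint8* p = static_cast<const Uint8*>(image->pixels)
                   + image->pitch * i
                   + image->format->BytesPerPixel * j;
    Uint32 pixel = 0;
    // Copy just this pixel's bytes; for 24-bit formats this assumes a
    // little-endian machine.
    SDL_memcpy(&pixel, p, image->format->BytesPerPixel);
    return pixel;
}

The result can then be passed to SDL_GetRGB with image->format exactly as before.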

How do you convert a 16 bit unsigned integer vector to a larger 8 bit unsigned integer vector?

I have a function that needs to return a vector of 16-bit unsigned ints, but another function, from which I also call this one, needs the output as a vector of 8-bit unsigned ints. For example, if I start out with:
std::vector<uint16_t> myVec(640*480);
How might I convert it to the format of:
std::vector<uint8_t> myVec2(640*480*4);
UPDATE (more information):
I am working with libfreenect and its getDepth() method. I have modified it to output a vector of 16-bit unsigned ints so that I can retrieve the depth data in millimeters. However, I would also like to display the depth data. I am working with some example C++ code from the freenect installation, which uses GLUT and requires a vector of 8-bit unsigned ints to display the depth; however, I need the 16-bit data to retrieve the depth in millimeters and log it to a text file. Therefore, I was looking to retrieve the data as a 16-bit unsigned int vector in GLUT's draw function, and then convert it so that I can display it with the GLUT function that's already written.
As per your update, assuming the 8-bit unsigned int is going to be displayed as a grayscale image, what you need is akin to a Brightness Transfer Function. Basically, your output function is looking to map the data to the values 0-255, but you don't necessarily want those to correspond directly to millimeters. What if all of your data was from 0-3mm? Then your image would look almost completely black. What if it was all 300-400mm? Then it'd be completely white because it was clipped to 255.
A rudimentary way to do it would be to find the minimum and maximum values, and do this:
double scale = 255.0 / (double)(maxVal - minVal);
for( size_t i = 0; i < std::min(myVec.size(), myVec2.size()); ++i )
{
    myVec2.at(i) = static_cast<uint8_t>((double)(myVec.at(i) - minVal) * scale);
}
Depending on the distribution of your data, you might need to do something a little more complex to get the most out of your dynamic range.
Edit: This assumes your glut function is creating an image; if it is using the 8-bit value as an input to a graph, then you can disregard this.
Edit 2: An update after your other update. If you want to fill a 640x480x4 vector, you are clearly making an image. You need to do what I outlined above, but the 4 channels it is looking for are Red, Green, Blue, and Alpha. The Alpha channel needs to be 255 at all times (it controls how transparent the pixel is, and you don't want any transparency). As for the other 3: if you set all three channels (red, green, and blue) to the same scaled value from the function above, the pixel will appear as grayscale. For example, if my data ranged from 0-25mm, then for a pixel whose value is 10mm I would set the data to 255/(25-0) * 10 = 102, and the pixel would therefore be (102, 102, 102, 255).
Edit 3: Adding wikipedia link about Brightness Transfer Functions - https://en.wikipedia.org/wiki/Color_mapping
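Putting both edits together, here is a minimal sketch of the whole conversion; the function name depthToRgba and the RGBA channel order are my assumptions, so check what your GLUT display code actually expects:

#include <algorithm>
#include <cstdint>
#include <vector>

// Map 16-bit depth values onto an 8-bit grayscale RGBA buffer.
void depthToRgba(const std::vector<uint16_t>& depth, std::vector<uint8_t>& rgba)
{
    if (depth.empty() || rgba.size() < depth.size() * 4)
        return;
    const auto [lo, hi] = std::minmax_element(depth.begin(), depth.end());
    const double scale = 255.0 / std::max(1, *hi - *lo); // avoid divide by zero
    for (size_t i = 0; i < depth.size(); ++i)
    {
        const auto v = static_cast<uint8_t>((depth[i] - *lo) * scale);
        rgba[4 * i + 0] = v;   // red
        rgba[4 * i + 1] = v;   // green
        rgba[4 * i + 2] = v;   // blue
        rgba[4 * i + 3] = 255; // alpha: fully opaque
    }
}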
How might I convert it to the format of:
std::vector<uint8_t> myVec2;
such that myVec2.size() will be twice as big as myVec.size()?
myVec2.reserve(myVec.size() * 2);
for (auto it = begin(myVec); it != end(myVec); ++it)
{
    uint8_t val = static_cast<uint8_t>(*it); // isolate the low 8 bits
    myVec2.push_back(val);
    val = static_cast<uint8_t>((*it) >> 8); // isolate the upper 8 bits
    myVec2.push_back(val);
}
Or you can change the order of the push_back()s if it matters which byte comes first (the upper or the lower).
Straightforward way:
#include <cstring> // for std::memcpy

std::vector<std::uint8_t> myVec2(myVec.size() * 2);
std::memcpy(myVec2.data(), myVec.data(), myVec.size() * sizeof(std::uint16_t));
or, with the standard library, treating the source as raw bytes:
const auto* bytes = reinterpret_cast<const std::uint8_t*>(myVec.data());
std::copy(bytes, bytes + myVec.size() * sizeof(std::uint16_t), begin(myVec2));
Either way the byte order within each pair follows the machine's endianness (low byte first on little-endian), whereas the explicit loop above lets you choose the order.

Converting color bmp to grayscale bmp?

I am trying to convert a colored BMP file to gray-scale BMP. The input bmp is 24 bit and I am producing the same 24 bit bmp at the output, only this time in gray-scale.
The code I am using is
for(int x = 0; x < max; x++)
{
    int lum = (r[x]*0.30) + (g[x]*0.59) + (b[x]*0.11);
    r[x] = lum;
    g[x] = lum;
    b[x] = lum;
}
The r, g, b arrays are the RGB color components, and I have them as char *r, *g, *b.
For some reason I am not getting a clean output. I am attaching the output I am getting with this question; it's patchy and contains white and black areas in places. So what am I doing wrong here?
Is it due to data loss in the calculation of lum, or is there something wrong in storing an int as a char?
Can a gray-scale bmp not be 24 bit? Or is something wrong in the way I am storing the rgb values after conversion?
Any help with this will be much appreciated. Thanks.
These should really be unsigned char; if char happens to be signed on your platform, then this code won't do what you expect.
You need to clamp the output of your calculation to be in [0,255]. Your calculation looks ok, but it's always good to be sure.
Also make sure that the r, g, b arrays are unsigned char. You can get away with a lot of signed/unsigned mixing in int (due to 2's complement overflow covering the mistakes) but when you convert to float the signs must be right.
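A minimal sketch combining both suggestions (the function name toGrayscale is mine): use unsigned char buffers and clamp defensively before storing.

#include <algorithm>

void toGrayscale(unsigned char* r, unsigned char* g, unsigned char* b, int max)
{
    for (int x = 0; x < max; x++)
    {
        int lum = static_cast<int>(r[x] * 0.30 + g[x] * 0.59 + b[x] * 0.11);
        lum = std::clamp(lum, 0, 255); // defensive; the weights sum to 1.0
        r[x] = g[x] = b[x] = static_cast<unsigned char>(lum);
    }
}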

How to composite argb image data on top of xrgb image data

I have a pointer to an 32bit argb image's pixel data and a 32bit xrgb image's pixel data. How can I composite the argb on top of xrgb image while making use of the alpha component?
Visual Studio 2008 C++
Edit:
Is there a quicker (faster processing) way to do the compositing than this:
float alpha = (float)Overlay[3] / 255;
float oneLessAlpha = 1 - alpha;
Destination[2] = (Overlay[2] * alpha + Background[2] * oneLessAlpha);
Destination[1] = (Overlay[1] * alpha + Background[1] * oneLessAlpha);
Destination[0] = (Overlay[0] * alpha + Background[0] * oneLessAlpha);
This depends on what you are trying to achieve. You can assume your second image to have an alpha of 255 everywhere, then compose each pixel by linear interpolation / alpha blending (assuming float values in [0,1], adjust accordingly):
out(x,y) = argb(x,y).rgb * argb(x,y).a + xrgb(x,y).rgb * (1. - argb(x,y).a)
This way, all pixels with no transparency in your argb image will always be displayed "atop", while pixels with full transparency are invisible and replaced by the xrgb pixels. All pixels in between are linearly blended.
Presumably by XRGB you mean a bitmap with four bytes per pixel, but with what would be the alpha channel left at some constant value.
An obvious starting point would be to draw the XRGB bitmap first, and the RGBA bitmap second. When you draw the second, enable blending (glEnable(GL_BLEND);) and set your blend function with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This way the blending depends only on the alpha channel in the source (the RGBA) and ignores any in the destination (the XRGB bitmap that's already been drawn).
Edit: Oops -- somehow I thought I saw some reference to OpenGL, but rereading (and noting the comment) no such thing is there. Doing the job without OpenGL isn't terribly difficult, just generally slower. Let's call the pixels from the two input bitmaps S and D, and the corresponding pixel in the result C. In this case we can compute each pixel in C as:
Cr = Sr * Sa + Dr * (1-Sa)
Cg = Sg * Sa + Dg * (1-Sa)
Cb = Sb * Sa + Db * (1-Sa)
This assumes that you normalize (at least) the A channel to the range of 0..1, and that the ARGB bitmap is S and the XRGB is D.
Here's some code that should work more or less (didn't test it, no compiler on this machine...).
DWORD* pSrc; // Pointer to current row of ARGB bitmap
DWORD* pDst; // Pointer to current row of XRGB bitmap
...
BYTE src_r = GetRValue(*pSrc);
BYTE src_g = GetGValue(*pSrc);
BYTE src_b = GetBValue(*pSrc);
BYTE src_a = *pSrc >> 24;
BYTE dst_r = GetRValue(*pDst);
BYTE dst_g = GetGValue(*pDst);
BYTE dst_b = GetBValue(*pDst);
BYTE dst_a = 255 - src_a;
*pDst = RGB(((src_r * src_a) + (dst_r * dst_a)) >> 8,
            ((src_g * src_a) + (dst_g * dst_a)) >> 8,
            ((src_b * src_a) + (dst_b * dst_a)) >> 8);
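To the "quicker" part of the question: because each channel is only 8 bits, red and blue can share a single 32-bit multiply. A minimal sketch (the name blendFast and the 0xAARRGGBB layout are my assumptions):

#include <cstdint>

// Blend src (ARGB) over dst (XRGB): red and blue are blended together in
// one multiply, green in another. >> 8 is the usual divide-by-256
// approximation.
uint32_t blendFast(uint32_t src, uint32_t dst)
{
    const uint32_t a  = src >> 24;
    const uint32_t na = 255 - a;
    const uint32_t rb = ((src & 0x00ff00ffu) * a + (dst & 0x00ff00ffu) * na) >> 8;
    const uint32_t g  = ((src & 0x0000ff00u) * a + (dst & 0x0000ff00u) * na) >> 8;
    return (rb & 0x00ff00ffu) | (g & 0x0000ff00u);
}

This cuts the multiplies per pixel from six to four and avoids the float conversions in the original code.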

Trying to read raw image data into Java through JNI

I'm using JNI to obtain raw image data in the following format:
The image data is returned in the format of a DATA32 (32 bits) per pixel in a linear array ordered from the top left of the image to the bottom right going from left to right each line. Each pixel has the upper 8 bits as the alpha channel and the lower 8 bits are the blue channel - so a pixel's bits are ARGB (from most to least significant, 8 bits per channel). You must put the data back at some point.
The DATA32 format is essentially an unsigned int in C.
So I obtain an int[] array and then try to create a BufferedImage out of it by
int w = 1920;
int h = 1200;
BufferedImage b = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
int[] f = (new Capture()).capture();
for(int i = 0; i < f.length; i++){
    b.setRGB(x, y, f[i]);
}
f is the array with the pixel data.
According to the Java documentation this should work since BufferedImage.TYPE_INT_ARGB is:
Represents an image with 8-bit RGBA color components packed into integer pixels. The image has a DirectColorModel with alpha. The color data in this image is considered not to be premultiplied with alpha. When this type is used as the imageType argument to a BufferedImage constructor, the created image is consistent with images created in the JDK1.1 and earlier releases.
Unless by 8-bit RGBA they mean that all components added together are encoded in 8 bits? But that is impossible.
This code does work, but the image that is produced is not at all like the image that it should produce. There are tonnes of artifacts. Can anyone see something obviously wrong in here?
Note I obtain my pixel data with
imlib_context_set_image(im);
data = imlib_image_get_data();
in my C code, using the library imlib2 with api http://docs.enlightenment.org/api/imlib2/html/imlib2_8c.html#17817446139a645cc017e9f79124e5a2
I'm an idiot.
This is merely a bug.
I forgot to include how I calculate x,y above.
Basically I was using
int x = i%w;
int y = i/h;
in the for loop, which is wrong. It should be
int x = i%w;
int y = i/w;
Can't believe I made this stupid mistake.