I have a monochrome bitmap. I am using it for collision detection.
// creates the monochrome bitmap
bmpTest = new Bitmap(200, 200, PixelFormat1bppIndexed);
// color and get the pixel color at point (x, y)
Color color;
bmpTest->GetPixel(110,110,&color);
// the only method I know of that I can get a 0 or 1 from.
int b = color.GetB();
// b is 0 when the color is black and 1 when it is not black as desired
Is there a faster way of doing this? The only accessors I can call on the color are GetA()/GetR()/GetG()/GetB(). I am using GetB() because each of the ARGB values comes back as 0 or 1, which is correct, but it seems messy to me.
Is there a way I can read a byte from a monochrome bitmap and get back either a 0 or a 1? (That is the question.)
You should use the LockBits() method for faster access:
BitmapData bitmapData;
Rect rect(0, 0, pBitmap->GetWidth(), pBitmap->GetHeight()); // avoid taking the address of a temporary
pBitmap->LockBits(&rect, ImageLockModeWrite, PixelFormat32bppARGB, &bitmapData);
unsigned int *pRawBitmapOrig = (unsigned int*)bitmapData.Scan0; // for easy access and indexing
unsigned int curColor = pRawBitmapOrig[curY * bitmapData.Stride / 4 + curX];
// ... and call pBitmap->UnlockBits(&bitmapData); when you are done
int b = curColor & 0xff;
int g = (curColor & 0xff00) >> 8;
int r = (curColor & 0xff0000) >> 16;
int a = (curColor & 0xff000000) >> 24;
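Since the bitmap in the question is 1bpp, you can also lock it in its native format and test the relevant bit directly instead of converting to 32bppARGB. A rough sketch (GDI+ assumed, error handling omitted; the bit is a palette index, typically 0 = black and 1 = white for the default palette):
BitmapData data;
Rect rect(0, 0, bmpTest->GetWidth(), bmpTest->GetHeight());
bmpTest->LockBits(&rect, ImageLockModeRead, PixelFormat1bppIndexed, &data);

// Pixel (x, y) lives in byte Scan0[y * Stride + x / 8], most significant bit first.
BYTE* scan0 = reinterpret_cast<BYTE*>(data.Scan0);
int x = 110, y = 110;
int bit = (scan0[y * data.Stride + x / 8] >> (7 - (x % 8))) & 1; // 0 or 1

bmpTest->UnlockBits(&data);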
I'm currently attempting to create a color gradient class for my Mandelbrot Set explorer.
It reads the color constraints (an RGBA8888 color and a position between 0 and 1) from a text file and adds them to a vector, which is later on used to determine the color at a given position.
To compute a color, the algorithm searches for the nearest constraint on either side of the given position, splits both colors into their four channels, and then, for each channel, takes the lower of the two values and adds to it a portion of the difference equal to the ratio (x-lpos)/(upos-lpos). Afterwards, the channels are shifted and ORed together and returned as an RGBA8888 unsigned integer. (See the code below.)
EDIT: I completely rewrote the gradient class, fixing some issues and making it more readable for the sake of debugging (it gets slow as hell, though, but -Os more or less takes care of that). However, it's still not working as it's supposed to.
class Gradient { //remade, Some irrelevant methods and de-/constructors removed
private:
map<double, unsigned int> constraints;
public:
unsigned int operator[](double value) {
//Forbid out-of-range values, return black
if (value < 0 || value > 1+1E-10) return 0xff;
//Find upper and lower constraint
auto upperC = constraints.lower_bound(value);
if (upperC == constraints.end()) upperC = constraints.begin();
auto lowerC = upperC == constraints.begin() ? prev(constraints.end(), 1) : prev(upperC, 1);
if (value == lowerC->first) return lowerC->second;
double lpos = lowerC->first;
double upos = upperC->first;
if (upos < lpos) upos += 1;
//lower color channels
unsigned char lred = (lowerC->second >> 24) & 0xff;
unsigned char lgreen = (lowerC->second >> 16) & 0xff;
unsigned char lblue = (lowerC->second >> 8) & 0xff;
unsigned char lalpha = lowerC->second & 0xff;
//upper color channels
unsigned char ured = (upperC->second >> 24) & 0xff;
unsigned char ugreen = (upperC->second >> 16) & 0xff;
unsigned char ublue = (upperC->second >> 8) & 0xff;
unsigned char ualpha = upperC->second & 0xff;
unsigned char red = 0, green = 0, blue = 0, alpha = 0xff;
//Compute each channel using
// lower color + dist(lower, x)/dist(lower, upper) * diff(lower color, upper color)
if (lred < ured)
red = lred + (value - lpos)/(upos - lpos) * (ured - lred);
else red = ured + (upos - value)/(upos - lpos) * (ured - lred);
if (lgreen < ugreen)
green = lgreen + (value - lpos)/(upos - lpos) * (ugreen - green);
else green = ugreen + (upos - value)/(upos - lpos) * (ugreen - lgreen);
if (lblue < ublue)
blue = lblue + (value - lpos)/(upos - lpos) * (ublue - lblue);
else blue = ublue + (upos - value)/(upos - lpos) * (ublue - lblue);
if (lalpha < ualpha)
alpha = lalpha + (value - lpos)/(upos - lpos) * (ualpha - lalpha);
else alpha = ualpha + (upos - value)/(upos - lpos) * (ualpha - lalpha);
//Merge channels together and return
return (red << 24) | (green << 16) | (blue << 8 ) | alpha;
}
void addConstraint(unsigned int color, double position) {
constraints[position] = color;
}
};
Usage in the update method:
image[r + rres*i] = grd[ratio];
//With image being a vector<unsigned int>, which is then used as data source for a `SDL_Texture` using `SDL_UpdateTexture`
It only works partially, though. When I only use a black/white gradient, the resulting image is as intended:
Gradient file:
2
0 000000ff
1 ffffffff
However, when I use a more colorful gradient (a linear version of the Ultra Fractal gradient, input file below), the image is far from the intended result and still doesn't show the desired coloring:
Gradient file:
5
0 000764ff
.16 206bcbff
.42 edffffff
.6425 ffaa00ff
0.8575 000200ff
What am I doing wrong? I've rewritten the operator[] method multiple times, without anything changing.
Questions for clarification or general remarks on my code are welcome.
Your problem is due to an over-complicated interpolation function.
When linearly interpolating in the range a .. b using another factor r (with range 0 .. 1) to indicate the position in that range, it's completely unnecessary to determine whether a or b is greater. Either way around you can just use:
result = a + r * (b - a)
If r == 0 this is trivially shown to be a, and if r == 1 the a - a cancels out leaving just b. Similarly if r == 0.5 then the result is (a + b) / 2. It simply doesn't matter if a > b or vice-versa.
The preferred formulation in your case, since it avoids the b - a subtraction that could hit range clamping limits, is:
result = (1 - r) * a + r * b;
which, given appropriate * and + operators on your new RGBA class, gives this trivial implementation of your mid function (with no need for per-component operations, since they're handled in those operators):
static RGBA mid(const RGBA& a, const RGBA& b, double r) {
return (1.0 - r) * a + r * b;
}
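For reference, the * and + operators that mid() relies on could look something like the following minimal sketch. The member names and the clamp helper are only illustrative, not the actual class from the gist below, and clamping is done in the constructor as suggested:
struct RGBA {
    double r, g, b, a;   // kept as doubles during the arithmetic

    RGBA(double r_, double g_, double b_, double a_)
        : r(clamp(r_)), g(clamp(g_)), b(clamp(b_)), a(clamp(a_)) {}

    static double clamp(double v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }
};

// Scale every channel by a factor.
RGBA operator*(double f, const RGBA& c) {
    return RGBA(f * c.r, f * c.g, f * c.b, f * c.a);
}

// Add two colours channel-wise.
RGBA operator+(const RGBA& x, const RGBA& y) {
    return RGBA(x.r + y.r, x.g + y.g, x.b + y.b, x.a + y.a);
}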
See https://gist.github.com/raybellis/4f69345d8e0c4e83411b, where I've also refactored your RGBA class to put the clamping operations in the constructor rather than within the individual operators.
After some extensive trial and error, I finally managed to get it working. (At this point, many thanks to @Alnitak, who suggested using a separate RGBA color class.)
The major problem was that, when a color value of the upper constraint was lower than that of the lower constraint, I still multiplied by the ratio (x-l)/(u-l), when instead I should have used its counterpart, 1 - (x-l)/(u-l), taking the color of the upper constraint as the basis for the new one.
Here follows the implementation of the RGBA class and the fixed gradient class:
class RGBA {
private:
unsigned int red = 0, green = 0, blue = 0, alpha = 0;
public:
static RGBA mid(RGBA a, RGBA b, double r) {
RGBA color;
if (a.red < b.red) color.red = a.red + (b.red - a.red) * r;
else color.red = b.red + (a.red - b.red) * (1-r);
if (a.green < b.green) color.green = a.green + (b.green - a.green) * r;
else color.green = b.green + (a.green - b.green) * (1-r);
if (a.blue < b.blue) color.blue = a.blue + (b.blue - a.blue) * r;
else color.blue = b.blue + (a.blue - b.blue) * (1-r);
if (a.alpha < b.alpha) color.alpha = a.alpha + (b.alpha - a.alpha) * r;
else color.alpha = b.alpha + (a.alpha - b.alpha) * (1-r);
return color;
}
RGBA() {};
RGBA(unsigned char _red, unsigned char _green, unsigned char _blue, unsigned char _alpha) :
red(_red), green(_green), blue(_blue), alpha(_alpha) {};
RGBA(unsigned int _rgba) {
red = (_rgba >> 24) & 0xff;
green = (_rgba >> 16) & 0xff;
blue = (_rgba >> 8) & 0xff;
alpha = _rgba & 0xff;
};
operator unsigned int() {
return (red << 24) | (green << 16) | (blue << 8 ) | alpha;
}
RGBA operator+(const RGBA& o) const {
return RGBA((red + o.red) & 0xff, (green + o.green) & 0xff, (blue + o.blue) & 0xff, (alpha + o.alpha) & 0xff);
}
RGBA operator-(const RGBA& o) const {
return RGBA(red > o.red ? red - o.red : 0, green > o.green ? green - o.green : 0, blue > o.blue ? blue - o.blue : 0, alpha > o.alpha ? alpha - o.alpha : 0); // clamp each channel at 0; min(x, 0u) is always 0 for unsigned values
}
RGBA operator~() {
return RGBA(0xff - red, 0xff - green, 0xff - blue, 0xff - alpha);
}
RGBA operator*(double _f) {
return RGBA((unsigned int) min(red * _f, 255.) & 0xff, (unsigned int) min(green * _f, 255.) & 0xff,
(unsigned int) min(blue * _f, 255.) & 0xff, (unsigned int) min(alpha * _f, 255.) & 0xff); // clamp each channel at 255, not 0
}
};
class Gradient {
private:
map<double, RGBA> constraints;
public:
Gradient() {
constraints[0] = RGBA(0x007700ff);
constraints[1] = RGBA(0xffffffff);
}
~Gradient() {}
void addConstraint(RGBA color, double position) {
constraints[position] = color;
}
void reset() {
constraints.clear();
}
unsigned int operator[](double value) {
if (value < 0 || value > 1+1E-10) return 0xff;
auto upperC = constraints.lower_bound(value);
if (upperC == constraints.end()) upperC = constraints.begin();
auto lowerC = upperC == constraints.begin() ? prev(constraints.end(), 1) : prev(upperC, 1);
if (value == lowerC->first) return lowerC->second;
double lpos = lowerC->first;
double upos = upperC->first;
if (upos < lpos) upos += 1;
RGBA lower = lowerC->second;
RGBA upper = upperC->second;
RGBA color = RGBA::mid(lower, upper, (value-lpos)/(upos-lpos));
return color;
}
size_t size() {
return constraints.size();
}
};
This produces the intended result.
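For completeness, a hypothetical usage sketch wiring up the five constraints from the colorful gradient file above (grd and ratio correspond to the names used in the question's update loop):
Gradient grd;
grd.reset();                                   // drop the default black/white constraints
grd.addConstraint(RGBA(0x000764ffu), 0);
grd.addConstraint(RGBA(0x206bcbffu), 0.16);
grd.addConstraint(RGBA(0xedffffffu), 0.42);
grd.addConstraint(RGBA(0xffaa00ffu), 0.6425);
grd.addConstraint(RGBA(0x000200ffu), 0.8575);

double ratio = 0.5;                            // e.g. a normalized iteration count
unsigned int pixel = grd[ratio];               // packed RGBA8888 value for the texture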
I have a starting color: 0xffff00ff, which is a:255, r:255, g:0, b:255.
The goal is to change the alpha channel of the color to be less opaque based on a percentage. i.e. 50% opacity for that color is roughly 0x80ff00ff.
How I've tried to reach the solution:
DWORD cx = 0xffff00ff;
DWORD cn = .5;
DWORD nc = cx*cn;
DWORD cx = 0xffff00ff;
float cn = .5;
DWORD alphaMask=0xff000000;
DWORD nc = (cx|alphaMask)&((DWORD)(alphaMask*cn)|(~alphaMask));
This should do the trick. All I'm doing here is setting the top 8 bits of the DWORD to 1s with the OR (symbolized by '|'), and then ANDing those bits with the correct value you want them to have, which is the alpha mask times cn. Of course, I cast the result of the multiplication to make it a DWORD again.
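Worked through step by step with the values from the question (the intermediate names are just for illustration), this gives 0x7fff00ff, which matches the "roughly 0x80ff00ff" target:
DWORD cx        = 0xffff00ff;
float cn        = 0.5f;
DWORD alphaMask = 0xff000000;

DWORD newAlpha = (DWORD)(alphaMask * cn);                 // 0x7f800000
DWORD keepRGB  = ~alphaMask;                              // 0x00ffffff
DWORD nc       = (cx | alphaMask) & (newAlpha | keepRGB); // 0x7fff00ff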
This is tested code (on Linux). However, you might find a simpler answer. Note: this is RGBA, not ARGB as referenced in your question.
double transparency = 0.500;
unsigned char *current_image_data_iterator = reinterpret_cast<unsigned char*>( const_cast<char *>( this->data.getCString() ) );
unsigned char *new_image_data_iterator = reinterpret_cast<unsigned char*>( const_cast<char *>( new_image_data->data.getCString() ) );
size_t x;
//cout << "transparency: " << transparency << endl;
for( x = 0; x < data_length; x += 4 ){
//rgb data is the same
*(new_image_data_iterator + x) = *(current_image_data_iterator + x);
*(new_image_data_iterator + x + 1) = *(current_image_data_iterator + x + 1);
*(new_image_data_iterator + x + 2) = *(current_image_data_iterator + x + 2);
//multiply the current opacity by the applied transparency (transparency is a 0..1 factor, so no division by 255 here)
*(new_image_data_iterator + x + 3) = uint8_t( double(*(current_image_data_iterator + x + 3)) * transparency );
//cout << "Current Alpha: " << dec << static_cast<int>( *(current_image_data_iterator + x + 3) ) << endl;
//cout << "New Alpha: " << double(*(current_image_data_iterator + x + 3)) * transparency << endl;
//cout << "----" << endl;
}
#include <cstdint>
#include <iostream>

union ARGB
{
std::uint32_t Colour;
struct { std::uint8_t B, G, R, A; }; // anonymous struct (a common compiler extension); byte order assumes a little-endian machine
};

int main()
{
std::uint32_t cx = 0xffff00ff; // DWORD in the original
reinterpret_cast<ARGB*>(&cx)->A = reinterpret_cast<ARGB*>(&cx)->A / 2;
std::cout<<std::hex<<cx; // prints 7fff00ff
}
The solution I chose to go with:
DWORD changeOpacity(DWORD color, float opacity) {
int alpha = (color >> 24) & 0xff;
int r = (color >> 16) & 0xff;
int g = (color >> 8) & 0xff;
int b = color & 0xff;
int newAlpha = ceil(alpha * opacity);
UINT newColor = r << 16;
newColor += g << 8;
newColor += b;
newColor += (newAlpha << 24);
return (DWORD)newColor;
}
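For example, with the color from the question this gives exactly the value mentioned as the goal, since ceil(255 * 0.5) = 128 = 0x80:
DWORD faded = changeOpacity(0xffff00ff, 0.5f); // 0x80ff00ff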
I understand your question as: I wish to change a given rgba color component by a certain factor while keeping the same overall transparency.
For a color with full alpha (1.0 or 255), this is trivial: simply multiply the component without touching the others:
//typedef unsigned char uint8
enum COMPONENT {
RED,
GREEN,
BLUE,
ALPHA
};
struct rgba {
uint8 components[4];
// uint8 alpha, blue, green, red; // little endian
uint8 &operator[](int index){
return components[index];
}
};
rgba color;
if (color[ALPHA] == 255)
color[RED] *= factor;
else
ComponentFactor(color, RED, factor);
There's probably not a single answer to that question in the general case. Consider that colors may alternatively be encoded in HSL or HSV. You might want to keep some of these parameters fixed and allow others to change.
My approach to this problem would be to first try to find the hue distance between the source and target colors at full alpha, and then convert the real source color to HSV, apply the change in hue, then convert back to RGBA. Obviously, that second step is not necessary if the alpha is actually 1.0.
In pseudo code:
rgba ComponentFactor(rgba color, int component, double factor){
rgba fsrc = color, ftgt;
fsrc.alpha = 1.0; // set full alpha
ftgt = fsrc;
ftgt[component] *= factor; // apply factor
hsv hsrc = fsrc, htgt = ftgt; // convert to hsv color space
int distance = htgt.hue - hsrc.hue; // find the hue difference
hsv tmp = color; // convert actual color to hsv
tmp.hue += distance; // apply change in hue
rgba res = tmp; // convert back to RGBA space
return res;
}
Note how the above relies on the rgba and hsv types having implicit conversion constructors. Algorithms for the conversion can easily be found with a web search. It should also be easy to derive a struct definition for hsv from the rgba one, or to include individual component access as field members (rather than using the [] operator).
For instance:
//typedef DWORD uint32;
struct rgba {
union {
uint8 components[4];
struct {
uint8 alpha, blue, green, red; // little-endian platform
};
uint32 raw;
};
uint8 &operator[](int index){
return components[3 - index]; // RED = 0 maps to the highest byte
}
rgba (uint32 raw_) { raw = raw_; }
rgba (uint8 r, uint8 g, uint8 b, uint8 a) {
red = r; green = g; blue = b; alpha = a;
}
};
Perhaps you will have to find a hue factor rather than a distance, or tweak other HSV components to achieve the desired result.
I create an image using
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
// more code - not relevant - removed for debugging
image = UIGraphicsGetImageFromCurrentImageContext(); // the image is now ARGB
UIGraphicsEndImageContext();
Then I try to find the color of a pixel (using the code by Minas Petterson from here: Get Pixel color of UIImage).
But since the image is now in ARGB format, I had to modify the code like this:
alpha = data[pixelInfo];
red = data[(pixelInfo + 1)];
green = data[pixelInfo + 2];
blue = data[pixelInfo + 3];
However this did not work.
The problem is that (for example) a red pixel, which in RGBA would be represented as 1001 (actually 255 0 0 255, but for simplicity I use 0-to-1 values), is represented in the image as 0011 and not (as I thought) 1100.
Any ideas why? Am I doing something wrong?
PS. The code I have to use looks like it has to be this:
alpha = 255-data[pixelInfo];
red = 255-data[(pixelInfo + 1)];
green = 255-data[pixelInfo + 2];
blue = 255-data[pixelInfo + 3];
There are some problems that arise here:
"In some contexts, primarily OpenGL, the term "RGBA" actually means the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. OpenGL describes the above format as "BGRA" on a little-endian machine and "ARGB" on a big-endian machine." (wiki)
Graphics hardware is backed by OpenGL on OS X/iOS, so I assume that we deal with little-endian data (Intel/ARM processors). So, when the format is kCGImageAlphaPremultipliedFirst (ARGB) on a little-endian machine, it's BGRA. But don't worry, there is an easy way to fix that.
Assuming that it's ARGB, kCGImageAlphaPremultipliedFirst, 8 bits per component, 4 components per pixel (that's what UIGraphicsGetImageFromCurrentImageContext() returns), don't-care endianness:
- (void)parsePixelValuesFromPixel:(const uint8_t *)pixel
intoBuffer:(out uint8_t[4])buffer {
static NSInteger const kRedIndex = 0;
static NSInteger const kGreenIndex = 1;
static NSInteger const kBlueIndex = 2;
static NSInteger const kAlphaIndex = 3;
int32_t *wholePixel = (int32_t *)pixel;
int32_t value = OSSwapHostToBigConstInt32(*wholePixel);
// Now we have value in big-endian format, regardless of our machine endianness (ARGB now).
buffer[kAlphaIndex] = value & 0xFF;
buffer[kRedIndex] = (value >> 8) & 0xFF;
buffer[kGreenIndex] = (value >> 16) & 0xFF;
buffer[kBlueIndex] = (value >> 24) & 0xFF;
}
Get RGB Channels From Pixel Value Without Any Library
I'm trying to get the RGB channels of each pixel that I read from an image.
I read each byte from the image using getchar.
After a little searching on the web I found that in a BMP, for example, the color data starts after the 36th byte. I know that each channel is 8 bits and the whole RGB value is 8 bits of red, 8 bits of green and 8 bits of blue. My question is: how do I extract them from a pixel value? For example:
pixel = getchar(image);
What can I do to extract those channels? In addition, I saw this example in Java but don't know how to implement it in C++:
int rgb[] = new int[] {
(argb >> 16) & 0xff, //red
(argb >> 8) & 0xff, //green
(argb ) & 0xff //blue
};
I guess that argb is the "pixel" var I mentioned before.
Thanks.
Assuming that it's encoded as ABGR and you have one integer value per pixel, this should do the trick:
int r = color & 0xff;
int g = (color >> 8) & 0xff;
int b = (color >> 16) & 0xff;
int a = (color >> 24) & 0xff;
When reading single bytes, it depends on the endianness of the format. Since there are two possible orders, and usage is of course inconsistent, I'll show both, with the reading done as a pseudo-function:
RGBA:
int r = readByte();
int g = readByte();
int b = readByte();
int a = readByte();
ABGR:
int a = readByte();
int b = readByte();
int g = readByte();
int r = readByte();
How it's encoded depends on how your file format is laid out. I've also seen BGRA and ARGB orders and planar RGB (each channel is a separate buffer of width x height bytes).
It looks like wikipedia has a pretty good overview on what BMP files look like:
http://en.wikipedia.org/wiki/BMP_file_format
Since it seems to be a bit more complicated I'd strongly suggest using a library for this instead of rolling your own.
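If you still want to do it without a library for the simplest case, here is a rough sketch, not production code, assuming an uncompressed 24-bit BMP with a positive (bottom-up) height; the pixel-array offset is read from the header, rows are padded to multiples of 4 bytes, and channels are stored in B, G, R order:
#include <cstdio>
#include <cstdint>
#include <vector>

int main() {
    std::FILE* f = std::fopen("image.bmp", "rb");
    if (!f) return 1;

    unsigned char header[54];
    if (std::fread(header, 1, 54, f) != 54) return 1;

    // Read a little-endian 32-bit field starting at byte i of the header.
    auto u32 = [&](int i) {
        return std::uint32_t(header[i]) | std::uint32_t(header[i + 1]) << 8
             | std::uint32_t(header[i + 2]) << 16 | std::uint32_t(header[i + 3]) << 24;
    };
    std::uint32_t offset = u32(10);                   // start of the pixel array
    std::int32_t  width  = std::int32_t(u32(18));
    std::int32_t  height = std::int32_t(u32(22));

    std::fseek(f, offset, SEEK_SET);
    int rowSize = ((width * 3 + 3) / 4) * 4;          // each row is padded to a multiple of 4 bytes
    std::vector<unsigned char> row(rowSize);

    for (int y = 0; y < height; ++y) {                // rows are stored bottom to top
        std::fread(row.data(), 1, rowSize, f);
        for (int x = 0; x < width; ++x) {
            unsigned char b = row[x * 3];             // BMP stores channels as B, G, R
            unsigned char g = row[x * 3 + 1];
            unsigned char r = row[x * 3 + 2];
            (void)r; (void)g; (void)b;                // ... use the channels here ...
        }
    }
    std::fclose(f);
}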
If I do the following using Qt:
Load a bitmap in QImage::Format_RGB32.
Convert its pixels to RGB565 (no Qt format for this so I must do it "by hand").
Create a new bitmap the same size as the one loaded in step 1.
Convert the RGB565 buffer pixels back to RGB88 in to the image created in step 3.
The image created in step 4 looks like the image from step 1; however, they're not exactly the same if you compare the RGB values.
Repeating steps 2 to 4 results in the final image losing colour - it seems to become darker and darker.
Here are my conversion functions:
qRgb RGB565ToRGB888( unsigned short int aTextel )
{
unsigned char r = (((aTextel)&0x01F) <<3);
unsigned char g = (((aTextel)&0x03E0) >>2);
unsigned char b = (((aTextel)&0x7C00 )>>7);
return qRgb( r, g, b, 255 );
}
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
int red = ( aPixel >> 16) & 0xFF;
int green = ( aPixel >> 8 ) & 0xFF;
int blue = aPixel & 0xFF;
unsigned short B = (blue >> 3) & 0x001F;
unsigned short G = ((green >> 2) < 5) & 0x07E0;
unsigned short R = ((red >> 3) < 11) & 0xF800;
return (unsigned short int) (R | G | B);
}
An example I found from my test image which doesn't convert properly is 4278192128 which gets converted back from RGB565 to RGB888 as 4278190080.
Edit: I should also mention that the original source data is RGB565 (which my test RGB888 image was created from). I am only converting to RGB888 for display purposes but would like to convert back to RGB565 afterwards rather than keeping two copies of the data.
Beforehand I want to mention that the component order in your two conversion functions isn't the same. In the 565 -> 888 conversion, you assume that the red component uses the low-order bits (0x001F), but when encoding the 5 bits of the red component, you put them at the high-order bits (0xF800). Assuming that you want a component order analogous to 0xAARRGGBB (the binary representation in RGB565 is then 0bRRRRRGGGGGGBBBBB), you need to change the variable names in your RGB565ToRGB888 method. I fixed this in the code below.
Your RGB565 to RGB888 conversion is buggy. For the green channel, you extract only 5 bits, which gives you 7 bits instead of 8 bits in the result. For the blue channel you take the following bits, which is a knock-on error. This should fix it:
QRgb RGB565ToRGB888( unsigned short int aTextel )
{
// changed order of variable names
unsigned char b = (((aTextel)&0x001F) << 3);
unsigned char g = (((aTextel)&0x07E0) >> 3); // Fixed: shift >> 5 and << 2
unsigned char r = (((aTextel)&0xF800) >> 8); // shift >> 11 and << 3
return qRgba( r, g, b, 255 ); // qRgb() takes only three arguments; qRgba() includes alpha
}
In the other function, you accidentally wrote less-than operators instead of left-shift operators. This should fix it:
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
int red = ( aPixel >> 16) & 0xFF; // why not qRed(aPixel) etc. ?
int green = ( aPixel >> 8 ) & 0xFF;
int blue = aPixel & 0xFF;
unsigned short B = (blue >> 3) & 0x001F;
unsigned short G = ((green >> 2) << 5) & 0x07E0; // not <
unsigned short R = ((red >> 3) << 11) & 0xF800; // not <
return (unsigned short int) (R | G | B);
}
Note that you can use the already existing (inline) functions qRed, qGreen, qBlue for component extraction analogous to qRgb for color construction from components.
Also note that the final bit masks in RGB888ToRGB565 are optional, as the component values are in the 8-bit-range and you cropped them by first right-, then left-shifting the values.
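As a quick sanity check, the value quoted in the question now survives a round trip through the fixed functions, because the low-order bits that RGB565 drops happen to be zero in that color (a small sketch, not part of the original answer):
QRgb original = 4278192128u;                      // 0xFF002000, i.e. green = 32
unsigned short packed = RGB888ToRGB565(original); // 0x0100
QRgb restored = RGB565ToRGB888(packed);           // 4278192128 again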