I ran a demo BMP file format helper program, "DDDemo.exe", to help me visualize the format of a 32x1 pixel BMP file (monochromatic). I'm okay with the two header sections but don't seem to understand the color table and pixel bits portions. I made two 32x1 pixel BMP files to help me compare (please see attached).
Can someone help me understand how the "pixel bits" relate to the color map?
UPDATE: After some trial and error I was finally able to write a 32x1 pixel monochromatic BMP. Although it has different pixel bits from the attached images, this tool helped with the header and color mapping concepts. Thank you for everyone's input.
An unset bit in the PIXEL BITS refers to the first color table entry (00,00,00), black, and a set bit refers to the second color table entry (FF,FF,FF), white.
"The 1-bit per pixel (1bpp) format supports 2 distinct colors, (for example: black and white, or yellow and pink). The pixel values are stored in each bit, with the first (left-most) pixel in the most-significant bit of the first byte. Each bit is an index into a table of 2 colors. This Color Table is in 32bpp 8.8.8.0.8 RGBAX format. An unset bit will refer to the first color table entry, and a set bit will refer to the last (second) color table entry." - BMP file format
The color table for these images is simply indicating that there are two colors in the image:
Color 0 is (00, 00, 00) -- pure black
Color 1 is (FF, FF, FF) -- pure white
The image compression method shown (BI_RGB -- uncompressed) doesn't make sense with the given pixel data and images, though.
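For what it's worth, here is a minimal sketch in C++ of how the three pieces (the two headers, the 2-entry color table, and the pixel bits) fit together in a 32x1 1bpp file like the ones discussed above. The alternating 0xAA pixel bytes are just an arbitrary test pattern, not the attached images' data:

    #include <cstdint>
    #include <fstream>
    #include <vector>

    // Writes a 32x1, 1-bit-per-pixel BMP: 2-entry color table (black, white),
    // pixel bits packed MSB-first, each row padded to a 4-byte boundary.
    int main() {
        std::vector<uint8_t> bmp;
        auto put16 = [&](uint16_t v) { bmp.push_back(v & 0xFF); bmp.push_back(v >> 8); };
        auto put32 = [&](uint32_t v) { for (int i = 0; i < 4; ++i) bmp.push_back((v >> (8 * i)) & 0xFF); };

        // BITMAPFILEHEADER (14 bytes)
        bmp.push_back('B'); bmp.push_back('M');
        put32(14 + 40 + 8 + 4);   // file size: headers + color table + one padded row
        put16(0); put16(0);       // reserved
        put32(14 + 40 + 8);       // offset to the pixel bits (62)

        // BITMAPINFOHEADER (40 bytes)
        put32(40);                // header size
        put32(32); put32(1);      // width, height
        put16(1);                 // color planes
        put16(1);                 // bits per pixel: 1bpp
        put32(0);                 // compression: BI_RGB (uncompressed)
        put32(4);                 // image size: one 4-byte row
        put32(2835); put32(2835); // horizontal/vertical pixels per metre (~72 DPI)
        put32(2); put32(2);       // colors in table / important colors

        // Color table: entry 0 = black, entry 1 = white, each stored as B,G,R,reserved
        const uint8_t table[] = {0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0x00};
        bmp.insert(bmp.end(), table, table + 8);

        // Pixel bits: 0xAA = 10101010, so the left-most pixel (MSB) is white,
        // the next black, and so on across the 32 pixels (4 bytes).
        for (int i = 0; i < 4; ++i) bmp.push_back(0xAA);

        std::ofstream out("demo_32x1.bmp", std::ios::binary);
        out.write(reinterpret_cast<const char*>(bmp.data()), bmp.size());
    }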
I was wondering how to determine the equivalent RGB values for a grayscale image. The original image is grayscale, and everything I have found online converts an RGB image's pixel values to grayscale pixel values. I can already read in the image. Ideally, this would be for Xcode.
I was wondering if there is a class which would do this for me. If so, and you could point me to it, that would be great. I will read up on it.
Any help is greatly appreciated.
NOTE: I am a beginner in C++ and do not have time to learn everything formally; I have to learn all of my programming on the fly.
You need more information to transform from simple grayscale back to RGB. When the reverse operation (RGB to grayscale) is performed, the color information is "lost": the three channels are collapsed into a single value (depending on the algorithm, each channel may have a different weight in the final computation).
Digital cameras usually store more information per pixel: around 12 bits per channel in 35mm format and 14 bits per channel in medium format (those bit counts are typical; some products offer less or even more).
Thanks to those additional bits per channel, the camera can compute the "real" color, or what it thinks is the real color, based on some parameters.
TL;DR: You can't without more data from your source, in this case the image.
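To make the "different weight per channel" point concrete, here is one common RGB-to-grayscale mapping (the ITU-R BT.601 luma weights); other software uses other weights, which is exactly why the reverse direction is ambiguous:

    #include <cstdint>

    // ITU-R BT.601 luma weights; once the three channels are collapsed into
    // this single value, the original r, g, b cannot be recovered from it.
    uint8_t rgb_to_gray(uint8_t r, uint8_t g, uint8_t b) {
        return static_cast<uint8_t>(0.299 * r + 0.587 * g + 0.114 * b);
    }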
You can convert a gray value to RGB by setting each component of the RGB value to the gray value:
ColorRGB myColorRGB = ColorRGBMake(myGrayValue, myGrayValue, myGrayValue); // ColorRGB/ColorRGBMake stand in for whatever color type and constructor your framework provides
I would like to extract pixel data from a frame which is saved in RGBX8888, using C++. Can someone provide a detailed description of the format and any extra information that will help me analyse the image?
Each pixel consists of four bytes. The first three are the 8-bit primary colour components (Red, Green, Blue, in that order), on a linear scale from 0 (none) to 255 (saturated). The fourth (X) is unused; it is there to align each pixel on a 4-byte boundary.
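As a sketch (assuming the frame is tightly packed, with no extra padding at the end of each row; if your source supplies a row stride, use that instead of width * 4):

    #include <cstddef>
    #include <cstdint>

    // One RGBX8888 pixel occupies 4 bytes: R, G, B, then a padding byte (X).
    struct RGB { uint8_t r, g, b; };

    // Reads the pixel at (x, y) from a tightly packed RGBX8888 frame.
    RGB pixel_at(const uint8_t* frame, std::size_t width, std::size_t x, std::size_t y) {
        const uint8_t* p = frame + (y * width + x) * 4;
        return { p[0], p[1], p[2] };  // p[3] is the unused X byte
    }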
I have data for every pixel: one byte red, one byte green, one byte blue. I need to pack this into an 8-bit bitmap, so I have only one byte per pixel. How do I transform RGB (three bytes) into one byte for the bitmap format?
(I am using C++ and cannot use any external libraries.)
I think you misunderstood how to form a bitmap structure. You do not need to pack (somehow) 3 bytes into one. That is not possible, after all, unless you throw away information (as special image formats like GL_R3_G3_B2 do).
The BMP file format wiki page shows the detailed BMP format: a header, followed by data. Depending on what you set in the header, it is possible to form a BMP image containing RGB data where each component is one byte.
First you need to decide how many bits you want to allocate for each color:
3 bits per color would overflow a byte (9 bits);
2 bits per color underfills it (6 bits, leaving 2 unused).
In a three-byte RGB bitmap you have one byte to represent each color's intensity, where 0 is the minimum and 255 the maximum. When you convert it to a 1-byte bitmap (assuming you choose 2 bits per color), the transform for each channel should be:
red_byte / 64 (and likewise for green and blue)
i.e. you keep only 4 shades out of a spectrum of 256 shades per color.
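A minimal sketch of that 2-bits-per-channel transform (dividing by 64 is the same as shifting right by 6; where the three 2-bit fields sit inside the byte is a free choice):

    #include <cstdint>

    // Packs 8-bit R, G, B down to 2 bits each (the top 2 bits of each byte),
    // giving 4 shades per channel in a 6-bit value stored in one byte.
    uint8_t pack_rgb222(uint8_t r, uint8_t g, uint8_t b) {
        return static_cast<uint8_t>(((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6));
    }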
First you have to produce 256 colors palette that best fits your source image.
Then you need to dither the image using the palette you've generated.
Both problems have many well-known solutions. However, it's impossible to produce a high-quality result completely automatically: different approaches work best for different source images. Photoshop, for example, provides a UI for tuning the parameters of this process.
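For the mapping step, a minimal un-dithered sketch is to send each pixel to the index of the nearest palette entry; real quantizers improve on this by spreading each pixel's quantization error onto its neighbours (e.g. Floyd-Steinberg dithering):

    #include <cstddef>
    #include <cstdint>
    #include <limits>

    struct RGB { uint8_t r, g, b; };

    // Returns the index of the palette entry closest to c, by squared
    // Euclidean distance in RGB space.
    std::size_t nearest_index(RGB c, const RGB* palette, std::size_t n) {
        std::size_t best = 0;
        long best_d = std::numeric_limits<long>::max();
        for (std::size_t i = 0; i < n; ++i) {
            long dr = c.r - palette[i].r;
            long dg = c.g - palette[i].g;
            long db = c.b - palette[i].b;
            long d = dr * dr + dg * dg + db * db;
            if (d < best_d) { best_d = d; best = i; }
        }
        return best;
    }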
I was reading this paper for a project using ImageMagick and C++.
We train on 1.6 million 32*32 color images that have been preprocessed
by subtracting from each pixel its mean value over all images and then
dividing by the standard deviation of all pixels over all images.
I'm having trouble distinguishing between "from each pixel its mean value over all images" and "standard deviation of all pixels over all images".
Since I'm dealing with color images, can I just take the RGB values of each pixel as one value, or should I calculate the mean and SD for each color separately?
For example, if I have r=255, g=255, b=255, can I take the pixel value as (r<<16)+(g<<8)+b?
Color channel values should be used independently. If you used a combined 32-bit representation of the pixels, you would get large numeric differences between very similar colors that differ only slightly in the red or green channel.
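Here is one reasonable reading of the quoted preprocessing, sketched per channel (so it would be run three times, once each on the R, G and B planes). Treating the mean as a per-pixel-position "mean image" and the standard deviation as a single scalar over everything is my interpretation of the sentence, so check it against the paper:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // images[i][p] holds pixel position p of image i, for ONE color channel.
    // Step 1: subtract from each pixel position its mean across all images.
    // Step 2: divide by one standard deviation computed over all pixels of
    //         all images (after the mean subtraction).
    void preprocess_channel(std::vector<std::vector<double>>& images) {
        if (images.empty()) return;
        const std::size_t n = images.size(), pixels = images[0].size();

        // Per-pixel-position mean over all images (the "mean image").
        std::vector<double> mean(pixels, 0.0);
        for (const auto& img : images)
            for (std::size_t p = 0; p < pixels; ++p) mean[p] += img[p] / n;
        for (auto& img : images)
            for (std::size_t p = 0; p < pixels; ++p) img[p] -= mean[p];

        // Single scalar standard deviation over all pixels of all images.
        double sq = 0.0;
        for (const auto& img : images)
            for (double v : img) sq += v * v;
        const double stdev = std::sqrt(sq / (n * pixels));
        for (auto& img : images)
            for (double& v : img) v /= stdev;
    }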
I'm working in Quartz/Core Graphics. I'm trying to create a black and white, 1-bit-per-pixel graphics context.
I currently have a CGImageRef with a grayscale image (which is really black and white). I want to draw it into a black and white BitmapContext so I can get the bitmap out and compress it with CCITT-group 4. (For some reason Quartz won't let you save in any TIFF format other than LZW).
So, I need the 1bit per pixel data. I figure that drawing into a 1bpp context would do that. However, it won't let me create the context with:
context = CGBitmapContextCreate (data,
                                 pixelsWide,
                                 pixelsHigh,
                                 1,              // bits per component
                                 pixelsWide/8,   // bytes per row
                                 CGColorSpaceCreateDeviceGray(),
                                 kCGImageAlphaNone
                                 );
Is there a colorspace smaller than gray?
Even if 1-bit bitmaps were supported, if pixelsWide is not a multiple of 8, then the number of bytes per row is not an integer: for example, if your image is 12 pixels wide, then the number of bytes per row is one and a half. Your division expression will truncate this to one byte per row, which is wrong.
But that's if 1-bit bitmaps were supported, which they aren't.
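If you still need 1bpp data for CCITT Group 4, one workaround is to render into an ordinary 8-bit DeviceGray context and pack the pixels down yourself. A sketch, thresholding at mid-gray and using the rounded-up bytes-per-row from the previous paragraph; whether a set bit means white or black depends on what your TIFF writer expects:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Packs an 8-bit grayscale buffer down to 1bpp, MSB-first within each
    // byte, with each output row rounded up to a whole number of bytes.
    std::vector<uint8_t> pack_gray8_to_1bpp(const uint8_t* gray,
                                            std::size_t width, std::size_t height,
                                            std::size_t grayRowBytes) {
        const std::size_t outRowBytes = (width + 7) / 8;  // round up, don't truncate
        std::vector<uint8_t> out(outRowBytes * height, 0);
        for (std::size_t y = 0; y < height; ++y)
            for (std::size_t x = 0; x < width; ++x)
                if (gray[y * grayRowBytes + x] >= 128)               // threshold at mid-gray
                    out[y * outRowBytes + x / 8] |= 0x80 >> (x % 8); // set bit = light pixel
        return out;
    }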