C++ Unicode for Color16 Values - c++

This has been one huge headache. I've googled everything and found very little, and I have little knowledge of Unicode; I learned a bit from the searching. What I need is really simple, right? A struct I am using requires COLOR16.
So I know 0x0000 to 0x00FF is 0 to 255, which for COLOR16 is useless.
Each of the four hex digits can represent 0 to 15, from what I've seen.
I know COLOR16 represents all 16^4 colors.
But I cannot for the life of me figure out how to convert, say, (R:100; G:35; B:42) to a Unicode value.
I could really use some info on this, or a tutorial or anything.
Thanks.

I know what you're looking for. You're just asking the wrong way. You mean a short value, not a Unicode value. The common name for this is RGB565. That means 5 bits for red, 6 for green and 5 for blue.
That adds up to 16 bits. You pack the bits in like this:
unsigned short val = ((r<<8) & 0xf800) | ((g<<3) & 0x07e0) | (b>>3);
The bits are like this:
R 00000000 12345xxx -> 12345000 00000000 (shift left by 8 and masked)
G 00000000 123456xx -> 00000123 45600000 (shift left by 3 and masked)
B 00000000 12345xxx -> 00000000 00012345 (shift right by 3, no mask required)
Obviously information is lost in this process: you are just keeping the most significant bits of each colour channel. It's like lossy compression, but pretty good for video, where you don't notice the loss of colour definition as much. The reason green gets the extra bit is that human eyes are more sensitive to green light.
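To make that concrete, here is a minimal pack/unpack sketch built from the masks and shifts above (the function names are just for illustration). For the question's example (R:100; G:35; B:42) it produces 0x6105.

#include <cstdint>

// Pack 8-bit-per-channel RGB into a 16-bit RGB565 value by keeping the
// top 5/6/5 bits of each channel (e.g. R:100, G:35, B:42 -> 0x6105).
uint16_t packRGB565(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint16_t>(((r << 8) & 0xF800) |
                                 ((g << 3) & 0x07E0) |
                                 (b >> 3));
}

// Approximate inverse: expand the truncated channels back to 8 bits.
void unpackRGB565(uint16_t v, uint8_t &r, uint8_t &g, uint8_t &b)
{
    r = static_cast<uint8_t>((v >> 8) & 0xF8);
    g = static_cast<uint8_t>((v >> 3) & 0xFC);
    b = static_cast<uint8_t>((v << 3) & 0xF8);
}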

Finally found a random example that had the solution in it. This takes a COLORREF and extracts 16-bit colors for the TRIVERTEX struct:
vertex[1].Red = GetRValue(clrStart)<<8;
vertex[1].Green = GetGValue(clrStart)<<8;
vertex[1].Blue = GetBValue(clrStart)<<8;
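For context, here is a hedged sketch of how such values are typically fed to GradientFill (the two-vertex setup and the name DrawGradient are illustrative, not from the original example; link against msimg32.lib):

#include <windows.h>

// Fill a rectangle with a horizontal gradient between two COLORREFs.
// The <<8 scales each 8-bit channel into the high byte of the COLOR16 range.
void DrawGradient(HDC hdc, RECT rc, COLORREF clrStart, COLORREF clrEnd)
{
    TRIVERTEX vertex[2] = {};
    vertex[0].x     = rc.left;
    vertex[0].y     = rc.top;
    vertex[0].Red   = static_cast<COLOR16>(GetRValue(clrStart) << 8);
    vertex[0].Green = static_cast<COLOR16>(GetGValue(clrStart) << 8);
    vertex[0].Blue  = static_cast<COLOR16>(GetBValue(clrStart) << 8);
    vertex[0].Alpha = 0;

    vertex[1].x     = rc.right;
    vertex[1].y     = rc.bottom;
    vertex[1].Red   = static_cast<COLOR16>(GetRValue(clrEnd) << 8);
    vertex[1].Green = static_cast<COLOR16>(GetGValue(clrEnd) << 8);
    vertex[1].Blue  = static_cast<COLOR16>(GetBValue(clrEnd) << 8);
    vertex[1].Alpha = 0;

    GRADIENT_RECT rect = { 0, 1 };   // use vertex[0] and vertex[1]
    GradientFill(hdc, vertex, 2, &rect, 1, GRADIENT_FILL_RECT_H);
}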

It is misleading in the Windows API, if that is what you are asking: despite the "COLOR16" name, each RGBA channel in the TRIVERTEX structure is typically supplied from byte [0-255] data, though most examples I have seen shift that 8-bit value into the upper byte of the short.
Red [0-255]; <= byte
Green [0-255]; <= byte
Blue [0-255]; <= byte
Alpha [0-255]; <= byte
Microsoft: what can you say

Related

Rendering a bit array with OpenGL

I have a bit array representing an image mask, stored in a uint8_t[] array in row-major order. Hence, for each byte, I have 8 pixels.
Now, I need to render this with OpenGL (>= 3.0). A set bit should be drawn as a white pixel and a cleared bit as a black pixel.
How could I do this?
The first idea that comes to mind is to develop a specific shader for this. Can anyone give some hints on that?
You definitely want to write a shader for this. First and foremost you want to prevent the OpenGL implementation from reinterpreting the integer bits of your B/W bitmap as numbers in a certain range and mapping them to [0…1] floats. Which means you have to load your bits into an integer image format. Since your data comes in octets of 8 binary pixels (byte is a rather unspecific term and can refer to any number of bits, though 8 bits is the usual), a single-channel 8-bit integer format seems the right choice. The OpenGL-3 moniker for that is GL_R8UI. Keep in mind that the "width" of the texture will be 1/8th of the actual width of your B/W image. Also, for unnormalized access you must use a usampler (for unsigned) or an isampler (for signed) (thanks #derhass for noticing that this was not properly written here).
To access individual bits you use the usual bit-manipulation operators. Since you don't want your bits to be filtered, texel-fetch access must be used. So to access the binary pixel at integer location x,y you would use something like the following.
uniform usampler2D tex;   // bound to the GL_R8UI texture

uint shift = uint(x) % 8u;                            // bit position inside the octet
uint mask  = 1u << shift;
uint octet = texelFetch(tex, ivec2(x / 8, y), 0).r;   // fetch the packed byte (lod 0)
uint value = (octet & mask) >> shift;                 // 0 = black, 1 = white
The best solution would be to use a shader, but you could also hack something like this:
std::bitset<8> bits = myuint;
Then get the value of each bit with bits.test(position) (or bits[position]) and finally do a simple point drawing.
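If you go the CPU route instead, here is a minimal sketch (the function name and the MSB-first bit order are assumptions) that expands the packed mask into one byte per pixel, ready to upload as a plain single-channel texture or to draw directly:

#include <cstdint>
#include <vector>

// Expand a row-major, MSB-first bit mask into one byte per pixel
// (255 for a set bit, 0 for a cleared bit).
std::vector<uint8_t> expandMask(const uint8_t *packed, int width, int height)
{
    std::vector<uint8_t> pixels(static_cast<size_t>(width) * height);
    const int stride = (width + 7) / 8;              // packed bytes per row
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            uint8_t octet = packed[y * stride + x / 8];
            bool set = (octet >> (7 - x % 8)) & 1;   // MSB-first bit order
            pixels[static_cast<size_t>(y) * width + x] = set ? 255 : 0;
        }
    return pixels;
}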

Converting 12 bit color values to 8 bit color values C++

I'm attempting to convert 12-bit RGGB color values into 8-bit RGGB color values, but my current method gives strange results.
Logically, I thought that simply dividing each 12-bit value by 16 to get an 8-bit value would work and be pretty simple:
// raw_color_array contains R,G1,G2,B in a Bayer pattern, with each element
// ranging from 0 to 4095
for (int i = 0; i < array_size; i++)
{
    raw_color_array[i] /= 16; // 4095 becomes 255 and so on
}
However, in practice this actually does not work. Given, for example, a small image with water and a piece of ice in it you can see what actually happens in the conversion (right most image).
Why does this happen? and how can I get the same (or close to) image on the left, but as 8-bit values instead? Thanks!
EDIT: going off of #MSalters answer, I get a better quality image but the colors are still drastically skewed. What resources can I look into for converting 12-bit data to 8-bit data without a steep loss in quality?
It appears that your raw 12-bit data isn't on a linear scale. That is quite common for images. For a non-linear scale, you can't use a linear transformation like dividing by 16.
A non-linear transform like sqrt(x*16) would also give you an 8-bit value. So would std::pow(x, 8.0/12.0).
A known problem with low-gradient images is that you get banding. If your image has an area where the original value varies from, say, 100 to 200, the 12-to-8-bit reduction will shrink that to fewer than 100 distinct values. You get rounding, and with naive (local) rounding you get bands. Linear or non-linear, there will then be some inputs x that all map to y, and some that map to y+1. This can be mitigated by doing the transformation in floating point and adding a random value between -1.0 and +1.0 before rounding. This effectively breaks up the band structure.
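As a hedged sketch of that suggestion (the exact transfer curve and dither range are illustrative), a per-sample conversion might look like this:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Convert one 12-bit sample (0..4095) to 8 bits with a square-root style
// curve and a little random dither to break up banding.
uint8_t to8bit(uint16_t raw12, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> dither(-1.0, 1.0);
    double v = std::sqrt(raw12 * 16.0);            // 0..4095 -> roughly 0..256
    v += dither(rng);                              // break up band edges
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}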
After you clarified that this 12-bit data is only for one colour channel, here is my simple answer:
Since you are converting the value to its 8-bit equivalent, you obviously lose some of the data (4 bits). This is the reason why you are not getting the same output.
After clarification:
If you want to retain the actual colour values, apply de-mosaicking to the 12-bit image and then scale the resulting data to 8 bits. The colour loss due to de-mosaicking will then be smaller than with the previous approach.
You say that your 12 bits represent 2^12 levels of one colour. That is incorrect: there are reds, greens and blues in your image. Look at the histogram. I made this with ImageMagick at the command line:
convert cells.jpg histogram:png:h.png
If you want 8 bits per pixel, rather than trying to blindly/statically apportion 3 bits to green, 2 bits to red and 3 bits to blue, you would probably be better off going with an 8-bit palette so you can have 250+ colours of all variations, rather than restricting yourself to just 8 blue shades, 4 reds and 8 greens. So, like this:
convert cells.jpg -colors 254 PNG8:result.png
Here is the result of that beside the original:
The process above is called "quantisation" and if you want to implement it in C/C++, there is a writeup here.

RLE Encoding bit sequence, not bytes

I need to implement a compression algorithm for binary data that needs to work on embedded, constrained devices (256 kB ROM, 48 kB RAM).
I'm thinking of the RLE compression algorithm. Rather than implementing it from scratch, I've found a lot of C implementations (for example: http://sourceforge.net/projects/bcl/?source=typ_redirect ), but they apply the RLE algorithm over the byte sequence (the dictionary words are 1 to 255, i.e. 8-bit encoding).
I'm looking for an implementation that, starting from a sequence of bytes, applies the RLE encoding to the corresponding bit sequence (0s and 1s). Note that another algorithm could also work (I need a compression ratio < 0.9, so I think any algorithm can do it), but the implementation needs to work on a bit basis, not on bytes.
Can anyone help me? Thank you!
I think you can do well on data dominated by bytes such as 0, 1, 2, 3, 255, etc. (i.e. data with long runs of identical bits).
Let's encode this bit sequence:
000000011111110
1. Scan the input, incrementing a counter while the current bit equals the previous bit.
2. When a run ends, write control bit 1, then the run length as a 3-bit field, then the repeated bit.
3. If the remaining bits cannot be packed into a run, write control bit 0 followed by the literal bit.
Output:
111101111100 (that is: 1 111 0 = seven 0s, 1 111 1 = seven 1s, 0 0 = one literal 0)
To decompress, read the control bit first:
If 0, copy the next bit to the output buffer.
If 1, read the next 3 bits (the length field can of course be made wider) and convert them to a count, then read one more bit telling you which bit to repeat, and loop to reproduce the original run.
But this compression method will only pay off on files that have lots of 0 and 255 bytes (00000000 and 11111111 in binary), such as BMP files with a black or white background.
Hope I helped you!
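As a starting point, here is a minimal C++ sketch of the encoder side of the scheme above (the class and function names are made up for illustration; a real implementation also needs the matching decoder and a check that the output is actually smaller than the input):

#include <cstdint>
#include <vector>

// Writes individual bits into a growing byte buffer, MSB first.
class BitWriter {
public:
    void put(bool bit) {
        if (pos_ == 0) data_.push_back(0);
        if (bit) data_.back() |= static_cast<uint8_t>(1u << (7 - pos_));
        pos_ = (pos_ + 1) % 8;
    }
    void putBits(unsigned value, int count) {         // most significant bit first
        for (int i = count - 1; i >= 0; --i) put(((value >> i) & 1u) != 0);
    }
    const std::vector<uint8_t> &bytes() const { return data_; }
private:
    std::vector<uint8_t> data_;
    int pos_ = 0;
};

// Runs of 2..7 identical bits become: control 1, 3-bit length, the bit.
// A lone bit becomes: control 0, the bit.
std::vector<uint8_t> rleEncodeBits(const std::vector<bool> &bits)
{
    BitWriter out;
    size_t i = 0;
    while (i < bits.size()) {
        size_t run = 1;
        while (i + run < bits.size() && bits[i + run] == bits[i] && run < 7)
            ++run;
        if (run >= 2) {
            out.put(true);                             // control bit: run follows
            out.putBits(static_cast<unsigned>(run), 3);
            out.put(bits[i]);
        } else {
            out.put(false);                            // control bit: literal bit
            out.put(bits[i]);
        }
        i += run;
    }
    return out.bytes();
}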

How can I store each pixel in an image as a 16 bit index into a colortable?

I have a 2D array of float values:
float values[1024][1024];
that I want to store as an image.
The values are in the range: [-range,+range].
I want to use a colortable that goes from red(-range) to white(0) to black(+range).
So far I have been storing each pixel as a 32-bit RGBA value using the BMP file format. The total memory for storing my array is then 1024*1024*4 bytes = 4 MB.
This seems very wasteful knowing that my colortable is "1 dimensional" whereas the 32-bit RGBA is "4 dimensional".
To see what I mean, let's assume that my colortable went from black(-range) to blue(+range).
In this case the only component that varies is clearly the B; all the others are fixed.
So I am only getting 8 bits of precision whereas I am "paying" for 32 :-(.
I am therefore looking for a "palette" based file format.
Ideally I would like each pixel to be a 16 bit index (unsigned short int) into a "palette" consisting of 2^16 RGBA values.
The total memory used for storing my array in this case would be: 1024*1024*2 bytes + 2^16*4bytes = 2.25 MB.
So I would get twice as good precision for almost half the "price"!
Which image formats support this?
At the moment I am using Qt's QImage to write the array to file as an image. QImage has an internal 8 bit indexed ("palette") format. I would like a 16 bit one. Also I did not understand from Qt's documentation which file formats support the 8 bit indexed internal format.
Store it as a 16 bit greyscale PNG and do the colour table manually yourself.
You don't say why your image can be decomposed into 2^16 colours, but using your knowledge of this special image you could devise an algorithm so that indices that are near each other have similar colours and are therefore easier to compress.
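As a rough sketch of that approach (names and rounding are illustrative), the float array could first be quantised to 16-bit grey values, with the red-white-black colouring applied later through a lookup table at display time:

#include <cstdint>
#include <vector>

// Map floats in [-range, +range] linearly onto the full 16-bit grey range.
std::vector<uint16_t> toGray16(const float *values, size_t count, float range)
{
    std::vector<uint16_t> out(count);
    for (size_t i = 0; i < count; ++i)
    {
        float t = (values[i] + range) / (2.0f * range);   // -> [0, 1]
        t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);      // clamp
        out[i] = static_cast<uint16_t>(t * 65535.0f + 0.5f);
    }
    return out;
}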
"I want to use a colortable that goes from red(-range) to white(0) to black(+range)."
Okay, so you've got FF,00,00 (red) to FF,FF,FF (white) to 00,00,00 (black). In 24-bit RGB, that looks to me like 256 values from red to white and then another 256 from white to black. So you don't need a palette size of 2^16 (65536); you need 2^9 (512).
If you're willing to compromise and use a palette size of 2^8 then the GIF format could work. That's still relatively fine resolution: 128 shades of red on the negative side, plus 128 shades of grey on the positive. Each of a GIF's 256 palette entries can be an arbitrary RGB value.
PNG is another candidate for palette-based color. You have more flexibility with PNG, including RGBA if you need an alpha channel.
You mentioned RGBA in your question but the use of the alpha channel was not explained.
So independent of file format, if you can use a 256-entry palette then you will have a very well compressed image. Back to your mapping requirement (i.e. mapping floats [-range -> 0.0 -> +range] to [red -> white -> black]), here is a 256-entry palette that covers the range red-white-black you wanted:
float    entry# (dec)  entry# (hex)  color   rgb
------   ------------  ------------  ------  --------
-range   0             00            red     FF,00,00
         1             01                    FF,02,02
         2             02                    FF,04,04
         ...           ...
         127           7F                    FF,FD,FD
0.0      128           80            white   FF,FF,FF
         129           81                    FD,FD,FD
         ...           ...
         253           FD                    04,04,04
         254           FE                    02,02,02
+range   255           FF            black   00,00,00
If you double the size of the color table to be 9 bits (512 values) then you can make the increments between RGB entries more fine: increments of 1 instead of 2. Such a 9-bit palette would give you full single-channel resolution in RGB on both the negative and positive sides of the range. It's not clear that allocating 16 bits of palette would really be able to store any more visual information given the mapping you want to do. I hope I understand your question and maybe this is helpful.
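Here is a small sketch of building such a 256-entry red-white-black palette and mapping a float to a palette index (names, rounding and the exact per-step increments are illustrative):

#include <algorithm>
#include <array>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Entries 0..127 run from red to white, entries 128..255 from white to black.
std::array<Rgb, 256> makePalette()
{
    std::array<Rgb, 256> pal{};
    for (int i = 0; i < 128; ++i) {                  // -range .. 0.0
        uint8_t v = static_cast<uint8_t>(i * 255 / 127);
        pal[i] = { 255, v, v };
    }
    for (int i = 128; i < 256; ++i) {                // 0.0 .. +range
        uint8_t v = static_cast<uint8_t>((255 - i) * 255 / 127);
        pal[i] = { v, v, v };
    }
    return pal;
}

// Map a float in [-range, +range] to an index into the palette above.
uint8_t toIndex(float value, float range)
{
    float t = (value + range) / (2.0f * range);      // -> [0, 1]
    int idx = static_cast<int>(t * 255.0f + 0.5f);
    return static_cast<uint8_t>(std::clamp(idx, 0, 255));
}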
The PNG format supports paletted images up to 8 bits, and it also supports grayscale images up to 16 bits. However, 16-bit modes are less widely used and software support may be lacking, so you should test your tools first.
But you could also test with plain 24-bit RGB truecolor PNG images. They are compressed and should produce better results than BMP in any case.

Understanding PNG file format IDAT segment

From the sample image below, I have a border in yellow just for display purposes only.
The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking of trying a 2x2, but that would not help with interpreting a low/high versus high/low drawing stream. At least this way, I would have two black and one white from the top, or one white and two black from the bottom.
So I read the chunks of data, get to the IDAT chunk, decode that (zlib) and come up with 12 bytes as follows
00 20 00 40 00 80
So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a palette of 2 entries... palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0.
I'll eventually get into the multiple other depth formats later; I just wanted to start with what I would expect to be the easiest.
Part II. Any guidance on handling the other depth formats would help, especially anything special to consider regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
It would be easier if you used libpng, so I guess this is for learning purposes.
The thing is, if you decompress the IDAT chunk directly, you get data that is not supposed to be displayed as-is and/or may need to be transformed (because a filter was applied) to get the actual bytes. In the PNG format each line starts with an extra byte that tells you which filter was applied to that line; the remaining bytes contain the line's pixels.
BTW, 00 20 00 40 00 80 are 6 bytes only (not 12, as you think). Now if you see this data as binary, your 3 lines would look like this:
00000000 00100000
00000000 01000000
00000000 10000000
Now, your image is 1 bit per pixel, so 1 byte is required to store a line of 3 pixels. Only the 3 highest bits are actually used (the 5 lower bits are ignored). I replaced the ignored bits with an x, so I think it is easier to see the actual pixels (0 is black, 1 is white):
00000000 001xxxxx
00000000 010xxxxx
00000000 100xxxxx
In this case, no filter was applied to any line, because the first byte of each line is zero (0 means no filter was applied; values from 1 to 4 mean a filter was applied).
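As a hedged sketch of walking the decompressed data (the function name is illustrative, and it assumes filter type 0 on every scanline, as in this example; a real decoder must undo filters 1 to 4):

#include <cstdint>
#include <vector>

// Extract one palette index (0 or 1) per pixel from decompressed IDAT data
// for a 1-bit-per-pixel image. PNG packs pixels MSB-first within each byte.
std::vector<uint8_t> readScanlines(const std::vector<uint8_t> &raw,
                                   int width, int height)
{
    std::vector<uint8_t> pixels;
    const int bytesPerLine = (width + 7) / 8;         // packed pixel bytes per row
    size_t pos = 0;
    for (int y = 0; y < height; ++y)
    {
        uint8_t filter = raw[pos++];                  // leading filter-type byte
        (void)filter;                                 // assumed to be 0 here
        for (int x = 0; x < width; ++x)
        {
            uint8_t octet = raw[pos + x / 8];
            uint8_t bit = (octet >> (7 - x % 8)) & 1; // 0 -> palette[0], 1 -> palette[1]
            pixels.push_back(bit);
        }
        pos += bytesPerLine;
    }
    return pixels;
}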