How to widen the gamut / convert from 8-bit to 10-bit - C++

I know that the video we watch on TV is compressed, so the range of the gamut is narrow. I want to know if there is a magical way to extend it. Dolby proposed the Perceptual Quantizer (PQ) EOTF (there are two functions, e.g. the PQ EOTF and its inverse). This question is similar to how to convert BT.709 to BT.2020.

I also want to convert 8-bit to 10-bit, like x264 does. The computer only supports 8-, 16- and 32-bit types, so which data type should be used to store the 10-bit values? Someone said to use 16 bits and fill the last six bits with zeros. Is that right? I don't think that's a good way. Many thanks.

There's no magical way to extend color data to a wider gamut. You could try to "stretch" data from the existing gamut, and the result would achieve the vividness of the wider gamut, but lose accuracy.
Normally the three 10-bit channels are packed into a 32-bit integer.
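For example, here is a sketch of the packing (the A2R10G10B10 channel order is just one common convention, as in Direct3D/Vulkan; check what your pipeline actually expects):

#include <cstdint>

// Pack three 10-bit channels (0..1023) plus a 2-bit alpha (0..3) into one
// 32-bit word, A2R10G10B10-style.
uint32_t pack_a2r10g10b10(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
{
    return (a & 0x3u)   << 30 |
           (r & 0x3FFu) << 20 |
           (g & 0x3FFu) << 10 |
           (b & 0x3FFu);
}

// Widen an 8-bit channel to 10 bits by bit replication. This maps 255 to
// 1023 (full range); padding the new low bits with zeros (v << 2) would
// map 255 to only 1020.
uint32_t widen_8_to_10(uint32_t v)
{
    return (v << 2) | (v >> 6);
}

As for storage: keeping one 10-bit sample per uint16_t with the top six bits zero, as suggested in the question, is also a perfectly normal layout (10-bit video code paths typically do exactly that); packing three channels into 32 bits just saves memory.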

Related

Converting 12 bit color values to 8 bit color values C++

I'm attempting to convert 12-bit RGGB color values into 8-bit RGGB color values, but my current method gives strange results.
Logically, I thought that simply dividing the 12-bit values down into the 8-bit range would work and be pretty simple:
// raw_color_array contains R,G1,G2,B in a bayer pattern with each element
// ranging from 0 to 4095
for(int i = 0; i < array_size; i++)
{
    raw_color_array[i] /= 16; // 4095 becomes 255 and so on
}
However, in practice this actually does not work. Given, for example, a small image with water and a piece of ice in it, you can see what actually happens in the conversion (rightmost image).
Why does this happen, and how can I get the same (or close to the same) image on the left, but as 8-bit values instead? Thanks!
EDIT: going off of @MSalters' answer, I get a better quality image but the colors are still drastically skewed. What resources can I look into for converting 12-bit data to 8-bit data without a steep loss in quality?
It appears that your raw 12-bit data isn't on a linear scale. That is quite common for images. For a non-linear scale, you can't use a linear transformation like dividing by 16.
A non-linear transform like sqrt(x*16) would also give you an 8-bit value. So would std::pow(x, 8.0/12.0).
A known problem with low-gradient images is that you get banding. If your image has an area where the original value varies from, say, 100 to 200, the 12-to-8-bit reduction will shrink that to fewer than 100 different values. You get rounding, and with naive (local) rounding you get bands. Linear or non-linear, there will then be some inputs x that all map to y, and some that map to y+1. This can be mitigated by doing the transformation in floating point, and then adding a random value between -1.0 and +1.0 before rounding. This effectively breaks up the band structure.
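A minimal sketch of that approach, assuming the power-law transform from above (the real sensor curve may differ):

#include <cmath>
#include <cstdint>
#include <random>

// Map a 12-bit value (0..4095) to 8 bits with a non-linear transform and
// a random dither of +/-1.0 before rounding, to break up banding.
uint8_t to_8bit(uint16_t x, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> dither(-1.0, 1.0);
    double y = std::pow(static_cast<double>(x), 8.0 / 12.0); // ~0..256
    y += dither(rng);                                        // break up bands
    if (y < 0.0)   y = 0.0;
    if (y > 255.0) y = 255.0;
    return static_cast<uint8_t>(std::lround(y));
}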
After you clarified that this 12-bit data is only for one color, here is my simple answer:
Since you are converting each value to its 8-bit equivalent, you necessarily lose some of the data (4 bits). That is why you are not getting the same output.
After clarification:
If you want to retain the actual colour values, apply de-mosaicking to the 12-bit image and then scale the resulting data to 8 bits. That way the colour loss due to de-mosaicking will be smaller than with the previous approach.
You say that your 12 bits represent 2^12 values of one colour. That is incorrect: there are reds, greens and blues in your image. Look at the histogram. I made this with ImageMagick at the command line:
convert cells.jpg histogram:png:h.png
If you want 8 bits per pixel, rather than trying to blindly/statically apportion 3 bits to Green, 2 bits to Red and 3 bits to Blue, you would probably be better off going with an 8-bit palette so you can have 250+ colours of all variations rather than restricting yourself to just 8 blue shades, 4 red and 8 green. So, like this:
convert cells.jpg -colors 254 PNG8:result.png
Here is the result of that beside the original:
The process above is called "quantisation" and if you want to implement it in C/C++, there is a writeup here.

OpenCV convertTo()

I came across this code:
image.convertTo(temp_image,CV_16SC3);
I saw the description of the convertTo() function from here, but what confuses me is image. How can we read the above code? What would be the relation between image and temp_image?
Thanks.
The other answers here are correct, but lack some details. Let me try.
image.convertTo(temp_image,CV_16SC3);
You have a source image image, and a destination image temp_image. You didn't specify the type of image, but it is probably CV_8UC3 or CV_32FC3, i.e. a 3-channel image (since convertTo doesn't change the number of channels), where each channel has depth 8 bit (unsigned char, CV_8UC3) or 32 bit (float, CV_32FC3).
This line of code will change the depth of each channel, so that temp_image has each channel of depth 16 bit (short). Specifically it's a signed short, since the type specifier has the S: CV_16SC3.
Note that if you are narrowing down the depth, as in the case from float to signed short, then saturate_cast will make sure that all the values in temp_image are in the correct range, i.e. in [-32768, 32767] for signed short.
Why would you need to change the depth of an image?
Some OpenCV functions require input images with a specific depth.
You need a matrix to contain a different range of values. E.g. if you need to sum (or subtract) some CV_8UC3 images (typically BGR images), you'd better store the result in a CV_16SC3, or you'll probably get wrong results due to saturation, since the range for CV_8U images is [0, 255]. See the sketch after this list.
You read with imread, or want to store with imwrite, images with 16-bit depth. These are usually used (AFAIK) in medical or graphics applications to allow a wider range of colors. However, most monitors do not support 16-bit image visualization.
There may be other cases, let me know if I miss the one important to you.
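A minimal sketch of the summation case from the list above (the file names are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    // Sum two 8-bit BGR images without saturating at 255 by widening the
    // channel depth first.
    cv::Mat a = cv::imread("a.png"); // CV_8UC3
    cv::Mat b = cv::imread("b.png"); // CV_8UC3

    cv::Mat a16, b16;
    a.convertTo(a16, CV_16SC3); // each channel becomes a signed 16-bit short
    b.convertTo(b16, CV_16SC3);
    cv::Mat sum = a16 + b16;    // values above 255 are now representable
    return 0;
}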
An image is a matrix of pixel information (i.e. a 1080p image will be a 1,920 × 1,080 matrix where each entry contains RGB values for that pixel). All you are doing is reformatting that matrix (each pixel entry, iteratively) into a new type (CV_16SC3) so it can be read by different programs.
The temp_image is a new matrix of pixel information based off of image formatted into CV_16SC3.
The first one is the source, the second one the destination. So it takes image, converts it into type CV_16SC3 and stores the result in temp_image.

RGB color to HSL bytes

I've seen some implementations for converting RGB to HSL. Most are accurate and work in both directions.
To me it's not important that it works in both directions (no need to convert back to RGB).
But I want code that returns values from 0 to 255 max, also for the hue channel.
And I wouldn't like to do divisions like Hue/360*250; I am searching for integer-based math, no DWORDs (it's for another system). Nice would be some kind of boolean logic (and/or/xor).
Ideally it should not do any wide-integer or real-number math; the goal is code working only with byte math.
Maybe someone has already found such math in code written in C++, C#, or Python, which I would be able to translate to C++.
Check out the colorsys module; it has methods like:
colorsys.rgb_to_hls(r,g,b)
colorsys.hls_to_rgb(h,l,s)
The easyrgb site has many code snippets for color space conversion. Here's the rgb->hsl code.
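If you want to roll it yourself in C++, here is a rough sketch with all three outputs scaled to 0..255. It still uses int intermediates wider than a byte, so it is a starting point for the scaling rather than the strict byte-only math asked for (note the hue wraps over 0..255 instead of 0..359):

#include <algorithm>
#include <cstdint>

// RGB (0..255) to HSL with all three outputs scaled to 0..255.
void rgb_to_hsl255(uint8_t r, uint8_t g, uint8_t b,
                   uint8_t &h, uint8_t &s, uint8_t &l)
{
    int max = std::max({r, g, b});
    int min = std::min({r, g, b});
    int d = max - min;

    l = static_cast<uint8_t>((max + min) / 2);

    if (d == 0) { h = 0; s = 0; return; } // achromatic: grey pixel

    // Saturation, scaled to 0..255; sum is in 0..510, and l < 128 is the
    // integer equivalent of L < 0.5 in the usual formulas.
    int sum = max + min;
    s = static_cast<uint8_t>(l < 128 ? 255 * d / sum
                                     : 255 * d / (510 - sum));

    // Hue: each sixth of the colour circle gets 256/6 of the 0..255 range.
    int hh;
    if (max == r)      hh = (256 + (g - b) * 256 / (6 * d)) % 256;
    else if (max == g) hh = 256 / 3 + (b - r) * 256 / (6 * d);
    else               hh = 2 * 256 / 3 + (r - g) * 256 / (6 * d);
    h = static_cast<uint8_t>(hh);
}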

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK, and let's say the camera has this function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrary large size, I'm not
// sure what the exact size needs to be so I
// made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it. For example fill it with '01234567' repeated. Then it's really obvious what bytes have been written and what bytes haven't, so you can work out the real size of what's returned.
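A sketch of that trick (BYTE and CameraGetImageData are from the question's SDK):

const size_t SIZE = 100000000; // 100 MB, generous as suggested above
BYTE* data = new BYTE[SIZE];

// Fill with a recognisable repeating pattern before the call.
for (size_t i = 0; i < SIZE; ++i)
    data[i] = "01234567"[i % 8];

CameraGetImageData(data);

// Scan backwards for the last byte that no longer matches the pattern;
// everything past it was most likely never written by the SDK.
size_t bytesWritten = 0;
for (size_t i = SIZE; i > 0; --i)
    if (data[i - 1] != "01234567"[(i - 1) % 8]) { bytesWritten = i; break; }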
I don't think there is a standard, but you can try to identify which values are what by putting some solid color images in front of the camera, so all pixels would be approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green, and blue images.
But also consider finding a better SDK that has documentation, because just making a big array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with the camera.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels; the factor 2 is for 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try to dump your byte array into a file and load the result in Python or Matlab and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get anything out of it.
Good luck!
I hope this question's solution will help you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore a bitmap image from your byte array.
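For example, if the buffer turns out to be plain interleaved 8-bit RGB, a binary PPM file is the easiest way to turn it into something viewable (width and height assumed known):

#include <cstdint>
#include <fstream>

// Write raw interleaved 8-bit RGB data as a binary PPM (P6), which most
// image viewers and editors can open. Assumes the buffer really is RGB;
// if the colours look swapped, the camera is probably emitting BGR.
void write_ppm(const char* path, const uint8_t* rgb, int width, int height)
{
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    out.write(reinterpret_cast<const char*>(rgb),
              static_cast<std::streamsize>(width) * height * 3);
}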

C++ Fast way to convert between image formats

I've got some in-memory images in various simple formats, which I need to quickly convert to another format. In cases where the target format contains an alpha channel but the source does not, alpha should be taken as its full value (e.g. 0xFF for an 8-bit target alpha channel).
I need to be able to deal with various formats, such as A8R8G8B8, R8G8B8A8, A1R4G4B4, R5G6B5, etc.
Conversion of pixels between any of these formats is simple. However, I don't want to have to manually code every single combination, and I also don't want a two-pass solution that converts to a common format (e.g. A8R8G8B8) before converting again to the final format, both for performance reasons and because, if I wanted to use a higher-definition format, say A16B16G16R16 or a floating-point one, I'd either lose some data converting to the intermediate format or have to rewrite everything to use a different, higher-definition intermediate...
Ideally I'd like some sort of enum with values for each format, and then a single method to convert, e.g. say:
enum ImageFormat
{
    FMT_A8R8G8B8,      // 32-bit
    FMT_A1R4G4B4,      // 16-bit
    FMT_R5G6B5,        // 16-bit
    FMT_A32R32G32B32F, // 128-bit floating-point format
    ...
};
struct Image
{
    ImageFormat Format;
    size_t Width, Height;
    size_t Pitch;
    void *Data;
};
Image *ConvertImage(ImageFormat targetFormat, const Image *sourceImage);
You might want boost::gil.
Take a look at FreeImage; it's a library that will convert between various images.
You can also try ImageMagick to just convert back and forth, if you don't want to do anything to them.
Ten years ago I used the Hermes pixel-conversion library, which was very fast. It could convert 640x480 32-bit images to 15- or 16-bit images at 30+ frames per second. We used it for a demo engine. Unfortunately I cannot find a link at the moment; on Debian the package is orphaned.
But this is exactly what you want for real-time usage.
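For reference, the per-pixel conversions the question calls simple really are just bit shuffling. Here is a sketch of one direction, R5G6B5 to A8R8G8B8, using bit replication so full intensity maps to full intensity; a general converter like the ConvertImage above would dispatch on the source and target enums around a loop of kernels like this:

#include <cstdint>

// Expand one R5G6B5 pixel to A8R8G8B8. The (x << n | x >> m) pattern
// replicates the high bits into the low bits so that 31 and 63 both map
// to 255, rather than 248/252 as plain shifting would give.
uint32_t r5g6b5_to_a8r8g8b8(uint16_t p)
{
    uint32_t r = (p >> 11) & 0x1F;
    uint32_t g = (p >> 5)  & 0x3F;
    uint32_t b =  p        & 0x1F;

    r = (r << 3) | (r >> 2); // 5 -> 8 bits
    g = (g << 2) | (g >> 4); // 6 -> 8 bits
    b = (b << 3) | (b >> 2); // 5 -> 8 bits

    return 0xFFu << 24 | r << 16 | g << 8 | b; // alpha at full, per the question
}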