Creating a continuous image lookup table - opengl

I need to generate a color map which I am not sure exists. I have a 1024x1024 image, which contains 2^20 pixels. I have 3 color channels of 8 bits each, which leaves us with 2^24 possible colors. This problem is easy to solve with non-continuous functions: you simply give 4 bits of the final channel to each of the first two channels, creating two 12-bit channels.
Unfortunately, I have a new constraint: all three channels of the map must remain continuous (I mean that no channel value changes by more than one between neighboring pixels), because neighboring values may be interpolated together. As this is being used as a lookup table, interpolation across non-continuous values would produce inaccuracies.
To put it a slightly different way, I need a function f and its inverse f^-1:
f(x, y) = r, g, b
f^-1(r, g, b) = x, y (defined only on the original x, y range)
with r, g, b being 8-bit numbers (the integers 0 - 255) and x and y being 10-bit numbers (the integers 0 - 1023). All neighboring r, g, b values must be continuous in the sense above: no channel value changes by more than one between neighboring pixels. Do such functions exist, and if so, what are they?
EDIT:
Just for reference, this is the previous NON-continuous solution with bit padding. It will not work for my application because OpenGL interpolates the pixels.
Although it might not be obvious, the blue channel changes abruptly on the sub-squares.

Someone over at MathOverflow wrote an excellent answer for this:
https://mathoverflow.net/questions/181663/a-continuous-function-for-defining-unique-values-to-a-1024x1024-image-with-a-24
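Whatever construction you end up with, the two constraints can be verified mechanically. Below is a small checker sketch (function names are mine, not from the MathOverflow answer). The candidate map it tests is the naive bit-truncation f(x,y) = (x>>2, y>>2, 0): it passes the continuity check but fails injectivity, which illustrates exactly why the question is hard.

```cpp
#include <array>
#include <bitset>
#include <cstdlib>

// Candidate map under test: naive truncation. Continuous, but NOT injective,
// so f^-1 is not well defined for it.
std::array<int, 3> f(int x, int y) {
    return { x >> 2, y >> 2, 0 };
}

// Do all 4-neighbours differ by at most 1 in every channel?
bool isContinuous() {
    for (int y = 0; y < 1024; ++y)
        for (int x = 0; x < 1024; ++x) {
            std::array<int, 3> c = f(x, y);
            if (x + 1 < 1024) {
                std::array<int, 3> n = f(x + 1, y);
                for (int k = 0; k < 3; ++k)
                    if (std::abs(c[k] - n[k]) > 1) return false;
            }
            if (y + 1 < 1024) {
                std::array<int, 3> n = f(x, y + 1);
                for (int k = 0; k < 3; ++k)
                    if (std::abs(c[k] - n[k]) > 1) return false;
            }
        }
    return true;
}

// Does f hit a distinct 24-bit colour for every (x, y), i.e. is f invertible?
bool isInjective() {
    static std::bitset<1 << 24> seen; // one bit per possible RGB triple
    seen.reset();
    for (int y = 0; y < 1024; ++y)
        for (int x = 0; x < 1024; ++x) {
            std::array<int, 3> c = f(x, y);
            unsigned key = (c[0] << 16) | (c[1] << 8) | c[2];
            if (seen[key]) return false;
            seen[key] = true;
        }
    return true;
}
```

Swapping in any construction from the linked answer for f() lets you confirm both properties before wiring it into OpenGL.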

Related

Reduce Image bit C++

How can I reduce a color from 24 bits to between 0 and 8 bits, and how should I distribute those bits among the three colors Red, Green and Blue?
Any ideas?
This is called "Color Quantization". You have 16,777,216 colors and you want to map them to a smaller number (2 to 256).
Step 1: choose the colors you want to use - first their number, then the colors themselves. You need to decide whether the colors are fixed for all images, or whether they change per image (in which case you will need to ship a palette of colors with every image).
Step 2: substitute the colors of your image with those in the selection.
If the colors are fixed and you want to stay very simple you can use 1 bit per channel (8 colors in total) or 2 bits per channel (64 colors in total).
Slightly more complex: use the values 0, 51, 102, 153, 204, 255 for each channel, in every possible combination, giving 216 different colors. Make a table which associates every color combination with an index; that index fits in 8 bits (with some to spare). This is called a "web safe palette" (it takes me back to 1999). Now you are ready for substitution: take every pixel in your image and quantize each channel value x as x*6//256*51 (// is integer division).
If you want a better looking palette, look for the Median cut algorithm.
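The web-safe quantization above can be sketched as follows (the x*6/256*51 formula is from the answer; the function names are mine):

```cpp
#include <cstdint>

// Snap one 8-bit channel value to the nearest-below of the six web-safe
// levels {0, 51, 102, 153, 204, 255} using the x*6/256*51 formula
// (all integer division).
uint8_t quantizeChannel(uint8_t x) {
    return static_cast<uint8_t>(x * 6 / 256 * 51);
}

// Index of a web-safe colour in a 216-entry palette (0..215):
// 36 * redLevel + 6 * greenLevel + blueLevel.
int paletteIndex(uint8_t r, uint8_t g, uint8_t b) {
    return (r * 6 / 256) * 36 + (g * 6 / 256) * 6 + (b * 6 / 256);
}
```

Substitution then means replacing each pixel's channels with quantizeChannel() output, or storing paletteIndex() per pixel plus the 216-entry palette.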
Keep only the most significant bits of the pixel's red channel, and do likewise for the green and blue channels. Then use C++'s bit manipulation operations to pack those bit values into a single byte. There are multiple ways of doing so; for example, do an Internet search for "rgb332" for one (where you keep 3 red bits, 3 green bits, and 2 blue bits).
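One sketch of that RGB332 packing (a common layout, RRRGGGBB, though others exist):

```cpp
#include <cstdint>

// Pack 8-bit R, G, B into one byte: top 3 red bits, top 3 green bits,
// top 2 blue bits -> RRRGGGBB.
uint8_t packRGB332(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

// Approximate unpack: place the kept bits back in the high positions
// (the discarded low bits are lost, so this is lossy).
void unpackRGB332(uint8_t p, uint8_t& r, uint8_t& g, uint8_t& b) {
    r = p & 0xE0;
    g = static_cast<uint8_t>((p & 0x1C) << 3);
    b = static_cast<uint8_t>((p & 0x03) << 6);
}
```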

Converting 12 bit color values to 8 bit color values C++

I'm attempting to convert 12-bit RGGB color values into 8-bit RGGB color values, but with my current method it gives strange results.
Logically, I thought that simply dividing each 12-bit value down to 8 bits would work and be pretty simple:
// raw_color_array contains R, G1, G2, B in a Bayer pattern, with each
// element ranging from 0 to 4095
for (int i = 0; i < array_size; i++)
{
    raw_color_array[i] /= 16; // 0..4095 becomes 0..255
}
However, in practice this does not work. Given, for example, a small image with water and a piece of ice in it, you can see what actually happens in the conversion (rightmost image).
Why does this happen, and how can I get the same (or a close) image as the one on the left, but with 8-bit values instead? Thanks!
EDIT: going off of @MSalters' answer, I get a better quality image, but the colors are still drastically skewed. What resources can I look into for converting 12-bit data to 8-bit data without a steep loss in quality?
It appears that your raw 12-bit data isn't on a linear scale. That is quite common for images, and with a non-linear scale you can't use a linear transformation like dividing by 16.
A non-linear transform like sqrt(x*16) would also give you an 8-bit value, as would std::pow(x, 8.0/12.0).
A known problem with low-gradient images is banding. If your image has an area where the original value varies from, say, 100 to 200, the 12-to-8-bit reduction will shrink that to fewer than 100 different values. You get rounding, and with naive (local) rounding you get bands: linear or non-linear, there will be some inputs x that all map to y, and some that all map to y+1. This can be mitigated by doing the transformation in floating point and adding a random value between -1.0 and +1.0 before rounding, which effectively breaks up the band structure.
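The noise-before-rounding idea can be sketched like this (the pow(x, 8.0/12.0) transfer curve and the choice of RNG are illustrative, not prescriptive):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Map one 12-bit value (0..4095) to 8 bits through a non-linear transfer,
// adding +/-1.0 of noise before rounding to break up banding.
uint8_t to8BitDithered(uint16_t x12, std::mt19937& rng) {
    std::uniform_real_distribution<double> noise(-1.0, 1.0);
    double y = std::pow(static_cast<double>(x12), 8.0 / 12.0); // 0..~255.9
    long v = std::lround(y + noise(rng));
    return static_cast<uint8_t>(std::clamp(v, 0L, 255L));
}
```

Because the noise straddles the rounding boundary, pixels near a band edge land on both sides at random, which the eye reads as a smooth gradient instead of a contour.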
After you clarified that this 12-bit data is only for one color, here is my simple answer:
Since you convert the value to its 8-bit equivalent, you necessarily lose some of the data (4 bits). This is the reason why you are not getting the same output.
After clarification:
If you want to retain the actual colour values:
Apply de-mosaicking to the 12-bit image and then scale the resulting data to 8 bits, so that the colour loss due to de-mosaicking is smaller than with the previous approach.
You say that your 12 bits represent 2^12 values of one colour. That is incorrect - there are reds, greens and blues in your image. Look at the histogram. I made this with ImageMagick at the command line:
convert cells.jpg histogram:png:h.png
If you want 8 bits per pixel, rather than blindly/statically apportioning 3 bits to Green, 2 bits to Red and 3 bits to Blue, you would probably be better off with an 8-bit palette, so you can have 250+ colours of all variations rather than restricting yourself to just 8 blue shades, 4 red and 8 green. So, like this:
convert cells.jpg -colors 254 PNG8:result.png
Here is the result of that beside the original:
The process above is called "quantisation", and if you want to implement it in C/C++, there is a writeup here.

Controlling Brightness of an Image using numericupdown

I have implemented increasing the brightness, but after a lot of trying I am unable to decrease it. For example: the RGB values of a pixel are r=100, g=200, b=125. I'm using a NumericUpDown control to increase and decrease the value. When I add, for example, 100, the new values should be r=200, g=300, b=225, but we clamp g=300 to g=255 because we can't go above 255. When I then decrease the value by 100, the values should be back to r=100, g=200, b=125. But because g was clamped, it is no longer 200: g is 255, and 255-100=155, which is not 200. Seeking help to restore the original pixel values when decreasing.
P.S.: I'm a learner.
Store the original image and display a copy. Every time you run your algorithm you read the pixel values of the original and write the modified pixel values into the copy.
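A minimal sketch of that approach, shown for a flat array of channel values (the function name is mine):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Always recompute from the untouched original, so clamping never
// accumulates: display = clamp(original + offset).
std::vector<uint8_t> applyBrightness(const std::vector<uint8_t>& original,
                                     int offset) {
    std::vector<uint8_t> out(original.size());
    for (size_t i = 0; i < original.size(); ++i) {
        int v = static_cast<int>(original[i]) + offset;
        out[i] = static_cast<uint8_t>(std::clamp(v, 0, 255));
    }
    return out;
}
```

Moving the NumericUpDown back to 0 reproduces the original exactly, which is precisely what the 255-100=155 approach cannot do.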
Note: this is a very simple approach. Brightness is a well-discussed subject with a lot of options; sophisticated solutions often also involve saturation and much more. Per-pixel operations are maybe not the best approach, but for the sake of this post I have constructed an answer below that solves your specific problem.
// edit 2
Thinking about this some more, I did not consider that the solution to the equation is not unique. You do indeed need to store the original and recalculate from it. I would still advise using an accepted brightness equation like the ones found in the link above; simply modifying the R, G, and B channels might not be what your users expect.
The below answer must be combined with working on the original image and displaying a modified copy as mentioned in other answers.
I would not increase the R, G, and B channels directly, but would go with a perceived-brightness option like the one found here.
Let's say you take:
L = (0.299*R + 0.587*G + 0.114*B)
You know that min(L) is 0 and max(L) is 255, so your numeric up/down will be limited to [0, 255]. Next you simply increase/decrease L and calculate the RGB from the formula.
//edit
Your case as an example:
r=100, g=200, b=125
L = (0.299*100 + 0.587*200 + 0.114*125)
L = 161.55
Now let's go to the (limited) maximum to get the extreme case and see that this still works:
L = 255
L = (0.299*255 + 0.587 * 255 + 0.114 * 255)
RGB = (255,255,255)
Going back will also always work: 0 gives black, and everything in between has a guaranteed RGB with R, G, B in [0, 255].
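The answer leaves the back-calculation from L to RGB open; one way to realize it (my interpretation, not the only one) is to scale all three channels by the ratio targetL/L, which preserves the channel ratios and therefore roughly the hue:

```cpp
#include <algorithm>
#include <cstdint>

// Perceived luminance, using the coefficients from the answer above.
double luminance(uint8_t r, uint8_t g, uint8_t b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Scale R, G, B so the pixel's luminance becomes targetL, clamping each
// channel to [0, 255]. A black pixel is lifted to neutral grey.
void setLuminance(uint8_t& r, uint8_t& g, uint8_t& b, double targetL) {
    double L = luminance(r, g, b);
    if (L <= 0.0) {
        r = g = b = static_cast<uint8_t>(std::clamp(targetL, 0.0, 255.0));
        return;
    }
    double s = targetL / L;
    auto scale = [s](uint8_t c) {
        return static_cast<uint8_t>(std::clamp(c * s + 0.5, 0.0, 255.0));
    };
    r = scale(r); g = scale(g); b = scale(b);
}
```

This must still be combined with the store-the-original advice: compute targetL from the original pixel's L plus the up/down offset, never from an already-clamped copy.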
Another solution, possibly more elegant, would be to map your RGB values to the HSV color space.
Once you are in the HSV color space you can increase and decrease the value (V) to control brightness without losing hue or saturation information.
This question gives some pointers on how to do the conversion from RGB to HSV.
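A sketch of that conversion using the standard HSV formulas (the linked question covers the same ground; variable conventions here are mine, with H in degrees and S, V in [0, 1]):

```cpp
#include <algorithm>
#include <cmath>

// RGB (each 0..1) to HSV: H in degrees [0, 360), S and V in [0, 1].
void rgbToHsv(double r, double g, double b, double& h, double& s, double& v) {
    double mx = std::max({r, g, b});
    double mn = std::min({r, g, b});
    double d = mx - mn;
    v = mx;                                  // "value" = brightness to adjust
    s = (mx == 0.0) ? 0.0 : d / mx;
    if (d == 0.0)     h = 0.0;               // grey: hue undefined, pick 0
    else if (mx == r) h = 60.0 * std::fmod((g - b) / d + 6.0, 6.0);
    else if (mx == g) h = 60.0 * ((b - r) / d + 2.0);
    else              h = 60.0 * ((r - g) / d + 4.0);
}

// HSV back to RGB (each 0..1).
void hsvToRgb(double h, double s, double v, double& r, double& g, double& b) {
    double c = v * s;
    double x = c * (1.0 - std::fabs(std::fmod(h / 60.0, 2.0) - 1.0));
    double m = v - c;
    double rp = 0, gp = 0, bp = 0;
    if      (h <  60.0) { rp = c; gp = x; }
    else if (h < 120.0) { rp = x; gp = c; }
    else if (h < 180.0) { gp = c; bp = x; }
    else if (h < 240.0) { gp = x; bp = c; }
    else if (h < 300.0) { rp = x; bp = c; }
    else                { rp = c; bp = x; }
    r = rp + m; g = gp + m; b = bp + m;
}
```

Brightness control is then rgbToHsv, add the up/down offset to v (clamped to [0, 1]), hsvToRgb, always starting from the stored original so the adjustment is reversible.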

Multiple scans by key

I have one 4-channel HSVL image - Hue, Saturation, Value (floats), Label (unsigned int).
The task is to compute an array of sums of Hues, Saturations, and Values, one for each unique label. For example, I would access the output as Sum[pixels with label 455] = { Hue: 500, Sat: 100, Val: 200 }. The size of the image is about 5 MP, and there are about 3000 different labels.
My idea is to do ~32 scans over parts of the image, producing 32 x nLabels partial sums, and then scan over those 32 partitions to arrive at nLabels sum structures.
Does a "scan by key?" algorithm exist that is a solution to this exact type of problem?
If you want to do this by CUDA, the following could help.
Since you only need the sum values, I think what you need is "reduce by key". Thrust provides an implementation, thrust::reduce_by_key(), which could meet your needs.
But before using it, you have to sort all the pixels by their labels. This can be done with thrust::sort_by_key().
You may also be interested in thrust::zip_iterator, which can zip the 3 channels HSV into a single value iterator for sorting and reduction.

How to resize an image in C++?

I have an image which is represented by an Array2D:
template<class T = uint8_t>
Array2D<T> mPixData[4]; ///< 3 component channels + alpha channel.
The comment is from the library; I have no further explanation. Would someone:
explain what the 3 component channels + alpha channel are about
show how I could resize this image based on mPixData
Without knowing what library this is, here is a stab in the dark:
The type definition implies that it is creating a 2D array of unsigned chars (allowing you to store values up to 255).
template<class T = uint8_t> Array2D<T>
Then, mPixData itself is an array, which implies that at each co-ordinate you have four values (bytes) to contend with: 3 for the colours (let's say RGB, but it could be something else) and 1 for alpha.
The "image" is basically this three-dimensional array. Presumably when loading stuff into it, it resizes to the input - what you need to do is find some form of resizing algorithm (not an image-processing expert myself, but I am sure Google will reveal something), which will then allow you to take this data and do what you need...
1) The 3 component channels are the Red, Green and Blue channels. The alpha channel describes image transparency.
2) There are many algorithms you can use to resize the image. The simplest is to discard extra pixels; another simple option is interpolation.
The 3 component channels represent the Red Green Blue (aka RGB) channels. The 4th channel, ALPHA, is the transparency channel.
A pixel is defined by mPixData[4]
mPixData[0] -> R
mPixData[1] -> G
mPixData[2] -> B
mPixData[3] -> A
Therefore, an image can be represented as a vector or array of mPixData[4]. As you already stated, in this case it is Array2D<T> mPixData[4];
Resize/rescale/resample an image is not a trivial process. There are lots of materials available on the web about it and I think you should consider using a library to do this. Check CxImage (Windows/Linux).
There are some code here but I haven't tested it. Check the resample() function.
The 3 channels are RGB plus the alpha channel - so the red, green and blue channels and the alpha channel. There are several methods for downscaling. You could, for example, take every 4th pixel, but the result would look quite bad; take a look at different interpolation methods, e.g.: http://en.wikipedia.org/wiki/Bilinear_interpolation.
Or if you want to use a library use: http://www.imagemagick.org/Magick++/
or, as mentioned by karlphillip:
http://www.xdp.it/cximage.htm