OpenGL texture coordinates and the precision of small floats - opengl

I'm using floats to specify texture coordinates, in the range 0-1. OpenGL likes things in this range, and I'm fine specifying coordinates this way, but I'm concerned that when I start using larger textures (say up to 4096 or 8192 pixels), I may start losing precision. For example, if I want to specify a coordinate of (1,1) in an 8192x8192px texture, that would map to 1/8192 = 0.0001220703125. That seems to evaluate to 0.000122070313 as a float though... I'm concerned that my OpenGL shader won't map that to the same pixel I intended.
I could keep the coordinates as integers in pixels for a while, but sooner or later I have to convert them (perhaps as late as in the shader itself). Is there a workaround for this, or is this something I should even be concerned about?
Multiplying it back out, I get 1.000000004096, which I guess would still be interpreted as 1? Actually, OpenGL does blending if it's not a whole number, doesn't it? Perhaps not with "nearest neighbour", but with "linear" it ought to.
1/4096f * 4096 = 1, error = 0
1/8192f * 8192 = 1.000000004096, error = 0.000000004096
1/16384f * 16384 = 1.0000000008192, error = 0.0000000008192
1/32768f * 32768 = 0.9999999991808, error = 0.0000000008192
...
1/1048576f * 1048576 = 0.9999999827968, error = 0.0000000172032
(I'm using Visual Studio's debugger to compute the float, and then multiplying it back out with Calculator)
Is the lesson here that the error is negligible for any reasonably sized texture?

That seems to evaluate to 0.000122070313 as a float though... I'm concerned that my OpenGL shader won't map that to the same pixel I intended.
You should not be concerned. Floating point is called floating point because the decimal point floats. You get ~7 significant decimal digits of precision in the mantissa, more or less regardless of how large or small the float is.
The float isn't stored as 0.000122070313; it's stored as 1.22070313x10^-4. The mantissa is 1.22070313, the exponent is -4. If the exponent were -8 instead, you would have the same precision.
Your exponent, with single-precision floats, can go down to roughly + or - 38 (in decimal terms). That is, you can have ~38 zeros between the decimal point and the first non-zero digit of the mantissa.
So no, you shouldn't be concerned.
The only things that should concern you are the precision of the interpolated value and the precision of the texture fetching. But these have nothing to do with the precision of the type you store your texture coordinates in.

Related

Representing a Roughness texture with a 16 to one compression ratio using functions or Compression losslessly

I have a black-and-white texture of bits and I want to use lossless compression, or a function, to represent the bits accurately with at least a 16-to-1 ratio.
Bit orientation:
F[100][120] = 1
That means, as a compressed form or equation, it must average 4 bits.
Here is an example function that takes few additional bits, since the constants each take 1 bit, and it covers the whole texture. It needs to be at most 60 bitwise ops per piece, with at most 16 functions for the whole texture (4 bits). A solution must do the same.
for (let q = 0; q < 122880; q++) {
    screen[122879 - q] = ((127 - (((q >> 10) % 128) + Math.abs(((q % 256) % 128) - 64) >> 1)) >> 4);
}
for (let q = 0; q < 122880; q++) {
    screen[q] = (q >> 8) % 128 + Math.abs(((q % 256) % 128) - 64) >> 1;
}
Good Bumpy
for (let q = 0; q < 122880; q++) {
    screen[q] = (((q >> 8) % 25 + Math.abs(((q % 256) % 128) - 64) >> 1) ^ 2);
}
This is an attempt at a bumpy terrain texture at 256*480.

Rendering a bit array with OpenGL

I have a bit array representing an image mask, stored in a uint8_t[] container array, in row-first order. Hence, for each byte, I have 8 pixels.
Now, I need to render this with OpenGL ( >= 3.0 ). A set bit (1) is drawn as a white pixel and a cleared bit (0) is drawn as a black pixel.
How could I do this?
The first idea that comes to mind is to develop a specific shader for this. Can anyone give some hints on that?
You definitely must write a shader for this. First and foremost you want to prevent the OpenGL implementation from reinterpreting the integer bits of your B/W bitmap as numbers in a certain range and mapping them to [0…1] floats. That means you have to load your bits into an integer image format. Since your image format is octet groups of binary pixels (byte is a rather unspecific term and can refer to any number of bits, though 8 bits is the usual), a single-channel 8-bit integer format seems the right choice. The OpenGL-3 moniker for that is GL_R8UI. Keep in mind that the "width" of the texture will be 1/8th of the actual width of your B/W image. Also, for unnormalized access you must use a usampler (for unsigned) or an isampler (for signed) (thanks @derhass for noticing that this was not properly written here).
To access individual bits you use the usual bit manipulation operators. Since you don't want your bits to become filtered, texel fetch access must be used. So, to access the binary pixel at integer location (x, y), the following would be used:
uniform usampler2D tex;

uint shift = uint(x % 8);
uint mask  = 1u << shift;
uint octet = texelFetch(tex, ivec2(x / 8, y), 0).r;  // note the required LOD argument
uint value = (octet & mask) >> shift;
The best solution would be to use a shader, but you could also hack something like this:
std::bitset<8> bits = myuint;
Then get the values of the single bits with bits.at(position) and finally do a simple point drawing.

Converting 12 bit color values to 8 bit color values C++

I'm attempting to convert 12-bit RGGB color values into 8-bit RGGB color values, but with my current method it gives strange results.
Logically, I thought that simply dividing the 12-bit values down into 8-bit values would work and be pretty simple:
// raw_color_array contains R,G1,G2,B in a Bayer pattern, with each element
// ranging from 0 to 4095
for (int i = 0; i < array_size; i++)
{
    raw_color_array[i] /= 16; // 4095 becomes 255 and so on
}
However, in practice this actually does not work. Given, for example, a small image with water and a piece of ice in it, you can see what actually happens in the conversion (rightmost image).
Why does this happen? and how can I get the same (or close to) image on the left, but as 8-bit values instead? Thanks!
EDIT: going off of @MSalters' answer, I get a better quality image but the colours are still drastically skewed. What resources can I look into for converting 12-bit data to 8-bit data without a steep loss in quality?
It appears that your raw 12 bits data isn't on a linear scale. That is quite common for images. For a non-linear scale, you can't use a linear transformation like dividing by 16.
A non-linear transform like sqrt(x*16) would also give you an 8-bit value. So would std::pow(x, 8.0/12.0).
A known problem with low-gradient images is that you get banding. If your image has an area where the original value varies from, say, 100 to 200, the 12-to-8 bit reduction will shrink those ~100 distinct values to far fewer. You get rounding, and with naive (local) rounding you get bands. Linear or non-linear, there will then be some inputs x that all map to y, and some that map to y+1. This can be mitigated by doing the transformation in floating point, and then adding a random value between -1.0 and +1.0 before rounding. This effectively breaks up the band structure.
After you clarified that this 12-bit data is only for one colour, here is my simple answer:
Since you want to convert its value to its 8-bit equivalent, it obviously means you lose some of the data (4 bits). This is the reason why you are not getting the same output.
After clarification:
If you want to retain the actual colour values, apply de-mosaicking to the 12-bit image and then scale the resulting data to 8-bit. That way the colour loss due to de-mosaicking will be less compared to the previous approach.
You say that your 12 bits represent 2^12 levels of one colour. That is incorrect. There are reds, greens and blues in your image. Look at the histogram. I made this with ImageMagick at the command line:
convert cells.jpg histogram:png:h.png
If you want 8 bits per pixel, rather than trying to blindly/statically apportion 3 bits to green, 2 bits to red and 3 bits to blue, you would probably be better off going with an 8-bit palette so you can have 250+ colours of all variations rather than restricting yourself to just 8 blues, 4 reds and 8 greens. So, like this:
convert cells.jpg -colors 254 PNG8:result.png
Here is the result of that beside the original:
The process above is called "quantisation" and if you want to implement it in C/C++, there is a writeup here.

C++ OpenGL - Colours

I'm trying to draw an 8-bit style game character (Link from Zelda) as I'm practicing OpenGL.
I've started with his face, which is the big square to the right, and have drawn his eye, which is two blocks to the right of the start of his face... (6 blocks; the 2 leftmost are an eye)
The top of the eye (the block above the green block) should be dark green (see code) but it keeps adopting the colour of the first larger block (the face).
I hope this makes sense...
Please see this picture:
What am I doing wrong for it to keep changing its colour?
I'm assuming I need to do something more for it to accept RGB colours? glColor3f(29, 137, 59);...
glColor3f accepts floating-point arguments. The large numbers will be converted to floats and become 29.0f, 137.0f and 59.0f. Given that colours are represented in the range 0-1, these get clamped to 1.0 and, of course, appear white (1.0, 1.0, 1.0).
Use glColor3ub instead. It accepts unsigned bytes as its arguments, which are in the range 0-255, which is probably what you're most used to. There are other forms such as glColor3i, glColor3s, glColor3ui, glColor3us etc., which accept integers and shorts (and their unsigned variants), defined over the full range of those types. These simply get mapped to the 0-1 range internally (e.g. value = i / INT_MAX).

compact representation and delivery of point data

I have an array of point data; the values of the points are represented as an x coordinate and a y coordinate.
There could be in the range of 500 up to 2000 points or more.
The data represents a motion path which could range from the simple to the very complex, and can also have cusps in it.
Can I represent this data as one spline, a collection of splines, or some other format with very tight compression?
I have tried representing them as a collection of Béziers, but at best I am getting a saving of 40%.
For instance, if I have an array of 500 points, that gives me 500 x and 500 y values, so I have 1000 pieces of data.
I get around 100 quadratic Béziers from this. Each Bézier is represented as controlx, controly, anchorx, anchory,
which gives me 100 x 4 = 400 pieces of data.
So input = 1000 pieces, output = 400 pieces.
I would like to tighten this further; any suggestions?
By its nature, a spline is an approximation. You can reduce the number of splines you use to reach a higher compression ratio.
You can also achieve lossless compression by using some kind of encoding scheme. I am just making this up as I type, using the range example in the other answer (1000 for x and 400 for y):
Each point only needs 19 bits (10 for x, 9 for y), so you can use 3 bytes to represent a full coordinate.
Use 2 bytes to represent a displacement of up to +/- 63.
Use 1 byte to represent a short displacement of up to +/- 7 for x and +/- 3 for y.
To decode the sequence properly, you would need some prefix to identify the type of encoding. Let's say we use 110 for full point, 10 for displacement and 0 for short displacement.
The bit layout will look like this,
Coordinates: 110xxxxxxxxxxxyyyyyyyyyy
Displacement: 10xxxxxxxyyyyyyy
Short Displacement: 0xxxxyyy
Unless your sequence is totally random, you can easily achieve high compression ratio with this scheme.
Let's see how it works using a short example.
3 points: A(500, 400), B(550, 380), C(545, 381)
Let's say you were using 2 bytes for each value; it would take 12 bytes (4 per point) to encode these 3 points without compression.
To encode the sequence using the compression scheme,
A is first point so full coordinate will be used. 3 bytes.
B's displacement from A is (50, -20) and can be encoded as displacement. 2 bytes.
C's displacement from B is (-5, 1) and it fits the range of short displacement 1 byte.
So you save 6 bytes out of 12. The real compression ratio depends entirely on the data pattern. It works best on points forming a motion path. If the points are random, only a 25% saving can be achieved (a 3-byte full coordinate instead of 4 bytes).
If, for example, you use 32-bit integers for point coords and there is a range limit, like x: 0..1000, y: 0..400, you can pack (x, y) into a single 32-bit variable.
That way you achieve another 50% compression.
You could do a frequency analysis of the numbers you are trying to encode and use varying bit lengths to represent them; of course, here I am vaguely describing Huffman coding.
Firstly, only keep as many decimal places in your data as you actually need. Removing these reduces your accuracy, but it's a calculated loss. To do that, try converting your number to a string, locating the dot's position, and cutting off that many characters from the end. That could process faster than the math, IMO. Lastly, convert it back to a number.
150.234636746 -> "150.234636746" -> "150.23" -> 150.23
Secondly, try storing your data relative to the last number ("relative values"): basically, subtract the previous number from the current one. Later, to "decompress" it, you keep an accumulator variable and add the values up.
 A    A    A       A   R   R    (A = absolute, R = relative)
150, 200, 250 -> 150, 50, 50