Uint8ClampedList in Dart

I am currently playing with Dart and especially dart:typed_data. I stumbled across a class where I have no idea what its purpose/speciality is. I am speaking of Uint8ClampedList. The only difference from Uint8List in the documentation is the sentence
Indexed store clamps the value to range 0..0xFF.
What does that sentence actually mean? Why does this class exist? I am really curious.

"Clamping" means that values below 0 become 0 and values above 0xff become 0xff when stored into the Uint8ClampedList. Any value in the range 0..0xff can be stored into the list without change.
This differs from other typed lists where the value is instead just truncated to the low 8 (or 16 or 32) bits.
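For illustration (in C++ rather than Dart, but the semantics are the same), here is a minimal sketch of the two storage behaviors, using C++17's std::clamp: the plain unsigned store keeps only the low 8 bits, while the clamped store saturates at 0 and 0xff.

#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
    const int values[] = { -20, 100, 300 };
    for (int v : values) {
        uint8_t truncated = static_cast<uint8_t>(v);                      // Uint8List-style: keep low 8 bits
        uint8_t clamped   = static_cast<uint8_t>(std::clamp(v, 0, 255));  // Uint8ClampedList-style: saturate to 0..0xff
        std::printf("%4d -> truncated %3d, clamped %3d\n", v, truncated, clamped);
    }
}

With these inputs, -20 truncates to 236 but clamps to 0, and 300 truncates to 44 but clamps to 255.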
The clamped list (and the name itself) mirrors the Uint8ClampedArray of JavaScript.
One usage of clamping that I have seen is for RGB(A) color images, where over-saturated colors (e.g., an R value > 255) are capped at the maximum value instead of wrapping around and becoming dark. It allows you to make some transformations on the values without having to care about handling overflow. See the Uint8ClampedArray specification - it was introduced to have an array type matching the behavior of an existing canvas type.

Related

Why does QColor use 32-bit signed int to represent e.g. rgba values?

QColor can return rgba values of type int (32-bit signed integer). Why is that? The color values range from 0-255, don't they? Is there any situation where this might not be the case?
I'm considering implicitly casting each of the rgba values returned by QColor.red()/green()/blue()/alpha() to quint8. It seems to work, but I don't know if this will lead to problems in some cases. Any ideas?
I assume you are talking about QColor::rgba() which returns a QRgb.
QRgb is an alias for unsigned int. In these 32 bits all four channels are encoded as #AARRGGBB, 8 bits each (0-255, as you mentioned). So, a color like alpha=32, red=255, green=127, blue=0 would be 0x20FF7F00 (553615104 in decimal).
Now, regarding your question about casting to quint8, there should be no problem since each channel is guaranteed to be in the range 0..255 (reference). In general, Qt uses int as a general-purpose integer and does not pay too much attention to the width of the data type, except in some specific situations (when a particular memory layout or access is required, for example). So, do not worry about that.
Now, if these operations are done frequently in a high-performance context, think about retrieving the 32 bits once using QColor::rgba and then extracting the components from it. You can access the individual channels using bitwise operations, or through the convenience functions qAlpha, qRed, qBlue and qGreen.
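For illustration, here is a small sketch of both approaches (assuming a Qt build; including <QColor> pulls in the QRgb helpers, and the bitwise lines show the equivalent manual extraction):

#include <QColor>
#include <cstdio>

int main() {
    QColor c(255, 127, 0, 32);      // red, green, blue, alpha
    QRgb rgb = c.rgba();            // packed as #AARRGGBB -> 0x20FF7F00

    // Convenience accessors...
    int r = qRed(rgb), g = qGreen(rgb), b = qBlue(rgb), a = qAlpha(rgb);

    // ...and the equivalent bitwise extraction.
    int r2 = (rgb >> 16) & 0xFF;
    int g2 = (rgb >>  8) & 0xFF;
    int b2 = rgb & 0xFF;
    int a2 = (rgb >> 24) & 0xFF;

    std::printf("0x%08X  r=%d g=%d b=%d a=%d  (bitwise: %d %d %d %d)\n",
                rgb, r, g, b, a, r2, g2, b2, a2);
}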
For completeness, note that the sibling QColor::rgb method returns the same structure but with the alpha channel set to opaque (0xFF). You also have QColor::rgba64, which returns a QRgba64. It uses 16 bits per channel, for higher precision. There are 64-bit equivalents of qAlpha etc., such as qAlpha64, and so on.

How can I get a random uint or the last digit of float in HLSL/GLSL?

I just need a random uint, ideally ranging from 0 to 6, but there is no enumeration type in OpenGL. I learned that I can get a random float ranging from 0 to 1 from the code below:
frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453123)
I tried to do 1/(the value above) and take floor(), but it doesn't work. Then how can I get a random int? Or is there a way to get the last digit of the float (so presumably still random)?
First, let's define what we mean by "random". In the context of this answer, a "random" variable is a variable whose values are unpredictable. That is, there is no function that determines/computes an outcome for the random variable when being evaluated (with any possible inputs). Or at least, no such function has been found (yet).
Obviously, when we are talking about computing here, there is no such thing as a true random variable as described above, because anything we do in computing (and by extension in a shader) is necessarily bound to the set of functions that are computable.
Your proposed function in the question:
f(uv) = frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453123)
is just a computable function. It takes as input a vector uv, which itself is a deterministic/computable value - such as derived from a built-in or custom varying variable giving you the "coordinates" of the current fragment.
After evaluation, the function's result itself is computable/deterministic and happens to be the value that the input vector uv maps to. Setting aside different IEEE 754 rules and precisions (which may vary between different GPUs, such as desktop and mobile ones), the function itself is purely deterministic/computable and therefore does not give you a random value.
We humans may think that the output is random, because we lack the intuition for the functions used to compute the result, such that when we "see" a number 0.623513632 followed by another number 0.9734126 for only slight variations in the input vector, we could draw the conclusion that "yeah, that looks pretty random", when in fact it obviously isn't. It is just what that function computed, given two input values.
So, when you already have a deterministic function like the above and want to obtain values in the range [0, 6] from it as a GLSL uint, you can simply scale the output of said function by multiplying the function's result with 7.0 and truncating the result:
g(uv) = uint(f(uv) * 7.0)
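As a side note, you can reproduce f and g on the CPU to inspect their output; here is a C++ sketch (GPU float precision and sin implementations may differ, so treat the exact numbers as illustrative only):

#include <cmath>
#include <cstdio>

float f(float u, float v) {
    float d = u * 12.9898f + v * 78.233f;     // dot(uv, float2(12.9898, 78.233))
    float s = std::sin(d) * 43758.5453123f;
    return s - std::floor(s);                 // frac: fractional part in [0, 1)
}

unsigned g(float u, float v) {
    return static_cast<unsigned>(f(u, v) * 7.0f);   // scale and truncate -> 0..6
}

int main() {
    for (int i = 0; i < 8; ++i)
        std::printf("g(%.1f, %.1f) = %u\n", i * 0.1f, i * 0.2f, g(i * 0.1f, i * 0.2f));
}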
If you want to obtain true random numbers drawn from a random variable (whose deterministic function simply hasn't been found yet), you can obtain such values from a physical entropy source like atmospheric noise or background radiation (random.org, for example, uses atmospheric noise) and use that as an input to your shader (such as via textures or buffer objects).
But, from a computational perspective, a shader is just a function taking in values (ints, floats, ...) and computing (by means of computable functions) a deterministic result.
All we can do is to shuffle/scramble/diffuse the input bits in such a way, that the result "looks" like random to us. We then call these "pseudo-random" values.
Taking this a step further, we could now ask the question of the distribution quality of the obtained pseudo-random values. This has two aspects:
how evenly distributed are the pseudo-random values in their domain/interval? I.e. do all possible values have the same probability of occurring? Or: do you even want uniformly-distributed values, or should the values follow another distribution (like a Gaussian)?
how well are two values drawn from two sequential input values spaced apart? I.e. what is the frequency of the pseudo-random values?
There are different (deterministic) algorithms/functions depending on which distribution and which frequency spectrum your values should have. But first, you should define an answer to the two questions for your use-case.
And by the way, the commonly used function in your question to obtain pseudo-random numbers in a shader has a terrible distribution quality.
Last but not least, it should also be mentioned that true randomness (i.e. non-determinism), like when you do use an entropy source as input values, is oftentimes an undesirable property in computation, because it:
makes it difficult to repeat the same computation/output when needed, which various algorithms rely on, for example in the context of path tracing
makes it difficult to reproduce/debug/inspect your function for a particular run when every following execution/run will yield a different output

designing a large compressible binary float in Fortran

I would like to design a "fill" value for 4 and 8 byte float variables in a netcdf file to represent a special case of missing data. There is already a constant NF_FILL_FLOAT and my understanding about its design is that besides being a large, strange value it has a very compressible bit pattern. I believe it is different than huge(x). I already use NF_FILL_FLOAT to fill missing values -- my value has to be distinguishable. How do I go about this? What are the considerations for compression? Thanks.
What about taking NF_FILL_FLOAT and dividing it by 2**n, where n is an integer >= 0? This would give you "n" individual "fill values", and the division essentially just lowers the floating-point exponent by n while leaving the mantissa bits untouched, so the result should still be compressible.
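A quick C++ check of that idea (assuming the netCDF default float fill value of 9.9692099683868690e+36, i.e. NF_FILL_FLOAT/NC_FILL_FLOAT): each division by 2 only changes the exponent field, so the stored bit pattern stays just as regular.

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float fill = 9.9692099683868690e+36f;   // netCDF default fill for float
    for (int n = 0; n <= 3; ++n) {
        uint32_t bits;
        std::memcpy(&bits, &fill, sizeof bits);
        std::printf("fill / 2**%d = %g  bits = 0x%08X\n", n, fill, (unsigned) bits);
        fill /= 2.0f;
    }
}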
(Of course, if you write out to a netcdf, then external software is only designed to recognise one value of MISSING, so your fill values would not be recognised).

What's the use case of glTexParameterIiv and glTexParameterIuiv?

The OpenGL documentation says very little about these two functions. When would it make sense to use glTexParameterIiv instead of glTexParameteriv or even glTexParameterfv?
If the values for GL_TEXTURE_BORDER_COLOR are specified with glTexParameterIiv or glTexParameterIuiv, the values are stored unmodified with an internal data type of integer. If specified with glTexParameteriv, they are converted to floating point with the following equation: f = (2c + 1) / (2^b - 1). If specified with glTexParameterfv, they are stored unmodified as floating-point values.
You sort of answered your own question with the snippet you pasted. Traditional textures are fixed-point (unsigned normalized, where values like 255 are converted to 1.0 through normalization), but GL 3.0 introduced integral (signed / unsigned integer) texture types (where integer values stay integers).
If you had an integer texture and wanted to assign a border color (for use with the GL_CLAMP_TO_BORDER wrap mode), you would use one variant of those two functions (depending on whether you want signed or unsigned).
You cannot filter integer textures, but you can still have texture coordinate wrap behavior. Since said textures are integer and glTexParameteriv (...) normalizes the color values it is passed, an extra function had to be created to keep the color data integer.
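A minimal sketch of that situation (assuming a GL 3.0+ context and function loader are already set up; the helper function below is just for illustration):

// Create a signed-integer texture and give it an integer border color.
GLuint makeIntegerTexture(int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32I, w, h, 0,
                 GL_RGBA_INTEGER, GL_INT, nullptr);                     // integral texture format

    const GLint border[4] = { 0, 0, 0, 0 };
    glTexParameterIiv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);  // stored as integers, not normalized

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // integer textures cannot be filtered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}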
You will find this same sort of thing with glVertexAttribIPointer (...) and so forth; adding support for integer data (as opposed to simply converting integer data to floating-point) to the GL pipeline required a lot of new commands.

Hash 16-bit integer to a 256-bit space efficiently

It sounds weird to be going bigger, but that's what I'm trying to do. I want to take the entire sequence of 16-bit integers and hash each one in such a way that it maps to 256-bit space uniformly.
The reason for this is that I'm trying to put a subset of the 16-bit number space into a 256-bit bloom filter, for fast membership testing.
I could use some well-known hashing function on each integer, but I'm looking for an extremely efficient implementation (just a few instructions) so that this runs well in a GPU shader program. I feel like the fact that the hash input is known to be only 16 bits could somehow inform how the hash function is designed, but I am failing to see the solution.
Any ideas?
EDITS
Based on the responses, my original question is confusing. Sorry about that. I will try to restate it with a more concrete example:
I have a subset S1 of n numbers from the set S, which is in the range (0, 2^16-1). I need to represent this subset S1 with a 256-bit bloom filter constructed with a single hashing function. The reason for the bloom filter is a space consideration. I've chosen a 256-bit bloom filter because it fits my space requirements, and has a low enough probability of false positives. I'm looking to find a very simple hashing function that can take a number from set S and represent it in 256 bits such that each bit has roughly equal probability of being 1 or 0.
The reason for the requirement of simplicity in the hashing function is that this hashing function is going to have to run thousands of times per pixel, so anywhere where I can trim instructions is a win.
If you multiply (using uint32_t) a 16 bit value by prime (or for that matter any odd number) p between 2^31 and 2^32, then you "probably" smear the results fairly evenly across the 32 bit space. Then you might want to add another prime value, to prevent 0 mapping to 0 (you want each bit to have an equal probability of being 0 or 1, only one input value in 2^256 should have output all zeros, and since there are only 2^16 inputs that means you want none of them to have output all zeros).
So that's how to expand 16 bits to 32 with one operation (plus whatever instructions are needed to load the constant). Use eight different values p1 ... p8 to get 256 bits, and run some tests with different p values to find good ones (i.e. those that produce not too many more false positives than what you expect for your Bloom filter given the size of the set you're encoding and assuming an ideal hashing function). For example, I'm pretty sure -1 is a bad p-value.
No matter how good the values, you'll see some correlations, though: for example, as I've described it above, the lowest bit of all 8 separate values will be equal, which is a pretty serious dependency. So you probably want a couple more "mixing" operations. For example, you might say that each byte of the final output shall be the XOR of two of the bytes of what I've described (and not two least-significant bytes!), just to get rid of the simple arithmetic relations.
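A rough C++ sketch of the scheme described above (the multiplier and additive constants are arbitrary odd placeholders in the 2^31..2^32 range, not tested values, and the final XOR pass is just one possible mixing step):

#include <array>
#include <cstdint>

std::array<uint32_t, 8> expand256(uint16_t x)
{
    // Eight odd 32-bit multipliers/addends; placeholders only, test your own.
    static const uint32_t p[8] = { 0x9E3779B1u, 0x85EBCA77u, 0xC2B2AE3Du, 0xA7D4EB2Fu,
                                   0x965667B1u, 0xFD7046C5u, 0xB55A4F09u, 0xCF1BBCDDu };
    static const uint32_t q[8] = { 0x8545F491u, 0x8F7011EBu, 0x9B873593u, 0xCC9E2D51u,
                                   0xE6546B65u, 0xE8E31DA5u, 0xB74425B7u, 0xCCF5AD43u };
    std::array<uint32_t, 8> h;
    for (int i = 0; i < 8; ++i)
        h[i] = x * p[i] + q[i];                 // smear the 16 input bits across 32 bits
    for (int i = 0; i < 8; ++i)
        h[i] ^= h[(i + 1) % 8] >> 16;           // crude extra mixing to break lane correlations
    return h;
}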
Unless I've misunderstood the question, though, this is not how a Bloom filter usually works. Usually you want your hash to produce an exact fixed number of set bits for each input, and all the arithmetic to compute the false positive rate relies on this. That's why for a Bloom filter 256 bits in size you'd normally have k 8-bit hashes, not one 256-bit hash. k is normally rather less than half the size of the filter in bits (the optimal value is the number of bits per value in the filter, times ln(2) which is about 0.7). So normally you don't want the probability of each bit being 1 to be anything like as high as 0.5.
The reason is that once you've ORed as few as 4 such 256-bit values together, almost all the bits in your filter are set (15 in 16 of them). So you're looking at a lot of false positives already.
But if you've done the math and you're happy with a single hash function producing a variable number of set bits averaging half of them, then fair enough. Or is the double-occurrence of the number 256 just a coincidence, because k happens to be 32 for the set size you have chosen and you're actually using the 256-bit hash as 32 8-bit hashes?
[Edit: your comment clarifies this, but anyway k should not get so high that you need 256 bits of hash in total. Clearly there's no point in this case using a Bloom filter with more than 16 bits per value (i.e. fewer than 16 values), since using the same amount of space you could just list the values and have a false positive rate of 0. A filter with 16 bits per value gives a false positive rate of something like 1 in 2200. Even there, optimal k is only about 11, that is, you should set 11 bits in the filter for each value in the set. If you expect the sets to be bigger than 16 values then you want to set fewer bits for each element, and you'll get a higher false positive rate.]
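A quick numeric sanity check of those figures (m = 256 filter bits, n values in the set; the formulas are the standard optimal-k and false-positive-rate approximations for Bloom filters):

#include <cmath>
#include <cstdio>

int main() {
    const double m = 256.0;                       // filter size in bits
    const int ns[] = { 8, 16, 32, 64 };
    for (int n : ns) {
        double k   = (m / n) * std::log(2.0);     // optimal number of hashes ~ (bits per value) * ln 2
        double fpr = std::pow(0.5, k);            // false positive rate at optimal k
        std::printf("n=%2d  bits/value=%4.1f  k ~ %4.1f  fpr ~ 1 in %.0f\n",
                    n, m / n, k, 1.0 / fpr);
    }
}

For n = 16 this prints roughly k ~ 11 and a false positive rate of about 1 in 2200, matching the numbers above.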
I believe there is some confusion in the question as posed. I will first try to clear up any inconsistencies I've noticed above.
OP originally states that he is trying to map a smaller space into a larger one. If this were truly the case, then the use of the bloom filter algorithm would be unnecessary. Instead, as has been suggested in the comments above, the identity function is the only "hash" function necessary to set and test each bit. However, I assert that this is not really what the OP is looking for. If it were, then the OP would have to be storing 2^256 bits in memory (based on how the question is stated) in order for the space of 16-bit integers (i.e. 2^16) to be smaller than his set size; this is an unreasonable amount of memory to be using and is highly unlikely to be the case.
Therefore, I make the assumption that the problem constraints are as follows: we have a 256-bit bit vector in which we want to map the space of 16-bit integers. That is, we have 256 bits available to map 2^16 possible different integers. Thus, we are not actually mapping into a larger space, but, instead, a much smaller space. Similarly, it does appear (again, as previously pointed out in the comments above) that the OP is requesting a single hash function. If this is the case, there is clear misunderstanding about how bloom filters work.
Bloom filters typically use a set of independent hash functions to reduce false positives. Without going into too much detail, every input to the bloom filter runs through all n hash functions, and then the resulting index in the bit vector is tested for each function. If all indices tested are set to 1, then the value may be in the set (collisions or overlap across all n hash functions will produce false positives). Moreover, if any of the indices is set to 0, then the value is absolutely not in the set. With this in mind, it is important to notice that an entirely saturated bloom filter has no benefit. That is, every query to the bloom filter will return that the item is in the set.
Hash Function Concerns
Now, back to the OP's original question. It is likely going to be best to use known hashing algorithms (since these are difficult to design well and "rolling your own" typically doesn't end well). If you are worried about efficiency down to clock cycles, implement the algorithm yourself in the appropriate assembly language for your architecture to reduce running time for each hash function. Remember, algorithmically, hash functions should run in O(1) time, so they should not contribute too much overhead if implemented properly. To start you off, I would recommend considering the modified Bernstein hash. I have written a version for your specific case below (mostly for example purposes):
unsigned char modified_bernstein(short key)
{
    unsigned ret = key & 0xff;                    // start from the low byte
    ret = 33 * ret ^ ((unsigned short) key >> 8); // mix in the high byte (cast avoids sign-extending negative keys)
    return ret % 256;                             // modulo keeps the result in range [0, 256)
}
The Bernstein method I have adapted generally runs as a function of the number of bytes of the input. Since a short type is 2 bytes, or 16 bits, I have removed any variables and loops from the algorithm and simply performed some bit twiddling to get at each byte. Finally, an unsigned char can only hold a value in the range [0, 256), which forces the hash function to return a valid index into the bit vector.
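To tie it back to the bloom filter, here is a hedged usage sketch: k index functions derived from the Bernstein idea above drive a 256-bit filter. The per-k multiplier tweak and the value K = 11 are illustrative choices (K follows the optimal-k rule of thumb for roughly 16 values), not tuned recommendations.

#include <bitset>
#include <cstdint>

// One 8-bit index per (key, k) pair; varying the multiplier gives k different hashes.
static unsigned bernstein_index(uint16_t key, unsigned k)
{
    unsigned ret = key & 0xFFu;
    ret = (33u + 2u * k) * ret ^ (key >> 8);   // multiplier stays odd for every k
    return ret & 0xFFu;                        // index in [0, 256)
}

struct Bloom256 {
    std::bitset<256> bits;
    static constexpr unsigned K = 11;          // roughly (bits per value) * ln 2 for ~16 values

    void add(uint16_t key) {
        for (unsigned k = 0; k < K; ++k) bits.set(bernstein_index(key, k));
    }
    bool maybeContains(uint16_t key) const {
        for (unsigned k = 0; k < K; ++k)
            if (!bits.test(bernstein_index(key, k))) return false;  // definitely not in the set
        return true;                                                // possibly in the set
    }
};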