Reinterpret float bits as int in GLSL - opengl

I need the GLSL equivalent of
float f;
int i = *(int *)&f;
I know that there is floatBitsToInt(), but unfortunately it is only available in GLSL 3.30 (OpenGL 3.3) and above. I need it for 3.0.
Any thoughts?
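One possible workaround is to rebuild the bit pattern by hand from sign, exponent, and mantissa. The sketch below is untested; floatBitsToIntCompat is a name I made up, integer bitwise operators are assumed to be available (they are in GLSL 1.30 and GLSL ES 3.00), and only finite, normalized inputs are handled:
precision highp float;
precision highp int;

int floatBitsToIntCompat(float f) {
    if (f == 0.0) return 0;                        // ignores -0.0
    int sign = f < 0.0 ? 1 : 0;
    float a = abs(f);
    int exponent = int(floor(log2(a)));            // unbiased exponent
    float significand = a / exp2(float(exponent)); // should land in [1, 2)
    if (significand >= 2.0) { significand *= 0.5; exponent += 1; } // guard against
    if (significand < 1.0)  { significand *= 2.0; exponent -= 1; } // log2 rounding
    int mantissa = int((significand - 1.0) * 8388608.0);           // 8388608 = 2^23
    return (sign << 31) | ((exponent + 127) << 23) | mantissa;
}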

Related

Check int multiplication overflow in GLSL

I know that in C or C++ you can see how much a multiplication overflowed by widening to a long long, e.g.:
#include <array>

std::array<int, 2> multiply(int a, int b) {
    long long r = (long long)a * b; // widen before multiplying, or the int product overflows first
    int result = (int)r;            // low 32 bits
    int overflow = (int)(r >> 32);  // high 32 bits
    return {result, overflow};
}
However, in GLSL, there are no 64 bit integers. Is there a way to achieve the same result in GLSL without longs?
Context: GLSL ES 3.00, running in the browser via WebGL 2
You would have to break up the multiplication into pieces yourself.
Here is an algorithm that tries to do this for 64-bit multiplication: https://stackoverflow.com/a/51587262/736162
The same principles apply to 32-bit. You would need to change the code in a few ways:
Halve the sizes of things: instead of 64-bit types, use 32-bit types; halve the shift constants; and instead of truncating from 64-bit to 32-bit with a (uint32_t) cast, mask from 32-bit to 16-bit with & 0xffff.
Make it valid GLSL 3.0 (e.g. use an out parameter instead of &, and uint instead of uint32_t).
I haven't tested this, but I think it should work.
By the way, you'll want to be sure you have highp precision on everything; precision highp int; at the top of your shader is enough.
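For concreteness, here is one way the halved version might look: an untested sketch along the lines of the linked answer (mul64 is my name), computing the full 64-bit product of two uints as a (hi, lo) pair. Signed inputs would need extra handling on top of this:
uvec2 mul64(uint a, uint b) {
    uint aLo = a & 0xffffu, aHi = a >> 16;
    uint bLo = b & 0xffffu, bHi = b >> 16;
    uint lolo = aLo * bLo;                  // each partial product fits in 32 bits
    uint lohi = aLo * bHi;
    uint hilo = aHi * bLo;
    uint hihi = aHi * bHi;
    uint mid = lohi + hilo;                 // may wrap around
    uint midCarry = mid < lohi ? 1u : 0u;   // wrap means a full 2^32 carries into hi
    uint lo = lolo + (mid << 16);
    uint loCarry = lo < lolo ? 1u : 0u;     // carry out of the low word
    uint hi = hihi + (mid >> 16) + (midCarry << 16) + loCarry;
    return uvec2(hi, lo);
}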

ptxas "double is not supported" warning when using thrust::sort on a struct array

I'm trying to sort an array of structs on my GPU with thrust::sort. However, when I compile with nvcc, I get this warning:
ptxas /tmp/tmpxft_00005186_00000000-5_antsim.ptx, line 1520; warning : Double is not supported. Demoting to float
I've isolated the problem to my call to thrust::sort, here:
thrust::sort(thrustAnts, thrustAnts + NUM_ANTS, antSortByX());
thrustAnts is an array of Ant structs located on the GPU, while antSortByX is a functor as defined below:
typedef struct {
    float posX;
    float posY;
    float direction;
    float speed;
    u_char life;
    u_char carrying;
    curandState rngState;
} Ant;

struct antSortByX {
    __host__ __device__ bool operator()(Ant &antOne, Ant &antTwo) {
        return antOne.posX < antTwo.posX;
    }
};
It seems to me as though there aren't any doubles in this, though I'm suspicious the less-than operator in my functor evaluates those floats as doubles. I can solve this problem by compiling with -arch sm_13, but I'm curious as to why this is complaining at me in the first place.
The demotion happens because CUDA devices only support double-precision calculations starting with compute capability 1.3. NVCC knows the device specifications and demotes every double to float for devices with CC < 1.3, simply because the hardware cannot handle double precision.
A good feature list can be found on Wikipedia: CUDA.
That you can't see any doubles in this code doesn't mean they aren't there. Most commonly this warning results from a missing f suffix on a floating-point constant: the compiler performs an implicit conversion to double when a double is part of an expression, and a floating-point constant without the f is a double, so the conversion kicks in. For the less-than operator alone, however, no such cast should happen when no constant expression is involved.
I can only speculate, but since you only pass a user-supplied functor to a higher-order function (a function that takes functions as parameters), it seems to me that a double-precision value could be used inside the thrust::sort implementation itself.
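As an illustration of the usual culprit (a made-up device function, not code from the question):
__device__ float scale(float x) {
    return x * 0.5;    // 0.5 is a double: x is promoted and the multiply runs in double
    // return x * 0.5f; // the all-float version produces no demotion warning
}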

GLSL(330) modulo returns unexpected value

I am currently working with GLSL 330 and came across some odd behavior of the mod() function.
I'm working under Windows 8 with a Radeon HD 6470M. I cannot reproduce this behavior on my desktop PC, which runs Windows 7 with a GeForce GTX 260.
Here is my test code:
float testvalf = -126;
vec2 testval = vec2(-126, -126);
float modtest1 = mod(testvalf, 63.0); //returns 63
float modtest2 = mod(testval.x, 63.0); //returns 63
float modtest3 = mod(-126, 63.0); //returns 0
Edit:
Here are some more test results done after IceCools suggestion below.
int y = 63;
int inttestval = -126;
ivec2 intvectest = ivec2(-126, -126);
float floattestval = -125.9;
float modtest4 = mod(inttestval, 63); //returns 63
vec2 modtest5 = mod(intvectest, 63); //returns vec2(63.0, 63.0)
float modtest6 = mod(intvectest.x, 63); //returns 63
float modtest7 = mod(floor(floattestval), 63); //returns 63
float modtest8 = mod(inttestval, y); //returns 63
float modtest9 = mod(-126, y); //returns 63
I updated my drivers and tested again, same results. Once again, not reproducible on the desktop.
According to the GLSL docs on mod the possible parameter combinations are (GenType, float) and (GenType, GenType) (no double, since we're < 4.0). Also the return type is forced to float but that shouldn't matter for this problem.
I don't know if you did it intentionally, but -126 is an int, not a float, and the code might not be doing what you expect.
By the way about the modulo:
Notice that 2 different functions are called:
The first two lines:
float mod(float, float);
The last line:
int mod(int, float);
If I'm right, mod is calculated like:
genType mod(genType x, float y){
    return x - y*floor(x/y);
}
Now note that if x/y evaluates to exactly -2.0 the result is 0, but if it evaluates to -2.00000001 then floor() yields -3.0 and 63.0 is returned. That difference between int/float and float/float division is not impossible.
So the reason might simply be the fact that you are mixing ints and floats.
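Plugging the numbers from the question into that formula makes both outcomes concrete (plain arithmetic, not actual driver output; -2.000001 stands in for a division result one representable step below -2.0):
float exact  = -126.0 - 63.0 * floor(-2.0);       // floor is -2.0, result  0.0
float offOne = -126.0 - 63.0 * floor(-2.000001);  // floor is -3.0, result 63.0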
I think I have found the answer.
One thing I've been wrong about: genType in the GLSL docs doesn't mean a generic type, like a C++ template parameter.
genType is shorthand for float, vec2, vec3, and vec4 (see the link - Ctrl+F "genType").
Btw genType naming is like:
genType - floats
genDType - doubles
genIType - ints
genBType - bools
Which means that genType mod(genType, float) implies that there is no function like int mod(int, float).
All the code above has been calling float mod(float, float) (thankfully there is implicit type conversion for function parameters, so mod(int, int) works too, but it is actually float mod(float, float) that gets called).
Just as a proof:
int x = mod(-126, 63);
Doesn't compile: error C7011: implicit cast from "float" to "int"
It fails only because the return value is float; this version works:
float x = mod(-126, 63);
Therefore float mod(float, float) is called.
So we are back at the original problem:
float division is inaccurate
int-to-float conversion is inaccurate
It shouldn't be a problem on most GPUs, as floats are considered equal if the difference between them is less than 10^-5 (this may vary with hardware, but it is the case for my GPU). So floor(-2.0000001) is -2. Highp floats are far more accurate than this.
Therefore either you are not using highp floats (precision highp float; should fix it then), or your GPU has a stricter limit for float equality, or some of the functions return a less accurate value.
If all else fails try:
#extension BlackMagic : enable
Maybe some driver setting is forcing the default float precision to be mediump.
If this happens, all your declared variables will be mediump; however, number literals typed into the code will still remain highp.
Consider this code:
precision mediump float;
float x = 0.4121551, y = 0.4121552;
x == y; // true
0.4121551 == 0.4121552; // false, as highp they still differ.
So mod(-126, 63.0) could still be precise enough to return the correct value, since it works with high-precision literals; but if you pass in a variable (as in all the other cases), it will only be mediump, and the function won't have enough precision to calculate the correct value. Looking at your tests, that is exactly what is happening:
All the calls that take at least one variable are not precise enough;
the only call that takes two literal numbers returns the correct value.
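If that turns out to be the cause, forcing high precision explicitly should make the variable-based calls agree with the literal-based one. A sketch, assuming the driver honors the qualifiers:
precision highp float;
precision highp int;
// With genuinely highp inputs, the variable case should now match the literal case:
float testvalf = -126.0;
float modtest1 = mod(testvalf, 63.0); // expected 0.0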

GLSL texelFetchOffset works with isampler2D but not usampler2D?

In a fragment shader, the following compiles fine:
uniform isampler2D testTexture;
/* in main() x, y, xoff and yoff are declared as int and assigned here, then... */
int tmp = texelFetchOffset(testTexture, ivec2(x, y), 0, ivec2(xoff, yoff)).r;
However, the following does not compile:
uniform usampler2D testTexture;
/* in main() x, y, xoff and yoff are declared as uint and assigned here, then... */
uint tmp = texelFetchOffset(testTexture, uvec2(x, y), 0, uvec2(xoff, yoff)).r;
The OpenGL 4.2 driver gives the following compiler error message:
error C1115: unable to find compatible overloaded function "texelFetchOffset(usampler2D, uvec2, int, uvec2)"
This is Nvidia's Linux driver 290.* for a Quadro 5010M -- but I'm wondering if I made a (beginner) mistake and was not working to spec somehow here?
The texelFetchOffset function that takes a usampler2D still takes an ivec2 as its texture coordinates and offset. The u only applies to the sampler type and return value; not everything about the function becomes unsigned.
And remember: OpenGL doesn't allow implicit conversions between unsigned and signed integer types.
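So the failing call from the question should compile once the coordinate and offset vectors are signed. A sketch (note the GLSL spec additionally requires the offset argument to be a constant expression, so a strict compiler may reject variable offsets even in the isampler2D version):
uniform usampler2D testTexture;
/* x, y, xoff and yoff may stay uint; only the vectors must be ivec2. */
uint tmp = texelFetchOffset(testTexture, ivec2(int(x), int(y)), 0,
                            ivec2(int(xoff), int(yoff))).r;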

Int or Unsigned Int to float without getting a warning

Sometimes I have to convert from an unsigned integer to a float. For example, my graphics engine exposes SetScale(float x, float y, float z), which takes floats, while my object stores its size as an unsigned int. I want to convert the unsigned int to a float to scale the entity properly (the example is very specific, but I hope you get the point).
Now, what I usually do is:
unsigned int size = 5;
float scale = float(size);
My3DObject->SetScale(scale , scale , scale);
Is this good practice at all, under certain assumptions (see Notes)? Is there a better way than to litter the code with float()?
Notes: I cannot touch the graphics API; I have to use the SetScale() function, which takes floats. I also cannot touch the size; it has to be an unsigned int. I am sure there are plenty of other examples with the same 'problem'. The above applies to any conversion that needs to be done where you as a programmer have little choice in the matter.
My preference would be to use static_cast:
float scale = static_cast<float>(size);
but what you are doing is functionally equivalent and fine.
There is an implicit conversion from unsigned int to float, so the cast is strictly unnecessary.
If your compiler issues a warning, then there isn't really anything wrong with using a cast to silence the warning. Just be aware that if size is very large it may not be representable exactly by a float.
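For example (values chosen purely to illustrate the representability caveat):
#include <iostream>

int main() {
    unsigned int size = 16777217u;              // 2^24 + 1
    float scale = static_cast<float>(size);     // float has only a 24-bit significand
    std::cout << scale << '\n';                 // prints 1.67772e+07: rounded to 16777216
    // Every unsigned int up to 2^24 converts exactly; above that, some values round.
}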