Does GLfixed over GLint and GLfloat over float make sense?

I know that the size of an int differs from one CPU to another:
2 bytes on 16-bit machines
4 bytes on 32-bit machines
Since we're talking to the GPU and not the CPU, we use GLint when passing OpenGL parameters, which is defined as
typedef int GLint;
but there's also GLfixed, which is defined as a GLint:
typedef GLint GLfixed;
I'm not sure whether it is meant for a specific task or is nothing more than an alias for GLint.
For floating-point numbers, GL uses
typedef float GLfloat;
As I've read, a float is 4 bytes, so I think it wouldn't matter whether I use GLfloat or float; they'd both be 4 bytes. Or does GLfloat carry something more?
So, does it make sense to use GLint instead of GLfixed, and a plain float instead of GLfloat?

The GL spec does define the types it is going to use, and the requirements on the representation.
The fact that GLint is an alias of int on your platform can by no means be generalized. GLint will always meet the requirements of the GL, while int can vary per platform / ABI.
The same is true for GLfloat vs. float, although in the real world, virtually every platform capable of OpenGL will use 32-bit IEEE 754 single-precision floats for float.
Does it make sense to use GLint instead of GLfixed?
No. GLfixed is semantically a type meant for representing 16.16 two's complement fixed-point values.
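To make the 16.16 semantics concrete, here is a minimal sketch (my own illustration, not part of the GL API) of how such a value encodes a number: the high 16 bits hold the integer part, the low 16 bits the fraction.
#include <cstdint>
#include <cstdio>

using Fixed = std::int32_t; // stands in for GLfixed

// Convert by scaling with 2^16 = 65536.
Fixed to_fixed(float f)   { return static_cast<Fixed>(f * 65536.0f); }
float from_fixed(Fixed x) { return static_cast<float>(x) / 65536.0f; }

int main() {
    Fixed half = to_fixed(0.5f); // 0x00008000
    std::printf("0x%08X -> %f\n", static_cast<unsigned>(half), from_fixed(half));
}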

I'm sure it either has something to do with its value, or it's just useless and a waste of memory to have multiple definitions of the same type
It is neither.
As you've pointed out, the bit sizes of C and C++ types are not fixed by the C or C++ standards. However, the OpenGL standard does fix the OpenGL-defined types. You see typedef int GLint; only on platforms where int is a 32-bit, 2's complement signed integer. On platforms where int is smaller, they use a different type in that definition.
The visible type names for a type are hardly useless. Even if you were absolutely certain that int and GLfixed were the same type, seeing GLfixed carries semantic meaning beyond int: GLfixed means the integer is to be interpreted as a 16.16 fixed-point value. It is technically an int, but any OpenGL API that takes a GLfixed will interpret the value as 16.16 fixed point.
Typedefs don't take up memory. They're pure syntactic sugar; their use or lack thereof will not make your program take up one byte more or less of storage.
The same applies to float and GLfloat.
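A small standalone sketch (using stand-in names rather than the real GL headers) showing that a typedef introduces a new name but not a new type, so the aliases cost nothing at runtime:
#include <type_traits>

typedef int GLint_like;           // hypothetical stand-ins for the GL typedefs
typedef GLint_like GLfixed_like;

// All three names refer to the exact same type.
static_assert(std::is_same<GLfixed_like, int>::value, "same type, different name");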
So, does it make sense to use GLint instead of GLfixed, and a plain float instead of GLfloat?
You should use OpenGL's types when talking to OpenGL. When not talking directly to OpenGL, that's up to you.

Related

Why can't I put a float into a ptr of any type without any kind of conversion going on?

I'm currently writing a runtime for my compiler project and I want a general and easy-to-use struct for encoding different types (the source language is Scheme).
My current approach is:
struct SObj {
    SType type;
    uint64_t *value;
};
Pointers are always 64 or 32 bits wide, so shouldn't it be possible to literally put a float into my value? Then, if I want the actual value of the float, I just take the raw bytes and interpret them as a float.
Thanks in advance.
Not really.
When you write C++ you're programming an abstraction. You're describing a program. Contrary to popular belief, it's not "all just bytes".
Compilers are complex. They can, and will, assume that you follow the rules, and use that assumption to produce the most efficient "actual" code (read: machine code) possible.
One of those rules is that a uint64_t* is a pointer that points to a uint64_t. When you chuck arbitrary bits into there — whether they are identical to the bits that form a valid float, or something else — it is no longer a valid pointer, and simply evaluating it has undefined behaviour.
There are language facilities that can do what you want, like union. But you have to be careful not to violate aliasing rules. You'd store a flag (presumably, that's what your type is) that tells you which union member you're using. Make life easier and have a std::variant instead, which does all this for you.
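A minimal sketch of that std::variant approach (the alternatives here are hypothetical stand-ins for your Scheme types):
#include <cstdio>
#include <variant>

using SObj = std::variant<double, bool>; // the variant stores the type tag itself

int main() {
    SObj v = 1.5;
    // std::get_if returns nullptr when the variant holds a different alternative.
    if (auto d = std::get_if<double>(&v))
        std::printf("number: %f\n", *d);
}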
That being said, you can std::memcpy/std::copy bits in and out of, say, a uint64_t, as long as they are a valid representation of the type you've chosen on your system. Just don't expect reinterpret_cast to be valid: it won't be.
Pointers are always 64 or 32 bits wide
No.
so shouldn't it be possible to literally put a float into my value?
Yes, that is possible, although it is strongly advised against. C++ has many, many other facilities, so you do not have to resort to such things yourself. Anyway, you can interpret the bytes inside a pointer as another type, like this:
#include <cassert>
#include <cstring>
#include <type_traits>

static_assert(sizeof(float*) >= sizeof(float), "a pointer can hold the bytes of a float");
static_assert(std::is_trivially_copyable<float>::value, "safe to copy byte-wise"); // overdramatic

float *ptr;  // just allocate sizeof(float*) bytes on the stack
float a = 5;
// use the memory of the pointer to store the float value
std::memcpy(&ptr, &a, sizeof(float));
float b;
std::memcpy(&b, &ptr, sizeof(float));
assert(a == b); // true

Can I reinterpret cast in GLSL?

In C++ you can take a pointer to an unsigned int and cast it to a pointer to a signed int (reinterpret_cast).
unsigned int a = 200;
int b = *(reinterpret_cast<int *>(&a));
I need to store an int generated in a shader as an unsigned int, to be written to a texture with an unsigned integer internal format. Is there any similar alternative to C++'s reinterpret_cast in GLSL?
In C++ (pre-20), signed and unsigned integers are permitted to be represented in very different ways. C++ does not require signed integers to be two's complement; implementations are allowed to use ones' complement or other representations. The only requirement C++ imposes on signed vs. unsigned is that conversion of all non-negative (non-trap) signed values to unsigned values is possible.
And FYI: your code yields UB for violating the strict aliasing rule (accessing an object of type X through a pointer to an unrelated object of type Y). Though this is somewhat common in low-level code, the C++ object model does not really allow it. But I digress.
I brought up all the signed-vs-unsigned stuff because GLSL actually defines the representation of signed integers: in GLSL, a signed integer is two's complement. Because of that, GLSL can define how conversion from the entire range of unsigned values to signed values goes, and vice versa: simply by preserving the bit pattern of the value.
And that's exactly what it does. So instead of having to use casting gymnastics, you simply do an unsigned-to-signed conversion, just as you would have for float-to-signed or whatever:
int i = ...
uint j = uint(i);
This conversion preserves the bit-pattern.
Oh, and C++20 seems to be getting on-board with this too.
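For illustration, a small C++20 sketch (assuming a 32-bit int) of the same bit-preserving conversion:
#include <cstdint>

int main() {
    // C++20 mandates two's complement, so converting an out-of-range
    // unsigned value to a signed type is defined to preserve the bit
    // pattern (it was implementation-defined before C++20).
    std::uint32_t u = 0xFFFFFFFFu;
    std::int32_t  i = static_cast<std::int32_t>(u); // -1 in C++20
    return (i == -1) ? 0 : 1;
}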
GLSL does not support this kind of casting (nor does it support pointers at all). Instead, in GLSL you construct values of a different type with constructor-style syntax:
int a = 5; // set an int to a constant
uint b = uint(a); // "cast" that int to a uint by constructing a uint from it.

Need some clarification on the concept of vectors in Direct3D 11

I thought at first that vectors were just arrays that can store multiple values of the same type. But I think Direct3D uses a different terminology when it comes to "vectors".
Let's say, for example, we create a vector by using the function XMVectorSet():
XMVECTOR myvector;
myvector = XMVectorSet(0.0f, 0.0f, -0.5f, 0.0f);
What exactly did I store inside myvector? Did I just store an array of floating-point values?
C++'s "vectors" are indeed array-like storage containers.
You're right in that Direct3D is using a different meaning of the term "vectors": their more global mathematical meaning. These vectors are quantities that have direction and size.
Further reading:
https://en.wikipedia.org/wiki/Euclidean_vector
https://en.wikipedia.org/wiki/Column_vector
https://en.wikipedia.org/wiki/Vector_space
In general, vectors in Direct3D are an ordered collection of 2 to 4 elements of the same floating-point or integer type. Conceptually they're similar to an array, but more like a structure. The elements are usually referred to by names like x, y, z and w rather than numbers. Depending on the context, you may be able to use either a C++ structure or a C++ array to represent a Direct3D vector.
However, the XMVECTOR type specifically is an ordered collection of 4 elements that simultaneously contains both 32-bit floating-point and 32-bit unsigned integer values. Each element holds a floating-point number and an unsigned integer that share the same machine representation. So, using your example, the variable myvector simultaneously holds both the floating-point vector (0.0, 0.0, -0.5, 0.0) and the unsigned integer vector (0, 0, 0xbf000000, 0).
(If we use the usual XYZW interpretation of the floating-point value of myvector then it would represent a vector of length 0.5 pointing in the direction of the negative Z axis. If we were to use an unusual RGBA interpretation of the unsigned integer value of myvector then it would represent a 100% transparent blue colour.)
Which value gets used depends on the function the XMVECTOR object is used with. For example, the XMVectorAdd function treats its arguments as two floating-point vectors, while XMVectorAndInt treats its arguments as two unsigned integer vectors. Most operations that can be performed on XMVECTOR objects use the floating-point values. The unsigned integer operations are usually used to manipulate bits in the machine representation of the floating-point values.
XMVECTOR has an unspecified internal layout:
In the DirectXMath Library, to fully support portability and
optimization, XMVECTOR is, by design, an opaque type. The actual
implementation of XMVECTOR is platform dependent.
So it might be an array with four elements, or it might be a structure with .x, .y, .z and .w members. Or it might be something completely different.
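For illustration, a minimal sketch using the DirectXMath accessor functions XMVectorGetZ and XMVectorGetIntZ, which read an element back as a float or as its raw integer bits:
#include <DirectXMath.h>
#include <cstdint>
#include <cstdio>
using namespace DirectX;

int main() {
    XMVECTOR v = XMVectorSet(0.0f, 0.0f, -0.5f, 0.0f);
    float z = XMVectorGetZ(v);               // the element as a float: -0.5
    std::uint32_t bits = XMVectorGetIntZ(v); // the same bits as an integer: 0xBF000000
    std::printf("z = %f, bits = 0x%08X\n", z, static_cast<unsigned>(bits));
}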

Is it appropriate to use off_t for non-byte offsets?

Suppose I'm writing a function which takes a float a[] and an offset into this array, and returns the element at that offset. Is it reasonable to use the signature
float foo(float* a, off_t offset);
for it? Or is off_t only relevant to offsets in bytes, rather than pointer arithmetic with arbitrary element sizes? I.e., is it reasonable to say a[offset] when offset is of type off_t?
The GNU C Library Reference Manual says:
off_t
This is a signed integer type used to represent file sizes.
but that doesn't tell me much.
My intuition is that the answer is "no", since the actual address used in a[offset] is the address of a + sizeof(float) * offset, so sizeof(float) * offset would be an off_t while sizeof(float) is a size_t, and both are quantities with 'dimensions'.
Note: The offset might be negative.
Is there any good reason why you just don't use int? It's the default type for integral values in C++, and should be used unless there is a good reason not to.
Of course, one good reason could be that it might overflow. If the context is such that you could end up with very large arrays, you might want to use ptrdiff_t, which is defined (in C and C++) as the type resulting from the subtraction of two pointers: in other words, it is guaranteed not to overflow (when used as an offset) for all types with a size greater than 1.
You could use size_t or ptrdiff_t as the type of an index (your second parameter is more an index inside a float array than an offset).
Your use is an index, not an offset. Notice that the standard offsetof macro is defined to return byte offsets!
In practice, you could even use int or unsigned, unless you believe your array could have billions of components.
You may want to #include <stdint.h> (or <cstdint> with a recent C++) and have explicitly sized types like int32_t for your indexes.
For source readability reasons, you might define
typedef unsigned index_t;
and later use it, e.g.
float foo(float a[], index_t i);
My opinion is that you should just use int as the type of your indexes (but handle out-of-bound indexes appropriately).
I would say it is not appropriate, since
off_t is (intended to be) used to represent file sizes
off_t is a signed type.
I would go for size_type (usually a typedef'd name for size_t), which is the one used by std containers.
Perhaps the answer is to use ptrdiff_t? It...
can be negative;
alludes to the difference not being in bytes, but in units of arbitrary size depending on the element type.
What do you think?
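For what it's worth, a small sketch of that ptrdiff_t signature (my own illustration, not from the question):
#include <cassert>
#include <cstddef>

// A signed element index, not a byte offset: a[offset] scales by sizeof(float).
float foo(const float* a, std::ptrdiff_t offset) {
    return a[offset];
}

int main() {
    float data[] = {1.0f, 2.0f, 3.0f};
    assert(foo(data + 1, -1) == 1.0f); // negative offsets work with a signed type
}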

Why can I cast int and BOOL to void*, but not float?

void* is a useful feature of C and derivative languages. For example, it's possible to use void* to store Objective-C object pointers in a C++ class.
I was working on a type conversion framework recently and due to time constraints was a little lazy - so I used void*... That's how this question came up:
Why can I typecast int to void*, but not float to void* ?
BOOL is not a C++ type. It's probably typedef'd or #defined somewhere, and in these cases it would be the same as int. Windows, for example, has this in Windef.h:
typedef int BOOL;
so your question reduces to: why can you typecast int to void*, but not float to void*?
int to void* is ok but generally not recommended (and some compilers will warn about it) because they are inherently the same in representation. A pointer is basically an integer that points to an address in memory.
float to void* is not ok because the interpretation of the float value and the actual bits representing it are different. For example, if you do:
float x = 1.0;
what it does is set the 32-bit memory to 00 00 80 3f (the actual representation of the float value 1.0 in IEEE single precision). When you cast a float to a void*, the interpretation is ambiguous: do you mean the pointer that points to location 1 in memory, or the pointer that points to location 3f800000 (assuming little endian) in memory?
Of course, if you are sure which of the two cases you want, there is always a way to get around the problem. For example:
void* u = (void*)((int)x); // first case
void* u = (void*)(((unsigned short*)(&x))[0] | (((unsigned int)((unsigned short*)(&x))[1]) << 16)); // second case
Pointers are usually represented internally by the machine as integers. C allows you to cast back and forth between pointer type and integer type. (A pointer value may be converted to an integer large enough to hold it, and back.)
Using void* to hold integer values is unconventional. It's not guaranteed by the language to work, but if you want to be sloppy and constrain yourself to Intel and other commonplace platforms, it will basically scrape by.
Effectively what you're doing is using void* as a generic container of however many bytes are used by the machine for pointers. This differs between 32-bit and 64-bit machines. So converting long long to void* would lose bits on a 32-bit platform.
As for floating-point numbers, the intention of (void*) 10.5f is ambiguous. Do you want to round 10.5 to an integer, then convert that to a nonsense pointer? No, you want the bit pattern used by the FPU to be placed into a nonsense pointer. This can be accomplished with float f = 10.5f; void *vp = (void *)(uintptr_t)*(uint32_t *)&f; but be warned that this is just nonsense: pointers aren't generic storage for bits.
The best generic storage for bits is char arrays, by the way. The language standards guarantee that memory can be manipulated through char*. But you have to mind data alignment requirements.
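For instance, a sketch of that char-array approach:
#include <cstdio>
#include <cstring>

int main() {
    // A char array as generic bit storage; std::memcpy avoids any
    // aliasing or alignment violation.
    unsigned char storage[sizeof(float)];
    float in = 10.5f, out;

    std::memcpy(storage, &in, sizeof in);   // store the bit pattern
    std::memcpy(&out, storage, sizeof out); // retrieve it

    std::printf("%f\n", out);               // prints 10.500000
}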
The standard says: "An integer may be converted to any pointer type." It doesn't say anything about float-to-pointer conversion.
If you want to transfer a float value as a void *, there is a workaround using type punning. Here is an example:
#include <stdint.h>
#include <stdio.h>

struct mfloat {
    union {
        float fvalue;
        int ivalue;
    };
};

void print_float(void *data)
{
    struct mfloat mf;
    mf.ivalue = (int)(intptr_t)data; /* recover the integer bits from the pointer */
    printf("%.2f\n", mf.fvalue);     /* reinterpret them as a float */
}

struct mfloat mf;
mf.fvalue = 1.99f;
print_float((void *)(intptr_t)mf.ivalue);
We have used a union to reinterpret our float value (fvalue) as an integer (ivalue), passed that through void *, and reversed the process on the other side.
The question is based on a false premise, namely that void * is somehow a "generic" or "catch-all" type in C or C++. It is not. It is a generic object pointer type, meaning that it can safely store pointers to any type of data, but it cannot itself contain any type of data.
You could use a void * pointer to generically manipulate data of any type by allocating sufficient memory to hold an object of any given type, then using a void * pointer to point to it. In some cases you could also use a union, which is of course designed to be able to contain objects of multiple types.
Now, because pointers can be thought of as integers (and indeed, on conventionally-addressed architectures, typically are integers) it is possible and in some circles fashionable to stuff an integer into a pointer. Some library API's have even documented and supported this usage — one notable example was X Windows.
Conversions between pointers and integers are implementation-defined, and these days typically draw warnings, and so typically require an explicit cast, not so much to force the conversion as simply to silence the warning. For example, both the code fragments below print 77, but the first one probably draws compiler warnings.
/* fragment 1: */
int i = 77;
void *p = i;
int j = p;
printf("%d\n", j);
/* fragment 2: */
int i = 77;
void *p = (void *)(uintptr_t)i;
int j = (int)p;
printf("%d\n", j);
In both cases, we are not really using the void * pointer p as a pointer at all: we are merely using it as a vessel for some bits. This relies on the fact that on a conventionally-addressed architecture, the implementation-defined behavior of a pointer/integer conversion is the obvious one, which to an assembly-language programmer or an old-school C programmer doesn't seem like a "conversion" at all. And if you can stuff an int into a pointer, it's not surprising if you can stuff in other integral types, like bool, as well.
But what about trying to stuff a floating-point value into a pointer? That's considerably more problematic. Stuffing an integer value into a pointer, though implementation-defined, makes perfect sense if you're doing bare-metal programming: you're taking the numeric value of the integer, and using it as a memory address. But what would it mean to try to stuff a floating-point value into a pointer?
It's so meaningless that the C Standard doesn't even label it "undefined".
It's so meaningless that a typical compiler won't even attempt it.
And if you think about it, it's not even obvious what it should do.
Would you want to use the numeric value, or the bit pattern, as the thing to try to stuff into the pointer? Stuffing in the numeric value is closer to how floating-point-to-integer conversions work, but you'd lose your fractional part. Using the bit pattern is what you'd probably want, but accessing the bit pattern of a floating-point value is never something that C makes easy, as generations of programmers who have attempted things like
uint32_t hexval = (uint32_t)3.0;
have discovered.
Nevertheless, if you were bound and determined to store a floating-point value in a void * pointer, you could probably accomplish it, using sufficiently brute-force casts, although the results are probably both undefined and machine-dependent. (That is, I think there's a strict aliasing violation here, and if pointers are bigger than floats, as of course they are on a 64-bit architecture, I think this will probably only work if the architecture is little-endian.)
float f = 77.75;
void *p = (void *)(uintptr_t)*(uint32_t *)&f;
float f2 = *(float *)&p;
printf("%f\n", f2);
dmr help me, this actually does print 77.75 on my machine.
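For comparison, here is a sketch of the same trick with the aliasing violation removed by copying the bits with std::memcpy (the pointer/integer conversion is still implementation-defined, and this assumes pointers are at least 32 bits wide):
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 77.75f;

    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);    // well-defined: copy the bit pattern

    void *p = (void *)(std::uintptr_t)bits; // stuff the bits into a pointer

    std::uint32_t back = (std::uint32_t)(std::uintptr_t)p;
    float f2;
    std::memcpy(&f2, &back, sizeof f2);     // recover the float

    std::printf("%f\n", f2);                // prints 77.750000
}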