Variable types compatible with glGetFloatv - C++

What variable types are compatible with OpenGL's glGetFloat() or glGetFloatv()?
P.S. This is in C++.

The basic type you want to use is GLfloat, which matches the type in the function prototype. GLfloat is a 32-bit floating-point value; on most platforms it is the same as float, but that is not guaranteed.
For cases where glGetFloatv() returns a single value, you can simply use the address of a GLfloat variable. For example:
GLfloat val;
glGetFloatv(GL_DEPTH_CLEAR_VALUE, &val);
For cases that return multiple values, you can either use an array:
GLfloat vals[4];
glGetFloatv(GL_COLOR_CLEAR_VALUE, vals);
Or, to make it more C++, a vector:
std::vector<GLfloat> vals(4);
glGetFloatv(GL_COLOR_CLEAR_VALUE, &vals[0]);
Or, even nicer in C++11:
std::vector<GLfloat> vals(4);
glGetFloatv(GL_COLOR_CLEAR_VALUE, vals.data());
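If the number of returned values is known at compile time, std::array is another C++11 option. A small sketch (it assumes the GL headers are already included, as in the snippets above):
#include <array>

std::array<GLfloat, 4> clearColor;
glGetFloatv(GL_COLOR_CLEAR_VALUE, clearColor.data());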

Related

Vulkan Array of Specialization Constants

Is it possible to have an array of specialization constants such that the GLSL code looks similar to the following:
layout(constant_id = 0) const vec2 arr[2] = vec2[](
    vec2(2.0f, 2.0f),
    vec2(4.0f, 4.0f)
);
or, alternatively:
layout(constant_id = 0) const float arr[4] = float[](
    2.0f, 2.0f,
    4.0f, 4.0f
);
As far as I have read, there is no limit to the number of specialization constants that can be used, so it feels strange that this wouldn't be possible; but when I attempt the above, the SPIR-V compiler notifies me that 'constant_id' can only be applied to a scalar. Currently I am using a uniform buffer to provide the data, but I would like to eliminate the backing buffer and the need to bind it before drawing, and to let the system optimize the code during pipeline creation if possible.
The shading languages (both Vulkan-GLSL and SPIR-V) make something of a distinction between the definition of a specialization constant within the shader and the interface for specializing those constants. But they go about this process in different ways.
In both languages, the external interface to a specialization constant only works on scalar values. That is, though you can set multiple constants to values, the constants you're setting are each a single scalar.
SPIR-V allows you to declare a specialization constant which is a composite (array/vector/matrix). However, the components of this composite must be either specialization constants or constant values. If those components are scalar specialization constants, you can OpDecorate them with an ID, which the external code will access.
Vulkan (and OpenGL) GLSL go about this slightly differently from raw SPIR-V. In GLSL, a const-qualified value with a constant_id is a specialization constant. These must be scalars.
However, you can also have a const-qualified value that is initialized from values that are either constant expressions or specialization constants. You don't qualify these with a constant_id, but you build them from things that are so qualified:
layout(constant_id = 18) const int scX = 1;
layout(constant_id = 19) const int scZ = 1;
const vec3 scVec = vec3(scX, 1, scZ); // partially specialized vector
const-qualified values that are initialized from specialization constants are called "partially specialized". When this GLSL is converted into SPIR-V, these are converted into OpSpecConstantComposite values.
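On the host side, each scalar constant_id is then specialized individually through VkSpecializationInfo when the pipeline is created. A minimal sketch (the IDs 18 and 19 match the GLSL above; the injected values are made up for illustration, and the surrounding pipeline-creation code is omitted):
#include <vulkan/vulkan.h>
#include <cstddef>
#include <cstdint>

// Host-side values to inject for constant_id 18 and 19.
struct SpecData {
    int32_t scX;
    int32_t scZ;
} specData = { 4, 8 };

// One map entry per scalar specialization constant.
VkSpecializationMapEntry entries[2] = {
    { 18, offsetof(SpecData, scX), sizeof(int32_t) },
    { 19, offsetof(SpecData, scZ), sizeof(int32_t) },
};

VkSpecializationInfo specInfo = {
    2,                 // mapEntryCount
    entries,           // pMapEntries
    sizeof(SpecData),  // dataSize
    &specData          // pData
};
// specInfo is then passed as VkPipelineShaderStageCreateInfo::pSpecializationInfo.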

C++ push to 3 different vectors with one shared method

I'm pretty new to C++, and I'm finding the whole concept of pointers, double pointers and references somewhat confusing.
I am writing an object loader for an assignment and have come to the part where I want to optimise/modularise my solution.
Right now I have 3 vectors which contain information regarding the object's texture coords, faces and normals. Instead of having an operation for each, I wish to neaten my codebase by introducing a method that handles pushing to the following vectors.
std::vector<XMFLOAT3> vert_text_coord;
std::vector<XMFLOAT3> vert_normals;
std::vector<XMFLOAT3> vert_position;
Currently I write to them like this:
vert_text_coord.push_back(XMFLOAT3(vert_x, vert_y, vert_z));
But to modularise I have written a method:
push_to_vector(float x, float y, float z, *vector)
{
// push code here
}
Calling it like this
push_to_vector(vert_x, vert_y, vert_z, &vert_text_coord);
Am I right to be passing the reference of vert_text_coord to the pointer parameter *vector in my push_to_vector method, or am I doing this wrong? Finally would it also make sense to have parameters vert_x, vert_y, vert_z as references too or have I completely misunderstood the concept of &?
Thanks in advance.
Actually, I think you are asking the wrong question here. Yes, you can perfectly well pass a pointer/reference to a function that pushes an XMFLOAT3 to the end of a vector; the working code would be
//function signature:
void push_to_vector(std::vector<XMFLOAT3>* v, float x, float y, float z);
//call:
push_to_vector(&vert_normals, x, y, z);
or using references
//function signature:
void push_to_vector(std::vector<XMFLOAT3>& v, float x, float y, float z);
//call:
push_to_vector(vert_normals, x, y, z);
However, as I said, that answers the wrong question. The right question would be: is the idea of a push_to_vector() function a good one? And I believe it is not. The reason is that push_to_vector() is the wrong abstraction. Code that uses your three vectors will never want to abstract away which vector it uses; it will want to abstract away the fact that it uses a vector at all.
It is bad to have overly long functions, but it is also bad to have tons of one-line functions like push_to_vector(). Each function should strive for a sufficiently large difference in abstraction level between what it uses and what it provides. If that is not the case, you'll get lost in the deep call hierarchies you create.
(It is no accident that the International Obfuscated C Code Contest has winning entries that either fuse everything into one function, or that have something like 50 functions, each of which is only a few characters long. Either method is equally effective at obfuscating the code.)
Here are my two cents on the question of whether it is better to use pointers or references:
Consider the following five functions:
void foo(int x);
void bar(int& x);
void baz(int* x);
void bim(const int& x);
void bam(const int* x);
and their corresponding calls:
int var = 7;
foo(var); //may not change var
bar(var); //may change var
baz(&var); //may change var
bim(var); //may not change var
bam(&var); //may not change var
The first call is the normal case in C++, and it cannot change its argument as it uses pass-by-value. I believe it is a really good idea if you can see directly at the call site whether the call may change its argument. Thus I restrict myself to pass-by-value, pass-by-pointer, or pass-by-const-reference, i.e. these three variants:
foo(var); //may not change var
baz(&var); //may change var, visible by the take-address operator
bim(var); //may not change var, pass by value semantics optimized via a const reference
& takes the address of the object, so when you write &vert_text_coord you get a std::vector<XMFLOAT3>*.
To pass by reference, you need to declare the parameter as a reference in the function:
void push_to_vector(float x, float y, float z, std::vector<XMFLOAT3>& vec)
{
    vec.push_back(XMFLOAT3(x, y, z));
}
You can then call the function like normal.
push_to_vector(vert_x, vert_y, vert_z, vert_text_coord);
So far it looks like you have the right idea.
A correction: a pointer parameter in a function looks like T*, not *T. So your push_to_vector function would look like:
void push_to_vector(float x, float y, float z, std::vector<XMFLOAT3>* v)
Also, unless you plan on changing x, y, and z, there is no need to pass them by reference.
EDIT:
As cocarin's answer says, passing by reference here is really the right way to do it.
Also, as a side note, it sounds like it might be a good idea to wrap all of your vectors in a class. That way you don't have to pass these vectors around to functions all the time.
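For illustration, a minimal sketch of such a wrapper (the class and method names here are hypothetical, and XMFLOAT3 is assumed to come from DirectXMath):
#include <vector>
#include <DirectXMath.h>
using DirectX::XMFLOAT3;

// Hypothetical container for the per-vertex data the loader collects.
class MeshData {
public:
    void add_position(float x, float y, float z)  { positions_.emplace_back(x, y, z); }
    void add_normal(float x, float y, float z)    { normals_.emplace_back(x, y, z); }
    void add_tex_coord(float x, float y, float z) { tex_coords_.emplace_back(x, y, z); }

    const std::vector<XMFLOAT3>& positions()  const { return positions_; }
    const std::vector<XMFLOAT3>& normals()    const { return normals_; }
    const std::vector<XMFLOAT3>& tex_coords() const { return tex_coords_; }

private:
    std::vector<XMFLOAT3> positions_;
    std::vector<XMFLOAT3> normals_;
    std::vector<XMFLOAT3> tex_coords_;
};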
Learn to declare function parameters correctly.
As your function imitates the member function push_back, which is called like
vert_text_coord.push_back(XMFLOAT3(vert_x, vert_y, vert_z));
then the first parameter should be a reference to the vector. For example:
void push_to_vector( std::vector<XMFLOAT3> &, float, float, float );
In this case you can also declare some parameters with default arguments. For example:
void push_to_vector( std::vector<XMFLOAT3> &, float = 0.0f, float = 0.0f, float = 0.0f );
And the function can be called like
push_to_vector( vert_text_coord, vert_x, vert_y, vert_z );
Or if you will declare some default arguments then like
push_to_vector( vert_text_coord );
You can even change the default arguments of the function by redeclaring it in a given (for example, block) scope.
Also, there is no point in declaring the float parameters as references.
You could use a default argument for the last parameter as a marker that only two coordinate values were supplied and the function should fill in the third itself.
void push_to_vector( std::vector<XMFLOAT3> &, float, float, float = 0.0f );
The function could be called either like
push_to_vector( vert_text_coord, vert_x, vert_y, vert_z );
or
push_to_vector( vert_text_coord, vert_x, vert_y );
In the last case vert_z would be equal to 0.0f (or some other value), which would mean that the function itself should supply the required value.
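Putting the pieces together, a minimal sketch of such a function (assuming XMFLOAT3 comes from DirectXMath):
#include <vector>
#include <DirectXMath.h>
using DirectX::XMFLOAT3;

// Appends one XMFLOAT3 built from the three coordinates; each defaults to 0.0f.
void push_to_vector(std::vector<XMFLOAT3>& v,
                    float x = 0.0f, float y = 0.0f, float z = 0.0f)
{
    v.push_back(XMFLOAT3(x, y, z));
}

// Usage:
// push_to_vector(vert_text_coord, vert_x, vert_y, vert_z);
// push_to_vector(vert_text_coord, vert_x, vert_y);   // z becomes 0.0f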

Strange behaviour when passing values from a float array to a double array (C, C++)

I am developing an application that uses NI-DAQ and below are some methods which were given by the provider.
void someMethod(Calibration *cal, float myArray[], float result[])
{
    newMethod(&cal->rt, myArray, result, cal->cfg.TempCompEnabled);
}
void newMethod(RTCoefs *coefs, double myArray[], float result[], BOOL tempcomp)
{
    float newMyArray[6];
    unsigned short i;
    for (i = 0; i < 6; i++)
    {
        newMyArray[i] = myArray[i];
    }
}
I basically call someMethod(), providing an array with six elements ( [6] ) for both myArray[] and result[]. As you can see in the code, newMethod() is then called, and the float myArray[6] is passed as the double myArray[] argument (I really do not understand why the developer of this code chose a double array, since the only array declared inside newMethod() is of type float).
Now here comes my problem: inside the for loop, some values are copied without any problem, but the fourth and fifth values arrive in newMyArray[] as "-1.#INF0000". At first glance I thought it was some garbage value, but "-1.#INF0000" is there at every execution.
I know that C language can be tricky sometimes, but I really do not know why this is happening...
The second parameter to newMethod has type double *. But you pass to it a value of type float *.
In C++, or in C if there is a prototype of newMethod in scope, this must generate a compilation error. There is no implicit conversion between pointer types, except to and from void *.
However, in C, if there is no prototype in scope then you may get no warning; just undefined behaviour at runtime because the function was called with arguments of different type (after default argument promotions) to that with which it was defined.
If there is a non-prototype declaration of newMethod in scope then you get no warning; if there is no declaration at all, then in C89 the code is legal (i.e. a warning is optional), but in C99 the compiler must warn that newMethod was called without being declared.
(So it is important whether this is a C program or a C++ program).
If you are in C and getting no warning, add a prototype before all this:
void newMethod(RTCoefs *coefs, double myArray[],float result[],BOOL tempcomp) ;
Then you will get a compiler error.
The problem is with the types and the fact that arrays decay to pointers when passed to functions. So when you pass a pointer to a float array to someMethod, it in turn passes it on as a pointer to an array of double to newMethod. And as you (should) know, the sizes of float and double are not the same, so in newMethod the code reads data out of bounds from the initial (float) array you passed, which leads to undefined behavior.
The compiler should have given you warnings about that.
-1.#INF0000 is, I believe, the MSC representation of negative infinity, which is what you get from a double-to-float conversion when the value of the double is too big to be represented in a float.
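If the wrapper is under your control, one fix is to make the argument type match the parameter type by widening the floats before the call. A sketch, assuming someMethod may be edited and that a correct prototype of newMethod is visible at this point (as recommended above):
void someMethod(Calibration *cal, float myArray[], float result[])
{
    // Widen each float to double so the argument matches
    // newMethod's double myArray[] parameter.
    double widened[6];
    for (int i = 0; i < 6; i++)
    {
        widened[i] = myArray[i];
    }
    newMethod(&cal->rt, widened, result, cal->cfg.TempCompEnabled);
}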

How to provide a std::vector to match this parameter?

How can I pass a given std::vector<float> to a function to match the parameter type float (*parameter)[3]?
The function fills the parameter with coordinates of 3d points, that's the reason for the parameter type to be an array of 3-elements-long arrays. The function is provided by a library and I cannot change it.
I have already initialized the vector to hold enough elements.
#include <vector>

void f(float (*parameter)[3])
{
}

int main()
{
    std::vector<float> v(3);
    f(reinterpret_cast<float (*)[3]>(&v[0]));
}
Because there is no guarantee that the storage obtained from std::allocator<T> ever contains an actual float[3] object, it is not possible to obtain a pointer to such an array from a std::vector<float>. It is not a matter of finding the right cast, or the right value to cast.
The matter is that something like the function you described can only ever be passed a pointer to an actual float[3], or a null pointer.
The only strictly conformant thing you can do is to copy the data to and from a bona-fide float[3] variable, passing a pointer to it to your function.
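A sketch of that copy-out approach for the example above (the body of f here is only a stand-in for the real library function, and a single 3D point is assumed):
#include <vector>

// Stand-in for the library function, which fills the array
// with the coordinates of 3D points.
void f(float (*parameter)[3])
{
    parameter[0][0] = 1.0f;
    parameter[0][1] = 2.0f;
    parameter[0][2] = 3.0f;
}

int main()
{
    float point[1][3] = {};   // bona-fide float[3] storage
    f(point);                 // let the function write into it

    // Copy the result into the vector afterwards.
    std::vector<float> v(&point[0][0], &point[0][0] + 3);
}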

How to convert a stl vector of 3-element structs into 2D C-style array

Suppose I have the following simple struct:
struct Vector3
{
double x;
double y;
double z;
};
and I create a list of vertices:
std::vector<Vector3> verticesList;
In addition to this I need to use a third-party library. The library has a function with the following signature:
typedef double Real3[3];
extern void createMesh(const Real3* vertices, const size_t verticesCount);
What is the best way to convert verticesList into something which could be passed into createMesh() as the vertices parameter?
At the moment I use the following approach:
static const size_t MAX_VERTICES = 1024;
if (verticesList.size() > MAX_VERTICES)
    throw std::runtime_error("Number of vertices is too big");
Real3 rawVertices[MAX_VERTICES];
for (size_t vertexInd = 0; vertexInd < verticesList.size(); ++vertexInd)
{
    const Vector3& vertex = verticesList[vertexInd];
    rawVertices[vertexInd][0] = vertex.x;
    rawVertices[vertexInd][1] = vertex.y;
    rawVertices[vertexInd][2] = vertex.z;
}
createMesh(rawVertices, verticesList.size());
But surely it is not the best way to solve the issue.
That is one proper way of doing it. There are also some other ways...
The type Vector3 is layout-compatible with the type Real3; the implication is that you can force-cast a pointer to one type into a pointer to the other:
createMesh( reinterpret_cast<Real3*>(&verticesList[0]), verticesList.size() );
Another alternative, as Rook mentions, to remove the loop is to use memcpy, since the types are POD:
Real3 rawVertices[MAX_VERTICES];
std::memcpy( rawVertices, &verticesList[0],
             verticesList.size() * sizeof verticesList[0] );
This is more concise, and probably more efficient, but it still is copying the whole container.
I believe that the standard does guarantee this behavior (at least in C++11): two layout-compatible standard-layout types have the same memory layout, and §9.2p19 states:
A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast, points to its initial member (or if that member is a bit-field, then to the unit in which it resides) and vice versa.
This guarantee technically means something slightly different from what I claimed before: reinterpret_cast<double*>(&verticesList[0]) points to verticesList[0].x. But it also implies that the conversion from double* to a Real3 pointer through reinterpret_cast will also be fine.
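If you go the casting route, it is worth guarding the layout assumption with a compile-time check. A small sketch (sendToMesh is a hypothetical helper name; Vector3, Real3 and createMesh are as declared above):
#include <vector>

static_assert(sizeof(Vector3) == sizeof(Real3),
              "Vector3 must have the same size as double[3] for the cast to be valid");

// Hypothetical helper that hands the vector's contiguous storage to the library.
void sendToMesh(const std::vector<Vector3>& verticesList)
{
    createMesh(reinterpret_cast<const Real3*>(verticesList.data()),
               verticesList.size());
}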