I'm trying to compile an old Qt project and I encounter this error:
error: cannot convert 'float*' to 'qreal* {aka double*}' in
initialization
Here's the fragment of code:
void Camera::loadProjectionMatrix()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    qreal *dataMat = projectionMatrix_.data();
    GLfloat matriceArray[16];
    for (int i = 0; i < 16; ++i)
        matriceArray[i] = dataMat[i];
    glMultMatrixf(matriceArray);
}
What are my options to overcome this error?
The projection matrix will return float* to you, as per the documentation:
float * QMatrix4x4::data()
Returns a pointer to the raw data of this matrix.
The best practice would be to eliminate qreal usage from your codebase regardless of this particular case. During the Qt 5 refactoring, the ancient qreal concept was dropped as much as possible, and it definitely should not be used much in new code where the API deals with float.
The recommendation is to use float these days in such cases. This is a bit historical, really. Back then, it made sense to define qreal as double where available, but as float where it was not, e.g. on ARM platforms. See the old documentation:
typedef qreal
Typedef for double on all platforms except for those using CPUs with ARM architectures. On ARM-based platforms, qreal is a typedef for float for performance reasons.
In Qt 5, the documentation is slightly different, although the main concept seems to have remained the same:
typedef qreal
Typedef for double unless Qt is configured with the -qreal float option.
I would fix your code the following way:
void Camera::loadProjectionMatrix()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float *dataMat = projectionMatrix_.data();
    GLfloat matriceArray[16];
    for (int i = 0; i < 16; ++i)
        matriceArray[i] = dataMat[i];
    glMultMatrixf(matriceArray);
}
Strictly speaking, you could also solve the issue an alternative way, namely by using this method rather than data():
float & QMatrix4x4::operator()(int row, int column)
Returns a reference to the element at position (row, column) in this matrix so that the element can be assigned to.
In which case, you could even eliminate the dataMat variable and assign the items directly to your matriceArray in the iteration.
Going even further than that, you could consider using a Qt library for this common task, e.g. the OpenGL classes in either QtGui or Qt3D. It would only make sense to mess with low-level OpenGL API calls if you are doing something custom.
Apparently, projectionMatrix_.data() returns a float*, and you cannot assign a float* to a double* (which is what qreal* is in this case).
Use
float *dataMat = projectionMatrix_.data();
or
auto dataMat = projectionMatrix_.data();
instead. The latter has the advantage that it might still be correct code if the return type of the function changes for some reason, although that is nothing to expect from a mature library. Additionally, you cannot get the type wrong by accident.
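To illustrate that point in isolation (the matrixData array and data() function below are stand-ins, not the Qt API), auto simply tracks whatever the function returns:

```cpp
#include <type_traits>

// Stand-in for QMatrix4x4::data(); these names are hypothetical.
float matrixData[16] = {};
float* data() { return matrixData; }

// 'auto' deduces the return type, so this declaration would stay
// correct even if the API ever switched between float* and double*.
auto dataMat = data();
static_assert(std::is_same<decltype(dataMat), float*>::value,
              "deduced type follows the function's return type");
```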
I'm trying to use the updated code from Frank Luna's book on DirectX 11, where I'm using VS2017 with Windows SDK 10. I've read some notes about migration from Frank and did everything he said in the link below:
http://www.d3dcoder.net/Data/Book4/d3d11Win10.htm
but got stuck here. I know there was a similar question from #poncho that was answered well:
Access floats of XMMatrix - () operator not working
But I have trouble with the type CXMMATRIX instead of XMMATRIX, and I couldn't get a result with the solution provided for him.
So I have to access the rows and columns of a CXMMATRIX:
void ExtractFrustumPlanes(XMFLOAT4 planes[6], CXMMATRIX M)
{
    //
    // Left
    //
    planes[0].x = M(0,3) + M(0,0);
    planes[0].y = M(1,3) + M(1,0);
    planes[0].z = M(2,3) + M(2,0);
    planes[0].w = M(3,3) + M(3,0);
    ...
But I get :
call of an object of a class type without appropriate operator() or
conversion functions to pointer-to-function type
and
term does not evaluate to a function taking 2 arguments
It points to the argument M of type CXMMATRIX, which is defined as below in DirectXMath.h:
// Fix-up for (2nd+) XMMATRIX parameters to pass by reference
typedef const XMMATRIX& CXMMATRIX;
What are all these errors about?!
Frank Luna's book is overall a great introduction to the Direct3D 11 API, but it unfortunately suffers from heavily utilizing the legacy DirectX SDK, which is deprecated per MSDN. One of those aspects is that he's actually using the xnamath library (a.k.a. xboxmath version 2) instead of the DirectXMath library (a.k.a. xboxmath version 3).
See Book Recommendations and Introducing DirectXMath
I made a number of changes when reworking the library as DirectXMath. First, the types are actually in C++ namespaces instead of the global namespace. In your headers, you should use full name specification:
#include <DirectXMath.h>
void MyFunction(..., DirectX::CXMMATRIX M);
In your cpp source files you should use:
#include <DirectXMath.h>
using namespace DirectX;
Another change was to strongly discourage the use of 'per-element' access on the XMVECTOR and XMMATRIX data types. As discussed in the DirectXMath Programmer's Guide, these types are by design proxies for the SIMD register types, which cannot be directly accessed by element. Instead, you convert to the XMFLOAT4X4 representation, which allows per-element access because it's a scalar structure.
You can see this by the fact that the operators you are trying to use are only defined for 'no-intrinsics' mode (i.e. when using scalar instead of SIMD operations like SSE, ARM-NEON, etc.):
#ifdef _XM_NO_INTRINSICS_
float operator() (size_t Row, size_t Column) const { return m[Row][Column]; }
float& operator() (size_t Row, size_t Column) { return m[Row][Column]; }
#endif
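As a sketch of that conversion applied to the function in the question (assuming DirectXMath from the Windows SDK; only the first plane is shown):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

void ExtractFrustumPlanes(XMFLOAT4 planes[6], CXMMATRIX M)
{
    // Copy the SIMD matrix into a scalar structure first...
    XMFLOAT4X4 m;
    XMStoreFloat4x4(&m, M);

    // ...then per-element access works in every intrinsics mode,
    // because XMFLOAT4X4 is a plain scalar struct.
    planes[0].x = m(0, 3) + m(0, 0);
    planes[0].y = m(1, 3) + m(1, 0);
    planes[0].z = m(2, 3) + m(2, 0);
    planes[0].w = m(3, 3) + m(3, 0);
    // ... remaining planes as in the original code
}
```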
Again, by design, this process is a bit 'verbose' because it lets you know it's not free. Some people find this aspect of DirectXMath a little frustrating to use especially when they are first getting started. In that case, I recommend you take a look at the SimpleMath wrapper in the DirectX Tool Kit. You can use the types Vector3, Vector4, Matrix, etc. and they freely convert (through C++ operators and constructors) as needed to XMVECTOR and XMMATRIX. It's not nearly as efficient, but it's a lot more forgiving to use.
The particular function you wrote is also a bit problematic. First, it's a little odd to mix XMFLOAT4 and XMMATRIX parameters. For 'in-register, SIMD-friendly' calling convention, you'd use:
void XM_CALLCONV ExtractFrustumPlanes(XMVECTOR planes[6], FXMMATRIX M)
For details on why, see MSDN.
If you want entirely scalar math, either use the non-SIMD types:
void ExtractFrustumPlanes(XMFLOAT4 planes[6], const XMFLOAT4X4& M)
or better yet use SimpleMath so you can avoid having to write explicit conversions to/from XMVECTOR or XMMATRIX
using namespace DirectX::SimpleMath;
void ExtractFrustumPlanes(Vector4 planes[6], const Matrix& M)
Note that the latest version of DirectXMath is on GitHub, NuGet, and vcpkg.
I am trying to get the minimum value from a collection of float values by taking advantage of the atomic operations provided by CUDA. I cannot use reduction because of memory constraints. However, I get the error message: Instruction '{atom,red}.shared' requires .target sm_12 or higher when I try compiling the code below with a __shared__ variable passed as the "SharedMem" argument.
I have a 9400m GPU which has compute capability of 1.1.
__device__ static float* atomicMin(float* SharedMem, float value, float *old)
{
    old[0] = *SharedMem;
    float assumed;
    if (old[0] <= value)
    {
        return old;
    }
    do
    {
        assumed = old[0];
        old[0] = ::atomicCAS((unsigned int*)SharedMem, __float_as_int(assumed), __float_as_int(value));
    } while (old[0] != assumed);
    return old;
}
Take for example calling the function "getMin_Kernel" below:
__shared__ __device__ float LowestDistance;
__global__ void getMin_Kernel(float* AllFloats, int* NumberOfFloats)
{
    int j = (blockDim.x * blockIdx.x + threadIdx.x);
    if (j < NumberOfFloats[0])
    {
        float myFloat;
        myFloat = *(atomicMin(&LowestDistance, NumberOfFloats[0], &myFloat));
    }
}
However, if I pass a non-shared variable, it compiles without issues but I get a runtime error. I am guessing the runtime error occurs because atomicCAS requires a global or shared variable. Can anyone please help with a way to get around the compilation error?
Thanks.
This table http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications__feature-support-per-compute-capability provides a full description of various compute capabilities and their matching feature support.
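For completeness, on hardware that does support these atomics (compute capability 1.2 or higher for shared-memory atomicCAS), the usual pattern is a CAS loop on the float's bit pattern. The same logic can be sketched host-side with std::atomic; the helper names here are hypothetical, and on the device the compare-exchange would be ::atomicCAS together with __float_as_int/__int_as_float:

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

// Reinterpret a float's bits as an unsigned int and back, like
// __float_as_int / __int_as_float do on the device.
static std::uint32_t float_as_uint(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}
static float uint_as_float(std::uint32_t u) {
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}

// Host-side analogue of a CAS-based float atomicMin; returns the
// value that was stored before the call, like CUDA atomics do.
float atomicMinFloat(std::atomic<std::uint32_t>& target, float value) {
    std::uint32_t observed = target.load();
    // Retry until the stored value is already <= value, or our CAS wins.
    while (uint_as_float(observed) > value &&
           !target.compare_exchange_weak(observed, float_as_uint(value))) {
        // compare_exchange_weak refreshes 'observed' on failure
    }
    return uint_as_float(observed);
}
```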
Thanks guys, I didn't notice the extra bullet points in the documentation stating the conditions for atomicCAS and shared memory variables. I'm still learning the ropes of CUDA.
Thanks.
I'm trying to sort an array of structs on my GPU with thrust::sort. However, when I compile with nvcc, I get this warning:
ptxas /tmp/tmpxft_00005186_00000000-5_antsim.ptx, line 1520; warning : Double is not supported. Demoting to float
I've isolated the problem to my call to thrust::sort, here:
thrust::sort(thrustAnts, thrustAnts + NUM_ANTS, antSortByX());
thrustAnts is an array of Ant structs located on the GPU, while antSortByX is a functor as defined below:
typedef struct {
    float posX;
    float posY;
    float direction;
    float speed;
    u_char life;
    u_char carrying;
    curandState rngState;
} Ant;
struct antSortByX {
    __host__ __device__ bool operator()(Ant &antOne, Ant &antTwo) {
        return antOne.posX < antTwo.posX;
    }
};
It seems to me as though there aren't any doubles in this, though I'm suspicious the less-than operator in my functor evaluates those floats as doubles. I can solve this problem by compiling with -arch sm_13, but I'm curious as to why this is complaining at me in the first place.
The demotion happens because CUDA devices only support double precision calculations starting with compute capability 1.3. NVCC knows the device specifications and demotes every double to float for devices with CC < 1.3, simply because the hardware cannot handle double precision.
A good feature list can be found on Wikipedia: CUDA
That you can't see any doubles in this code doesn't mean they are not there. Most commonly this warning results from a missing f suffix on a floating-point constant. The compiler performs an implicit conversion of every float to double when one double is part of an expression, and a floating-point constant without the f suffix is a double value, so the conversion kicks in. However, for the less-than operator with no constant expressions involved, such a conversion should not happen.
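That promotion rule can be checked at compile time: mixing a float with an unsuffixed constant yields a double expression, while the f suffix keeps it in float.

```cpp
#include <type_traits>

float x = 1.0f;

// 2.0 is a double constant, so the float operand is promoted...
static_assert(std::is_same<decltype(x * 2.0), double>::value,
              "mixed expression becomes double");
// ...while the f suffix keeps the whole expression in float.
static_assert(std::is_same<decltype(x * 2.0f), float>::value,
              "suffixed constant stays float");
```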
I can only speculate, but it seems to me that in your case a double precision value could be used within the thrust::sort implementation, since you provide only a user function to a higher-order function (a function that takes functions as parameters).
I have a piece of C++ CUDA code which I have to write declaring the data variable in float. I also have to rewrite the code declaring the data variable in double.
What is a good design to handle a situation like this in CUDA?
I do not want to have two sets of same code because then in the future for any change I will have to have to change two sets of otherwise identical code. I also want to keep the code clean without too many #ifdef to change between float and double within the code.
Can anyone please suggest any good (in terms of maintenance and "easy to read") design?
CUDA supports type templating, and it is without doubt the most efficient way to implement kernel code where you need to handle multiple types in the same code.
As a trivial example, consider a simple BLAS AXPY type kernel:
template<typename Real>
__global__ void axpy(const Real *x, Real *y, const int n, const Real a)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;
    for (; tid < n; tid += stride) {
        Real yval = y[tid];
        yval += a * x[tid];
        y[tid] = yval;
    }
}
This templated kernel can be instantiated for both double and single precision without loss of generality:
template __global__ void axpy<float>(const float *, float *, const int, const float);
template __global__ void axpy<double>(const double *, double *, const int, const double);
The thrust template library, which ships with all recent versions of the CUDA toolkit, makes extensive use of this facility for implementing type agnostic algorithms.
In addition to templating, you may be able to achieve what you want with a single typedef:
typedef float mysize; // or double
Then just use mysize throughout where you would use float or double.
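A minimal sketch of that approach (mysize and the axpy_element function are just illustrative names):

```cpp
// Change this one line to double to rebuild everything in double precision.
typedef float mysize;

// Every routine is written against the alias, never the concrete type.
mysize axpy_element(mysize a, mysize x, mysize y) {
    return a * x + y;
}
```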
You might be interested in the simpleTemplates sample code, and there are other templatized CUDA examples as well, in addition to thrust where, as talonmies states, it's used extensively. Thrust provides many other benefits as well to C++ programmers.
Sometimes I have to convert from an unsigned integer value to a float. For example, my graphics engine takes in a SetScale(float x, float y, float z) with floats and I have an object that has a certain size as an unsigned int. I want to convert the unsigned int to a float to properly scale an entity (the example is very specific but I hope you get the point).
Now, what I usually do is:
unsigned int size = 5;
float scale = float(size);
My3DObject->SetScale(scale, scale, scale);
Is this good practice at all, under certain assumptions (see Notes)? Is there a better way than to litter the code with float()?
Notes: I cannot touch the graphics API. I have to use the SetScale() function, which takes in floats. Moreover, I also cannot touch size; it has to be an unsigned int. I am sure there are plenty of other examples with the same 'problem'. The above can be applied to any conversion that needs to be done where you as a programmer have little choice in the matter.
My preference would be to use static_cast:
float scale = static_cast<float>(size);
but what you are doing is functionally equivalent and fine.
There is an implicit conversion from unsigned int to float, so the cast is strictly unnecessary.
If your compiler issues a warning, then there isn't really anything wrong with using a cast to silence the warning. Just be aware that if size is very large it may not be representable exactly by a float.
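That caveat is easy to demonstrate: a float has a 24-bit significand, so the first unsigned value it cannot represent exactly is 2^24 + 1.

```cpp
// A float has a 24-bit significand, so 2^24 + 1 cannot be represented.
unsigned int fits = 16777216u;  // 2^24, exactly representable
unsigned int big  = 16777217u;  // 2^24 + 1, not representable

float f_fits = static_cast<float>(fits);  // exactly 16777216.0f
float f_big  = static_cast<float>(big);   // rounds to 16777216.0f
```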