Step vs. comparison operator in HLSL?

As an HLSL enthusiast, I've been in the habit of writing (float)(x >= y), usually to get a 0/1 multiplier for branch avoidance. I just revisited my intrinsic list and saw step(x, y). The two sound equivalent in output to me.
Are there any reasons to prefer one of these styles over the other?

I think they're equivalent. This shader:
inline float test1( float x, float y )
{
return (float)( x >= y );
}
inline float test2( float x, float y )
{
return step( x, y );
}
float2 main(float4 c: COLOR0): SV_Target
{
float2 res;
res.x = test1( c.x, c.y );
res.y = test2( c.z, c.w );
return res;
}
Compiles into the following DXBC instructions:
ps_4_0
dcl_input_ps linear v0.xyzw
dcl_output o0.xy
dcl_temps 1
ge r0.xy, v0.xwxx, v0.yzyy // If the comparison is true, then 0xFFFFFFFF is returned for that component.
and o0.xy, r0.xyxx, l(0x3f800000, 0x3f800000, 0, 0) // Component-wise logical AND, 0x3f800000 = 1.0f
ret
As you can see, the compiler treated both inline functions as equivalent; it even merged them together into a single two-lane vector comparison.
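One nuance worth keeping in mind is the argument order: step(x, y) returns 1.0 when y >= x, so (float)(x >= y) actually corresponds to step(y, x). You can see this in the listing above, where the compiler swizzled the ge operands (v0.w against v0.z) for the step variant. A scalar C++ emulation, just to illustrate the semantics (step_emu is a hypothetical stand-in for the intrinsic):
#include <cstdio>
// Scalar emulation of the HLSL intrinsic: step(edge, v) yields 1.0f when v >= edge.
static float step_emu(float edge, float v) { return v >= edge ? 1.0f : 0.0f; }
int main() {
    float x = 2.0f, y = 1.0f;
    std::printf("%g %g %g\n",
                (float)(x >= y),  // 1: the comparison-cast idiom
                step_emu(y, x),   // 1: same predicate, note the swapped arguments
                step_emu(x, y));  // 0: step(x, y) computes y >= x, a different predicate
    return 0;
}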

GLSL to HLSL issues

I'm trying to import many transitions from GL Transitions into my video sequencer by converting GLSL to HLSL.
For example, this simple cross fade:
vec4 transition (vec2 uv) {
return mix(
getFromColor(uv),
getToColor(uv),
progress
);
}
is correctly translated in my HLSL code:
#define D2D_INPUT_COUNT 2
#define D2D_INPUT0_SIMPLE
#define D2D_INPUT1_SIMPLE
#define D2D_REQUIRES_SCENE_POSITION // The pixel shader requires the SCENE_POSITION input.
#include "d2d1effecthelpers.hlsli"
cbuffer constants : register(b0)
{
float progress : packoffset(c0.x);
...
}
float4 crossfade(float4 v1,float4 v2)
{
return lerp(v1, v2, progress);
}
D2D_PS_ENTRY(main)
{
float4 v1 = D2DGetInput(0);
float4 v2 = D2DGetInput(1);
return crossfade(v1,v2);
}
The same approach doesn't work for the Wind effect:
// Custom parameters
uniform float size; // = 0.2
float rand (vec2 co) {
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
vec4 transition (vec2 uv) {
float r = rand(vec2(0, uv.y));
float m = smoothstep(0.0, -size, uv.x*(1.0-size) + size*r - (progress * (1.0 + size)));
return mix(
getFromColor(uv),
getToColor(uv),
m
);
}
This time the HLSL is:
float fract(float x)
{
return x - floor(x);
}
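// note: HLSL also has a built-in frac() intrinsic that does exactly this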
float rand(float2 co)
{
return fract(sin(dot(co.xy, float2(12.9898, 78.233))) * 43758.5453);
}
float4 wind(float4 v1, float4 v2,float2 uv)
{
float r = rand(float2(0, uv.y));
float p1 = 0.2f; // the "size" parameter, hardcoded for testing
float progress = 0.5f; // hardcoded for testing; both will be taken from the constant buffer
float m = smoothstep(0.0f, -p1, uv.x*(1.0f-p1) + p1*r - (progress * (1.0f + p1)));
return lerp(v1, v2, m);
}
D2D_PS_ENTRY(main)
{
float4 v1 = D2DGetInput(0);
float4 v2 = D2DGetInput(1);
return wind(v1,v2,D2DGetScenePosition().xy);
}
Have I misunderstood OpenGL's mix, fract, and rand? In my HLSL version I only get the second image's pixels, with no mixing.
EDIT: I've hardcoded size to 0.992 and multiplied progress by 4 in the HLSL. Now it seems to work; am I missing some bounds-related issue? Is the smoothstep function working as expected?
I found it: the main entry point needs to use D2DGetInputCoordinate instead of D2DGetScenePosition. GL Transitions expects uv normalized to [0, 1], which D2DGetInputCoordinate provides, while D2DGetScenePosition returns unnormalized scene-space pixel coordinates.
After doing that, the transitions run fine.
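To see why the coordinate space matters: with scene-space pixel coordinates, the argument to smoothstep lands far outside its [0, -size] window for nearly every pixel, so m saturates to a constant and no transition front is visible. A small C++ emulation (a sketch; smoothstep_emu stands in for the HLSL intrinsic, and the constants are just test values):
#include <algorithm>
#include <cstdio>
#include <initializer_list>
// Stand-in for HLSL smoothstep; note the edges arrive reversed (0, -size), as in the shader.
static float smoothstep_emu(float edge0, float edge1, float x) {
    float t = std::min(std::max((x - edge0) / (edge1 - edge0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}
int main() {
    const float size = 0.2f, progress = 0.5f, r = 0.3f;
    // With normalized uv.x in [0, 1], the argument crosses the window,
    // so m sweeps across the frame and a visible transition front forms:
    for (float uvx : {0.0f, 0.25f, 0.5f, 0.75f, 1.0f}) {
        float arg = uvx * (1.0f - size) + size * r - progress * (1.0f + size);
        std::printf("uv.x=%.2f  m=%.3f\n", uvx, smoothstep_emu(0.0f, -size, arg));
    }
    // With scene-space pixels (e.g. uv.x = 640), the argument is far outside
    // the window, m saturates to a constant, and no mixing is visible:
    float arg = 640.0f * (1.0f - size) + size * r - progress * (1.0f + size);
    std::printf("uv.x=640  m=%.3f\n", smoothstep_emu(0.0f, -size, arg));
    return 0;
}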

Differences between NVCC and NVRTC on compilation to PTX

Summary
I'm porting a simple raytracing application based on the Scratchapixel version to a bunch of GPU libraries. I successfully ported it to CUDA using the runtime API and the driver API, but it throws a Segmentation fault (core dumped) when I try to use the PTX compiled at runtime with NVRTC.
If I uncomment the #include <math.h> directive at the beginning of the kernel file (see below), it still works using NVCC (the generated PTX is exactly the same) but fails at compilation using NVRTC.
I want to know how I can make NVRTC behave just like NVCC (is it even possible?), or at least understand the reason behind these issues.
Detailed description
File kernel.cu (Kernel source):
//#include <math.h>
#define MAX_RAY_DEPTH 5
template<typename T>
class Vec3
{
public:
T x, y, z;
__device__ Vec3() : x(T(0)), y(T(0)), z(T(0)) {}
__device__ Vec3(T xx) : x(xx), y(xx), z(xx) {}
__device__ Vec3(T xx, T yy, T zz) : x(xx), y(yy), z(zz) {}
__device__ Vec3& normalize()
{
T nor2 = length2();
if (nor2 > 0) {
T invNor = 1 / sqrt(nor2);
x *= invNor, y *= invNor, z *= invNor;
}
return *this;
}
__device__ Vec3<T> operator * (const T &f) const { return Vec3<T>(x * f, y * f, z * f); }
__device__ Vec3<T> operator * (const Vec3<T> &v) const { return Vec3<T>(x * v.x, y * v.y, z * v.z); }
__device__ T dot(const Vec3<T> &v) const { return x * v.x + y * v.y + z * v.z; }
__device__ Vec3<T> operator - (const Vec3<T> &v) const { return Vec3<T>(x - v.x, y - v.y, z - v.z); }
__device__ Vec3<T> operator + (const Vec3<T> &v) const { return Vec3<T>(x + v.x, y + v.y, z + v.z); }
__device__ Vec3<T>& operator += (const Vec3<T> &v) { x += v.x, y += v.y, z += v.z; return *this; }
__device__ Vec3<T>& operator *= (const Vec3<T> &v) { x *= v.x, y *= v.y, z *= v.z; return *this; }
__device__ Vec3<T> operator - () const { return Vec3<T>(-x, -y, -z); }
__device__ T length2() const { return x * x + y * y + z * z; }
__device__ T length() const { return sqrt(length2()); }
};
typedef Vec3<float> Vec3f;
typedef Vec3<bool> Vec3b;
class Sphere
{
public:
const char* id;
Vec3f center; /// position of the sphere
float radius, radius2; /// sphere radius and radius^2
Vec3f surfaceColor, emissionColor; /// surface color and emission (light)
float transparency, reflection; /// surface transparency and reflectivity
int animation_frame;
Vec3b animation_position_rand;
Vec3f animation_position;
Sphere(
const char* id,
const Vec3f &c,
const float &r,
const Vec3f &sc,
const float &refl = 0,
const float &transp = 0,
const Vec3f &ec = 0) :
id(id), center(c), radius(r), radius2(r * r), surfaceColor(sc),
emissionColor(ec), transparency(transp), reflection(refl)
{
animation_frame = 0;
}
//[comment]
// Compute a ray-sphere intersection using the geometric solution
//[/comment]
__device__ bool intersect(const Vec3f &rayorig, const Vec3f &raydir, float &t0, float &t1) const
{
Vec3f l = center - rayorig;
float tca = l.dot(raydir);
if (tca < 0) return false;
float d2 = l.dot(l) - tca * tca;
if (d2 > radius2) return false;
float thc = sqrt(radius2 - d2);
t0 = tca - thc;
t1 = tca + thc;
return true;
}
};
__device__ float mix(const float &a, const float &b, const float &mixval)
{
return b * mixval + a * (1 - mixval);
}
__device__ Vec3f trace(
const Vec3f &rayorig,
const Vec3f &raydir,
const Sphere *spheres,
const unsigned int spheres_size,
const int &depth)
{
float tnear = INFINITY;
const Sphere* sphere = NULL;
// find intersection of this ray with the sphere in the scene
for (unsigned i = 0; i < spheres_size; ++i) {
float t0 = INFINITY, t1 = INFINITY;
if (spheres[i].intersect(rayorig, raydir, t0, t1)) {
if (t0 < 0) t0 = t1;
if (t0 < tnear) {
tnear = t0;
sphere = &spheres[i];
}
}
}
// if there's no intersection return black or background color
if (!sphere) return Vec3f(2);
Vec3f surfaceColor = 0; // color of the ray/surface of the object intersected by the ray
Vec3f phit = rayorig + raydir * tnear; // point of intersection
Vec3f nhit = phit - sphere->center; // normal at the intersection point
nhit.normalize(); // normalize normal direction
// If the normal and the view direction are not opposite to each other
// reverse the normal direction. That also means we are inside the sphere so set
// the inside bool to true. Finally reverse the sign of IdotN which we want
// positive.
float bias = 1e-4; // add some bias to the point from which we will be tracing
bool inside = false;
if (raydir.dot(nhit) > 0) nhit = -nhit, inside = true;
if ((sphere->transparency > 0 || sphere->reflection > 0) && depth < MAX_RAY_DEPTH) {
float facingratio = -raydir.dot(nhit);
// change the mix value to tweak the effect
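// note: this pow call with mixed float/int arguments is the one that later trips NVRTC (see the resolution below)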
float fresneleffect = mix(pow(1 - facingratio, 3), 1, 0.1);
// compute reflection direction (no need to normalize because all vectors
// are already normalized)
Vec3f refldir = raydir - nhit * 2 * raydir.dot(nhit);
refldir.normalize();
Vec3f reflection = trace(phit + nhit * bias, refldir, spheres, spheres_size, depth + 1);
Vec3f refraction = 0;
// if the sphere is also transparent compute refraction ray (transmission)
if (sphere->transparency) {
float ior = 1.1, eta = (inside) ? ior : 1 / ior; // are we inside or outside the surface?
float cosi = -nhit.dot(raydir);
float k = 1 - eta * eta * (1 - cosi * cosi);
Vec3f refrdir = raydir * eta + nhit * (eta * cosi - sqrt(k));
refrdir.normalize();
refraction = trace(phit - nhit * bias, refrdir, spheres, spheres_size, depth + 1);
}
// the result is a mix of reflection and refraction (if the sphere is transparent)
surfaceColor = (
reflection * fresneleffect +
refraction * (1 - fresneleffect) * sphere->transparency) * sphere->surfaceColor;
}
else {
// it's a diffuse object, no need to raytrace any further
for (unsigned i = 0; i < spheres_size; ++i) {
if (spheres[i].emissionColor.x > 0) {
// this is a light
Vec3f transmission = 1;
Vec3f lightDirection = spheres[i].center - phit;
lightDirection.normalize();
for (unsigned j = 0; j < spheres_size; ++j) {
if (i != j) {
float t0, t1;
if (spheres[j].intersect(phit + nhit * bias, lightDirection, t0, t1)) {
transmission = 0;
break;
}
}
}
surfaceColor += sphere->surfaceColor * transmission *
max(float(0), nhit.dot(lightDirection)) * spheres[i].emissionColor;
}
}
}
return surfaceColor + sphere->emissionColor;
}
extern "C" __global__
void raytrace_kernel(unsigned int width, unsigned int height, Vec3f *image, Sphere *spheres, unsigned int spheres_size, float invWidth, float invHeight, float aspectratio, float angle) {
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
if (y < height && x < width) {
float xx = (2 * ((x + 0.5) * invWidth) - 1) * angle * aspectratio;
float yy = (1 - 2 * ((y + 0.5) * invHeight)) * angle;
Vec3f raydir(xx, yy, -1);
raydir.normalize();
image[y*width+x] = trace(Vec3f(0), raydir, spheres, spheres_size, 0);
}
}
I can successfully compile it with nvcc --ptx kernel.cu -o kernel.ptx (full PTX here) and use that PTX with the driver API's cuModuleLoadDataEx using the following snippet. It works as expected.
It works fine even if I uncomment the #include <math.h> line (in fact, the PTX generated is exactly the same).
CudaSafeCall( cuInit(0) );
CUdevice device;
CudaSafeCall( cuDeviceGet(&device, 0) );
CUcontext context;
CudaSafeCall( cuCtxCreate(&context, 0, device) );
unsigned int error_buffer_size = 1024;
std::vector<CUjit_option> options;
std::vector<void*> values;
char* error_log = new char[error_buffer_size];
options.push_back(CU_JIT_ERROR_LOG_BUFFER); //Pointer to a buffer in which to print any log messages that reflect errors
values.push_back(error_log);
options.push_back(CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES); //Log buffer size in bytes. Log messages will be capped at this size (including null terminator)
values.push_back(&error_buffer_size);
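// note: pushing a pointer to the size here, rather than the size itself, turns out to be the bug diagnosed at the end of this question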
options.push_back(CU_JIT_TARGET_FROM_CUCONTEXT); //Determines the target based on the current attached context (default)
values.push_back(0); //No option value required for CU_JIT_TARGET_FROM_CUCONTEXT
CUmodule module;
CUresult status = cuModuleLoadDataEx(&module, ptxSource, options.size(), options.data(), values.data());
if (error_log && error_log[0]) { //https://stackoverflow.com/a/7970669/3136474
std::cout << "Compiler error: " << error_log << std::endl;
}
CudaSafeCall( status );
However, whenever I try to compile this exact kernel using NVRTC (full PTX here), it compiles successfully but gives me a Segmentation fault (core dumped) on the call to cuModuleLoadDataEx (when trying to use the resulting PTX).
If I uncomment the #include <math.h> line, it fails at the nvrtcCompileProgram call with the following output:
nvrtcSafeBuild() failed at cuda_raytracer_nvrtc_api.cpp:221 : NVRTC_ERROR_COMPILATION
Build log:
/usr/include/bits/mathcalls.h(177): error: linkage specification is incompatible with previous "isinf"
__nv_nvrtc_builtin_header.h(126689): here
/usr/include/bits/mathcalls.h(211): error: linkage specification is incompatible with previous "isnan"
__nv_nvrtc_builtin_header.h(126686): here
2 errors detected in the compilation of "kernel.cu".
The code I'm using to compile it with NVRTC is:
nvrtcProgram prog;
NvrtcSafeCall( nvrtcCreateProgram(&prog, kernelSource, "kernel.cu", 0, NULL, NULL) );
// https://docs.nvidia.com/cuda/nvrtc/index.html#group__options
std::vector<const char*> compilationOpts;
compilationOpts.push_back("--device-as-default-execution-space");
// NvrtcSafeBuild is a macro which automatically prints nvrtcGetProgramLog if the compilation fails
NvrtcSafeBuild( nvrtcCompileProgram(prog, compilationOpts.size(), compilationOpts.data()), prog );
size_t ptxSize;
NvrtcSafeCall( nvrtcGetPTXSize(prog, &ptxSize) );
char* ptxSource = new char[ptxSize];
NvrtcSafeCall( nvrtcGetPTX(prog, ptxSource) );
NvrtcSafeCall( nvrtcDestroyProgram(&prog) );
Then I simply load the ptxSource using the previous snippet (note: that code block is the same used for both the driver API version and the NVRTC version).
Additional things that I've noticed/tried so far
The PTX generated by NVCC and the one generated by NVRTC are quite different, but I'm unable to understand them well enough to identify possible problems.
Tried specifying the exact GPU architecture (in my case, CC 6.1) to the compiler; no difference.
Tried disabling all compiler optimizations (options --ftz=false --prec-sqrt=true --prec-div=true --fmad=false in nvrtcCompileProgram). The PTX file got bigger, but it still segfaults.
Tried adding --std=c++11 or --std=c++14 to the NVRTC compiler options. With either of them, NVRTC generates an almost empty (4-line) PTX but issues no warning or error until I try to use it.
Environment
OS: Ubuntu 18.04.4 LTS 64-bit
nvcc --version: Cuda compilation tools, release 10.1, V10.1.168. Built on Wed_Apr_24_19:10:27_PDT_2019
gcc --version: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Hardware: Intel I7-7700HQ, GeForce GTX 1050 Ti
Edit (one day later)
I forgot to add my environment; see the previous section.
"Also, can you compile the nvrtc output with ptxas?" – talonmies' comment
The nvcc-generated PTX compiles with a warning:
$ ptxas -o /tmp/temp_ptxas_output.o kernel.ptx
ptxas warning : Stack size for entry function 'raytrace_kernel' cannot be statically determined
This is due to the recursive kernel function (more on that); it can be safely ignored.
The nvrtc-generated PTX does not compile and issues the error:
$ ptxas -o /tmp/temp_ptxas_output.o nvrtc_kernel.ptx
ptxas fatal : Unresolved extern function '_Z5powiffi'
Based on this question, I added __device__ to the Sphere class constructor and removed the --device-as-default-execution-space compiler option.
It generates a slightly different PTX now, but the same error persists.
Compiling with #include <math.h> now also generates a lot of "A function without execution space annotations is considered a host function, and host functions are not allowed in JIT mode." warnings besides the previous errors.
If I try to use the accepted solution of that question, it throws a bunch of syntax errors and does not compile. NVCC still works flawlessly.
Just found the culprit by the ancient comment-and-test method: the error goes away if I remove the pow call used to calculate the fresnel effect inside the trace method.
For now, I've just replaced pow(var, 3) with var*var*var.
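For reference, the unresolved symbol _Z5powiffi demangles to powif(float, int), which matches the mixed float/int arguments in that call. A sketch of the changed lines inside trace (the second variant follows the answer below):
// original, which NVRTC turns into an unresolved powif(float, int) at JIT time:
// float fresneleffect = mix(pow(1 - facingratio, 3), 1, 0.1);
// stopgap: manual expansion
float f = 1 - facingratio;
float fresneleffect = mix(f * f * f, 1, 0.1);
// alternative: keep everything single-precision and call powf explicitly
// float fresneleffect = mix(powf(1.0f - facingratio, 3.0f), 1.0f, 0.1f);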
I created an MCVE and filed a bug report with NVIDIA: https://developer.nvidia.com/nvidia_bug/2917596.
Liam Zhang answered it and pointed me to the problem:
The issue in your code is that there is an incorrect option value being passed to cuModuleLoadDataEx. In lines:
options.push_back(CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES); //Log buffer size in bytes. Log messages will be capped at this size (including null terminator)
values.push_back(&error_buffer_size);
the buffer size option is provided, but instead of passing a value with the size, a pointer to that value is passed. Since this pointer is then read as a number, the driver assumed a much larger buffer size than 1024.
During the NVRTC compilation an "Unresolved extern function" error occurred because the pow function signature, as you can find in the documentation, is:
__device__ double pow ( double x, double y )
The segfault happened when the driver tried to zero that buffer before putting the error message in it.
Without the call to pow there was no compilation error, so the error buffer was not used and there was no segfault.
To ensure the device code is correct, the values used to call the pow function (as well as the output) should be doubles, or the float-equivalent function, powf, could be used.
If I change the call to values.push_back((void*)error_buffer_size);, it reports the same error as the ptxas compilation of the generated PTX:
Compiler error: ptxas fatal : Unresolved extern function '_Z5powiffi'
cudaSafeCall() failed at file.cpp:74 : CUDA_ERROR_INVALID_PTX - a PTX JIT compilation failed
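Putting the diagnosis together, a corrected option setup passes the buffer size by value, smuggled through the void* slot (a sketch of the fix, not the exact production code):
std::vector<CUjit_option> options;
std::vector<void*> values;
char error_log[1024] = {};
options.push_back(CU_JIT_ERROR_LOG_BUFFER);
values.push_back(error_log);
options.push_back(CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES);
values.push_back(reinterpret_cast<void*>(static_cast<uintptr_t>(sizeof(error_log)))); // the size itself, not a pointer to it
options.push_back(CU_JIT_TARGET_FROM_CUCONTEXT);
values.push_back(nullptr);
CUmodule module;
CUresult status = cuModuleLoadDataEx(&module, ptxSource, options.size(), options.data(), values.data());
// a JIT failure now produces a readable log in error_log instead of a segfault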

fast dirty approximation of center of (list of 3D vertex) that forms a very shallow convex hull

I want to find the XY of the center (red) of a set of convex-hull points (orange circles) that results from collision detection.
Using the separating-axis technique, I know for sure that the convex shape (pink) is relatively thin along the Z-axis.
In more than 90% of my use cases, the number of vertices is no more than 8.
My poor algorithm (AABB): MCVE
I tried to implement it by calculating the center point of the AABB.
However, when I use it in a real physics simulation, the collision point (red) is not accurate enough for box-stack stability.
Here is the test case (the vertices are extruded in +y and -y to create volume):
int main(){
std::vector<Vec3> hullPoints;
hullPoints.push_back(Vec3(-0.5,-0.5,-0.1));
hullPoints.push_back(Vec3(-0.5,-0.5,0.1));
hullPoints.push_back(Vec3(-0.5,0.5,-0.1));
hullPoints.push_back(Vec3(-0.5,0.5,0.1));
hullPoints.push_back(Vec3(0.5,-0.5,-0.2));
hullPoints.push_back(Vec3(0.5,-0.5,0.2));
hullPoints.push_back(Vec3(0.5,0.5,-0.2));
hullPoints.push_back(Vec3(0.5,0.5,0.2));
//^^^^ INPUT
Vec3 centerOfHull;// approximate
Vec3 centerMax=Vec3(-100000,-100000,-100000);
Vec3 centerMin=Vec3(100000,100000,100000);
for(unsigned int n=0;n<hullPoints.size();n++){
Vec3 hullPoint=hullPoints[n];
for(unsigned int m3=0;m3<3;m3++){
centerMax[m3]=std::max( centerMax[m3],hullPoint[m3]);
centerMin[m3]=std::min( centerMin[m3],hullPoint[m3]);
}
}
centerOfHull=centerMax*0.5 + centerMin*0.5;
std::cout<<"centerOfHull="<< centerOfHull.toString()<<std::endl;
//it prints (0,0,0)
}
I want it to return something like Vec3(a value between 0.05 and 0.45, 0, don't care).
References
I want a very fast algorithm that doesn't have to be very accurate. There are some algorithms on the internet, e.g.:
Skeleton (so, unrelated): Better "centerpoint" than centroid
Just average all hull points. Its accuracy is too bad (e.g. the result for my example is Vec3(0,0,0); see the sketch below), and it is even worse for unevenly distributed vertices.
Generate the whole convex hull (and all faces). It is too slow for the unnecessarily high precision it provides.
Answers don't need to contain any C++ code; just a rough suggestion can be very useful.
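For completeness, here is that plain-average baseline, using the Vec3 from the appendix (a sketch; it reproduces the (0,0,0) result mentioned above because the symmetric coordinate pairs cancel):
Vec3 averageOfHull(const std::vector<Vec3>& hullPoints)
{
    if (hullPoints.empty()) return Vec3(0.0f, 0.0f, 0.0f);
    Vec3 sum(0.0f, 0.0f, 0.0f);
    for (unsigned int n = 0; n < hullPoints.size(); n++)
        sum = sum + hullPoints[n];
    // for the 8-point example, the +-x, +-y, and +-z pairs cancel, so this returns (0,0,0)
    return sum * (1.0f / hullPoints.size());
}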
Appendix (Vec3 library)
It is provided only for MCVE completeness.
#include <vector>
#include <iostream>
#include <string>
struct Vec3{
// modified from https://www.flipcode.com/archives/Faster_Vector_Math_Using_Templates.shtml
float x, y, z;
inline Vec3( void ) {}
inline Vec3( const float x, const float y, const float z )
{ this->x = x; this->y = y; this->z = z; }
inline Vec3 operator + ( const Vec3& A ) const {
return Vec3( x + A.x, y + A.y, z + A.z );
}
inline Vec3 operator *( const float& A ) const {
return Vec3( x*A, y*A,z*A);
}
inline float Dot( const Vec3& A ) const {
return A.x*x + A.y*y + A.z*z;
}
inline float& operator[]( int arr) {
switch(arr){
case 0: return x;
case 1: return y;
case 2: return z;
}
std::cout<<"error"<<std::endl;
return x;
}
std::string toString( ) const {
return "("+std::to_string(x)+","+std::to_string(y)+","+std::to_string(z)+")";
}
};

error X8000 : D3D11 Internal Compiler error : Invalid Bytecode: Invalid operand type for operand #1 of opcode #86 (counts are 1-based)

I'm absolutely stumped, as are my instructors and lab assistants.
For some reason, the following HLSL code is returning this in the output window:
error X8000 : D3D11 Internal Compiler error : Invalid Bytecode: Invalid operand type for operand #1 of opcode #86 (counts are 1-based).
Here's the function in the HLSL causing the issue:
// Projects a sphere of the given diameter into screen space to calculate the desired tessellation factor
float SphereToScreenSpaceTessellation(float3 p0, float3 p1, float diameter)
{
float3 centerPoint = (p0 + p1) * 0.5f;
float4 point0 = mul( float4(centerPoint,1.0f) , gTileWorldView);
float4 point1 = point0;
point1.x += diameter;
float4 point0ClipSpace = mul(point0, gTileProj);
float4 point1ClipSpace = mul(point1, gTileProj);
point0ClipSpace /= point0ClipSpace.w;
point1ClipSpace /= point1ClipSpace.w;
point0ClipSpace.xy *= gScreenSize;
point1ClipSpace.xy *= gScreenSize;
float projSizeOfEdge = distance(point0ClipSpace, point1ClipSpace);
float result = projSizeOfEdge / gTessellatedTriWidth;
return clamp(result, 0, 64);
}
I've narrowed it down to the point where it may be the mul intrinsic. We've taken everything out of the code and tried returning a temporary variable like this, and it works fine:
float SphereToScreenSpaceTessellation(float3 p0, float3 p1, float diameter)
{
float temp = 0;
float3 centerPoint = (p0 + p1) * 0.5f;
float4 point0 = mul( float4(centerPoint,1.0f) , gTileWorldView);
float4 point1 = point0;
point1.x += diameter;
float4 point0ClipSpace = mul(point0, gTileProj);
float4 point1ClipSpace = mul(point1, gTileProj);
point0ClipSpace /= point0ClipSpace.w;
point1ClipSpace /= point1ClipSpace.w;
point0ClipSpace.xy *= gScreenSize;
point1ClipSpace.xy *= gScreenSize;
float projSizeOfEdge = distance(point0ClipSpace, point1ClipSpace);
float result = projSizeOfEdge / gTessellatedTriWidth;
return temp;
//return clamp(result, 0, 64);
}
If anyone is wondering:
gTileWorldView, gTileProj are float4x4's in a .hlsli file
gScreenSize is a float2 in a .hlsli file.
gTessellatedTriWidth is a float in a .hlsli file.
The function above is as stated in a 2011 NVIDIA shader at http://dx11-xpr.googlecode.com/svn/trunk/XPR/Media/Effects/TerrainTessellation.fx
I tried copying and pasting their solution, replacing their variables with the ones above, and the same error occurs.
I'm absolutely stumped, and I need assistance in order to do this assignment. Please help.
Check out this line:
point0ClipSpace.xy *= gScreenSize;
Is gScreenSize a float2? I do not believe you can scalar-multiply a vector by another vector type.

C++ data structures: converting from TNT Vector to GL vec3

I'm using a data structure, not written by me, that returns a realVec. This is the declaration of realVec (in typedef.h):
typedef TNT::Vector<PCReal> realVec;
For the definition of TNT Vector please see: http://calicam.sourceforge.net/doxygen/tnt__vector_8h_source.html
Definition of PCReal:
typedef double PCReal;
I need to convert this realVec into the following vec3:
struct vec3 {
GLfloat x;
GLfloat y;
GLfloat z;
//
// --- Constructors and Destructors ---
//
vec3( GLfloat s = GLfloat(0.0) ) :
x(s), y(s), z(s) {}
vec3( GLfloat _x, GLfloat _y, GLfloat _z ) :
x(_x), y(_y), z(_z) {}
vec3( const vec3& v ) { x = v.x; y = v.y; z = v.z; }
vec3( const vec2& v, const float f ) { x = v.x; y = v.y; z = f; }
...
I'm very new to C++, so my confusion probably lies in using TNT::Vector's iterator and converting the values it returns. I'm thinking of something like the code below; tell me if it makes sense. It seems to compile (no 'make' errors):
realVec normal = this->meshPoints.getNormalForVertex(i);
PCReal* iter = normal.begin();
vec3(*iter++, *iter++, *iter++);
I need this because I'm doing GL programming, and it's convenient for my shaders to take vec3s as input.
What you have might work, but the order in which the three *iter++ arguments are evaluated is unspecified, so the components could end up in the wrong slots. You could improve it with some safety checks, and you don't really have to use iterators at all: realVec appears to provide operator[] like most other vector classes.
realVec normal = this->meshPoints.getNormalForVertex(i);
if (normal.size() >= 3)
{
vec3 x(normal[0], normal[1], normal[2]);
}
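A small wrapper keeps the call sites tidy and makes the double-to-GLfloat narrowing explicit (a sketch, assuming size() and operator[] behave as used above):
// Convert a realVec (TNT::Vector<PCReal>, i.e. doubles) into a vec3,
// guarding the size and narrowing explicitly.
vec3 toVec3(const realVec& v)
{
    if (v.size() < 3)
        return vec3(0.0f); // or report an error, whatever suits your code
    return vec3(static_cast<GLfloat>(v[0]),
                static_cast<GLfloat>(v[1]),
                static_cast<GLfloat>(v[2]));
}
Then vec3 n = toVec3(this->meshPoints.getNormalForVertex(i)); reads naturally at the call site.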