How to remove #define use in a C++ array constant?

Having this code:
#define GREEN 0.0f, 1.0f, 0.0f
#define RED 1.0f, 0.0f, 0.0f
const float colors[] = {
RED, GREEN, RED, RED,
};
I can't think of a better (typed) way to create colors without using the #define. Is there a better way? Also, please keep the C++11 standard in mind.
UPDATE:
Full example of code using this kind of define, https://bitbucket.org/alfonse/gltut/src/3ee6f3dd04a76a1628201d2543a85e444bae8d25/Tut%2005%20Objects%20in%20Depth/OverlapNoDepth.cpp?at=default

I'm not sure I understand what you're trying to do, but to create a list of colors I would do it like this:
#include <vector>

class color {
public:
    color(float r, float g, float b)
    : m_red(r), m_green(g), m_blue(b) { }

    float m_red;
    float m_green;
    float m_blue;
};

const auto red = color(1.0f, 0.0f, 0.0f);
const auto green = color(0.0f, 1.0f, 0.0f);
const auto blue = color(0.0f, 0.0f, 1.0f);

int main() {
    auto colors = std::vector<color>();
    colors.push_back(red);
    colors.push_back(green);
    colors.push_back(blue);
    colors.push_back(red);
    ...
}
Edit
As juanchopanza suggested, I initialized the floats in the constructor initialization list.
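Since the question mentions C++11, the whole list can also be brace-initialized in one go instead of using repeated push_back calls. A minimal sketch, reusing the color class and constants defined above:
#include <vector>

int main() {
    // C++11 list-initialization; copies the constants into the vector
    const std::vector<color> colors{ red, green, red, red };
    return colors.size() == 4 ? 0 : 1;
}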

As Elasticboy suggested, do something like this:
struct Color {
    float R;
    float G;
    float B;
};
And now, create constants:
const Color Red = {1.0f, 0.0f, 0.0f };
const Color Green = {0.0f, 1.0f, 0.0f };
and so on...
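To get back to the original colors array, the constants can then simply be aggregated into a typed array. A minimal sketch, assuming the Red and Green constants above:
// Typed replacement for the #define-based array
const Color colors[] = { Red, Green, Red, Red };
If an API still expects a flat float array, note that treating an array of Color as packed floats is common in practice but relies on the struct having no padding, which is not strictly guaranteed by the standard.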

You can use an enum here, e.g.:
typedef enum color
{
RED, GREEN, BLUE
} color;
Alternatively, you can assign specific values to the colors, e.g.:
typedef enum color
{
RED=1, GREEN=5, BLUE=7
} color;
The only thing you have to keep in mind is that these are named integer constants; float values are not allowed here.
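If the actual float components are still needed (as in the original colors array), one common pattern is to pair the enum with a lookup table. A minimal sketch, assuming the first (0-based, contiguous) variant of the enum:
enum color { RED, GREEN, BLUE };

// Lookup table indexed by the enumerators above; each row holds r, g, b
const float color_table[][3] = {
    { 1.0f, 0.0f, 0.0f },  // RED
    { 0.0f, 1.0f, 0.0f },  // GREEN
    { 0.0f, 0.0f, 1.0f }   // BLUE
};

// Example: fetch the green component of RED (yields 0.0f)
float g = color_table[RED][1];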

Related

LNK2005 Error When Implementing static Fields In a Struct

I get LNK2005 "public: static struct Color Color::Black already defined in ***.obj".
Color.h file contents:
#pragma once
struct Color
{
    Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f } {}
    float R;
    float G;
    float B;
    float A;
    static Color Black;
};
Color Color::Black = Color(0.0f, 0.0f, 0.0f);
What would be the correct way of implementing a bunch of default colors like black, white, red, green, etc?
I would go for this:
// header file
#pragma once
struct Color
{
    Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f } {}
    float R;
    float G;
    float B;
    float A;
    static const Color Black;
    static const Color Red;
    // etc
};
// cpp file
const Color Color::Black = Color(0.0f, 0.0f, 0.0f);
const Color Color::Red = Color(1.0f, 0.0f, 0.0f);
// etc
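A short usage sketch, assuming the header above is saved as Color.h:
// main.cpp -- any other translation unit that includes the header
#include "Color.h"

int main() {
    // The constants are ordinary objects with a single definition in the cpp file
    const Color& bg = Color::Black;
    return bg.A == 1.0f ? 0 : 1;
}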

Correct form for dynamic aggregate initialisation of a struct?

I'm trying to do something very simple but I'm doing something wrong.
Header file:
class Example
{
public:
    typedef struct
    {
        float Position[3];
        float Color[4];
        float TexCoord[2];
    } IndicatorVertex;

    void doSomething();
};
.cpp file:
void Example::doSomething()
{
IndicatorVertex *vertices;
vertices = IndicatorVertex[] {
{{-1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}}
{{1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
};
}
Upon compilation, I'm getting Error:(12, 13) unexpected type name 'IndicatorVertex': expected expression.
(I'm intentionally not using std::vector etc.; I'm deliberately using C features in a C++11 setting.)
You can't create an array like that; you need to define an actual array, like
IndicatorVertex vertices[] = { ... };
If you later need a pointer, remember that arrays naturally decay to a pointer to their first element. So if you, for example, want to call a function which expects an IndicatorVertex* argument, just pass in vertices and it will work as expected.
If you want several arrays and a pointer that can refer to one of them, define the arrays as shown above and then make vertices point to one of them, like
IndicatorVertex vertices1[] = { ... };
IndicatorVertex vertices2[] = { ... };
// ...
IndicatorVertex* vertices = vertices1;
// ...
vertices = vertices2;
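A short sketch of the decay mentioned above, assuming the Example class from the question is visible; the draw helper is hypothetical and only there for illustration:
#include <cstddef>

// Hypothetical helper: takes a pointer to the first vertex plus an element
// count -- exactly what the decayed array provides.
static void draw(const Example::IndicatorVertex* verts, std::size_t count) {
    (void)verts;
    (void)count;
}

void Example::doSomething() {
    IndicatorVertex vertices[] = {
        {{-1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
        {{ 1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
    };
    draw(vertices, 2);  // the array decays to a pointer to its first element
}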

How does the compiler choose the correct overloaded function?

I have a class with the following constructors:
Color(const float red = 0.0f, const float green = 0.0f, const float blue = 0.0f, const float alpha = 1.0f);
Color(const unsigned char red, const unsigned char green, const unsigned char blue, const unsigned char alpha);
Color(const unsigned long int color);
If I call it like this:
Color c{ 0.0f, 1.0f, 0.0f, 1.0f };
everything is ok. But if I call it:
Color c{ 78, 180, 84, 255 };
or
Color c{ 0xffffffff };
I receive
error C2668: 'Color::Color' : ambiguous call to overloaded function
Why? How can I make it choose correctly?
Color c{ 0.0f, 1.0f, 0.0f, 1.0f }; is unambiguous; the compiler can pick the constructor that takes floating-point arguments.
With Color c{ 78, 180, 84, 255 };, the literals have type int, a signed type, so the compiler has to convert them. It has two equally good choices and doesn't know which one to pick.
If you'd written, albeit tediously, Color c{static_cast<unsigned char>(78), static_cast<unsigned char>(180), static_cast<unsigned char>(84), static_cast<unsigned char>(255) }; then the constructor taking const unsigned char arguments would have been called automatically.
With Color c{ 0xffffffff };, the literal is not of type unsigned long either (on typical platforms it has type unsigned int), so a conversion is still required and the compiler again cannot decide between the overloads.
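A less verbose alternative to the casts is to make the literal match one overload exactly; a minimal sketch, assuming the constructors shown in the question:
// Exact match for Color(unsigned long): the UL suffix gives the literal that
// type, so no conversion is needed and the call is no longer ambiguous
Color c{ 0xffffffffUL };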

Generating a dynamic 3D matrix

I need to dynamically generate a 3D matrix like this:
float vCube[8][3] = {
{1.0f, -1.0f, -1.0f}, {1.0f, -1.0f, 1.0f},
{-1.0f, -1.0f, 1.0f}, {-1.0f, -1.0f, -1.0f},
{1.0f, -1.0f, -1.0f}, {1.0f, 1.0f, 1.0f},
{-1.0f, 1.0f, 1.0f}, {-1.0f, 1.0f, -1.0f}
};
I mean, I want to take values and put them into the matrix at run time.
I tried making a pointer to float and then adding elements with new, but the results were not what I wanted.
Note that I don't want to use the STL (vector and so on), just a plain matrix.
Whether you use a vector or not, I would suggest you use:
struct Elem3D
{
    float v[3];
};
Then you can quite easily create a vector:
vector <Elem3D> cube(8);
or dynamically allocate a number of elements
Elem3D *cube = new Elem3D[8];
Working with two-dimensional arrays without using a struct or class quite quickly gets very messy, both syntactically and conceptually.
You can also store a 3D matrix in a one-dimensional array:
x = height
y = width
z = depth
float VCube[x * y * z]
a(i, j, k) = VCube[(i * y + j) * z + k]   // 0 <= i < x, 0 <= j < y, 0 <= k < z
One interesting question is which solution (this one or Mats Petersson's) reduces cache misses if we want to do matrix operations; see the sketch below.
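A small sketch of that flat-array layout, with the index computation wrapped in a helper (the struct and its names are illustrative, not part of the answer above):
#include <cstddef>

// Flat storage for an x-by-y-by-z matrix, laid out in row-major order
struct Matrix3D {
    std::size_t x, y, z;  // dimensions
    float* data;          // points to x * y * z floats, e.g. new float[x * y * z]

    float& at(std::size_t i, std::size_t j, std::size_t k) {
        // 0 <= i < x, 0 <= j < y, 0 <= k < z
        return data[(i * y + j) * z + k];
    }
};

// Usage: Matrix3D m{8, 3, 1, new float[8 * 3 * 1]}; m.at(0, 1, 0) = -1.0f;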
To initialize a two-dimensional array, first define the variable:
float vCube[8][3];
Then create a function that initializes vCube, or do the initialization in the constructor, like this:
void function(float a, float b, float c) {
    // fill every row of vCube with the same three components
    for (int i = 0; i < 8; i++) {
        vCube[i][0] = a;
        vCube[i][1] = b;
        vCube[i][2] = c;
    }
}

Broken line if using glTranslate

I'm trying to draw a line strip on an opengl project.
If I apply the translation with glTranslatef on the transformation matrix, the magenta line strip is drawn broken, as shown in the figure.
When moving the view, the line strip is broken at different points, drawn correctly, or not drawn at all.
If I translate the points manually, the line strip is always displayed correctly.
The other lines (red: GL_LINE_LOOP, cyan: GL_LINES) are translated manually and work properly.
Here is the code with glTranslate:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef( offs_x, offs_y, 0);
glLineWidth(2.0f);
glColor3f(1.0f, 0.0f, 1.0f);
glVertexPointer( 3, GL_FLOAT, 0, trailPoints );
glDrawArrays(GL_LINE_STRIP,0,numTrailPoints);
glPopMatrix();
and here the working code with manual translation:
for (i = 0; i < numTrailPoints; i++)
{
    translatedTrailPoints[i].x = trailPoints[i].x + offs_x;
    translatedTrailPoints[i].y = trailPoints[i].y + offs_y;
    translatedTrailPoints[i].z = trailPoints[i].z;
}
glLineWidth(2.0f);
glColor3f(1.0f, 0.0f, 1.0f);
glVertexPointer( 3, GL_FLOAT, 0, translatedTrailPoints);
glDrawArrays(GL_LINE_STRIP,0,numTrailPoints);
What am I missing here?
EDIT :
To complete the question, here are the data structures (in inverted declaration order for better readability):
vec3 translatedTrailPoints[C_MAX_NUM_OF_TRAIL_POINTS];
vec3 trailPoints[C_MAX_NUM_OF_TRAIL_POINTS];
typedef union
{
    float array[3];
    struct { float x, y, z; };
    struct { float r, g, b; };
    struct { float s, t, p; };
    struct { vec2 xy; float zz; };
    struct { vec2 rg; float bb; };
    struct { vec2 st; float pp; };
    struct { float xx; vec2 yz; };
    struct { float rr; vec2 gb; };
    struct { float ss; vec2 tp; };
    struct { float theta, phi, radius; };
    struct { float width, height, depth; };
    struct { float longitude, latitude, altitude; };
    struct { float pitch, yaw, roll; };
} vec3;

typedef union
{
    float array[2];
    struct { float x, y; };
    struct { float s, t; };
} vec2;
I tried datenwolf's suggestion, but with no success: I tried #pragma pack(1), (2) and (4) before the vec2 and vec3 declarations, and compiling with /Zp1, /Zp2 and /Zp4 (I'm on Visual Studio 2008), but the broken lines/points persist.
EDIT2 :
Same problems with textured quads:
vec3 point;
point.x = lon;
point.y = lat;
point.z = 500;
glTranslatef( offs_x, offs_y, 0);
glBindTexture(GL_TEXTURE_2D, iconTextures[0]);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(point.x-C_ICON_WORLD_SIZE, point.y-C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(point.x+C_ICON_WORLD_SIZE, point.y-C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(point.x+C_ICON_WORLD_SIZE, point.y+C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(point.x-C_ICON_WORLD_SIZE, point.y+C_ICON_WORLD_SIZE, point.z);
glEnd();
Results when changing the view (screenshots omitted): drawn correctly, bad 1, bad 2.
EDIT3 :
I was able to fix the textured-quad case by translating by (point.x + offs_x, point.y + offs_y, point.z) and removing the point coordinates from the glVertex calls. The behaviour in the previous case still puzzles me.
Try calling glLoadIdentity() between the glPushMatrix() and glPopMatrix() calls: it resets the coordinate system, so the translation is applied to a fresh matrix.
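Applied to the snippet in the question, that suggestion would look roughly like this; note that loading the identity also discards whatever view transform was already on the modelview stack:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();                      // start from a fresh matrix...
glTranslatef(offs_x, offs_y, 0.0f);    // ...then apply the translation
glLineWidth(2.0f);
glColor3f(1.0f, 0.0f, 1.0f);
glVertexPointer(3, GL_FLOAT, 0, trailPoints);
glDrawArrays(GL_LINE_STRIP, 0, numTrailPoints);
glPopMatrix();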