I get LNK2005 "public: static struct Color Color::Black" already defined in ***.obj.
Color.h file contents:
#pragma once
struct Color
{
Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f }{}
float R;
float G;
float B;
float A;
static Color Black;
};
Color Color::Black = Color(0.0f, 0.0f, 0.0f);
What would be the correct way of implementing a bunch of default colors like black, white, red, green, etc?
I would go for this
// header file
#pragma once
struct Color
{
Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f }{}
float R;
float G;
float B;
float A;
static const Color Black;
static const Color Red;
// etc
};
// cpp file
const Color Color::Black = Color(0.0f, 0.0f, 0.0f);
const Color Color::Red = Color(1.0f, 0.0f, 0.0f);
// etc
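If C++17 is available, there is an alternative worth knowing: inline variables let the definitions live in the header itself, so no separate .cpp file is needed. A sketch, assuming a C++17-capable compiler:

```cpp
#include <cassert>
#pragma once  // as in the original header

struct Color
{
    Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f } {}
    float R;
    float G;
    float B;
    float A;
    static const Color Black;
    static const Color Red;
};

// C++17 inline variables: one shared definition even when the header
// is included in many translation units, so no LNK2005.
inline const Color Color::Black = Color(0.0f, 0.0f, 0.0f);
inline const Color Color::Red   = Color(1.0f, 0.0f, 0.0f);
```

Pre-C++17, the .cpp-file approach above remains the way to go.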
Related
I'm trying to write a simple 3D software engine, but I've run into a little problem.
I have a class called Mesh which contains vertex and edge data:
struct Vertex { float x, y, z; };
struct Edge { int from, to; };
template <int vs, int es>
class Mesh {
public:
Vertex vertices[vs];
int vSize = vs;
Edge edges[es];
int eSize = es;
};
then a derived class called Cube which specifies the vertex and edge data for a cube (I will later on add more shapes of course):
class Cube : public Mesh<8, 12> {
public:
inline Cube() {
Vertex v[] = {
{ -1.0f, -1.0f, -1.0f },
{ 1.0f, -1.0f, -1.0f },
{ 1.0f, 1.0f, -1.0f },
{ -1.0f, 1.0f, -1.0f },
{ -1.0f, -1.0f, 1.0f },
{ 1.0f, -1.0f, 1.0f },
{ 1.0f, 1.0f, 1.0f },
{ -1.0f, 1.0f, 1.0f }
};
for (int i = 0; i < 8; i++)
this->vertices[i] = v[i];
Edge e[] = {
{ 0,1 },{ 1,2 },{ 2,3 },{ 3,0 },
{ 4,5 },{ 5,6 },{ 6,7 },{ 7,4 },
{ 0,4 },{ 1,5 },{ 2,6 },{ 3,7 }
};
for (int i = 0; i < 12; i++)
this->edges[i] = e[i];
}
};
And after that a class called Engine, which has an array of Mesh parent classes, which should be able to hold Cube and later Triangle etc..
template <int w, int h, int mSize>
class Engine {
private:
int width = w;
int height = h;
Mesh meshes[mSize]; <-- problem
int mCount = 0;
byte fBuffer[w][h];
byte bBuffer[w][h];
public:
inline Engine() {};
inline void addMesh(Mesh mesh) { this->meshes[this->mCount++] = mesh; }
};
which yields this error:
Engine.h: 19:3: error: invalid use of template-name 'Mesh' without an argument list
Mesh* meshes = new Mesh[m]
Engine.h: 25:23: error: 'Mesh' is not a type
inline void addMesh(Mesh mesh) { this->meshes[this->mCount++] = mesh; }
I know it's because the Mesh meshes[mSize]; should have Mesh<a, b> values but of course I don't know that for every possible Mesh.
What's a better way of storing these?
I suppose you could add a non-template base for Mesh, say mBase
struct mBase { };
template <std::size_t vs, std::size_t es>
struct Mesh : public mBase
{ };
and define meshes as an array of mBases
mBase meshes[mSize]; // <-- no more problem
void addMesh(mBase mesh) { this->meshes[this->mCount++] = mesh; }
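One caveat with this sketch (my note, not part of the original answer): storing mBase by value slices away everything declared in the derived Mesh, so the stored elements lose all vertex and edge data. Storing pointers avoids the slicing:

```cpp
#include <cassert>
#include <cstddef>

struct Vertex { float x, y, z; };
struct Edge   { int from, to; };

struct mBase {
    virtual ~mBase() {}  // virtual destructor: safe deletion through the base
};

template <std::size_t vs, std::size_t es>
struct Mesh : mBase {
    Vertex vertices[vs];
    Edge   edges[es];
};

template <int mSize>
class Engine {
    mBase* meshes[mSize];  // pointers, so no slicing of the derived data
    int mCount;
public:
    Engine() : mCount(0) {}
    void addMesh(mBase* mesh) { meshes[mCount++] = mesh; }
    int count() const { return mCount; }
};
```

With pointers, the Engine no longer owns the meshes; the caller (or a smart pointer) has to keep them alive for as long as the Engine uses them.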
You don't have to use templates for what you're trying to achieve here. So why not declare your Mesh::vertices and Mesh::edges as std::vectors (for instance), and fill them as you construct your derived objects?
Like so:
#include <vector>
class Mesh {
public:
std::vector<Vertex> vertices;
std::vector<Edge> edges;
};
class Cube : public Mesh {
public:
Cube() {
// The following initialization is only allowed since C++11,
// but there are other tricks for initializing vectors before C++11
this->vertices = {
{ -1.0f, -1.0f, -1.0f },
{ 1.0f, -1.0f, -1.0f },
{ 1.0f, 1.0f, -1.0f },
{ -1.0f, 1.0f, -1.0f },
{ -1.0f, -1.0f, 1.0f },
{ 1.0f, -1.0f, 1.0f },
{ 1.0f, 1.0f, 1.0f },
{ -1.0f, 1.0f, 1.0f }
};
this->edges = {
{ 0,1 },{ 1,2 },{ 2,3 },{ 3,0 },
{ 4,5 },{ 5,6 },{ 6,7 },{ 7,4 },
{ 0,4 },{ 1,5 },{ 2,6 },{ 3,7 }
};
}
};
Note that you don't need to store the sizes of these vectors yourself, since each vector knows its own size via std::vector<T>::size() (e.g. edges.size()).
Make sure to be familiar with the templated objects from the STL before creating your own ;)
(In fact, your classes here should probably be structs... but that is out of the scope of the problem, I guess...)
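With a non-template Mesh like this, the Engine's storage problem from the question also disappears, because meshes of any shape can sit in one plain vector. A minimal sketch:

```cpp
#include <cassert>
#include <vector>

struct Vertex { float x, y, z; };
struct Edge   { int from, to; };

class Mesh {
public:
    std::vector<Vertex> vertices;
    std::vector<Edge> edges;
};

class Cube : public Mesh {
public:
    Cube() {
        vertices = { {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},
                     {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1} };
        edges = { {0,1},{1,2},{2,3},{3,0},
                  {4,5},{5,6},{6,7},{7,4},
                  {0,4},{1,5},{2,6},{3,7} };
    }
};

// Mesh is no longer a template, so a plain vector works. Copying a
// Cube into a Mesh keeps all its data, since the vectors live in the
// base class itself.
std::vector<Mesh> meshes;
```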
I have a class with the following constructors:
Color(const float red = 0.0f, const float green = 0.0f, const float blue = 0.0f, const float alpha = 1.0f);
Color(const unsigned char red, const unsigned char green, const unsigned char blue, const unsigned char alpha);
Color(const unsigned long int color);
If I call it like this:
Color c{ 0.0f, 1.0f, 0.0f, 1.0f };
everything is ok. But if I call it:
Color c{ 78, 180, 84, 255 };
or
Color c{ 0xffffffff };
I receive
error C2668: 'Color::Color' : ambiguous call to overloaded function
Why? How to make it choose correctly?
Color c{ 0.0f, 1.0f, 0.0f, 1.0f }; is unambiguous, the compiler can pick the constructor that takes floating point arguments.
With Color c{ 78, 180, 84, 255 };, the literals have type int. So the compiler has to convert them, and it has two equally good choices (float or unsigned char) and doesn't know which one to pick.
If you'd written, albeit tediously, Color c{static_cast<unsigned char>(78), static_cast<unsigned char>(180), static_cast<unsigned char>(84), static_cast<unsigned char>(255) }; then the constructor taking const unsigned char arguments would have been called automatically.
Again, with Color c{ 0xffffffff };, the literal's type is unsigned int (a hexadecimal literal takes the first of int, unsigned int, long, unsigned long, ... that can hold its value). The conversions to unsigned long, float, and unsigned char are all viable and equally ranked, so the compiler doesn't know which one to use.
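One way out (my suggestion, assuming you can change the class; the names fromBytes and fromPacked are mine, not from the question) is to keep only the float constructor and expose the other forms as named factory functions, so the caller states the intent explicitly:

```cpp
#include <cassert>

struct Color {
    float r, g, b, a;

    explicit Color(float red = 0.0f, float green = 0.0f,
                   float blue = 0.0f, float alpha = 1.0f)
        : r(red), g(green), b(blue), a(alpha) {}

    // Named factories (hypothetical names) remove the overload
    // ambiguity entirely: there is nothing left to guess.
    static Color fromBytes(unsigned char red, unsigned char green,
                           unsigned char blue, unsigned char alpha = 255) {
        return Color(red / 255.0f, green / 255.0f,
                     blue / 255.0f, alpha / 255.0f);
    }

    static Color fromPacked(unsigned long rgba) {
        return fromBytes((rgba >> 24) & 0xFF, (rgba >> 16) & 0xFF,
                         (rgba >> 8) & 0xFF, rgba & 0xFF);
    }
};
```

Now Color c = Color::fromBytes(78, 180, 84, 255); and Color c = Color::fromPacked(0xffffffff); are both unambiguous, without any static_cast noise at the call sites.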
I have a class called Color which has some static Color objects in them. I would like to have a static vector of these static Color objects, but I do not know how to initialize the vector because the version of C++ I am using does not support list initialization. I have been told (to my chagrin) that I must use this older version of C++.
Here is my Color.h file:
#ifndef COLOR_H
#define COLOR_H
#include <vector>
class Color {
public:
Color( float red, float green, float blue, float alpha = 1.0f );
float r, g, b, a;
static Color red;
static Color yellow;
static Color blue;
static std::vector<Color> colors;
};
#endif /* COLOR_H */
And in Color.cpp:
#include "Color.h"
Color::Color( float red, float green, float blue, float alpha ) {
r = red;
g = green;
b = blue;
a = alpha;
}
Color Color::red(0.85, 0.0, 0.0);
Color Color::yellow(0.93, 0.93, 0.0);
Color Color::blue(0.0, 0.0, 0.93);
std::array<Colors> arr = {Color::red, Color::blue, Color::yellow};
However, this last line does not work because list initialization isn't supported. What is the alternative? How do I add red, yellow, and blue to colors?
If you can handle a bit of startup overhead, put your initialization in a function:
std::vector<Color> init_colors() {
Color arr[] = {Color::red, Color::blue, Color::yellow};
return std::vector<Color>(arr, arr + sizeof(arr)/sizeof(arr[0]));
}
std::vector<Color> Color::colors = init_colors();
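A related option (my addition, also C++98-friendly) is a function-local static, which additionally sidesteps the static initialization order problem between translation units:

```cpp
#include <cassert>
#include <vector>

class Color {
public:
    Color(float red, float green, float blue, float alpha = 1.0f)
        : r(red), g(green), b(blue), a(alpha) {}
    float r, g, b, a;
};

// The vector is built on first use, after which the same instance is
// returned. No list initialization required, and no risk of another
// file's static initializer seeing it before it is filled.
const std::vector<Color>& colors() {
    static std::vector<Color> v;
    if (v.empty()) {
        v.push_back(Color(0.85f, 0.0f, 0.0f));   // red
        v.push_back(Color(0.0f, 0.0f, 0.93f));   // blue
        v.push_back(Color(0.93f, 0.93f, 0.0f));  // yellow
    }
    return v;
}
```

Call sites then use colors() instead of Color::colors; the trade-off is a cheap emptiness check on each call.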
I'm trying to draw a line strip on an opengl project.
If I use the glTranslatef function on the transformation matrix, the magenta line strip is drawn broken, as shown in the figure:
And moving the view, the line strip is broken in different points, or drawn correctly, or not drawn at all.
If I translate the points manually, the line strip is always displayed correctly.
The other lines (red ones: GL_LINE_LOOP, cyan ones: GL_LINES) are manually translated and work properly.
Here is the code with glTranslate:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef( offs_x, offs_y, 0);
glLineWidth(2.0f);
glColor3f(1.0f, 0.0f, 1.0f);
glVertexPointer( 3, GL_FLOAT, 0, trailPoints );
glDrawArrays(GL_LINE_STRIP,0,numTrailPoints);
glPopMatrix();
and here the working code with manual translation:
for (i=0; i< numTrailPoints; i++)
{
translatedTrailPoints[i].x = trailPoints[i].x + offs_x;
translatedTrailPoints[i].y = trailPoints[i].y + offs_y;
translatedTrailPoints[i].z = trailPoints[i].z;
}
glLineWidth(2.0f);
glColor3f(1.0f, 0.0f, 1.0f);
glVertexPointer( 3, GL_FLOAT, 0, translatedTrailPoints);
glDrawArrays(GL_LINE_STRIP,0,numTrailPoints);
What I am missing here?
EDIT :
To complete the question, here are the data structures (in inverted declaration order for better readability):
vec3 translatedTrailPoints[C_MAX_NUM_OF_TRAIL_POINTS];
vec3 trailPoints[C_MAX_NUM_OF_TRAIL_POINTS];
typedef union
{
float array[3];
struct { float x,y,z; };
struct { float r,g,b; };
struct { float s,t,p; };
struct { vec2 xy; float zz; };
struct { vec2 rg; float bb; };
struct { vec2 st; float pp; };
struct { float xx; vec2 yz; };
struct { float rr; vec2 gb; };
struct { float ss; vec2 tp; };
struct { float theta, phi, radius; };
struct { float width, height, depth; };
struct { float longitude, latitude, altitude; };
struct { float pitch, yaw, roll; };
} vec3;
typedef union
{
float array[2];
struct { float x,y; };
struct { float s,t; };
} vec2;
I tried to follow datenwolf's suggestion, but with no success: I tried #pragma pack(1 | 2 | 4) before the vec2 and vec3 declarations, and I tried compiling with /Zp1 | /Zp2 | /Zp4 (I'm on Visual Studio 2008), but the broken lines/points still persist.
EDIT2 :
Same problems with textured quads:
vec3 point;
point.x = lon;
point.y = lat;
point.z = 500;
glTranslatef( offs_x, offs_y, 0);
glBindTexture(GL_TEXTURE_2D, iconTextures[0]);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(point.x-C_ICON_WORLD_SIZE, point.y-C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(point.x+C_ICON_WORLD_SIZE, point.y-C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(point.x+C_ICON_WORLD_SIZE, point.y+C_ICON_WORLD_SIZE, point.z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(point.x-C_ICON_WORLD_SIZE, point.y+C_ICON_WORLD_SIZE, point.z);
glEnd();
Results changing the view:
Correct drawn
Bad 1
Bad 2
EDIT3 :
I was able to correct the textured-quads case by translating by (point.x + offs_x, point.y + offs_y, point.z) and removing the point coordinates from the glVertex calls. The behaviour in the previous mode still puzzles me.
Try using glLoadIdentity() between the glPushMatrix() and glPopMatrix() calls since it resets the coordinate system and applies the translation for a fresh matrix.
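For completeness (my note, not from the original answers): the symptoms, including EDIT3's fix of folding the large point coordinates into the translation, are consistent with single-precision rounding when large absolute coordinates go through the modelview multiply. A small C++ check illustrates how a modest offset can vanish next to a large coordinate:

```cpp
#include <cassert>

// Returns true when adding `offset` to `coord` is lost to float rounding.
// The volatile store forces the sum to be rounded to float precision.
bool offsetVanishes(float coord, float offset) {
    volatile float sum = coord + offset;
    return sum == coord;
}
```

At a coordinate of 1e7, adjacent floats are 1.0 apart, so an offset of 0.25 disappears entirely; near the origin the same offset survives. Translating small, origin-relative vertices (as in EDIT3) keeps the arithmetic in the range where float has enough resolution.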
Having this code:
#define GREEN 0.0f, 1.0f, 0.0f
#define RED 1.0f, 0.0f, 0.0f
const float colors[] = {
RED, GREEN, RED, RED,
};
I cannot think of a better (typed) way to create colors without using #define. Is there a better way? Also, please keep the C++11 standard in mind.
UPDATE:
Full example of code using this kind of define, https://bitbucket.org/alfonse/gltut/src/3ee6f3dd04a76a1628201d2543a85e444bae8d25/Tut%2005%20Objects%20in%20Depth/OverlapNoDepth.cpp?at=default
I'm not sure I understand what you're trying to do, but to create a list of colors I would do it like this:
#include <vector>
class color {
public:
color(float r, float g, float b)
: m_red(r), m_green(g), m_blue(b) { }
float m_red;
float m_green;
float m_blue;
};
const auto red = color(1.0f, 0.0f, 0.0f);
const auto green = color(0.0f, 1.0f, 0.0f);
const auto blue = color(0.0f, 0.0f, 1.0f);
int main() {
auto colors = std::vector<color>();
colors.push_back(red);
colors.push_back(green);
colors.push_back(blue);
colors.push_back(red);
...
}
Edit
As juanchopanza suggested it, I initialized the floats in the constructor initialization list.
As Elasticboy suggested, do something like this:
struct Color {
float R;
float G;
float B;
};
And now, create constants:
const Color Red = {1.0f, 0.0f, 0.0f };
const Color Green = {0.0f, 1.0f, 0.0f };
and so on...
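Putting this together with the array from the question gives a typed replacement for the #define approach; a minimal sketch using the aggregate Color and constants from this answer:

```cpp
#include <cassert>

struct Color {
    float R;
    float G;
    float B;
};

const Color Red   = { 1.0f, 0.0f, 0.0f };
const Color Green = { 0.0f, 1.0f, 0.0f };

// same layout as the original float list, but each entry is now a
// typed Color instead of three loose floats behind a macro
const Color colors[] = { Red, Green, Red, Red };
```

If the consumer (e.g. a glBufferData call) really needs a flat float array, the aggregate has no padding surprises in practice, but passing &colors[0].R relies on implementation-defined layout; copying into a plain float buffer is the portable route.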
You can use an enum here, e.g.
typedef enum color
{
RED, GREEN, BLUE
} color;
Alternatively, you can assign explicit values to the colors, e.g.
typedef enum color
{
RED=1, GREEN=5, BLUE=7
} color;
The only thing you have to keep in mind is that these are named integer constants; float values are not allowed here.
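If float components are needed alongside the enum, a lookup table indexed by the enum value keeps the two in sync (a sketch, my addition; it assumes the default sequential enumerator values, not the custom ones shown above, and the COLOR_COUNT sentinel is mine):

```cpp
#include <cassert>

enum color { RED, GREEN, BLUE, COLOR_COUNT };

// one row of RGB components per enumerator, indexed by the enum value
const float COLOR_VALUES[COLOR_COUNT][3] = {
    { 1.0f, 0.0f, 0.0f },  // RED
    { 0.0f, 1.0f, 0.0f },  // GREEN
    { 0.0f, 0.0f, 1.0f },  // BLUE
};
```

Usage is then COLOR_VALUES[GREEN] wherever three floats are expected, with the enum serving as a type-checked name.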