I have a class with the following constructors:
Color(const float red = 0.0f, const float green = 0.0f, const float blue = 0.0f, const float alpha = 1.0f);
Color(const unsigned char red, const unsigned char green, const unsigned char blue, const unsigned char alpha);
Color(const unsigned long int color);
If I call it like this:
Color c{ 0.0f, 1.0f, 0.0f, 1.0f };
everything is ok. But if I call it:
Color c{ 78, 180, 84, 255 };
or
Color c{ 0xffffffff };
I receive
error C2668: 'Color::Color' : ambiguous call to overloaded function
Why? How can I make it choose correctly?
Color c{ 0.0f, 1.0f, 0.0f, 1.0f }; is unambiguous: the arguments match the floating-point constructor exactly, so the compiler can pick it.
With Color c{ 78, 180, 84, 255 };, the literals have type int. The compiler has to convert them, and converting int to float and converting int to unsigned char are conversions of the same rank, so it has two equally good choices and doesn't know which one to pick.
If you'd written, albeit tediously, Color c{ static_cast<unsigned char>(78), static_cast<unsigned char>(180), static_cast<unsigned char>(84), static_cast<unsigned char>(255) };, then the constructor taking const unsigned char arguments would have been chosen.
With Color c{ 0xffffffff };, the literal doesn't fit in an int, so its type is unsigned int. Converting it to unsigned long and converting it to float (with the remaining float parameters defaulted) are again conversions of the same rank, so the compiler doesn't know which constructor to use.
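Another way out is to drop the competing constructors in favor of named factory functions, so there is nothing for overload resolution to get confused about. A minimal sketch, assuming the byte overload is meant to normalize to [0, 1] and the packed format is 0xRRGGBBAA (fromBytes and fromPacked are illustrative names, not part of your class):
struct Color
{
    float r, g, b, a;

    Color(float red = 0.0f, float green = 0.0f, float blue = 0.0f, float alpha = 1.0f)
        : r(red), g(green), b(blue), a(alpha) {}

    // Named factories instead of competing constructors.
    static Color fromBytes(unsigned char red, unsigned char green,
                           unsigned char blue, unsigned char alpha)
    {
        return Color(red / 255.0f, green / 255.0f, blue / 255.0f, alpha / 255.0f);
    }

    static Color fromPacked(unsigned long color)
    {
        return fromBytes(static_cast<unsigned char>((color >> 24) & 0xff),
                         static_cast<unsigned char>((color >> 16) & 0xff),
                         static_cast<unsigned char>((color >> 8) & 0xff),
                         static_cast<unsigned char>(color & 0xff));
    }
};

Color c1 = Color::fromBytes(78, 180, 84, 255);
Color c2 = Color::fromPacked(0xffffffff);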
I get LNK2005: "public: static struct Color Color::Black" already defined in ***.obj.
Color.h file contents:
#pragma once
struct Color
{
Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f }{}
float R;
float G;
float B;
float A;
static Color Black;
};
Color Color::Black = Color(0.0f, 0.0f, 0.0f);
What would be the correct way of implementing a bunch of default colors like black, white, red, green, etc?
The definition Color Color::Black = Color(0.0f, 0.0f, 0.0f); lives in the header, so every translation unit that includes Color.h gets its own definition of Color::Black, and the linker complains that the symbol is already defined. Keep the declaration in the header and put the definition in exactly one .cpp file. I would go for this
// header file
#pragma once
struct Color
{
Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f }{}
float R;
float G;
float B;
float A;
static const Color Black;
static const Color Red;
// etc
};
// cpp file
const Color Color::Black = Color(0.0f, 0.0f, 0.0f);
const Color Color::Red = Color(1.0f, 0.0f, 0.0f);
// etc
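As a side note, if you can use C++17, an inline variable lets the definitions stay in the header without tripping the linker. A sketch of that variant (this goes beyond what the original code targets):
// header file, C++17 or later
#pragma once
struct Color
{
    Color(float r, float g, float b) : R{ r }, G{ g }, B{ b }, A{ 1.0f }{}
    float R;
    float G;
    float B;
    float A;
    static const Color Black;
};
// inline permits this definition to appear in every translation unit
inline const Color Color::Black = Color(0.0f, 0.0f, 0.0f);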
I have this 2D array of GLfloats:
static constexpr GLfloat facenormals[6][12] = {
{
0.0f, 1.0f, 0.0f, // TOP
},
{
0.0f, -1.0f, 0.0f, // BOTTOM
},
{
0.0f, 0.0f, 1.0f, // FRONT
},
{
0.0f, 0.0f, -1.0f, // BACK
},
{
1.0f, 0.0f, 0.0f, // RIGHT
},
{
-1.0f, 0.0f, 0.0f, // LEFT
}
};
and an std::vector<GLfloat>. My goal is to add the data from one of the sub-arrays of my 2D array to the end of the vector. My first attempt was this:
normals.insert(
normals.end(),
&CubeData::facenormals[direction],
&CubeData::facenormals[direction] + 12
);
But when building the solution I get the error "cannot convert from 'const GLfloat [12]' to '_Objty'". I tried changing the arguments of the insert() call to this:
normals.insert(
normals.end(),
CubeData::facenormals + 12 * direction,
CubeData::facenormals + 12 * (direction + 1)
);
but I get the same error when compiling.
How do I do this correctly, and what does the error mean?
_Objty is the name of the vector's element type parameter in MSVC's particular implementation of the standard library. So the compiler is telling you that you can't convert a value of type const GLfloat[12] to whatever the vector is storing.
But why were you trying to insert arrays?
The problem lies in the extra &s in the call to insert. This will fix it:
normals.insert(
normals.end(),
CubeData::facenormals[direction],
CubeData::facenormals[direction] + 12
);
CubeData::facenormals is an array of arrays, so CubeData::facenormals[direction] is an array of 12 GLfloats. That would normally decay into a pointer to its first element automatically, which is what you want, but by prepending &, you instead get a pointer to the whole array (a const GLfloat (*)[12]). When insert dereferences that iterator, it gets an array, which it cannot convert to a GLfloat.
By removing the &, you let the array decay to a const GLfloat*, and dereferencing that yields a GLfloat, which is exactly what the vector stores.
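If you prefer not to spell out the + 12 by hand, std::begin and std::end from <iterator> compute the bounds from the array type itself:
#include <iterator>

normals.insert(
    normals.end(),
    std::begin(CubeData::facenormals[direction]),
    std::end(CubeData::facenormals[direction])
);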
I'm trying to do something very simple but I'm doing something wrong.
Header file:
class Example
{
public:
typedef struct
{
float Position[3];
float Color[4];
float TexCoord[2];
} IndicatorVertex;
void doSomething();
};
.cpp file:
void Example::doSomething()
{
IndicatorVertex *vertices;
vertices = IndicatorVertex[] {
{{-1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
{{1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
};
}
Upon compilation, I'm getting Error:(12, 13) unexpected type name 'IndicatorVertex': expected expression.
(I'm intentionally not using std::vector etc.; I'm deliberately using C features in a C++11 setting.)
You can't create an unnamed array with an expression like that; there are no array literals in C++. You need to define an actual array, like
IndicatorVertex vertices[] = { ... };
If you later need a pointer, then remember that arrays naturally decay to pointers to their first element. So if you, for example, want to call a function which expects an IndicatorVertex* argument, just pass in vertices and it will work as expected.
If you want to have different arrays and make vertices point to one of them, then you have to define the arrays as shown above, and make vertices point to one of them. Like
IndicatorVertex vertices1[] = { ... };
IndicatorVertex vertices2[] = { ... };
// ...
IndicatorVertex* vertices = vertices1;
// ...
vertices = vertices2;
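Put back into the question's doSomething, a fixed version might look like this (keeping the question's vertex data):
void Example::doSomething()
{
    // A named array with aggregate initialization; it decays to an
    // IndicatorVertex* wherever a pointer is needed.
    IndicatorVertex vertices[] = {
        {{-1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
        {{ 1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
    };
    (void)vertices; // placeholder; real code would use the vertices here
}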
I'm using the DrawString method below (built on OpenGL/GLUT) to create a HUD for my game. I've managed to successfully draw a string to the screen, but I've run into a problem when it comes to displaying a value such as the player's position or score, which are floats.
After some research I found that DrawString will only accept a const char*. So I then tried to cast or convert a float into a char value, but I have been unsuccessful.
This is my drawstring method
void Player::DrawString(const char* text, const Vector3* position, const Color* color)
{
glPushMatrix();
glDisable(GL_TEXTURE);
glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glColor3f(1.0f, 1.0f, 1.0f);
glTranslatef(position->x, position->y, position->z);
glRasterPos2f(0.0f, 0.0f);
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24, (unsigned char*)text);
glEnable(GL_LIGHTING);
glEnable(GL_TEXTURE);
glEnable(GL_DEPTH_TEST);
glPopMatrix();
}
and these are my method calls
Color c1 = {0.0f, 0.0f, 0.0f};
DrawString("Player Pos: ", &_vpos, &c1);
Vector3 vspeed = {20.0f, 0.0f, 500.0f};
Color c = {1.0f, 1.0f, 1.0f};
DrawString(cpp, &vspeed, &c1);
cpp is a float that I have tried to cast to a char.
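You can't get there with a cast; a float has to be formatted into a character buffer, and the buffer is what you pass to DrawString. In C, snprintf does that: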
#include <stdio.h>
// ...
char buf[256];
// snprintf always null-terminates within the given size, so sizeof(buf) is enough
snprintf(buf, sizeof(buf), "Pos: { %f, %f, %f }", pos.x, pos.y, pos.z);
DrawString(buf, &_vpos, &c1);
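If you'd rather stay in C++, std::to_string from C++11 does the formatting for you (with a fixed %f-style precision). A sketch using the same names as the snprintf example:
#include <string>
// ...
std::string text = "Pos: { " + std::to_string(pos.x) + ", "
                 + std::to_string(pos.y) + ", "
                 + std::to_string(pos.z) + " }";
DrawString(text.c_str(), &_vpos, &c1);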
Having this code:
#define GREEN 0.0f, 1.0f, 0.0f
#define RED 1.0f, 0.0f, 0.0f
const float colors[] = {
RED, GREEN, RED, RED,
};
I cannot think of a better (typed) way to create colors without using the #define. Is there a better way? Also, keep the C++11 standard in mind.
UPDATE:
Full example of code using this kind of define, https://bitbucket.org/alfonse/gltut/src/3ee6f3dd04a76a1628201d2543a85e444bae8d25/Tut%2005%20Objects%20in%20Depth/OverlapNoDepth.cpp?at=default
I'm not sure I understand what you're trying to do, but to create a list of colors I would do it like this:
#include <vector>
class color {
public:
color(float r, float g, float b)
: m_red(r), m_green(g), m_blue(b) { }
float m_red;
float m_green;
float m_blue;
};
const auto red = color(1.0f, 0.0f, 0.0f);
const auto green = color(0.0f, 1.0f, 0.0f);
const auto blue = color(0.0f, 0.0f, 1.0f);
int main() {
auto colors = std::vector<color>();
colors.push_back(red);
colors.push_back(green);
colors.push_back(blue);
colors.push_back(red);
...
}
Edit
As juanchopanza suggested, I initialized the floats in the constructor's initialization list.
As Elasticboy suggested, do something like this:
struct Color {
float R;
float G;
float B;
};
And now, create constants:
const Color Red = {1.0f, 0.0f, 0.0f };
const Color Green = {0.0f, 1.0f, 0.0f };
and so on...
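With these constants, the array from the question can hold typed values instead of macro-expanded floats. A sketch reusing the question's layout:
const Color colors[] = {
    Red, Green, Red, Red,
};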
You can use an enum here, e.g.
typedef enum color
{
RED, GREEN, BLUE
} color;
Alternatively, you can assign explicit values to the colors, e.g.
typedef enum color
{
RED=1, GREEN=5, BLUE=7
} color;
The only thing you have to keep in mind is that these are named integer constants; float values are not allowed here. If you need actual RGB components, you can pair the enum with a lookup table, as sketched below.
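A minimal sketch of that pairing (the palette array and its values are illustrative, and it relies on the enumerators keeping their default values 0, 1, 2):
typedef enum color
{
    RED, GREEN, BLUE
} color;

// One RGB triple per enumerator, indexed by the enum's default values.
const float palette[][3] = {
    { 1.0f, 0.0f, 0.0f }, // RED
    { 0.0f, 1.0f, 0.0f }, // GREEN
    { 0.0f, 0.0f, 1.0f }  // BLUE
};

// Usage: palette[GREEN][1] is 1.0f.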