Generating a dynamic 3D matrix - C++

I need to dynamically generate a 3D matrix like this:
float vCube[8][3] = {
    {1.0f, -1.0f, -1.0f}, {1.0f, -1.0f, 1.0f},
    {-1.0f, -1.0f, 1.0f}, {-1.0f, -1.0f, -1.0f},
    {1.0f, -1.0f, -1.0f}, {1.0f, 1.0f, 1.0f},
    {-1.0f, 1.0f, 1.0f}, {-1.0f, 1.0f, -1.0f}
};
That is, I want to take a value and put it into the matrix at run time.
I tried making a pointer to float and then adding 3D elements with new, but the results were not what I wanted.
Note that I don't want to use the STL (vector and so on), just a plain matrix.

Whether you use a vector or not, I would suggest you use:
struct Elem3D
{
    float v[3];
};
Then you can quite easily create a vector:
vector <Elem3D> cube(8);
or dynamically allocate a number of elements:
Elem3D *cube = new Elem3D[8];
Working with two-dimensional arrays without using a struct or class quite quickly gets VERY messy, both syntactically and mentally.
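As a minimal sketch (not from the original answer), filling such a dynamically allocated array at run time could look like this; the values are placeholders:
#include <cstddef>

struct Elem3D
{
    float v[3];
};

int main()
{
    const std::size_t count = 8;             // number of vertices, decided at run time
    Elem3D *cube = new Elem3D[count];        // plain dynamic array, no STL

    for (std::size_t i = 0; i < count; ++i)  // assign each component at run time
    {
        cube[i].v[0] = 1.0f;                 // placeholder values
        cube[i].v[1] = -1.0f;
        cube[i].v[2] = -1.0f;
    }

    delete[] cube;                           // caller owns the memory
}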

You can also store a 3D matrix in a one-dimensional array:
x = height
y = width
z = depth
float VCube[x*y*z];
a_ijk = VCube[i + x * (j + y * k)]
One interesting question is which solution (this one or Mats Petersson's) reduces cache misses the most if we want to do matrix operations.
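A minimal sketch of that flattened layout (the dimensions and the indexing helper are illustrative assumptions):
#include <cstddef>

// Index of element (i, j, k) in a flat x*y*z array, matching the formula above.
inline std::size_t flatIndex(std::size_t i, std::size_t j, std::size_t k,
                             std::size_t x, std::size_t y)
{
    return i + x * (j + y * k);
}

int main()
{
    const std::size_t x = 4, y = 3, z = 2;    // height, width, depth (placeholders)
    float *VCube = new float[x * y * z];

    for (std::size_t k = 0; k < z; ++k)
        for (std::size_t j = 0; j < y; ++j)
            for (std::size_t i = 0; i < x; ++i)
                VCube[flatIndex(i, j, k, x, y)] = 0.0f;   // fill at run time

    delete[] VCube;
}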

To initialize a two-dimensional array, first define the variable:
float vCube[8][3];
Then create a function that initializes vCube, or do the initialization in the constructor, like this:
void function(float a, float b, float c) {
    for (int i = 0; i < 8; i++) {
        vCube[i][0] = a;
        vCube[i][1] = b;
        vCube[i][2] = c;
    }
}
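If the number of rows is only known at run time, a plain heap allocation of rows of three floats also works; a minimal sketch (not from the original answers), with placeholder values:
#include <cstddef>

int main()
{
    std::size_t n = 8;                    // row count decided at run time
    float (*vCube)[3] = new float[n][3];  // n rows of 3 floats, no STL

    for (std::size_t i = 0; i < n; ++i) { // fill each row at run time
        vCube[i][0] = 1.0f;               // placeholder values
        vCube[i][1] = -1.0f;
        vCube[i][2] = -1.0f;
    }

    delete[] vCube;
}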

Related

Array of Objects in C++

I am trying to make chess with OpenGL and C++. For now I have created the Pawn object, whose constructor takes an unsigned int parameter. So I tried to create an array of these pawns, and the only working way I've found to do this is this:
Pawn *pawn[n];
for (int i = 0; i < n; i++) {
    pawn[i] = new Pawn(var);   // var is the unsigned int constructor argument
}
To call a function on pawn[0], for example, I have to do this:
pawn[0]->function(parameters);
This is the Pawn class:
class Pawn
{
private:
    float vertices[16] = {
        //position      //text coord
        -0.08f, -0.10f, 0.0f, 0.0f,
         0.08f, -0.10f, 1.0f, 0.0f,
         0.08f,  0.10f, 1.0f, 1.0f,
        -0.08f,  0.10f, 0.0f, 1.0f
    };
    GLuint indices[6] {
        0, 1, 2,
        0, 2, 3
    };
    unsigned int shaderID, VBO, VAO, EBO, texture;
public:
    glm::vec2 Position = glm::vec2(0.0f, 0.0f);
    Pawn() {}
    Pawn(GLuint shaderID) {
        ...
    }
    ~Pawn() {
        ...
    }
    void setTexture();
    void draw(glm::vec2 position);
};
I also tried this:
Pawn pawn[8];
for (int i = 0; i < 8; i++) {
    pawn[i] = Pawn(shaderID);
}
but when I run it, it doesn't work.
I was wondering whether this method is efficient or not, and if it is, why it works, since I didn't understand it. Thanks for your help.
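One common alternative (a sketch, not from the original thread) is a std::vector<Pawn>, which constructs each pawn in place with the constructor argument and avoids both the raw new and the default-construct-then-assign of the Pawn pawn[8] attempt; the Pawn here is a stripped-down stand-in:
#include <vector>

typedef unsigned int GLuint;            // stand-in for the OpenGL typedef

class Pawn {
public:
    explicit Pawn(GLuint shaderID) : shaderID(shaderID) {}
    void draw() {}                      // placeholder for the real draw(position)
private:
    GLuint shaderID;
};

int main()
{
    GLuint shaderID = 1;                // placeholder shader handle
    std::vector<Pawn> pawns;
    pawns.reserve(8);                   // avoid reallocations (and copies) while filling
    for (int i = 0; i < 8; ++i)
        pawns.emplace_back(shaderID);   // constructs Pawn(shaderID) in place

    pawns[0].draw();                    // ordinary member access, no -> needed
}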

Error message "cannot convert from 'const GLfloat [12]' to '_Objty'" when inserting 2D array into std::vector

I have this 2D array of GLfloats:
static constexpr GLfloat facenormals[6][12] = {
    {
        0.0f, 1.0f, 0.0f,   // TOP
    },
    {
        0.0f, -1.0f, 0.0f,  // BOTTOM
    },
    {
        0.0f, 0.0f, 1.0f,   // FRONT
    },
    {
        0.0f, 0.0f, -1.0f,  // BACK
    },
    {
        1.0f, 0.0f, 0.0f,   // RIGHT
    },
    {
        -1.0f, 0.0f, 0.0f,  // LEFT
    }
};
and an std::vector<GLfloat>. My goal is to add the data from one of the sub-arrays of my 2D array to the end of the vector. My first attempt was this:
normals.insert(
normals.end(),
&CubeData::facenormals[direction],
&CubeData::facenormals[direction] + 12
);
But when building the solution I get the error "cannot convert from 'const GLfloat [12]' to '_Objty'". I tried changing the arguments of the insert() call to this:
normals.insert(
normals.end(),
CubeData::facenormals + 12 * direction,
CubeData::facenormals + 12 * (direction + 1)
);
but I get the same error when compiling.
How do I do this correctly, and what does the error mean?
_Objty is the name for the vector's type parameter in MSVC's particular implementation of the standard library. So the compiler is telling you you can't convert a value of type GLfloat[12] to whatever the vector is storing.
But why were you trying to insert arrays?
The problem lies in those extra & in the call to insert. This'll fix it:
normals.insert(
normals.end(),
CubeData::facenormals[direction],
CubeData::facenormals[direction] + 12
);
CubeData::facenormals is an array of arrays, so CubeData::facenormals[direction] is an array. Normally that would decay into a pointer to its first element automatically, which is exactly what you want, but by prepending a &, you instead get a pointer to the whole array. When insert dereferences that pointer, it gets a GLfloat[12], which cannot be converted to the vector's element type, hence the error.
By removing the &, you let the array decay to a GLfloat*, and dereferencing that yields a GLfloat, which the vector can store.
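A self-contained sketch of the working call (GLfloat aliased to float here so it compiles without OpenGL headers, and the array shortened):
#include <cstddef>
#include <vector>

typedef float GLfloat;                  // stand-in for the OpenGL typedef

static constexpr GLfloat facenormals[6][12] = {
    { 0.0f,  1.0f, 0.0f },              // TOP (remaining entries are zero)
    { 0.0f, -1.0f, 0.0f },              // BOTTOM
};

int main()
{
    std::vector<GLfloat> normals;
    std::size_t direction = 0;

    // facenormals[direction] decays to const GLfloat*, so this copies 12 floats.
    normals.insert(normals.end(),
                   facenormals[direction],
                   facenormals[direction] + 12);
}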

Generate one inserted object multiple times

I am trying to create 5x5x5 cubes for my game. Right now, I have this code which shows only one cube in the camera view. Obviously, it is "inserted" only one time.
void onIdle() override {
    // Animate using time when activated
    if (animationEnabled) time = (float) glfwGetTime();
    // Set gray background
    glClearColor(.5f, .5f, .5f, 0);
    // Clear depth and color buffers
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Create object matrices
    auto cubeMat = rotate(mat4{}, time, {1.0f, 1.0f, 0.0f});
    //auto cubeMat = mat4(1.0f);
    auto sphereMat = rotate(mat4{}, (float)time, {0.5f, 1.0f, 0.0f});
    cubeMat = scale(cubeMat, {0.2f, 0.2f, 0.2f});
    // Camera position/rotation - for example, translate camera a bit backwards (positive value in Z axis), so we can see the objects
    auto cameraMat = translate(mat4{}, {0, 0, -4.0f});
    program.setUniform("ViewMatrix", cameraMat);
    // Update camera position with perspective projection
    program.setUniform("ProjectionMatrix", perspective((PI / 180.f) * 60.0f, 1.0f, 0.1f, 10.0f));
    program.setUniform("LightDirection", normalize(vec3{1.0f, -1.0f, 1.0f}));
    // Render objects
    // Central box
    program.setUniform("Texture", cubeTexture);
    for (int i = 0; i < 5*5*5; ++i)
    {
        program.setUniform("ModelMatrix", cubeMat[i]);
        cube.render();
    }
}
};
How can I generate the 5x5x5 cubes without inserting them manually so many times? Also, each one should get its own specific location, so that together they form one big 3D cube made of 5x5x5 little cubes (like a Rubik's cube) or, even better, here is a good example.
You need a function which generates a model matrix for an individual cube:
mat4 CubeMat( int x, int y, int z )
{
    mat4 cubeMat;
    //cubeMat = rotate(cubeMat, time, {1.0f, 1.0f, 0.0f});
    //cubeMat = scale(cubeMat, {0.2f, 0.2f, 0.2f});
    cubeMat = translate(cubeMat, {1.5f*(float)x-4.0f, 1.5f*(float)y-4.0f, 1.5f*(float)z-4.0f});
    return cubeMat;
}
You have to call cube.render(); 5*5*5 times and set 5*5*5 individual model matrices:
for (int x = 0; x < 5; ++x)
{
    for (int y = 0; y < 5; ++y)
    {
        for (int z = 0; z < 5; ++z)
        {
            mat4 cubeMat = CubeMat(x, y, z);
            program.setUniform("ModelMatrix", cubeMat);
            cube.render();
        }
    }
}
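If the per-cube transforms never change, one possible refinement (an assumption, not part of the original answer) is to build all 125 matrices once and reuse them every frame:
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;

// Build the 5*5*5 model matrices once, using the same layout as CubeMat above.
std::vector<mat4> buildCubeMats()
{
    std::vector<mat4> mats;
    mats.reserve(5 * 5 * 5);
    for (int x = 0; x < 5; ++x)
        for (int y = 0; y < 5; ++y)
            for (int z = 0; z < 5; ++z)
                mats.push_back(translate(mat4(1.0f),
                    vec3(1.5f * x - 4.0f, 1.5f * y - 4.0f, 1.5f * z - 4.0f)));
    return mats;
}
The render loop then only iterates the vector, sets "ModelMatrix" and calls cube.render() for each entry.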

Correct form for dynamic aggregate initialisation of a struct?

I'm trying to do something very simple but I'm doing something wrong.
Header file:
class Example
{
public:
    typedef struct
    {
        float Position[3];
        float Color[4];
        float TexCoord[2];
    } IndicatorVertex;
    void doSomething();
};
.cpp file:
void Example::doSomething()
{
    IndicatorVertex *vertices;
    vertices = IndicatorVertex[] {
        {{-1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
        {{1.0, 1.0, 1.0}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
    };
}
Upon compilation, I'm getting Error:(12, 13) unexpected type name 'IndicatorVertex': expected expression.
(I'm intentionally not using std::vector etc.; I'm deliberately using C features in a C++11 setting.)
You can't create a dynamic array the way you do; you need to define an actual array, like
IndicatorVertex vertices[] = { ... };
If you later need a pointer, then remember that arrays naturally decay to pointers to their first element. So if you, for example, want to call a function which expects an IndicatorVertex* argument, just pass in vertices and it will still work as expected.
If you want to have several arrays and make vertices point to one of them, then define the arrays as shown above and make vertices point to whichever one you need. Like
IndicatorVertex vertices1[] = { ... };
IndicatorVertex vertices2[] = { ... };
// ...
IndicatorVertex* vertices = vertices1;
// ...
vertices = vertices2;
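If the array really does have to live on the heap, C++11 also allows aggregate initialization inside a new expression; a sketch (the struct is repeated here so it compiles standalone):
struct IndicatorVertex {
    float Position[3];
    float Color[4];
    float TexCoord[2];
};

int main()
{
    // Heap-allocated array of two vertices, aggregate-initialized in place (C++11).
    IndicatorVertex *vertices = new IndicatorVertex[2] {
        {{-1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
        {{ 1.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 1.0f, 1.0f}, {0.0f, 0.0f}},
    };

    delete[] vertices;
}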

Equivalent memcpy different results?

I used memcpy to copy a struct Vertex comprised of glm::vec3 objects.
It worked to copy the struct in a class function.
It did not work in the copy constructor that was called when that function returned the class object.
Why?
Class function returning object
ShapeData ShapeGenerator::drawTriangle() {
    ShapeData ret;
    Vertex verts[] = {
        glm::vec3(0.0f, 1.0f, 0.0f),
        glm::vec3(1.0f, 0.0f, 0.0f),
        glm::vec3(-1.0f, -1.0f, 0.0f),
        glm::vec3(0.0f, 1.0f, 0.0f),
        glm::vec3(1.0f, -1.0f, 0.0f),
        glm::vec3(0.0f, 0.0f, 1.0f),
    };
    ret.numVerts = NUM_ARRAY_ELEMENTS(verts);
    ret.verts = new Vertex[ret.numVerts];
    memcpy(ret.verts, verts, sizeof(verts)); //WORKS
    GLushort indicies[] = {0, 1, 2};
    ret.numIndicies = NUM_ARRAY_ELEMENTS(indicies);
    ret.indicies = new GLushort[ret.numIndicies];
    memcpy(ret.indicies, indicies, sizeof(indicies));
    return ret;
}
Copy Constructor
ShapeData(const ShapeData& data) {
    verts = new Vertex[data.numVerts];
    //memcpy(verts, data.verts, sizeof(data.verts)); //DOES NOT WORK
    std::copy(data.verts, data.verts + data.numVerts, verts);
    indicies = new GLushort[data.numIndicies];
    memcpy(indicies, data.indicies, sizeof(data.indicies));
    numVerts = data.numVerts;
    numIndicies = data.numIndicies;
    std::cout << numVerts << std::endl;
}
Vertex:
#ifndef VERTEX_H
#define VERTEX_H

#include <glm/glm.hpp>

struct Vertex {
    glm::vec3 position;
    glm::vec3 color;
};

#endif
memcpy(verts, data.verts, sizeof(data.verts)); //DOES NOT WORK
does not work because data.verts is a pointer, not an array. sizeof(data.verts) does not evaluate to the size of the array the pointer points to; it simply evaluates to the size of a pointer on your platform.
You should be able to use:
size_t n = sizeof(*data.verts)*data.numVerts;
memcpy(verts, data.verts, n);
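Applied to the copy constructor from the question (a sketch, assuming ShapeData's members are exactly as shown), both copies are then sized by element count instead of by the sizeof of a pointer:
ShapeData(const ShapeData& data) {
    numVerts = data.numVerts;
    numIndicies = data.numIndicies;

    verts = new Vertex[numVerts];
    memcpy(verts, data.verts, sizeof(*data.verts) * numVerts);

    indicies = new GLushort[numIndicies];
    memcpy(indicies, data.indicies, sizeof(*data.indicies) * numIndicies);
}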