I am new to OpenGL and also new to C++.
I created a main.cpp where I do everything. I create a GLfloat data[] where I store my cube's vertices and a GLfloat color[] for the colors. I create a VAO and a VBO, and I have a method where I compile a simple vertex and fragment shader.
The problem is that my code is very long, so I thought about creating a Cube class, a Triangle class, and so on.
My question now is: can I use the same VAO for several objects, or do I need a new VAO for each object?
The other thing is that I don't exactly know how to structure this program. I'm missing the structure (I used Java before).
My thoughts were:
class Cube {
public:
Cube();
Cube(GLfloat position[], GLfloat color[]);
// But I need the same shaders for each object, so do I initialize them here or somewhere else?
void compileShader();
void drawCube();
};
Note that these are only my thoughts about it. Maybe someone can help me out with this :)
A VAO and VBO can be reused, but for performance they should only be reused if their contents aren't changing (for example, if you keep a single 1x1x1 cube and use transformations to draw additional ones).
For a simple cube in a small application, your ideal structure would be something like:
class Cube{
static int vbo;
static int vao;
public:
Cube(){
//If vbo and vao are 0, initialize them here
}
void render(mat4* transform){
}
};
The shader would be a separate class, which can either be passed into the rendering call for this object (void render(mat4* transform, Shader* shader)) or be managed by the main pipeline.
Now, I would recommend reading about header files in C++. In C++, headers are the right way to structure code, especially if you are going to be sharing your class around your program. Other files only need to include the header; they don't have to care about the implementation, and that way your code only gets compiled once and the same compiled code is reused throughout your program. (In C++, if you include actual code, it gets compiled into every translation unit that includes it, and even though modern linkers are good at merging duplicate code, it's still bad practice.)
Cube.h:
class Cube{
static int vbo;
static int vao;
public:
Cube();
void render(mat4* transform);
};
cube.cpp:
#include "Cube.h"
int Cube::vbo = 0;
int Cube::vao = 0;
Cube::Cube(){
//Initialize Here
}
void Cube::render(mat4* transform){
//Render here
}
Then you'd also want a similar pair for the shader.
shader.h:
class Shader{
int ProgramID;
public:
Shader();
void activate ();
};
shader.cpp:
#include "shader.h"
Shader::Shader(){
//compile shader here
}
void Shader::activate(){
//make this shader program active here
}
The mat4 type for the transform that I mentioned comes from GLM (http://glm.g-truc.net/0.9.6/index.html), which is a great library for handling the math of 3D graphics.
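For reference, a minimal sketch of building such a transform with GLM (makeTransform is just an illustrative helper name, not part of the library):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds a model matrix that places the shared 1x1x1 cube at `position` with a uniform scale.
glm::mat4 makeTransform(const glm::vec3& position, float scale)
{
    glm::mat4 model(1.0f);                       // start from the identity matrix
    model = glm::translate(model, position);     // move the cube
    model = glm::scale(model, glm::vec3(scale)); // resize it
    return model;
}
Each Cube instance can then be drawn with its own matrix while sharing the same VAO and VBO.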
If you want to refactor this further, you can declare the render function of Cube in an interface that all of your graphics objects inherit, allowing you to instantiate and configure the objects more dynamically without caring about their concrete types, only that they implement the required interface.
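A minimal sketch of such an interface (the name Renderable and the pure virtual render() are my own choices here, not an established API):
// Abstract interface every drawable object implements.
class Renderable
{
public:
    virtual ~Renderable() = default;
    virtual void render(const glm::mat4& transform) = 0;
};

// Cube (and Triangle, Sphere, ...) then just inherits and overrides render().
class Cube : public Renderable
{
public:
    void render(const glm::mat4& transform) override
    {
        // bind the shared VAO, upload the transform uniform, issue the draw call
    }
};
A std::vector<Renderable*> (or of smart pointers) can then be drawn in a single loop without knowing the concrete types.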
I have an OpenGL batch renderer which has a static vao, vbo, ebo, etc. The problem is that the constructors of those objects call OpenGL methods. Because they are static, OpenGL methods like glGenBuffers get called before OpenGL has been initialized.
So you can get a better picture, this is how it looks:
class renderer2d
{
private:
static vertex_array vao;
static vertex_buffer vbo;
static index_buffer ibo;
public:
static void draw();
static GLuint create_quad(glm::vec2 position, glm::vec2 size, GLfloat angle, glm::vec4 color);
};
and in the constructor of e.g. vao:
vao()
{
//some sort of opengl method, that gets called without opengl being initialized
glGenVertexArrays(1, &id);
}
By the way, I don't only want to "solve" the problem while keeping the "static solution"; if you have different ideas on how to do this, please tell me.
One trick is to delay initialisation of the object like this:
renderer2d& get_renderer()
{
static renderer2d renderer;
return renderer;
}
This method works for any class and does not require the renderer itself to have static data. The function can also be a static member of the class, as in the Meyers singleton design.
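As a sketch, the static-member variant could look like this (assuming the GL object creation moves into a now non-static constructor):
class renderer2d
{
public:
    // Meyers singleton: the instance is constructed on the first call to instance(),
    // so glGenVertexArrays/glGenBuffers run only after OpenGL has been initialized,
    // provided instance() is first used after context creation.
    static renderer2d& instance()
    {
        static renderer2d renderer;
        return renderer;
    }

private:
    renderer2d()
    {
        // create vao, vbo, ibo here
    }
};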
I have a code structure with a renderer containing the rendering loop, a Shader class, and a Model class. I am able to get my shaders to work when I use them in the renderer directly, but I'm looking to assign them to a model and call render(). However, I'm having an odd issue: the shader works outside of the model, but not inside it.
Shader structure:
class Shader
{
public:
unsigned int ID;
// constructor generates the shader on the fly
// ------------------------------------------------------------------------
Shader(const char* vertexPath, const char* fragmentPath, const char* geometryPath = nullptr)
{
... Generate shaders...
//Assign program ID to the public unsigned int
ID = glCreateProgram();
...attach shaders to program and link...
}
//Install program for use in rendering
void use()
{
glUseProgram(ID);
}
};
Model Structure:
class TextureModel
{
public:
TextureModel(const std::string modelPath, const std::string texturePath)
{
...Initialize model, load in mesh,texture, create a VAO with vertex and
index VBOs, set glAttribArray values...
}
void Render(Shader* shader)
{
//glUseProgram(4) <---------- (3)
shader->use();               // <---------- (1)
glBindVertexArray(VAO);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawElements(GL_TRIANGLES, IBO->GetCount(), GL_UNSIGNED_INT, (void*)0);
glBindVertexArray(0);
}
};
Renderer structure:
Shader* texShader;
TextureModel* Reindeer;
int main()
{
texShader = new Shader("Shaders/camera.vs", "Shaders/textured.fs", "Shaders/passThrough.gs");
Reindeer = new TextureModel("Models/reindeer_v1/reindeer_v1.obj",
"Models/reindeer_v1/reindeer_diffuse.jpg");
while (!glfwWindowShouldClose(window))
{
...Update camera, project and model matrices. Update the shader uniforms...
//texShader->use() <---------- (2)
Reindeer->Render(texShader);
glfwSwapBuffers(window);
}
}
The above code does not render my object.
At point (1) my model calls the texShader->use() passed in via TextureModel::Render(Shader*). For some reason this does not work; however, if I call texShader->use() from the rendering loop instead (point (2)), it works.
Also, while debugging I found that the program ID assigned to texShader on creation is 4. If I put glUseProgram(4) in Model::Render(Shader*), this also works (point (3)).
Why does glUseProgram() work in all cases except when the shader is passed in to my model? I'm really confused by this one.
Looks like I've fixed it.
I've summarized my problem above, but in actual fact I am using two different shaders for different objects.
They both share the same vertex shader (using their own model matrices, with a camera uniform buffer object), use different fragment shaders, and share a geometry shader.
For some reason I am only able to have one instance of the geometry shader working at a time... I need to do further investigation.
Either way I've got the above issue solved.
Thanks
I am using C++ and OpenGL. I'm trying to do some mock design of a model loader and renderer.
Here is where I'm getting stuck:
I have been drawing to the screen with my renderer class and window class, no problems there.
I'm using a generic model class that until now was hard-coded to take vertices[108] and colors[108] and draw a cube. This works, and I could instance hundreds of cubes just fine. However, I was always creating the model(s) using vertices[108] and colors[108].
Now I want to ditch the [108] and just pass vertices and colors of any size into the model constructor.
Right now it looks like this in pseudo code:
//this is in main.cpp
GLfloat vertices[108] = {
//vertices here
};
GLfloat colors[108] = {
//colors
};
mynamespace::Model cube(vertices, colors);
That is how I have been using this, and within the model class:
//this is in the model class declaration
GLfloat vertices_[108];
GLfloat colors_[108];
//then in the constructor definition
Model::Model(GLfloat vertices[], GLfloat colors[]) {
//loop through, i < 108, and assign vertices, colors to vertices_, colors_
}
This has worked fine for learning purposes. I now would like to start creating vertex arrays of various sizes and sending them along to the Model constructor. (The number of vertices and colors will match; I will check that.) But I am having a hard time removing that hard-coded size, e.g. vertices[108], and just sending along vertices[unknown until it arrives].
I thought, worst case, I could send a vertices[] through and then, in the constructor definition, check sizeof(), divide by 4, and assign the values in a loop if nothing else works. However, when I send any size of vertices[] through and print out sizeof() to check it, I always get 4 bytes... and nothing draws, of course.
To be clear, I'm not getting errors in my code, and I don't have a particular piece of code I want to debug, so I'm not pasting an existing code sample to solve anything. This is meant as: here is what I'm trying to do, what are some recommendations from experienced folks?
What is a good practice for doing this type of thing?
After this, I want to start loading meshes from files, but first I want to understand how I am supposed to pass along varying numbers of vertices and build a model, so I can send models to the renderer.
Just use std::vector (you'll need to include <vector> first).
//this is in model class declaration
std::vector<GLfloat> vertices_;
std::vector<GLfloat> colors_;
//then in the constructor definition
Model::Model(const std::vector<GLfloat> &vertices, const std::vector<GLfloat> &colors) {
vertices_ = vertices;
colors_ = colors;
}
and then:
std::vector<GLfloat> vertices = {
//vertices here
};
std::vector<GLfloat> colors = {
//colors
};
mynamespace::Model cube(vertices,colors);
Of course, you can remove all the std::s if you have using std::vector; or using namespace std;
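Since the data ends up in a VBO anyway, note that std::vector gives you both the pointer and the byte count that sizeof() couldn't. A sketch, assuming the model stores its buffer object in a member I'll call vbo_:
// Upload the vector's contents to the VBO:
// .data() is the pointer, .size() * sizeof(GLfloat) is the byte count.
glBindBuffer(GL_ARRAY_BUFFER, vbo_);
glBufferData(GL_ARRAY_BUFFER,
             vertices_.size() * sizeof(GLfloat),
             vertices_.data(),
             GL_STATIC_DRAW);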
You get 4 bytes because when you pass an array to a function, the array decays into a pointer (here a GLfloat*). So you don't get the length of the array; you get the size of the pointer.
Since you have said you are new to both, a simple way to work around this is to declare your function with an explicit element count:
fun(Type* array, int n)
When you invoke it, call it like this:
fun(array, sizeof(array) / sizeof(array[0]))
Note that this sizeof trick only works at the call site, where the real array is still in scope. This should solve your problem.
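A short illustration of the decay (plain float stands in for GLfloat so the sketch is self-contained):
#include <cstdio>

void fun(float* array, int n)              // 'float array[]' would mean exactly the same
{
    // sizeof(array) here is the size of a pointer (e.g. 8 bytes), not 108 floats,
    // which is why the element count n has to be passed in separately.
    std::printf("n = %d, sizeof(array) = %zu\n", n, sizeof(array));
}

int main()
{
    float vertices[108] = {};
    // the element count must be computed where the real array is still visible:
    fun(vertices, sizeof(vertices) / sizeof(vertices[0])); // passes 108
}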
I'm not sure how to use RAII to its fullest in the situation that I have. This is the situation:
I am creating a basic renderer. Geometry is described by a Geometry class, which may have vertices added to it. In order for Geometry objects to be used for rendering, they must first be compiled (i.e. a VBO will be created for the Geometry). Compilation (and decompilation) of geometry objects is done via a Renderer object; decompilation must be done once you are finished with a geometry object. If decompilation is not done, there will be memory leaks.
Here is an example of what I'm describing:
Renderer renderer; // the renderer object
{
Geometry geometry;
// add vertices
// and play around with material, etc.
if(!renderer.compile(geometry)) // compile the geometry, so we can use it
{
cerr << "Failed to compile geometry!\n";
}
// now we can use it...
renderer.render(geometry, RenderType::TriangleStrips);
} // if I don't call renderer.decompile(geometry) here, I will get a leak
What I'm trying to do is decompile my geometry without explicitly telling the renderer to decompile it, simply to reduce memory leaks. My first thought was to use RAII, but if I do so, the Geometry class would require the Renderer class, which seems quite messy, as I would need a reference to the Renderer object which compiled the geometry.
Another alternative I thought of was to make the renderer create the geometry, but this would cause geometry objects to be allocated dynamically (i.e. with new), and it also seems quite messy.
I also thought of placing a handle object within the geometry, such as a unique_ptr to an abstract handle object.
e.g.
class GeometryHandle
{
public:
virtual ~GeometryHandle() = 0;
};
This might actually work, as it could also be used to store the GLuint within the handle. I am unsure whether this is appropriate though, as I could simply decompile the geometry directly via a Renderer reference; i.e. the destructor would do the same thing as calling decompile directly.
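For what it's worth, a rough sketch of how the geometry might hold such a handle (the member name is only illustrative):
#include <memory>

class Geometry
{
    // Empty until compiled; the concrete handle subclass (created by the Renderer's
    // compile step) stores the GLuint and releases it in its destructor, so
    // destroying the Geometry, or resetting the pointer, decompiles it automatically.
    std::unique_ptr<GeometryHandle> handle_;
};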
How should I design this appropriately so I don't accidentally not decompile geometry?
It's not clear who is supposed to be responsible for what in your design. That's step 1 in figuring out how to employ any form of resource management: deciding who's responsible for what.
For example, you say that "Geometry is described by a Geometry class, which may have vertices added to it." OK, but how does this relate to the post-compilation data? If the user adds vertices to Geometry after it's compiled, do those vertices automatically get placed in the compiled data? Or is the compiled data completely separate from the Geometry class after compiling it, such that changes to the Geometry class don't update the compiled data?
It sounds to me like you're conflating two very different ideas: GeometryBuilder and Renderable. GeometryBuilder is what you put vertex data into. Then you take that and create a Renderable out of it. The Renderable is an optimized form of the data stored in a GeometryBuilder. The object is completely independent of the GeometryBuilder that created it. The Renderable is what you can actually render with.
So you would do this:
GeometryBuilder meshBuilder;
meshBuilder.AddVertex(...);
...
Renderable renderMesh = render.CreateRenderable(meshBuilder);
After this point, meshBuilder is independent of renderMesh. You can delete one and the other's fine. You can "compile" renderMesh multiple times and get identical copies of the same data. And so forth.
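A rough sketch of what those two classes might look like (all names and members here are illustrative, not a fixed API):
#include <vector>

// Mutable container the user fills with vertex data.
class GeometryBuilder
{
public:
    void AddVertex(float x, float y, float z)
    {
        vertices_.insert(vertices_.end(), {x, y, z});
    }
    const std::vector<float>& Vertices() const { return vertices_; }
private:
    std::vector<float> vertices_;
};

// Immutable GPU-side result; wraps the buffer object created from the builder's data.
class Renderable
{
public:
    explicit Renderable(unsigned vbo) : vbo_(vbo) {}
    unsigned Vbo() const { return vbo_; }
private:
    unsigned vbo_;
};

// Renderer::CreateRenderable(const GeometryBuilder&) would then upload
// builder.Vertices() into a new buffer and return a Renderable wrapping it.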
Method 1:
You can use a helper class:
class CompileHandler
{
private:
Geometry& g;
Renderer& r;
public:
CompileHandler(Geometry& _g, Renderer& _r) : g(_g), r(_r)
{
}
~CompileHandler()
{
r.decompile(g);
}
};
And you can use it the following way:
{
Geometry geometry;
CompileHandler ch(geometry,renderer);
// add vertices
// and play around with material, etc.
if(!renderer.compile(geometry)) // compile the geometry, so we can use it
{
cerr << "Failed to compile geometry!\n";
}
// now we can use it...
renderer.render(geometry, RenderType::TriangleStrips);
// decompilation is automatic on destruction of the CompileHandler object
}
Method 2:
Create a more robust hierarchy:
GeometryCompiler
^
|
| inherits
|
Renderer
On compilation of the geometry, the geometry compiler (here, the renderer) notifies the geometry that it has been compiled (and sets a GeometryCompiler pointer inside the geometry to point back at the compiler).
Then, on destruction of the geometry, if the pointer is not null, it can ask the GeometryCompiler to decompile it.
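A sketch of Method 2 (the member and function names are mine):
class Geometry; // forward declaration

// Interface the Renderer would inherit; it knows how to compile/decompile geometry.
class GeometryCompiler
{
public:
    virtual ~GeometryCompiler() = default;
    virtual void decompile(Geometry& g) = 0;
};

class Geometry
{
public:
    ~Geometry()
    {
        if (compiler_)                   // set by the compiler when compile() succeeds
            compiler_->decompile(*this);
    }

    void setCompiler(GeometryCompiler* c) { compiler_ = c; }

private:
    GeometryCompiler* compiler_ = nullptr;
};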
RAII requires the "undo" operations to be performed by destructors.
In your case, the geometry is destroyed but the renderer survives. The only trigger you have is the Geometry destructor, which would have to know which renderer compiled it in order to ask it to decompile.
But since it is not the purpose of geometry to know about renderers, you most likely need a helper class (let's call it Compile_guard) that is instantiated just after the geometry is complete, takes a Geometry and a Renderer as parameters, and calls Renderer::compile upon construction and Renderer::decompile upon destruction:
Renderer renderer; // the renderer object
{
Geometry geometry;
// add vertices
// and play around with material, etc.
Compile_guard guard(renderer, geometry);
if(!guard) // invalid compilation
{
cerr << "Failed to compile geometry!\n";
return; // this is exception safe!
}
// now we can use it...
renderer.render(geometry, RenderType::TriangleStrips);
} //here guard will decompile
About the Compile_guard, it can be something like
class Compile_guard
{
public:
Compile_guard(Render& r, Geometry& g) :render(&r), geometry(&g), good(false)
{ good = render->compile(*geometry); }
~Compile_guard()
{ if(good) render->decompile(*geometry); }
explicit operator bool() const { return good; }
Compile_guard(const Compile_guard&) =delete; //just avoid copy and assign.
Compile_guard& operator=(const Compile_guard&) =delete;
private:
Render* render;
Geometry* geometry;
bool good;
};
I am making a basic render engine.
In order to let the render engine operate on all kinds of geometry,
I made this class:
class Geometry
{
protected:
ID3D10Buffer* m_pVertexBuffer;
ID3D10Buffer* m_pIndexBuffer;
public:
[...]
};
Now, I would like the user to be able to create his own geometry by inheriting from this class.
So let's suppose the user made a class Cube : public Geometry
The user would have to create the vertexbuffer and indexbuffer at initialisation.
This is a problem, since it would recreate the vertexbuffer and indexbuffer each time a new Cube object is made. There should only be one instance of vertexbuffer and indexbuffer per derived class. Either that, or a completely different design.
A solution might be to make separate static ID3D10Buffer* members for the inheriting class, and set the pointers inherited from the base class equal to those in the constructor.
But that would require a static method like static void CreateBuffers() which the user would have to call explicitly one time in his application for each type he decides to make that inherits from Geometry. That doesn't seem like a nice design.
What is a good solution to this problem?
You should separate the concept of an instance from the concept of a mesh. This means you create one version of the Geometry for a cube that represents the vertex and index buffer for a cube.
You then introduce a new class called GeometryInstance which contains a transformation matrix. This class should also have a pointer/reference to a Geometry. Now you can create new instances of your geometry by creating GeometryInstances that all refer to the same Geometry object, without duplicating memory or work when creating a new box.
EDIT:
Given that you have the Geometry class from the question and a Mesh class as in your comment, your Mesh class should look something like this:
class Mesh {
private:
Matrix4x4 transformation;
Geometry* geometry;
public:
Mesh(const Matrix4x4 _t, Geometry* _g) : transformation(_t), geometry(_g) {}
};
Now when creating your scene you want to do things like this
...
std::vector<Mesh> myMeshes;
// OrdinaryGeometry is a class inheriting Geometry
OrdinaryGeometry* geom = new OrdinaryGeometry(...);
for(int i = 0; i < ordinaryGeomCount; ++i) {
// generateTransform is a function that generates some
// transformation Matrix given an index, just as an example
myMeshes.push_back(Mesh(generateTransform(i), geom));
}
// SpecialGeometry is a class inheriting Geometry with a different
// set of vertices and indices
SuperSpecialGeometry* specialGeom = new SuperSpecialGeometry(...);
for(int i = 0; i < specialGeomCount; ++i) {
myMeshes.push_back(Mesh(generateTransform(i), specialGeom));
}
// Now render all instances
for(int i = 0; i < myMeshes.size(); ++i) {
render(myMeshes[i]);
}
Note how we only have two Geometry objects that are shared between multiple Meshes. These should ideally be refcounted using std::shared_ptr or something similar, but that's outside the scope of the question.
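For example, a shared_ptr-based version of the Mesh above might look like this (just a sketch reusing the Matrix4x4 and Geometry names from above):
#include <memory>

class Mesh {
private:
    Matrix4x4 transformation;
    std::shared_ptr<Geometry> geometry; // ref-counted, shared between meshes
public:
    Mesh(const Matrix4x4& _t, std::shared_ptr<Geometry> _g)
        : transformation(_t), geometry(std::move(_g)) {}
};

// auto geom = std::make_shared<OrdinaryGeometry>(...);
// myMeshes.push_back(Mesh(generateTransform(i), geom));
// The Geometry is freed automatically when the last Mesh referencing it goes away.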
What would be the point of subclassing Geometry in your cube example? A cube is simply an instance of Geometry which has a certain set of triangles and indices. There would be no difference between a Cube class and a Sphere class other than that they fill their triangle/index buffers with different data. So the data itself is what matters here. You need a way to allow the user to provide your engine with various shape data, and then to refer to that data in some way once it's made.
For providing shape data, you have two options. You can decide to keep the details of Geometry private and provide some interface that takes raw data, such as a string from a file or a float array filled in some user-made function, creates a Geometry instance for that data, and then gives the user some handle to that instance (or allows the user to specify a handle). Or, you can create some class like GeometryInfo with methods addTriangle, addVertex, etc., which the user fills in, and then have some function that accepts a GeometryInfo, creates a Geometry instance for that data, and again gives the user some handle.
In both situations you need to provide some interface that allows the user to say "here's some data, make something out of it and give me some handle." Minimally it would have a function as I described. You would also need to maintain a map somewhere of created Geometry instances in your engine. This is so you can enforce your one-instance-per-shape rule, and so you can associate what the user wants ("Ball", "Cube") with what your engine needs (Geometry with filled buffers).
Now about the handle. I would either let the user associate the data with a name, like "Ball", or return some integer that the user would then associate with a certain "Ball" instance. That way, when you make your Rocket class, the user can request the "Ball" instance from your engine, various other objects can use the "Ball", and everything's fine because they're just storing handles, not the ball itself. I wouldn't advise storing a pointer to the actual Geometry instance. The mesh doesn't own the geometry, because it can share it with other meshes. It doesn't need access to the geometry's members, because the renderer handles the grunt work. So it is an unnecessary dependency. The only reason would be speed, but using hashing for your handles would work just as well.
Now for some examples:
Providing shape data:
//option one
engine->CreateGeometryFromFile("ball.txt", "Ball");
//option two
GeometryInfo ball;
ball.addTriangle(0, 1, 0, 1);
ball.addTriangle(...);
...
engine->CreateGeometryFromInfo(ball, "Ball");
Referring to that data using a handle:
class Drawable
{
protected:
std::string shape;
Matrix transform;
public:
const std::string& GetShape() const { return shape; }
const Matrix& GetTransform() const { return transform; }
};
class Rocket : public Drawable
{
Rocket() { shape = "Ball";}
//other stuff here for physics maybe
};
class BallShapedEnemy : public Drawable
{
BallShapedEnemy() { shape = "Ball";}
...
};
...
...in user's render loop...
for (auto& drawable : myDrawables)
{
engine->Render(drawable.GetShape(), drawable.GetTransform());
}
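On the engine side, the map of created Geometry instances mentioned above could be as simple as this (a sketch; GeometryInfo, Geometry, and Matrix are the placeholder types used in this answer):
#include <string>
#include <unordered_map>

class Engine
{
public:
    // Enforces one Geometry per shape name and hands back the handle ("Ball", "Cube", ...).
    const std::string& CreateGeometryFromInfo(const GeometryInfo& info, const std::string& name)
    {
        auto it = geometries_.find(name);
        if (it == geometries_.end())
            it = geometries_.emplace(name, BuildGeometry(info)).first; // fill buffers once
        return it->first;
    }

    void Render(const std::string& shape, const Matrix& transform)
    {
        // look up geometries_.at(shape) and draw it with `transform`
    }

private:
    Geometry BuildGeometry(const GeometryInfo& info); // creates vertex/index buffers
    std::unordered_map<std::string, Geometry> geometries_;
};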
Now, having a separate class for each different game object such as Rocket is debatable and is the subject of another question entirely; I was just making it look like your example from a comment.
This may be a sloppy way of doing it, but could you not just make a singleton?
#pragma once
#include <iostream>
#define GEOM Geometry::getInstance()
class Geometry
{
protected:
static Geometry* ptrInstance;
float* m_pVertexBuffer;
float* m_pIndexBuffer;
public:
static Geometry* getInstance();
Geometry(void);
~Geometry(void);
void callGeom();
};
#include "Geometry.h"
Geometry* Geometry::ptrInstance = 0;
Geometry::Geometry(void)
{
}
Geometry::~Geometry(void)
{
}
Geometry* Geometry::getInstance()
{
if(ptrInstance == 0)
{
ptrInstance = new Geometry();
}
return ptrInstance;
}
void Geometry::callGeom()
{
std::cout << "Call successful!" << std::endl;
}
The only problem with this method is that you would only ever have one Geometry object, and I'm assuming you might want more than one. If not, it could be useful, but I think Lasserallan's method is probably a much better implementation for what you're looking for.
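With getInstance() public (it needs to be for the GEOM macro to work from other files), usage would simply be:
#include "Geometry.h"

int main()
{
    GEOM->callGeom(); // prints "Call successful!"
}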