Should the visitor pattern be used for rendering? - c++

I have a game engine that currently uses inheritance to provide a generic interface to do rendering:
class renderable
{
public:
    virtual void render() = 0;
};
Each class calls the gl_* functions itself, which makes the code hard to optimize and makes it hard to implement features such as adjustable rendering quality:
class sphere : public renderable
{
public:
void render()
{
glDrawElements(...);
}
};
I was thinking about implementing a system where I would create a Renderer class that would render my objects:
class sphere
{
public:
    void render( renderer* r )
    {
        r->renderme( *this );
    }
};
class renderer
{
public:
    void renderme( sphere& sphere )
    {
        // magically get render resources here
        // magically render a sphere here
    }
};
My main problem is: where should I store the VBOs, and where should I create them, when using this method?
Should I even use this approach, stick with the current one, or do something else entirely?

(Disclaimer: I'm neither a GameEngine nor a C++ performance expert, so take this with a grain of salt)
There are some existing game engines that use the visitor approach, e.g. GamePlay3D. For performance reasons, you probably should exclude non-visible objects from the rendering routine.
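To connect this back to the original question of where the VBOs should live in such a design: one option is to let the renderer own them and cache them per object, so the scene classes never touch GL objects at all. This is only a minimal sketch under assumed names that are not in the original post (vbo_cache, build_sphere_vbo, draw_vbo, and sphere::radius are all hypothetical):
#include <unordered_map>

class renderer;

class sphere
{
public:
    float radius = 1.0f;          // hypothetical geometry parameter
    void render(renderer& r);     // defined below, after renderer
};

class renderer
{
public:
    void renderme(sphere& s)
    {
        // Look up (or lazily create) the VBO for this object. Because the
        // renderer owns every GPU resource, quality settings or batching can
        // be changed in one place without touching the scene classes.
        auto it = vbo_cache.find(&s);
        if (it == vbo_cache.end())
            it = vbo_cache.emplace(&s, build_sphere_vbo(s.radius)).first;
        draw_vbo(it->second);
    }

private:
    unsigned build_sphere_vbo(float /*radius*/)
    {
        // A real engine would generate the mesh here and upload it with
        // glGenBuffers/glBufferData; a dummy handle stands in for the sketch.
        return ++next_handle;
    }
    void draw_vbo(unsigned /*vbo*/)
    {
        // glBindBuffer(...); glDrawElements(...);
    }

    std::unordered_map<const void*, unsigned> vbo_cache; // object -> VBO handle
    unsigned next_handle = 0;
};

inline void sphere::render(renderer& r) { r.renderme(*this); }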

Related

Aesthetically correct code for storing/displaying basic shapes

I'm using ES 3.0 (basically GL 3.3 without geometry shaders) so that I can port my programs to almost everything.
I have a helper framework/wrapper written in C++. Basically it covers everything that can be found in the quick reference card: Buffer/Shader/ShaderProgram/Framebuffer/Texture/etc. (pretty basic stuff; I believe everyone has classes like that).
I noticed that whenever I need to draw basic shapes such as a full-screen quad, triangles, or spheres, I always do it in place; it's not part of my framework. And I kind of hate that, because I'm repeating myself again and again, and it's a really unpleasant thing to do.
How can I add such functionality to my framework in an aesthetically and technically sound way?
(Note in advance: on platforms like Android, context loss is possible, so a pause/restore mechanism is required.)
SFML has similar functionality. Here's its structural skeleton:
class Drawable {
friend class RenderTarget;
protected: // hidden from everyone but subclasses and RenderTarget
virtual void draw(RenderTarget&) const = 0;
};
class RenderTarget {
public:
void draw(Drawable& drawable) {
drawable.draw(*this);
}
};
class RectangleShape : public Drawable {
protected:
void draw(RenderTarget&) const override {
// the algorithm
}
};
void use() {
RectangleShape shape;
RenderTarget& target = get();
target.draw(shape);
}
(Actually it's more complicated: I omitted virtual destructors, unnecessary inheritance levels etc.)

Engine to render different types of graphic objects

I'm trying to write a class (some sort of graphics engine) whose purpose is basically to render ANYTHING I pass into it. In most tutorials I've seen, objects draw themselves; I'm not sure if that's how things are supposed to work. I've been searching the internet for different ways to handle this problem and have been reviewing function templates and class templates over and over (which sounds like the solution I could be looking for), but when I try using templates it just seems messy to me (possibly because I don't fully understand how to use them), so I keep adding the template class, taking it back down, giving it a second try, and taking it down again, unsure whether that's the way to go. Originally the engine was tile-based only (including a movable player on screen along with a camera system), but now I'm trying to code a tile map editor which has things such as toolbars, lists, text, and possibly even primitives on screen in the future, and I'm wondering how I will draw all those elements onto the screen with a certain procedure (the procedure isn't important right now; I'll figure that out later). If any of you were going to write a graphics engine class, how would you have it distinguish different types of graphic objects from one another, so that, say, a primitive isn't drawn as a sprite, or a sphere primitive isn't drawn as a triangle primitive? Any help would be appreciated. :)
This is the header for it. It's not functional right now because I've been doing some editing on it; just ignore the part where I'm using the "new" keyword (I'm still learning that), but I hope this gives an idea of what I'm trying to accomplish:
//graphicsEngine.h
#pragma once
#include<allegro5\allegro.h>
#include<allegro5\allegro_image.h>
#include<allegro5\allegro_primitives.h>
template <class graphicObjectData>
class graphicsEngine
{
public:
static graphicObjectData graphicObject[];
static int numObjects;
static void setup()
{
al_init_image_addon();
al_init_primitives_addon();
graphicObject = new graphicObjectData [1]; //ignore this line
}
template <class graphicObjectData> static void registerObject(graphicObjectData &newGraphicObject) //I'm trying to use a template function to take any type of graphic object
{
graphicObject[numObjects] = &newObject;
numObjects++;
}
static void process() //This is the main process where EVERYTHING is supposed be drawn
{
int i;
al_clear_to_color(al_map_rgb(0,0,0));
for (i=0;i<numObjects;i++) drawObject(graphicObject[i]);
al_flip_display();
}
};
I am a huge fan of templates, but you may find in this case that they are cumbersome (though not necessarily the wrong answer). Since it appears you may be wanting diverse object types in your drawing container, inheritance may actually be a stronger solution.
You will want a base type which provides an abstract interface for drawing. All this class needs is some function which provides a mechanism for the actual draw process. It does not actually care how drawing occurs; what's important is that the deriving class knows how to draw itself. (If you want to separate your drawing and your objects, keep reading and I will try to explain a way to accomplish this.)
class Drawable {
public:
// This is our interface for drawing. Simply, we just need
// something to instruct our base class to draw something.
// Note: this method is pure virtual, so it must be
// overridden by a deriving class.
virtual void draw() = 0;
// In addition, we need to also give this class a default virtual
// destructor in case the deriving class needs to clean itself up.
virtual ~Drawable() { /* The deriving class might want to fill this in */ }
};
From here, you would simply write new classes which inherit from the Drawable class and provide the necessary draw() override.
class Circle : public Drawable {
public:
void draw() {
// Do whatever you need to make this render a circle.
}
~Circle() { /* Do cleanup code */ }
};
class Tetrahedron : public Drawable {
public:
void draw() {
// Do whatever you need to make this render a tetrahedron.
}
~Tetrahedron() { /* Do cleanup code */ }
};
class DrawableText : public Drawable {
public:
std::string _text;
// Just to illustrate that the state of the deriving class
// could be variable and even dependent on other classes:
DrawableText(std::string text) : _text(text) {}
void draw() {
// Yet another override of the Drawable::draw function.
}
~DrawableText() {
// Cleanup here again - in this case, _text will clean itself
// up so nothing to do here. You could even omit this since
// Drawable provides a default destructor.
}
};
Now, to link all these objects together, you could simply place them in a container of your choosing which accepts references or pointers (or, in C++11 and greater, unique_ptr, shared_ptr and friends). Set up whatever draw context you need and loop through all the contents of the container calling draw().
void do_drawing() {
    // This works, but consider checking out unique_ptr and shared_ptr for safer
    // memory management
    std::vector<Drawable*> drawable_objects;
    drawable_objects.push_back(new Circle);
    drawable_objects.push_back(new Tetrahedron);
    drawable_objects.push_back(new DrawableText("Hello, Drawing Program!"));
    // Loop through and draw our circle, tetrahedron and text.
    for (auto drawable_object : drawable_objects) {
        drawable_object->draw();
    }
    // Clean up the allocations (this is exactly the chore that unique_ptr
    // would handle for you automatically).
    for (auto drawable_object : drawable_objects) {
        delete drawable_object;
    }
}
If you would like to provide state information to your drawing mechanism, you can require that as a parameter in the draw() routine of the Drawable base class:
class Drawable {
public:
// Now takes parameters which hold program state
virtual void draw(DrawContext& draw_context, WorldData& world_data) = 0;
virtual ~Drawable() { /* The deriving class might want to fill this in */ }
};
The deriving classes Circle, Tetrahedron and DrawableText would, of course, need their draw() signatures updated to take the new program state, but this will allow you to do all of your low-level drawing through an object which is designed for graphics drawing instead of burdening the main class with this functionality. What state you provide is solely up to you and your design. It's pretty flexible.
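For instance, one of the derived classes from earlier would then be updated along these lines (DrawContext and WorldData are whatever state types you choose; nothing about their members is assumed here):
class Circle : public Drawable {
public:
    void draw(DrawContext& draw_context, WorldData& world_data) override {
        // Issue the actual rendering calls through draw_context and read any
        // scene state the circle needs from world_data, instead of keeping
        // low-level drawing code inside the shape class itself.
    }
    ~Circle() { /* Do cleanup code */ }
};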
BIG UPDATE - Another Way to Do It Using Composition
I've been giving it careful thought, and decided to share what I've been up to. What I wrote above has worked for me in the past, but this time around I've decided to go a different route with my engine and forego a scene graph entirely. I'm not sure I can recommend this way of doing things, as it can make things complicated, but it also opens the door to a tremendous amount of flexibility. Effectively, I have written lower-level objects such as VertexBuffer, Effect, Texture etc. which allow me to compose objects in any way I want. I am using templates this time around more than inheritance (though inheritance is still necessary for providing implementations for the VertexBuffers, Textures, etc.).
The reason I bring this up is because you were talking about getting a larger degree of separation. Using a system such as I described, I could build a world object like this:
class World {
public:
WorldGeometry geometry; // Would hold triangle data.
WorldOccluder occluder; // Runs occlusion tests against
// the geometry and flags what's visible and
// what is not.
WorldCollider collider; // Handles all routines for collision detections.
WorldDrawer drawer; // Draws the world geometry.
void process_and_draw();// Optionally calls everything in necessary
// order.
};
Here, I would have multiple objects which each focus on a single aspect of my engine's processing. WorldGeometry would store all polygon details about this particular world object. WorldOccluder would do checks against the camera and geometry to see which patches of the world are actually visible. WorldCollider would process collision detection against any world objects (omitted for brevity). Finally, WorldDrawer would actually be responsible for drawing the world and would maintain the VertexBuffer and other lower-level drawing objects as needed.
As you can see, this works a little more closely to what you originally asked as the geometry is actually not used only for rendering. It's more data on the polygons of the world but can be fed to WorldGeometry and WorldOccluder which don't do any drawing whatsoever. In fact, the World class only exists to group these similar classes together, but the WorldDrawer may not be dependent on a World object. Instead, it may need a WorldGeometry object or even a list of Triangles. Basically, your program structure becomes highly flexible and dependencies begin to disappear since objects do not inherit often or at all and only request what they absolutely require to function. Case in point:
class WorldOccluder {
public:
    // I do not need anything more than a WorldGeometry reference here //
    WorldOccluder(WorldGeometry& geometry) : _geometry(geometry) {}
    // At this point, all I need to function is the position of the camera //
    WorldOccluderResult check_occlusion(const Float3& camera) {
        // Do all of the world occlusion checks based on the passed
        // geometry and then return a WorldOccluderResult
        // Which hypothetically could contain lists for visible and occluded
        // geometry
    }
private:
    WorldGeometry& _geometry;
};
I chose the WorldOccluder as an example because I've spent the better part of the day working on something like this for my engine and have used a class hierarchy much like above. I've got boxes in 3D space changing colors based on if they should be seen or not. My classes are very succinct and easy to follow, and my entire project hierarchy is easy to follow (I think it is anyway). So this seems to work just fine! I love being on vacation!
Final note: I mentioned templates but didn't explain them. If I have an object that does processing around drawing, a template works really well for this. It avoids dependencies (such as those introduced through inheritance) while still giving a great degree of flexibility. Additionally, templates can be optimized by the compiler by inlining code and avoiding virtual-style calls (if the compiler can deduce such optimizations):
template <typename TEffect, typename TDrawable>
void draw(TEffect& effect, TDrawable& drawable, const Matrix& world, const Matrix& view, const Matrix& projection) {
    // Setup effect matrices - our effect template
    // must provide these function signatures
    effect.world(world);
    effect.view(view);
    effect.projection(projection);
    // Do some drawing!
    // (NOTE: could use some RAII stuff here in case drawable throws).
    effect.begin();
    for (int pass = 0; pass < effect.pass_count(); pass++) {
        effect.begin_pass(pass);
        drawable.draw(); // Once again, TDrawable objects must provide this signature
        effect.end_pass(pass);
    }
    effect.end();
}
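A usage sketch of that template, with stand-in types I made up purely to show that no common base class is involved (PrintEffect and QuadDrawable are not from the answer above):
#include <cstdio>

struct Matrix {}; // stand-in for the answer's Matrix type

// A dummy effect and drawable that happen to provide the members used by
// draw<>; real types would talk to the GPU instead of printing.
struct PrintEffect {
    void world(const Matrix&) {}
    void view(const Matrix&) {}
    void projection(const Matrix&) {}
    void begin() { std::puts("effect begin"); }
    void end() { std::puts("effect end"); }
    int pass_count() const { return 1; }
    void begin_pass(int) {}
    void end_pass(int) {}
};

struct QuadDrawable {
    void draw() { std::puts("drawing quad"); }
};

void example()
{
    PrintEffect effect;
    QuadDrawable quad;
    Matrix world, view, projection;
    draw(effect, quad, world, view, projection); // resolved at compile time, no virtual calls
}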
My technique might really suck, but I do it like this.
class entity {
public:
    virtual ~entity() {} // virtual destructor so deleting through entity* is safe
    virtual void render() {}
};
std::vector<entity*> entities; // store pointers, otherwise the objects would be sliced
void render() {
    for(auto c : entities) {
        c->render();
    }
}
Then I can do stuff like this:
class cubeEntity : public entity {
public:
virtual void render() override {
drawCube();
}
};
class triangleEntity : public entity {
public:
virtual void render() override {
drawTriangle();
}
};
And to use it:
entities.push_back(new cubeEntity());
entities.push_back(new triangleEntity());
People say that it's bad to use dynamic inheritance. They're a lot smarter than me, but this approach has been working fine for a while. Make sure to make all your destructors virtual!
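Since that vector holds raw owning pointers, the virtual destructors are exactly what make teardown through the base pointer safe. A minimal cleanup sketch (not part of the original answer):
// Safe only because entity has a virtual destructor; otherwise deleting a
// cubeEntity through an entity* would be undefined behaviour.
for (entity* e : entities) {
    delete e;
}
entities.clear();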
The way the SFML graphics library draws objects (and the way I think is most manageable) is to have all drawable objects inherit from a 'Drawable' class (like the one in David Peterson's answer), which can then be passed to the graphics engine in order to be drawn.
To draw objects, I'd have:
A Base class:
class Drawable
{
    int XPosition;
    int YPosition;
    int PixelData[100][100]; //Or whatever storage system you're using
};
This can be used to contain information common to all drawable classes (like position, and some form of data storage).
Derived Subclasses:
class Triangle : public Drawable
{
public:
    Triangle() {} //overloaded constructors, additional variables etc
    int indigenous_to_triangle;
};
Because each subclass is largely unique, you can use this method to create anything from sprites to graphical-primitives.
Each of these derived classes can then be passed to the engine by reference with
A 'Draw' function referencing the Base class:
void GraphicsEngine::draw(const Drawable& _object);
Using this method, a template is no longer necessary. Unfortunately your current graphicObjectData array wouldn't work, because derived classes would be 'sliced' in order to fit in it. However, creating a list or vector of 'const Drawable*' pointers (or preferably, smart pointers) would work just as well for keeping tabs on all your objects, though the actual objects would have to be stored elsewhere.
You could use something like this to draw everything using a vector of pointers (I tried to preserve your function and variable names):
std::vector<const Drawable*> graphicObject; //Smart pointers would be better here
static void process()
{
    for (int i = 0; i < graphicObject.size(); ++i)
        draw(*graphicObject[i]); // dereference: draw() takes a const Drawable&
}
You'd just have to make sure you added each object to the list as it was created.
If you were clever about it, you could even do this in the construction and destruction:
class Drawable; //So the compiler doesn't throw an error
std::vector<const Drawable*> graphicObject;
class Drawable
{
public:
    Drawable() {
        graphicObject.push_back(this);
    }
    virtual ~Drawable() {
        // Erase by value (needs <algorithm>): iterators saved at construction
        // time could be invalidated by later insertions/erasures, so look the
        // pointer up instead.
        graphicObject.erase(std::find(graphicObject.begin(), graphicObject.end(), this));
    }
};
Now you can just create objects and they'll be drawn automatically when process() is called! And they'll even be removed from the list once they're destroyed!
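For example (a tiny usage sketch, assuming the Triangle class from earlier derives from this self-registering Drawable):
{
    Triangle t;   // registers itself with graphicObject in the Drawable constructor
    process();    // draws everything currently alive, including t
}                 // t leaves scope here and removes itself from the list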
All the above ideas have served me well in the past, so I hope I've helped you out, or at least given you something to think about.

Rendering Engine Design - Abstracting away API specific code for Resources [closed]

I have a very big design stumbling block in my rendering code: I want to avoid API-specific code (such as OpenGL or DirectX code) leaking into the rest of the engine. I've thought of numerous ways to solve the problem, but I'm not sure which one to use or how I should improve upon these ideas.
To give a brief example, I will use a Texture. A Texture is an object which represents a texture in GPU memory; implementation-wise it may be represented in any particular way, i.e. the implementation may use a GLuint or an LPDIRECT3DTEXTURE9 to represent the texture.
Here are the ways I've thought of to actually implement this. I'm quite unsure whether there is a better way, or which of these is better than another.
Method 1: Inheritance
I could use inheritance; it seems the most obvious choice for this problem. However, this method requires virtual functions, and would require a TextureFactory class in order to create Texture objects, which would mean a call to new for each Texture object (e.g. renderer->getTextureFactory()->create()).
Here's how I'm thinking of using inheritance in this case:
class Texture
{
public:
virtual ~Texture() {}
// Override-able Methods:
virtual bool load(const Image&, const urect2& subRect);
virtual bool reload(const Image&, const urect2& subRect);
virtual Image getImage() const;
// ... other texture-related methods, such as wrappers for
// load/reload in order to load/reload the whole image
unsigned int getWidth() const;
unsigned int getHeight() const;
unsigned int getDepth() const;
bool is1D() const;
bool is2D() const;
bool is3D() const;
protected:
void setWidth(unsigned int);
void setHeight(unsigned int);
void setDepth(unsigned int);
private:
unsigned int _width, _height, _depth;
};
and then, in order for OpenGL (or any other API-specific) textures to be created, a subclass would have to be made, such as an OglTexture.
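A minimal sketch of what such a subclass might look like (the member names and the specific GL calls here are my assumptions, not code from the question; assume the usual GL headers are included):
class OglTexture : public Texture
{
public:
    OglTexture() { glGenTextures(1, &_id); }
    ~OglTexture() override { glDeleteTextures(1, &_id); }

    bool load(const Image& image, const urect2& subRect) override
    {
        // glBindTexture(GL_TEXTURE_2D, _id);
        // glTexImage2D(...) / glTexSubImage2D(...) using image and subRect
        return true;
    }

private:
    GLuint _id = 0; // the API-specific handle stays hidden in the subclass
};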
Method 2: Use a 'TextureLoader' or some other class
This method is as simple as it sounds, I use another class to handle loading of textures. This may or may not use virtual functions, depending on the circumstance (or whether I feel it is necessary).
e.g. A polymorphic texture loader
class TextureLoader
{
public:
virtual ~TextureLoader() {}
virtual bool load(Texture* texture, const Image&, const urect2& subRect);
virtual bool reload(Texture* texture, const Image&, const urect2& subRect);
virtual Image getImage(Texture* texture) const;
};
If I were to use this, a Texture object would only be a POD type. However, in order for this to work, a handle object/ID would have to be present within the Texture class.
This is more than likely how I would implement it, although I may be able to generalise the whole ID thing using a base class, such as a Resource base class which holds an ID for a graphics resource.
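A rough sketch of what that could look like (the Resource/Texture members below are illustrative guesses, not the question's actual code):
// A graphics resource is just plain data plus an opaque ID that the
// API-specific loader knows how to interpret.
class Resource
{
public:
    typedef unsigned Id;
    Id getId() const { return _id; }
    void setId(Id id) { _id = id; }
private:
    Id _id = 0;
};

class Texture : public Resource
{
public:
    unsigned width = 0, height = 0, depth = 0; // plain state, no GL/D3D code
};

// Usage: the loader owns all API-specific behaviour.
// TextureLoader* loader = ...; // e.g. an OglTextureLoader
// Texture texture;
// loader->load(&texture, image, subRect);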
Method 3: The Pimpl Idiom
I could use the pimpl idiom, which implements how to load/reload/etc. textures. This would more than likely require an abstract factory class for creation of textures. I am unsure how this is better than using inheritance. This pimpl idiom could be used in conjunction with Method 2, i.e. Texture objects would have a reference (pointer) to their loader.
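A minimal sketch of how the pimpl variant could look (TextureImpl and the wiring shown here are assumptions for illustration only):
#include <memory>
#include <utility>

class Image;
class urect2;

// Everything API-specific lives behind the opaque implementation pointer;
// the public Texture class has no GL/D3D members at all.
class TextureImpl
{
public:
    virtual ~TextureImpl() {}
    virtual bool load(const Image&, const urect2&) = 0;
};

class Texture
{
public:
    explicit Texture(std::unique_ptr<TextureImpl> impl) : _impl(std::move(impl)) {}
    bool load(const Image& image, const urect2& subRect)
    {
        return _impl->load(image, subRect);
    }
private:
    std::unique_ptr<TextureImpl> _impl; // OpenGL or D3D implementation
};

// An abstract factory (e.g. on the renderer) would hand out the right
// implementation for the active API:
//     Texture texture(renderer->createTextureImpl());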
Method 4: Using concepts/compile-time polymorphism
I could, on the other hand, use compile-time polymorphism: basically what I presented in the inheritance method, except without declaring virtual functions. This would work, but if I wanted to dynamically switch from OpenGL rendering to DirectX rendering, it would not be the best solution. I would simply put OpenGL/D3D-specific code within the Texture class, with multiple texture classes sharing roughly the same interface (load/reload/getImage/etc.), each wrapped inside a namespace indicating which API it uses (e.g. ogl, d3d, etc.).
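A rough sketch of the namespace-per-API idea, selected once at compile time (namespace and macro names below are placeholders):
class Image;
class urect2;

namespace ogl {
    class Texture
    {
    public:
        bool load(const Image& image, const urect2& subRect); // uses a GLuint internally
    };
}

namespace d3d {
    class Texture
    {
    public:
        bool load(const Image& image, const urect2& subRect); // uses an LPDIRECT3DTEXTURE9 internally
    };
}

// The backend is chosen at compile time; the rest of the engine only ever
// refers to gfx::Texture, so switching APIs at runtime is not possible.
#if defined(USE_DIRECT3D)
namespace gfx = d3d;
#else
namespace gfx = ogl;
#endif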
Method 5: Using integers
I could just use integers to store handles to texture objects. This seems fairly simple, but may produce somewhat "messy" code.
This problem is also present for other GPU resources such as Geometry, Shaders, and ShaderPrograms.
I've also thought of just making the Renderer class handle the creation, loading, etc. of graphical resources. However, this would violate the single responsibility principle. e.g.
Texture* texture = renderer->createTexture(Image("something.png"));
Image image = renderer->getImage(texture);
Can someone please guide me? I think I'm overthinking this. I've tried studying various rendering engines, such as Irrlicht, Ogre3D, and others I have found online. Ogre and Irrlicht use inheritance, but I am unsure that this is the best route to take; some others just use void*, integers, or put API-specific (mainly OpenGL) code directly within their classes (e.g. a GLuint directly within the Texture class). I really cannot decide which design would be the most appropriate for me.
The platforms I am going to target are:
Windows/Linux/Mac
iOS
Possibly Android
I have considered just using OpenGL-specific code, since OpenGL works on all of those platforms. However, I feel that if I do that, I will have to change my code quite a lot if I wish to port to other platforms that cannot use OpenGL, such as the PS3. Any advice on my situation would be greatly appreciated.
Think of it from a high-level point of view. How will your rendering code work with the rest of your game/application model? In other words, how do you plan to create objects in your scene, and to what degree of modularity? In my previous work with engines, a well-designed engine generally ends up with a step-by-step procedure that follows a pattern. For example:
//Components in an engine could be game objects such as sprites, meshes, lights, audio sources etc.
//These resources can be created via component factories for convenience
CRenderComponentFactory* pFactory = GET_COMPONENT_FACTORY(CRenderComponentFactory);
Once a component has been obtained, there are usually a variety of overloaded methods you could use to construct the object. Using a sprite as an example, a SpriteComponent could contain everything potentially needed by a sprite in the form of sub-components, like a TextureComponent for instance.
//Create a blank sprite of size 100x100
SpriteComponentPtr pSprite = pFactory->CreateSpriteComponent(Core::CVector2(100, 100));
//Create a sprite from a sprite sheet texture page using the given frame number.
SpriteComponentPtr pSprite = pFactory->CreateSpriteComponent("SpriteSheet", TPAGE_INDEX_SPRITE_SHEET_FRAME_1);
//Create a textured sprite of size 100x50, where `pTexture` is your TextureComponent that you've set-up elsewhere.
SpriteComponentPtr pSprite = pFactory->CreateSpriteComponent(Core::CVector2(100, 50), pTexture);
Then it's simply a matter of adding the object to the scene. This could be done by making an entity, which is simply a generic collection of information that would contain everything needed for scene manipulation; position, orientation, etc. For every entity in your scene, your AddEntity method would add that new entity by default to your render factory, extracting other render-dependent information from sub-components. E.g:
//Put our sprite onto the scene to be drawn
pSprite->SetColour(CColour::YELLOW);
EntityPtr pEntity = CreateEntity(pSprite);
mpScene->AddEntity(pEntity);
What you then have is a nice way of creating objects and a modular way of coding your application without having to reference 'draw' or other render-specific code. A good graphics pipeline follows something along those lines.
There is a nice resource on rendering engine design that covers this; jump to page 21 and read onwards, where you'll find in-depth explanations of how scene graphs operate and general engine design theory.
I don't think there's any one right answer here, but if it were me, I would:
Plan on using only OpenGL to start with.
Keep rendering code separate from other code (that's just good design), but don't try to wrap it in an extra layer of abstraction - just do whatever is most natural for OpenGL.
Figure that if and when I was porting to PS3, I would have a much better grasp of what I need my rendering code to do, so that would be the right time to refactor and pull out a more abstract interface.
I've decided to go for a hybrid approach, with method (2), (3), (5) and possibly (4) in the future.
What I've basically done is:
Every resource has a handle attached to it. This handle describes the object. Each handle has an ID associated with it, which is a simple integer. In order to talk to the GPU with each resource, an interface for each handle is made. This interface is at the moment abstract, but could be done with templates, if I choose to do so in the future. The resource class has a pointer to an interface.
Simply put, a handle describes the actual GPU object, and a resource is just a wrapper over the handle and an interface to connect the handle and the GPU together.
This is what it basically looks like:
// base class for resource handles
struct ResourceHandle
{
typedef unsigned Id;
static const Id NULL_ID = 0;
ResourceHandle() : id(NULL_ID) {}
bool isNull() const
{ return id == NULL_ID; }
Id id;
};
// base class of a resource
template <typename THandle, typename THandleInterface>
struct Resource
{
typedef THandle Handle;
typedef THandleInterface HandleInterface;
HandleInterface* getInterface() const { return _interface; }
void setInterface(HandleInterface* interface)
{
assert(getHandle().isNull()); // should not work if handle is NOT null
_interface = interface;
}
const Handle& getHandle() const
{ return _handle; }
protected:
typedef Resource<THandle, THandleInterface> Base;
Resource(HandleInterface* interface) : _interface(interface) {}
// refer to this in base classes
Handle _handle;
private:
HandleInterface* _interface;
};
This allows me to extend quite easily, and allows for syntax such as:
Renderer renderer;
// create a texture
Texture texture(renderer);
// load the texture
texture.load(Image("test.png");
Where Texture derives from Resource<TextureHandle, TextureHandleInterface>, and where renderer has the appropriate interface for loading texture handle objects.
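A hedged sketch of how Texture, TextureHandle, and TextureHandleInterface might plug into the Resource template above (every name beyond Resource/ResourceHandle is my assumption):
class Image;

struct TextureHandle : ResourceHandle
{
    unsigned width = 0, height = 0; // extra per-texture data, if any
};

// The per-API part: how to load and destroy a texture handle.
class TextureHandleInterface
{
public:
    virtual ~TextureHandleInterface() {}
    virtual bool load(TextureHandle& handle, const Image& image) = 0;
    virtual void destroy(TextureHandle& handle) = 0;
};

class Texture : public Resource<TextureHandle, TextureHandleInterface>
{
public:
    // In the answer's usage (Texture texture(renderer);) the renderer would
    // supply this interface pointer; the exact wiring isn't shown.
    explicit Texture(TextureHandleInterface* interface) : Base(interface) {}

    bool load(const Image& image)
    {
        return getInterface()->load(_handle, image);
    }
};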
I have a short working example of this here.
Hopefully this works, I may choose to redesign it in the future, if so I will update. Criticism would be appreciated.
EDIT:
I have actually changed the way I do this again. The solution I am using is quite similar to the one described above, but here is how it is different:
The API revolves around "backends": these are objects that have a common interface and communicate with a low-level API (e.g. Direct3D or OpenGL).
Handles are no longer integers/IDs. A backend has specific typedef's for each resource handle type (e.g. texture_handle_type, program_handle_type, shader_handle_type).
Resources do not have a base class, and only require one template parameter (a GraphicsBackend). A resource stores a handle and a reference to the graphics backend it belongs to. Then the resource has a user-friendly API and uses the handle and graphics backend common interface to interact with the "actual" resource. i.e. resource objects are basically wrappers of handles that allow for RAII.
A graphics_device object is introduced to allow construction of resources (factory pattern; e.g. device.createTexture() or device.create<my_device_type::texture>()).
For example:
#include <iostream>
#include <string>
#include <utility>
struct Image { std::string id; };
struct ogl_backend
{
typedef unsigned texture_handle_type;
void load(texture_handle_type& texture, const Image& image)
{
std::cout << "loading, " << image.id << '\n';
}
void destroy(texture_handle_type& texture)
{
std::cout << "destroying texture\n";
}
};
template <class GraphicsBackend>
struct texture_gpu_resource
{
typedef GraphicsBackend graphics_backend;
typedef typename GraphicsBackend::texture_handle_type texture_handle;
texture_gpu_resource(graphics_backend& backend)
: _backend(backend)
{
}
~texture_gpu_resource()
{
// should check if it is a valid handle first
_backend.destroy(_handle);
}
void load(const Image& image)
{
_backend.load(_handle, image);
}
const texture_handle& handle() const
{
return _handle;
}
private:
graphics_backend& _backend;
texture_handle _handle;
};
template <typename GraphicBackend>
class graphics_device
{
typedef graphics_device<GraphicBackend> this_type;
public:
typedef texture_gpu_resource<GraphicBackend> texture;
template <typename... Args>
texture createTexture(Args&&... args)
{
        return texture{_backend, std::forward<Args>(args)...};
}
template <typename Resource, typename... Args>
Resource create(Args&&... args)
{
        return Resource{_backend, std::forward<Args>(args)...};
}
private:
GraphicBackend _backend;
};
class ogl_graphics_device : public graphics_device<ogl_backend>
{
public:
enum class feature
{
texturing
};
void enableFeature(feature f)
{
std::cout << "enabling feature... " << (int)f << '\n';
}
};
// or...
// typedef graphics_device<ogl_backend> ogl_graphics_device
int main()
{
ogl_graphics_device device;
device.enableFeature(ogl_graphics_device::feature::texturing);
auto texture = device.create<decltype(device)::texture>();
texture.load({"hello"});
return 0;
}
/*
Expected output:
enabling feature... 0
loading, hello
destroying texture
*/
Live demo: http://ideone.com/Y2HqlY
This design is currently being put in use with my library rojo (note: this library is still under heavy development).

Best approach for accessing variables in another class

I'm now writing a Direct3D renderer for our engine.
Here's the problem:
In OpenGL, I can just easily call glClearColor() to clear.
In Direct3D, I need to use g_pd3dDevice just to call ClearRenderTargetView() to clear.
The design of our engine is like this:
class Renderer
{
    // ...
};
class Direct3dWin32 : public Renderer
{
private:
    ID3D10Device* g_pd3dDevice;
};
class OpenGLWin32 : public Renderer
{
    // Nothing, I can call a function easily without relying on something
};
The problem arises when my ShaderManager class wants to compile the shader: I need to use g_pd3dDevice, which lives in the Direct3dWin32 class.
My question is, what is the best approach on solving this problem? I'm thinking of global variables, a singleton class, or just passing the class in function.
First of all, I can't help but notice g_pd3dDevice: despite the g_ prefix, that's not a global. It's a class member pointer to a COM interface of the device, ID3D10Device*, and it's not a global here, nor should it be.
To answer your question as simply as possible (since it seems like a beginner engine/framework design issue): provide accessor methods which return a pointer to a working device, from which it can be passed on further to wherever it needs to be employed.
A simple example to conform to your little "spec" upstairs:
class Direct3DWin32 : public Renderer
{
    ID3D10Device* pD3DDevice;
public:
    ID3D10Device* getD3DDevice();
};
Now, whenever you need it, you can just pass it around through functions when you get it from your Direct3DWin32 instance. There's a lot more to engine design than this and I personally wouldn't recommend this as a path to take, but that's a tale for another time and perhaps a series of books.
Note!
You can define the basic stuff like this, but if you really want to take the multiple-render-path design to a proper level, you're going to have to introduce polymorphism, adding a nice level of abstraction. Define a unified rendering interface that does the right thing whether the DirectX or the OpenGL path is currently employed, instantiate the appropriate derived class, and store its address in a pointer to the abstract base class that declares the interface everything conforms to. Then you can render without caring about the underlying choice of API.
Hopefully this solves your current problem. Also, again, evade globals. And happy coding.
You could possibly use a variant of double dispatch (a.k.a. the visitor pattern):
class ShaderManager
{
public:
void compileShader(Renderer* r, Shader* s) { r->compileShader(this, s); }
void compileD3DShader(ID3D10Device* device, Shader*s);
void compileGLShader(Shader* s);
};
class Renderer
{
public:
virtual void compileShader(ShaderManager* m, Shader* s) = 0;
};
class Direct3dWin32 : public Renderer
{
private:
ID3D10Device* m_device;
public:
virtual void compileShader(ShaderManager* m, Shader* s)
{
m->compileD3DShader(m_device, s);
}
};
class OpenGLWin32 : public Renderer
{
public:
virtual void compileShader(ShaderManager* m, Shader* s)
{
m->compileGLShader(s);
}
};
(I'm not a huge fan of "getters".)
You should provide accessor methods for the variables you want to pass into another class.
For instance, in Direct3dWin32, you could have :
ID3D10Device* get_gpd3dDevice()
{
    return g_pd3dDevice;
}
You can then pass this into OpenGLWin32:
void useDevice (ID3D10Device* aDevice)
{
// do work
}
Your application that uses both classes would then have responsibility for bridging the gap:
OpenGLWin32 openGL;
openGL.useDevice(direct3d.get_gpd3dDevice());

c++ Having multiple graphics options

Currently my app uses just Direct3D9 for graphics; however, in the future I'm planning to extend this to D3D10 and possibly OpenGL. The question is: how can I do this in a tidy way?
At present there are various Render methods in my code
void Render(boost::function<void()> &Call)
{
D3dDevice->BeginScene();
Call();
D3dDevice->EndScene();
D3dDevice->Present(0,0,0,0);
}
The function passed in depends on the exact state, e.g. MainMenu->Render, Loading->Render, etc. These will then often call the methods of other objects.
void RenderGame()
{
for(entity::iterator it = entity::instances.begin(); it != entity::instances.end(); ++it)
(*it)->Render();
UI->Render();
}
And a sample class derived from entity::Base
class Sprite: public Base
{
IDirect3DTexture9 *Tex;
Point2 Pos;
Size2 Size;
public:
Sprite(IDirect3DTexture9 *Tex, const Point2 &Pos, const Size2 &Size);
virtual void Render();
};
Each method then takes care of how best to render given the more detailed settings (eg are pixel shaders supported or not).
The problem is that I'm really not sure how to extend this to support one of several, possibly quite different (D3D vs. OpenGL), render modes...
Define an interface that is sufficient for your application's graphic output demands. Then implement this interface for every renderer you want to support.
class IRenderer {
public:
virtual ~IRenderer() {}
virtual void RenderModel(CModel* model) = 0;
virtual void DrawScreenQuad(int x1, int y1, int x2, int y2) = 0;
// ...etc...
};
class COpenGLRenderer : public IRenderer {
public:
virtual void RenderModel(CModel* model) {
// render model using OpenGL
}
virtual void DrawScreenQuad(int x1, int y1, int x2, int y2) {
// draw screen aligned quad using OpenGL
}
};
class CDirect3DRenderer : public IRenderer {
// similar, but render using Direct3D
};
Properly designing and maintaining these interfaces can be very challenging though.
In case you also operate with render driver dependent objects like textures, you can use a factory pattern to have the separate renderers each create their own implementation of e.g. ITexture using a factory method in IRenderer:
class IRenderer {
//...
virtual ITexture* CreateTexture(const char* filename) = 0;
//...
};
class COpenGLRenderer : public IRenderer {
//...
virtual ITexture* CreateTexture(const char* filename) {
// COpenGLTexture is the OpenGL specific ITexture implementation
return new COpenGLTexture(filename);
}
//...
};
Might it be an idea to look at existing (3D) engines, though? In my experience, designing this kind of interface really distracts from what you actually want to make. :)
I'd say if you want a really complete answer, go look at the source code for Ogre3D. They have both D3D and OpenGL back ends. Look at: http://www.ogre3d.org
Basically their API kind of forces you into working in a D3D-ish way, creating buffer objects and stuffing them with data, then issuing draw calls on those buffers. That's the way the hardware likes it anyway, so it's not a bad way to go.
And then once you see how they do things, you might as well just go ahead and use it, and save yourself the trouble of having to re-implement all that it already provides. :-)