C++ - Single local class instance for entire program duration

I'm working on a lil' game engine in C++, and decided to do it all OOPily (heavy use of classes.)
It's intended to be (theoretically) cross-platform, so I have an 'Engine' class, an instance of which is created by the 'OS Module', which is WinMain for Windows (the platform I'm developing it for first.)
I have three main questions:
Is it considered poor practice to create a class that is only going to be instantiated once in the entire application? Perhaps because there is some kind of performance hit or added overhead incurred by using a class rather than a bunch of functions?
I've been planning to have WinMain create the instance of Engine as a local variable. The Engine class will be fairly large, containing classes for rendering, script parsing, file system stuff, etc. Basically, the whole game engine, apart from OS specific code, will be contained in the Engine class in some form (possibly as an instance of another class.) Is creating a local instance of my very large Engine class within the WinMain function a bad idea? Is creating a local instance a bad idea when the class will be created when the program starts, and end when the program ends? Maybe new would be better?
My plan (i/wa)s to divide the engine up into 'modules', each of which is represented by a class. The Engine class would contain an instance of almost all the other modules, like, as mentioned above, rendering, file system interaction, etc. Is using classes as containers for huge modules a bad idea from some perspective (performance, design, readability?)
Thanks for any help :)

Game engines are not prime candidates for cross-platform-ness, since they usually involve efficient interaction with low-level APIs (which are not cross-platform).
The size of a class depends on the member variables it contains, not the number of functions it implements.
Stack space is usually small (http://msdn.microsoft.com/en-us/library/ms686774%28v=vs.85%29.aspx) while the heap is theoretically as big as available RAM. So, if you have something really big, store it on the heap (with new).
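For illustration, here is a minimal sketch of keeping a big object off the stack while still getting automatic cleanup (the class name and its size are made up for the example):

```cpp
#include <memory>

// Hypothetical stand-in for a large engine-like object; the exact contents don't
// matter, only that its per-instance size is big.
struct BigEngine {
    char scratch[512 * 1024];  // half a megabyte of inline data
};

int main() {
    // Heap-allocated, but still destroyed automatically when the pointer goes out of scope.
    auto engine = std::make_unique<BigEngine>();
    // ... run the game using *engine ...
    return 0;
}
```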
The "Splash Screen": don't do all the work at the beginning of the program. Users hate it when they run the program and nothing shows on screen because your program is busy initializing something...
Look up lazy instantiation; basically, don't do things that can wait, and always show something on screen.
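A small sketch of what lazy instantiation can look like (the ScriptEngine name and its setup work are hypothetical, not part of the question):

```cpp
#include <string>

// A module that defers its expensive setup until the first time it is actually
// used, so the window or splash screen can appear immediately at startup.
class ScriptEngine {
public:
    void run(const std::string& script) {
        ensureInitialized();   // heavy work happens here, not at program start
        (void)script;          // ... execute 'script' ...
    }

private:
    void ensureInitialized() {
        if (initialized_) return;
        // load standard libraries, compile built-in scripts, etc.
        initialized_ = true;
    }
    bool initialized_ = false;
};
```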
As for specific answers:
1. No, assuming no virtual functions, there should be no performance overhead.
2. See "Splash Screen" and "limited stack space" above.
3. Modularity is generally good, just make sure each class does/represents a single "thing". But don't have so many classes you start forgetting their names and purpose :)

Is it considered poor practice to create a class that is only going to be instantiated once in the entire application? Perhaps because there is some kind of performance hit or added overhead incurred by using a class rather than a bunch of functions?
Not at all. This is because your Application class can still do things like inherit and encapsulate. There's no performance hit or added overhead.
I've been planning to have WinMain create the instance of Engine as a local variable. The Engine class will be fairly large, containing classes for rendering, script parsing, file system stuff, etc. Basically, the whole game engine, apart from OS specific code, will be contained in the Engine class in some form (possibly as an instance of another class.) Is creating a local instance of my very large Engine class within the WinMain function a bad idea? Is creating a local instance a bad idea when the class will be created when the program starts, and end when the program ends? Maybe new would be better?
Nope, this is pretty much the way it should go. Why bother heap allocating? You want automatic destruction semantics. The exception is if your class has a very large physical per-instance size, hundreds of KB or more as reported by sizeof() (not counting its dynamic allocations); in that case an RAII-managed heap allocation is smarter, since stack space is quite limited.
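As a sketch of that advice, the Engine class below is only a stub standing in for yours, and its constructor and run() method are invented; the WinMain signature itself is the real Win32 entry point:

```cpp
#include <windows.h>

// Stand-in for the asker's Engine class; the real one would own the renderer,
// script parser, file system module, and so on.
class Engine {
public:
    explicit Engine(HINSTANCE instance) : instance_(instance) {}
    int run() { /* main loop would go here */ return 0; }
private:
    HINSTANCE instance_;
};

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int) {
    // Automatic storage: constructed here, destroyed automatically when WinMain returns.
    Engine engine(hInstance);

    // If sizeof(Engine) were genuinely huge (hundreds of KB of per-instance data),
    // an RAII heap allocation keeps the same destruction semantics:
    //   auto engine = std::make_unique<Engine>(hInstance);

    return engine.run();
}
```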
My plan (i/wa)s to divide the engine up into 'modules', each of which is represented by a class. The Engine class would contain an instance of almost all the other modules, like, as mentioned above, rendering, file system interaction, etc. Is using classes as containers for huge modules a bad idea from some perspective (performance, design, readability?)
This is exactly what object-oriented design is for: encapsulation. Dividing the program up into clearly defined and separated modules, then instantiating each one that you need, is exactly the idea behind object-oriented programming. The size of the concept that a class encapsulates is usually considered irrelevant, as long as it's one single concept. A clear modular design is great.
I would recommend using run-time inheritance (an abstract interface with per-platform implementations) for the platform-dependent modules, and preferably loading them dynamically at run time (using the OS class to perform the dynamic loading). This enforces a solid compile-time abstraction and allows for more efficient compilation.
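A minimal sketch of that kind of run-time abstraction; all class and function names here are made up, and the dynamic-loading part is only hinted at in the comments:

```cpp
#include <memory>

// A platform-neutral interface the Engine codes against...
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void beginFrame() = 0;
    virtual void drawScene() = 0;
    virtual void endFrame() = 0;
};

// ...and one concrete implementation per platform, selected at run time.
class D3DRenderer : public Renderer {
public:
    void beginFrame() override { /* D3D-specific code */ }
    void drawScene() override { /* ... */ }
    void endFrame() override { /* ... */ }
};

// The OS module (WinMain on Windows) decides which implementation to hand the
// Engine; with dynamic loading, this factory would live in a DLL/.so instead.
std::unique_ptr<Renderer> createPlatformRenderer() {
    return std::make_unique<D3DRenderer>();
}
```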

No, doing whatever makes your design cleaner and more maintainable is recommended.
No, direct use of the free store should be avoided whenever possible (i.e., don't use new unless you absolutely have to)
See #1

No, it isn't considered bad style. In fact, I know of no application framework that goes without one, and the renowned Singleton pattern is basically built around the same theme (but different, mind you).
I can't imagine that your class is actually that big. If it is, allocate it on the heap. However, chances are that the actual contents of that class are going to be on the heap anyway (I'm sort of assuming you'll use existing container classes, which will, 99 times out of 100, do dynamic allocation; this goes for STL containers as well).
What difference would it make whether you put them in the data segment, as autos on the stack, or as members of a class? I think the class emphasizes the modularity and might enable you to do things more easily (e.g. unit testing, or reusing the engine for a network-only version, etc.)

Related

C++ Singleton Design pattern alternatives

I hate to beat a dead horse; that said, I've gone over so many conflicting articles over the past few days regarding the use of the singleton pattern.
This question isn't about which is the better choice in general, but rather what makes sense for my use case.
The pet project I'm working on is a game. Some of the code that I'm currently working on, I'm leaning towards using a singleton pattern.
The use cases are as follows:
a globally accessible logger.
an OpenGL rendering manager.
file system access.
network access.
etc.
Now for clarification, more than a couple of the above require shared state between accesses. For instance, the logger is wrapping a logging library and requires a pointer to the output log, the network requires an established open connection, etc.
Now, from what I can tell, the general suggestion is that singletons be avoided, so let's look at how we might do that. A lot of the articles simply say to create the instance at the top and pass it down as a parameter to anywhere it is needed. While I agree that this is technically doable, my question then becomes: how does one manage the potentially massive number of parameters? What comes to mind is wrapping the different instances in a sort of "context" object and passing that, then doing something like context->log("Hello World"). Now sure, that isn't too bad, but what if you have a sort of framework like so:
game_loop(ctx)
->update_entities(ctx)
->on_preupdate(ctx)
->run_something(ctx)
->only use ctx->log() in some freak edge case in this function.
->on_update(ctx)
->whatever(ctx)
->ctx->networksend(stuff)
->update_physics(ctx)
->ctx->networksend(stuff)
//maybe ctx never uses log here.
You get the point... in some areas, some aspects of the "ctx" aren't ever used but you're still stuck passing it literally everywhere in case you may want to debug something down the line using logger, or maybe later in development, you actually want networking or whatever in that section of code.
I feel like the above example would much rather be suited to a globally accessible singleton, but I must admit, I'm coming from a C#/Java/JS background which may color my view. I want to adopt the mindset/best practices of a C++ programmer, yet like I said, I can't seem to find a straight answer. I also noticed that the articles that suggest just passing the "singleton" as a parameter only give very simplistic use cases that anyone would agree a parameter would be the better way to go.
In this game example, you probably want to access logging everywhere, even if you don't plan on using it immediately. File system stuff may be all over, but until you build out the project, it's really hard to say when/where it will be most useful.
So do I:
Stick with using singletons for these use cases regardless of how "evil/bad" people say it is.
Wrap everything in a context object, and pass it literally everywhere. (seems kinda gross IMO, but if that's the "more accepted/better" way of doing it, so be it.)
Something completely else. (Really lost as to what that might be.)
If option 1, from a performance standpoint, should I switch to using namespace functions, and hiding the "private" variables/functions in anonymous namespaces like most people do in C? (I'm guessing there will be a small boost in performance, but then I'll be stuck having to call an "init" and "destroy" method on a few of these rather than being able to just let the constructor/destructor do that for me; still might be worthwhile?)
Now I realize this may be a bit opinion based, but I'm hoping I can still get a relatively good answer when a more complicated/nested code base is in question.
Edit:
After much more deliberation I've decided to use the "Service Locator" pattern instead. To prevent a global/singleton Service Locator, I'm making anything that may use the services inherit from an abstract base class that requires the Service Locator to be passed in at construction.
I haven't implemented everything yet, so I'm still unsure whether I'll run into any problems with this approach, and would still love feedback on whether this is a reasonable alternative to the singleton / global scope dilemma.
I had read that Service Locator is also somewhat of an anti-pattern; that said, many of the examples I found implemented it with statics and/or as a singleton, so perhaps using it as I've described removes the aspects that cause it to be an anti-pattern?
Whenever you think you want to use a Singleton, ask yourself the following question: Why is it that it must be ensured at all cost that there never exists more than one instance of this class at any point in time? Because the whole point of the Singleton pattern is to make sure that there can never be more than one instance of the Singleton. That's what the term "singleton" is all about: there only being one. That's why it's called the Singleton pattern. That's why the pattern calls for the constructor to be private. The point of the Singleton pattern is not and never was to give you a globally-accessible instance of something. The fact that there is a global access point to the sole instance is just a consequence of the Singleton pattern. It is not the objective the Singleton pattern is meant to achieve. If all you want is a globally accessible instance of something, then use a global variable. That's exactly what global variables are for…
The Singleton pattern is probably the one design pattern that's singularly more often misunderstood than not. Is it an intrinsic aspect of the very concept of a network connection that there can only ever be one network connection at a time, and the world would come to an end if that constraint was ever to be violated? If the answer is no, then there is no justification for a network connection to ever be modeled as a Singleton. But don't take my word for it, convince yourself by checking out page 127 of Design Patterns: Elements of Reusable Object-Oriented Software where the Singleton pattern was originally described…😉
Concerning your example: If you're ending up having to pass a massive number of parameters into some place then that first and foremost tells you one thing: there are too many responsibilities in that place. This fact is not changed by the use of Singletons. The use of Singletons simply obfuscates this fact because you're not forced to pass all stuff in through one door in the form of parameters but rather just access whatever you want directly all over the place. But you're still accessing these things. So the dependencies of your piece of code are the same. These dependencies are just not expressed explicitly anymore at some interface level but creep around in the mists. And you never know upfront what stuff a certain piece of code depends on until the moment your build breaks after trying to take away one thing that something else happened to depend upon. Note that this issue is not specific to the Singleton pattern. This is a concern with any kind of global entity in general…
So rather than ask how to best pass a massive number of parameters, you should ask why the hell this one piece of code needs access to that many things. For example, do you really need to explicitly pass the network connection to the game loop? Should the game loop not just know the physics world object, with that physics world object being given, at the moment of its creation, some object that handles the network communication? And that object in turn is told, upon initialization, which network connection it is supposed to use. The log could just be a global variable (or is there really anything about the very idea of a log itself that prohibits there ever being more than one log?). Or maybe it would actually make sense for each thread to have its own log (it could be a thread-local variable), so that you get a log from each thread in the order of the control flow that thread happened to take, rather than the (at best) interleaved mess that would be the output from multiple threads, for which you'd probably want to write some tool so that you'd at least have some hope of making sense of it at all…
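A rough sketch of the wiring described above, with every name invented for illustration: each object is handed its dependencies once, at construction, so nothing needs to be global.

```cpp
#include <memory>
#include <utility>

class NetworkConnection { /* owns the socket */ };

class NetworkSync {
public:
    explicit NetworkSync(std::shared_ptr<NetworkConnection> conn) : conn_(std::move(conn)) {}
    void send() { /* serialize state and transmit over conn_ */ }
private:
    std::shared_ptr<NetworkConnection> conn_;
};

class PhysicsWorld {
public:
    explicit PhysicsWorld(NetworkSync& sync) : sync_(sync) {}
    void step() { /* simulate one tick, then */ sync_.send(); }
private:
    NetworkSync& sync_;
};

// The game loop only knows the physics world; it never sees the network connection.
void gameLoop(PhysicsWorld& world) {
    world.step();
}

int main() {
    auto conn = std::make_shared<NetworkConnection>();
    NetworkSync sync(conn);      // told its connection once, at initialization
    PhysicsWorld world(sync);    // told its network handler once, at creation
    gameLoop(world);
}
```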
Concerning performance, consider that, in a game, you'll typically have some parent objects that each manage collections of small child objects. Performance-critical stuff would generally be happening in places where something has to be done to all child objects in such a collection. The relative overhead of first getting to the parent object itself should generally be negligible…
PS: You might wanna have a look at the Entity Component System pattern…

Adding an unlimited number of classes as members of a GameObject in C++

Thanks for reading!
Problem Itself
So the question is: is there any simple, specific way provided by C++ to add an unlimited number of unspecified classes as members,
or do I have to play with vectors? ;)
They must be:
Able to be added while the program is running.
Able to be of many different classes, some of which don't even exist yet, so I can't use std::optional because I would die with every code change.
Able to be removed while the program is running as well.
I found something that may be similar, C++ Components based class,
but it can't help me because I need flexible code without predefining which classes can be members of GameObject.
This is my first question on Stack Overflow, so please be patient ;) I'll be very glad for any kind of help.
Thanks for all answers!
I'm not sure you're heading the right way. You see, games don't usually have a lot of dynamic stuff. Usually, you might want to load a level into memory, where it stays until it's completed by the player. When it is, you may clean up all the game objects and load the next level. Open world games are more complicated, and you might have to load and unload adjacent areas on the fly when developing one. Also, games require a lot of CPU speed, so when you create lots of dynamic stuff it leads to a fragmented spaghetti structure in your memory and lots of cache misses. You don't want that. Dynamic classes will require trampoline code to connect your functions together, and that leads to another performance loss. Solve your problems one by one. You don't have that many classes that you'd need another CORBA or DCOM yet, and I guess you never will. The Linux kernel is monolithic, for instance.
IMHO C++ isn't any good for making binary interfaces.
If you want to make a plugin API, you might scan some predefined directory for *.dll files, load each DLL file and call some function from it, something like "LoadGameEnginePlugin". A valid plugin will return a structure describing its functionality and pointers to the functions implementing it. After that, you call these functions via the pointers.
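A bare-bones sketch of that kind of plugin loading on Windows; LoadLibraryA, GetProcAddress and FreeLibrary are real Win32 calls, but the plugin ABI, the file path and the entry-point name are assumptions for the example:

```cpp
#include <windows.h>
#include <cstdio>

// Hypothetical plugin ABI: a plain struct of function pointers returned by one
// exported entry point, so no C++ types cross the DLL boundary.
struct GameEnginePluginInfo {
    const char* name;
    void (*onLoad)();
    void (*onUnload)();
};

typedef GameEnginePluginInfo* (*LoadGameEnginePluginFn)();

int main() {
    HMODULE dll = LoadLibraryA("plugins\\example_plugin.dll");
    if (!dll) { std::printf("plugin not found\n"); return 1; }

    auto load = reinterpret_cast<LoadGameEnginePluginFn>(
        GetProcAddress(dll, "LoadGameEnginePlugin"));
    if (load) {
        GameEnginePluginInfo* info = load();
        std::printf("loaded plugin: %s\n", info->name);
        info->onLoad();
        // ... later, when shutting down ...
        info->onUnload();
    }
    FreeLibrary(dll);
    return 0;
}
```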
If you want to extend your GameObject's functionality indefinitely, you may look at the Visitor or MultiMethod design patterns.
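Separately from Visitor/MultiMethod, one common way to get "members that don't exist yet" is a type-erased component map. This is only a sketch of that idea, with all names invented; it is not code from the question or the answer above:

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>
#include <utility>

// Components derive from a common base so the map can own them polymorphically.
struct Component {
    virtual ~Component() = default;
};

class GameObject {
public:
    // Attach a component of any type T (T must derive from Component), even one
    // written long after GameObject itself.
    template <typename T, typename... Args>
    T& add(Args&&... args) {
        auto comp = std::make_unique<T>(std::forward<Args>(args)...);
        T& ref = *comp;
        components_[std::type_index(typeid(T))] = std::move(comp);
        return ref;
    }

    // Returns nullptr if the object doesn't currently have that component.
    template <typename T>
    T* get() {
        auto it = components_.find(std::type_index(typeid(T)));
        return it == components_.end() ? nullptr : static_cast<T*>(it->second.get());
    }

    // Components can also be detached at run time.
    template <typename T>
    void remove() { components_.erase(std::type_index(typeid(T))); }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components_;
};
```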

Avoiding singletons in plugins to be shared among all the instances

I'm building a plugin which will be used in a host. This plugin uses a singleton for services I would like to easily access anywhere. The problem comes when I instantiate the same plugin several times: the same (static) singleton, being specific to the runnable, will be shared among all the instantiated plugins. Is there, generally speaking, a way to reduce the scope of the singleton (C++)?
As each plugin is an instance in itself, I could obviously pass the root class of the plugin to all of its subclasses, but I would like to keep the same global singleton design as far as possible.
Is there a reason for having a singleton? The rationale is when you need to enforce that there is only one, and need to provide a single point of access. If these aren't really requirements, then just create one and pass it around where needed.
I would gradually get rid of the singleton.
Does the singleton do a lot, or not much?
You might need to divide it up into parts.
If it doesn't do much, just pass it where it is needed, and get rid of its singleton-ness.
If it provides lots of services, create interfaces for each service and pass those around where they are needed. Your design will improve and become more testable and easier to comprehend.
At first, the implementations of the interfaces could delegate to the original singleton, but you want to make them self contained eventually.
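A small sketch of that transition step; the interface and class names here are made up. Callers depend on a narrow interface, and the first implementation simply forwards to the existing singleton until it can be made self-contained:

```cpp
#include <string>

// The existing singleton you are migrating away from.
class Logger {
public:
    static Logger& instance() { static Logger log; return log; }
    void write(const std::string& msg) { /* ... */ }
};

// A small service interface callers can depend on instead of the singleton.
class ILogService {
public:
    virtual ~ILogService() = default;
    virtual void log(const std::string& msg) = 0;
};

// First step: an implementation that just delegates to the old singleton.
class SingletonBackedLogService : public ILogService {
public:
    void log(const std::string& msg) override { Logger::instance().write(msg); }
};

// Callers no longer name the singleton, so a self-contained (or mock)
// implementation can be swapped in later without touching them.
void doWork(ILogService& log) {
    log.log("working...");
}
```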
A singleton internally makes use of a static variable.
The scope of this static variable is determined by the source file where it is defined, and it is partitioned per runnable (process). For that reason, while running under the same host (and therefore the same runnable), both plugins (which are the same code) share the same static variable (and by extension the same singleton).
As we assume in this question that the code is the same for each plugin, the only way to split those singletons would then be to run a new executable. This could be done using the Unix fork call, for example, after which each process holds its own memory range.
Obviously (as most of you commented) it is a much better approach to avoid singletons in this case, as forking a process just adds useless complexity.

Should I move to Object Oriented Programming (black boxes) and how?

Right now I have a DirectX engine with a couple of classes - Application, Graphics, Sound - and each of them is around 1k lines, and they each reference each other. I initially tried to limit the use of classes and things like passing the D3D device around, and instead made it global for all classes to use, but I see in everyone else's engine that everything is split up into many classes, and they have stuff like Engine->GetRenderer->Render(MyD3DContext); isn't that terribly inefficient? Why not just make MyD3DContext global and use it directly in the Render function? And one last thing I don't get: how are you supposed to make classes that work independently of each other? Sounds weird.
Firstly, why do you think that's terribly inefficient? Besides being much easier to code and maintain, it is also blazingly fast. OOP isn't a bottleneck; it's a boon for large projects with multiple developers and multiple concerns (such as real-world games).
Let me give you an example, since you mentioned "games":
The game is a Simulation
The simulation contains entities (Objects).
Objects can do things and have attributes; hence objects are an encapsulation of attributes and actions. This is what makes the "Object" in "Object-Oriented Programming". You can think of objects as being created in a fictional factory in your simulator. The blueprint of an object is the "class", and bundling its attributes and actions together is called encapsulation.
Each of these objects is bound to your world, probably through some sort of highly mathematical, Half-Life-2 (Source) level physics engine. You wouldn't want to code the "physics" for each class. Instead you would inherit from a class (or interface) "IPhysics". Then, whenever you change the gravity from 10.0 to 15.0, this value is propagated throughout the "world" scenario. This is inheritance.
Each object in your game, say Half-Life 2's Gordon Freeman, can at the same time act as a "Player" and as "Can-Be-Scripted", if you know what I mean. This is polymorphism: one object acting as different types.
So you see, it is pretty easy (and terribly EFFICIENT) to model and present the fictional game in OOP.
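A tiny sketch of those three ideas in code; all names are illustrative, not from any real engine:

```cpp
// Two capabilities expressed as interfaces (inheritance gives shared behaviour
// a single home instead of being re-coded in every class).
class IPhysics {
public:
    virtual ~IPhysics() = default;
    virtual void applyGravity(float g) = 0;
};

class IScriptable {
public:
    virtual ~IScriptable() = default;
    virtual void runScript(const char* script) = 0;
};

// One object, several "faces": a Player is simultaneously a physics body and a
// scriptable entity (polymorphism), while its data stays encapsulated inside it.
class Player : public IPhysics, public IScriptable {
public:
    void applyGravity(float g) override { velocityY_ -= g; }
    void runScript(const char* /*script*/) override { /* hand off to the script engine */ }
private:
    float velocityY_ = 0.0f;  // encapsulated state, not a global
};
```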
It isn't terribly inefficient. And you definitely need an introduction to OOP of some sort, maybe even something online.
Yes.
As the project becomes larger having one global anything will cause a vast list of problems. It's also not particularly inefficient to traverse a few pointers. Worry about efficiency in the right areas, areas that you have proven by running tests are inefficient, and try and maintain code clarity and separation at all times.
If you're worried about inefficiency, why not knock together a test app that has exactly that kind of structure and time how long it takes to do all that dereferencing? You'll find it insignificant compared to, say, building up the list of polys in sight.
The only way you'll see the benefit of having well encapsulated non-global objects will be as your project grows and you change things around.
There are a couple of big tenets of OO design, in particular code reuse/modularity and scope/isolation. Globals are generally frowned upon these days because they just don't scale well to large development efforts and always end up causing problems, so OO attempts to limit the scope of any given call to the minimum required to perform the function.
As for modularity/reuse: the larger a sub-module grows, generally the more specific it becomes, and as such the less likely it is to fit all the purposes it could if it were broken apart into more modular chunks. As a result you spend less time rewriting the same code for a slightly tweaked purpose, which also reduces the chance of breaking adjacent functionality while implementing your new one. That makes it more efficient to implement, though there may or may not be some slight cost at runtime; likely not, though. Remember, it doesn't take a lick more binary to run Render() whether it's defined in a root module or composed several layers deep in a composed object. It's still just a function pointer.
These are just general concepts, so take what you like.
Hope that helps.

How should I design a mechanism in C++ to manage relatively generic entities within a simulation?

I would like to start my question by stating that this is a C++ design question, more than anything, limiting the scope of the discussion to what is achievable in that language.
Let us pretend that I am working on a vehicle simulator that is intended to model modern highway systems. As part of this simulation, entities will be interacting with each other to avoid accidents, stop at stop lights and perhaps eventually even model traffic enforcement with radar guns and subsequent exciting high speed chases.
Being a spatial simulation written in C++, it seems like it would be ideal to start with some kind of Vehicle hierarchy, with cars and trucks deriving from some common base class. However, a common problem I have run in to is that such a hierarchy is usually very rigidly defined, and introducing unexpected changes - modeling a boat for instance - tends to introduce unexpected complexity that tends to grow over time into something quite unwieldy.
This simple approach seems to suffer from a combinatorial explosion of classes. Imagine if I created a MoveOnWater interface and a MoveOnGround interface, and used them to define Car and Boat. Then let's say I add RadarEquipment. Now I have to do something like add the classes RadarBoat and RadarCar. Add more capabilities using this approach and the whole thing rapidly becomes quite unreasonable.
One approach I have been investigating to address this inflexibility issue is to do away with the inheritance hierarchy altogether. Instead of trying to come up with a type-safe way to define everything that could ever be in this simulation, I defined one class - I will call it 'Entity' - and the capabilities that make up an entity - can it drive, can it fly, can it use radar - are all created as interfaces and added to a kind of capability list that the Entity class contains. At runtime, the proper capabilities are created and attached to the entity, and functions that want to use these interfaces must first query the entity object and check for their existence. This approach seems to be the most obvious alternative, and is working well for the time being. I, however, worry about the maintenance issues that this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might be a problem when someone else starts trying to use and modify the code.
As one potential alternative, I pondered using the template system to achieve type safety while keeping the same kind of flexibility. I imagine I could create entities that inherit whatever combination of interfaces I wanted. Using these objects would entail creating a template class or function that used any combination of the interfaces. One example might be a simple move-on-road routine using just the MoveOnRoad interface, whereas more complex logic, like a "high speed freeway chase", could use methods from both the MoveOnRoad and Radar interfaces.
Of course, making this approach usable mandates the use of Boost Concept Check just to make debugging feasible. Also, this approach has the unfortunate side effect of making "optional" interfaces all but impossible. It is not simple to write a function that has logic to do one thing if the entity has a RadarEquipment interface, and something else if it doesn't. In this regard, type safety is somewhat of a curse. I think some trickery with boost::any might be able to pull it off, but I haven't figured out how to make that work, and it seems like way too much complexity for what I am trying to achieve.
Thus, we are left with the dynamic "list of capabilities", where the goal of having decision logic that drives behavior based on what the entity is capable of becomes trivial to achieve.
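For concreteness, here is a minimal self-contained sketch of that capability-list idea; the class names are placeholders, not the question's actual code:

```cpp
#include <memory>
#include <vector>

// Capabilities share one base so a single list on the Entity can own them all.
struct Capability {
    virtual ~Capability() = default;
};

struct MoveOnRoad : Capability {
    void followRoad() { /* steering logic */ }
};

struct RadarEquipment : Capability {
    void track() { /* radar logic */ }
};

class Entity {
public:
    void add(std::unique_ptr<Capability> c) { caps_.push_back(std::move(c)); }

    // Query the list for a specific capability; nullptr means "not capable".
    template <typename T>
    T* get() {
        for (auto& c : caps_)
            if (auto* p = dynamic_cast<T*>(c.get())) return p;
        return nullptr;
    }

private:
    std::vector<std::unique_ptr<Capability>> caps_;
};

// Decision logic driven by what the entity can actually do.
void pursue(Entity& police) {
    if (auto* radar = police.get<RadarEquipment>())
        radar->track();           // optional capability: use it only if present
    if (auto* wheels = police.get<MoveOnRoad>())
        wheels->followRoad();     // required for this particular behaviour
}
```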
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning. I am eager to learn of a design pattern or idiom that is commonly used to address this issue, and the sort of tradeoffs I will have to make.
I also want to mention that I have been contemplating a perhaps even more random design. Even though my gut tells me that this should be designed as a high-performance C++ simulation, a part of me wants to do away with the Entity class and object-oriented foo altogether and use a relational model to define all of these entity states. My initial thought is to treat the entities as an in-memory database and use procedural query logic to read and write the various state information, with the necessary behavior logic that drives these queries written in C++. I am somewhat concerned about performance, although it would not surprise me if that was a non-issue. I am perhaps more concerned about the maintenance issues and additional complexity this would introduce, as opposed to the relatively simple list-of-capabilities approach.
"Encapsulate what varies" and "Prefer object composition to inheritance" are the two OOAD principles at work here.
Check out the Bridge Design pattern. I visualize Vehicle abstraction as one thing that varies, and the other aspect that varies is the "Medium". Boat/Bus/Car are all Vehicle abstractions, while Water/Road/Rail are all Mediums.
I believe that in such a mechanism, there may be no need to maintain any capability list. For example, if a Bus cannot move on Water, such behavior can be modelled by a NOP behavior in the Vehicle abstraction.
Use the Bridge pattern when:
you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
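A minimal Bridge sketch along those lines, with illustrative names: the Vehicle abstraction delegates movement to a Medium implementation, and a Medium whose traverse() does nothing would model the "Bus cannot move on water" NOP case mentioned above.

```cpp
#include <iostream>
#include <memory>
#include <utility>

// Implementation hierarchy: the "Medium" the vehicle moves through.
class Medium {
public:
    virtual ~Medium() = default;
    virtual void traverse() = 0;
};

class Road  : public Medium { public: void traverse() override { std::cout << "rolling on road\n"; } };
class Water : public Medium { public: void traverse() override { std::cout << "sailing on water\n"; } };

// Abstraction hierarchy: Vehicle holds a Medium and delegates the "how" to it.
class Vehicle {
public:
    explicit Vehicle(std::unique_ptr<Medium> m) : medium_(std::move(m)) {}
    virtual ~Vehicle() = default;
    void move() { medium_->traverse(); }
protected:
    std::unique_ptr<Medium> medium_;
};

class Car  : public Vehicle { public: Car()  : Vehicle(std::make_unique<Road>())  {} };
class Boat : public Vehicle { public: Boat() : Vehicle(std::make_unique<Water>()) {} };
```

Either side can now be extended (a new Vehicle, or a new Medium such as Rail) without touching the other.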
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning.
You may be erring in using C++ to define a system for which you as yet have no need/no requirements:
This approach seems to be the most obvious alternative, and is working well for the time being. I, however, worry about the maintenance issues that this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might be a problem when someone else starts trying to use and modify the code.
Maybe you should be considering principles like YAGNI as opposed to BDUF.
Some of my personal favourites are from Systemantics:
"15. A complex system that works is invariably found to have evolved from a simple system that works"
"16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."
You're also worrying about performance, when you have no defined performance requirements and no problems with performance:
I am somewhat concerned about performance, although it would not surprise me if that was a non-issue.
Also, I hope you know about double-dispatch, which might be useful for implementing anything-to-anything interactions (it's described in some detail in More Effective C++ by Scott Meyers).
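For reference, a bare-bones double-dispatch sketch for anything-to-anything interactions; the entity types are placeholders, and Meyers' treatment covers the trade-offs in far more depth:

```cpp
#include <iostream>

class Car;
class Boat;

class Entity {
public:
    virtual ~Entity() = default;
    virtual void collideWith(Entity& other) = 0;   // first dispatch: on *this
    virtual void collideWithCar(Car&)   { std::cout << "generic vs car\n"; }
    virtual void collideWithBoat(Boat&) { std::cout << "generic vs boat\n"; }
};

class Car : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWithCar(*this); }  // second dispatch: on other
    void collideWithCar(Car&)   override { std::cout << "car vs car\n"; }
    void collideWithBoat(Boat&) override { std::cout << "car vs boat\n"; }
};

class Boat : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWithBoat(*this); }
    void collideWithCar(Car&)   override { std::cout << "boat vs car\n"; }
    void collideWithBoat(Boat&) override { std::cout << "boat vs boat\n"; }
};

int main() {
    Car car;
    Boat boat;
    Entity& a = car;
    Entity& b = boat;
    a.collideWith(b);   // resolves to "boat vs car" via two virtual calls
}
```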