Right now I have a DirectX engine with a couple of classes - Application, Graphics, Sound - each around 1k lines, and they all reference each other. I initially tried to limit my use of classes and things like passing the D3D device around, and instead made it global for all classes to use. But I see that in everyone else's engine everything is split up into many classes, and they have stuff like Engine->GetRenderer->Render(MyD3DContext); isn't that terribly inefficient? Why not just make MyD3DContext global and use it directly in the Render function? And one last thing I don't get: how are you supposed to make classes that work independent of each other? Sounds weird.
Firstly, why do you think that's terribly inefficient? Besides being much easier to code and maintain, it is also blazingly fast. OOP isn't a bottleneck; it's a boon for large projects with multiple developers and multiple concerns (such as real-world games).
Let me give you an example, since you mentioned "games":
The game is a Simulation
The simulation contains entities (Objects).
Objects can do things and have attributes; an object is thus a bundle of attributes and actions, and this bundling is called encapsulation. It's what puts the "Object" in "Object Oriented Programming". You can think of objects as being created in a fictional factory in your simulator; the blueprint for an object is the "class".
Each of these objects is bound to your world, probably through some sort of highly mathematical, Half-Life 2 (Source)-level physics engine. You wouldn't want to code the "physics" separately for each class. Instead, you would inherit from a class (or interface) IPhysics. Then, whenever you change the gravity from 10.0 to 15.0, this value is propagated throughout the "world" scenario. This is inheritance.
Each object in your game - say, Gordon Freeman in Half-Life 2 - can at the same time act as a "Player" and as "can-be-scripted", if you know what I mean. This is polymorphism: one object acting as different types.
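To make that concrete, here is a minimal sketch; IPhysics, IPlayer and IScriptable are invented names for illustration, and a real engine would be far more involved:

#include <iostream>

// Shared physics base: every entity inherits it, so changing gravity
// in one place propagates through the whole "world" (inheritance).
class IPhysics {
public:
    virtual ~IPhysics() = default;
    void applyGravity(float gravity) { velocityY -= gravity; }
protected:
    float velocityY = 0.0f;
};

// One object can act as several types at once (polymorphism).
class IPlayer {
public:
    virtual ~IPlayer() = default;
    virtual void takeInput() = 0;
};

class IScriptable {
public:
    virtual ~IScriptable() = default;
    virtual void runScript() = 0;
};

class GordonFreeman : public IPhysics, public IPlayer, public IScriptable {
public:
    void takeInput() override { std::cout << "player input\n"; }
    void runScript() override { std::cout << "scripted event\n"; }
};

int main() {
    GordonFreeman gordon;
    IPlayer*     asPlayer   = &gordon;  // viewed as a player
    IScriptable* asScripted = &gordon;  // viewed as a scriptable entity
    asPlayer->takeInput();
    asScripted->runScript();
    gordon.applyGravity(15.0f);         // gravity changed once, world-wide
}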
So you see, it is pretty easy (and terribly EFFICIENT) to model and present a fictional game in OOP.
It isn't terribly inefficient. And you definitely need an introduction to OOP of some sort, maybe even something online.
Yes.
As the project becomes larger, having one global anything will cause a vast list of problems. It's also not particularly inefficient to traverse a few pointers. Worry about efficiency in the right areas, areas that you have proven by running tests are inefficient, and try to maintain code clarity and separation at all times.
If you're worried about inefficiency, why not knock together a test app that has exactly that kind of structure and time how long all that dereferencing takes? You'll find it insignificant compared to, say, building up the list of polys in sight.
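A quick sketch of that kind of timing test; the Engine/Renderer nesting here is made up purely to measure the indirection, and in an optimized build it typically compiles away entirely, which is rather the point:

#include <chrono>
#include <iostream>

struct Renderer {
    int render(int ctx) { return ctx + 1; }
};
struct Engine {
    Renderer renderer;
    Renderer* getRenderer() { return &renderer; }
};

int main() {
    Engine engine;
    int ctx = 0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 100000000; ++i)
        ctx = engine.getRenderer()->render(ctx);  // Engine->GetRenderer->Render(...)
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
    std::cout << "result " << ctx << ", elapsed " << ms << " ms\n";
}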
The only way you'll see the benefit of having well encapsulated non-global objects will be as your project grows and you change things around.
There are a couple of big tenets of OO design, in particular code reuse/modularity and scope/isolation. Globals are generally frowned upon these days because they just don't scale well to large development efforts and always end up causing problems, so OO attempts to limit the scope of any given call to the minimum required to perform the function.
As for modularity/reuse: the larger a sub-module grows, generally the more specific it becomes, and as such the less likely it is to fit all the purposes it could if it were broken apart into more modular chunks. As a result you spend less time rewriting the same code for a slightly tweaked purpose, which also reduces the chance of breaking adjacent functionality while implementing the new code. That makes it more efficient to implement, though there may or may not be some slight cost at runtime; likely not, though. Remember, it doesn't take a lick more binary to run Render() whether it's defined in a root module or several layers deep in a composed object: it's still just a function call.
These are just general concepts, so take what you like.
Hope that helps.
Related
According to widespread advice, I should keep my larger software projects as modular as possible. There are of course various ways to achieve this, but I think there is no way around using a fair number of interface classes.
Take, as an example, the development of a 2D game engine in C++.
Now, one could of course achieve a very modular system by using interfaces for practically everything: from the renderer (an abstract Renderer class with Dummy, OpenGL, DirectX, SDL, etc. implementations), over audio, to input management.
Then there is the option of making extensive use of messaging systems, for example. But these again come at a considerable price in performance.
How am I supposed to build a working engine like this?
I don't want to lower the limits of my engine in terms of performance (maximum viable number of entities, particles, and so on) just to have a perfectly modular system working in the background. This matters because I'd also like to target mobile platforms, where CPU power and memory are limited.
Having an interface for the renderer class, for example, would involve virtual function calls for time-critical drawing operations. This alone would slow the engine down by a fair amount.
And here come my main questions:
Where am I supposed to draw the line between consistency and performance with modular programming?
What ways are there to keep projects modular, while retaining good performance for time-critical operations?
There are many ways to keep your code modular without using a single "interface" class.
You already mentioned message passing.
Then there are plain old callbacks: if an object needs to be able to trigger some event elsewhere in your system, give it a callback function it can invoke to trigger that event. It doesn't need to know anything about the rest of your architecture then.
And using templates and static polymorphism, you can achieve most of the same goals as you would with interface classes, but with zero performance overhead. (For example, template your game engine so that a Direct3D- or OpenGL-based renderer can be picked at compile time.)
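A sketch of that compile-time selection; the renderer classes here are hypothetical:

#include <iostream>

struct D3DRenderer {
    void drawSprite(int id) { std::cout << "D3D sprite " << id << "\n"; }
};
struct GLRenderer {
    void drawSprite(int id) { std::cout << "GL sprite " << id << "\n"; }
};

// No virtual calls: the renderer type is fixed when the engine is
// instantiated, so drawSprite can be inlined into the hot loop.
template <typename Renderer>
class Engine {
public:
    void frame() {
        for (int i = 0; i < 3; ++i)
            renderer.drawSprite(i);
    }
private:
    Renderer renderer;
};

int main() {
    Engine<GLRenderer> engine;  // or Engine<D3DRenderer>, chosen at compile time
    engine.frame();
}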
Moreover, modularity is tricky, and it's not something you get just by hiding everything behind an interface. For it to be modular, whatever that interface implements should be replaceable. You have to have a strategy for replacing one implementation with another. And it has to be possible to create multiple different implementations.
And if you just blindly hide everything behind interfaces, your code will not be modular at all. Replacing any implementation of anything will be such a huge pain, because of the countless layers of interfaces you have to dig through to do so. You'll have to go through hundreds of places in the code and make sure that the right implementation is picked and instantiated and passed through. And your interfaces will be so general that they can't express the functionality you need, or so specific that no other implementation can be made.
If you want a cheesy analogy, bricks are modular. A brick can be easily taken out and replaced with another. But you can also grind them up into tiny particles of baked clay. Is that more modular? You've certainly created more, and smaller "modules". But the only effect is to make it much much harder to replace any given component. I can no longer just pick up one tangible brick, throw it away, and replace it with something else that's brick-sized. Instead, I have to go through thousands of small particles, finding an appropriate replacement for each. And because the replaced component is no longer surrounded by a couple of bricks in the larger structure, but with tens or hundreds of thousands of particles, a ridiculous number of other "modules" are now affected because I swapped out their neighbors that they interfaced with.
Grinding everything up into finer and smaller bits doesn't make anything more modular. It just removes all the structure from your application. The way to write modular software is to actually think and determine which components are so logically isolated and distinct that they can be replaced without affecting the rest of the application. And then write the application, and the component, to maintain this isolation.
Prototype first, then let the interface boundaries emerge.
Preemptive interface design can make coding a drag
Trying to engineer abstraction barriers before you code can be tricky, as you run two risks. One is that you'll inevitably draw some abstraction barriers in the wrong places, and as you start writing working code (as opposed to interface code), you find that your interfaces serve your problem poorly, despite sounding good when described in natural language. The other problem is that it makes coding more of a drag, since you have to juggle two concerns in your head instead of one: writing working code for a problem that you don't completely understand yet, and adhering to an interface that may turn out to be bad.
Interface boundaries emerge from working code.
I am of course not saying that interfaces are bad, but that they're hard to design correctly without having written working code first. Once you have a working program, it becomes obvious which parts should be different instantiations of the same virtual function, which functions need to share resources (and therefore should be put in the same class), etc.
Prototype, then draw only the interface boundaries you need.
I therefore agree with #jdv-Jan de Vaan's suggestion that the first thing to do is to blast out the shortest readable program that works. (This is different from the shortest program that works. There is of course some minimal amount of interface design even at the very beginning.) My addition is to say interface design comes after that. That is, once you have the simple-as-possible code, you can refactor it into interfaces to make it even shorter and more readable. If you want interfaces for portability, I wouldn't start that until you actually have code for two or more platforms. Then the interface boundaries will emerge in a natural (and testable) manner, as it becomes clear which functions can be used as-is for both, and which need to have multiple implementations hidden behind interfaces.
I don't agree with this advice (or maybe with your interpretation of it). "As modular as possible": where should this end? Are you going to write a virtual interface for 3D vectors, so you can switch implementations? I don't think so, but it would be "as modular as possible".
If you are selling a game engine, modularization can help to keep your build times lower, to reduce the number of header files needed by your prospective clients, and to allow switching implementations for a particular problem domain (such as DirectX vs. OpenGL). It can also help to make your code maintainable by partitioning it. But even then you're not required to decouple the modules with interfaces.
My advice is to always write the shortest readable program that works. If you can write 20 lines of code that solve some problem locally, or scatter the functionality over five different classes, the latter would be more modular, but usually the result is less reliable, less readable and less maintainable.
Keep in mind that virtual function calls are intended primarily to deal with a collection of (pointers/references to) objects that aren't necessarily all of the same actual type.
You certainly should not even contemplate something like squares being drawn via OpenGL, but circles via DirectX, or anything on that order. It's perfectly reasonable to handle modularity at this level via templates or even file selection when you build your code, but virtual functions for this situation make no real sense.
That probably brings up the relevant piece of advice for getting performance out of C++: use templates for flexibility and modularity while still retaining maximum performance. One of the main reasons templates are so popular is that they give you modularity without sacrificing performance. The CRTP (curiously recurring template pattern) is particularly relevant to code that may initially seem to need virtual functions.
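A minimal CRTP sketch of the idea; the renderer names are invented:

#include <iostream>

// CRTP: the base class knows the concrete type at compile time, so the
// "overridden" call below is resolved statically, with no vtable lookup.
template <typename Derived>
class RendererBase {
public:
    void drawScene() {
        for (int i = 0; i < 3; ++i)
            static_cast<Derived*>(this)->drawPrimitive(i);
    }
};

class GLRenderer : public RendererBase<GLRenderer> {
public:
    void drawPrimitive(int id) { std::cout << "GL primitive " << id << "\n"; }
};

int main() {
    GLRenderer renderer;
    renderer.drawScene();
}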
As far as where to draw the line between consistency and performance, there really is no one answer -- it depends heavily on how much performance you need. For your situation (3D game engine for mobile devices) performance is clearly a lot more critical than for many (most) other situations.
I'm coming from a scientific background and I'm having trouble thinking in an object-oriented way; I'm always thinking of functions, not objects.
For example, I've got a Dataset containing multiple 2D arrays, and I'd like to extract regions of interest (RoIs) out of these arrays. My first design was something like a RoIFinder class to which I pass a reference to the Dataset object. The RoIFinder object did its magic and returned the RoIs.
But this gives me a bad feeling, since it looks more like a function than an object; it's more like a black box. I have no idea how this is done properly.
How would you do something like that?
OO is not a silver bullet. Do your work in whichever way seems correct from the different points of view: problem decomposition, efficiency, code simplicity, testing, etc.
Do not make the code look OO if you don't need to. OO is about simplifying your life when the problem is complex, not about solving simple problems in a complicated way.
Specifically for your problem, I see nothing bad in your approach. It possibly doesn't use some advanced techniques, so what?
To me it sounds like in the specific situation you describe, this could be a fine OO design.
OO in brief is about bundling together
state
behaviour
(identity).
Whenever you have data which represents the state of (a part of) the system, and you have behaviour tied to (typically manipulating) that data, you have a candidate for an object. Optionally, these objects may also have an identity, but this may not be always necessary.
If you potentially have multiple different criteria for picking regions of interest out of the Dataset, you may implement these as distinct *Finder classes implementing a common base interface. There you have an OO class hierarchy! From then on, the finders can even be used as interchangeable Strategies in your code.
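A sketch of that hierarchy; Dataset and RegionOfInterest are empty stand-ins here, and the concrete criteria are invented:

#include <vector>

struct Dataset {};           // stands in for your 2D arrays
struct RegionOfInterest {};  // stands in for an extracted region

// Common interface for all extraction criteria.
class RoIFinder {
public:
    virtual ~RoIFinder() = default;
    virtual std::vector<RegionOfInterest> find(const Dataset& data) const = 0;
};

class ThresholdRoIFinder : public RoIFinder {
public:
    std::vector<RegionOfInterest> find(const Dataset&) const override {
        return {};  // threshold-based magic would go here
    }
};

class EdgeRoIFinder : public RoIFinder {
public:
    std::vector<RegionOfInterest> find(const Dataset&) const override {
        return {};  // edge-based magic would go here
    }
};

// Client code accepts any finder as an interchangeable strategy.
std::vector<RegionOfInterest> analyse(const Dataset& data, const RoIFinder& finder) {
    return finder.find(data);
}

int main() {
    Dataset data;
    ThresholdRoIFinder finder;
    auto regions = analyse(data, finder);
    (void)regions;
}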
The alternative would be to put the finder functionality in the Dataset itself. This might be OK if you are absolutely sure you won't ever have other criteria for extracting regions. Even then, your Dataset would have two distinct responsibilities, which is usually not a good idea. It is better to have each class be responsible for one thing, and do it well.
We don't know what you are supposed to do with the data in the arrays - there may be some possibility to find more abstractions there and build some OO types and objects on these too.
Note though, that these all are just possibilities. Implement them only if they are actually useful (for solving problems, simplifying your code, or - last but not least - helping you gain practical experience with new concepts).
What you've done is probably better than the obvious OO approach of adding a find_roi() method to the Dataset class itself. Why? Because it sounds like you've created the RoIFinder functionality based only on the public API of Dataset, and keeping Dataset simpler is also good.
The STL (well, these days it's just part of the Standard library) has examples of this in the way algorithms such as sort are applicable to multiple containers, rather than each container having a sort member function (although in the case of list, optimisation opportunities lead it to implement its own version). The STL also has std::string, which controversially embeds a lot of functionality that could have been factored out; in my opinion it is well designed, prioritising convenient and elegant usage, which is important in such a ubiquitous class that is so frequently and heavily operated on. So, pick what suits the situation.
Anyway, there's no reason to put the RoIFinder into a class if it could equally be just a function, but if you've found some state (i.e. data members) that is convenient to preserve, or it helps usability in some other way, then that's a good enough reason to stick with your object.
Your 2D array can be implemented as a matrix class; that's one object.
A region of interest is another class.
Getting a region of interest out of a matrix is a method.
"Iterators" into your matrices are classes.
I want to know whether there is an effect on program efficiency from adopting an object-oriented approach to a problem, as compared to the structured programming approach, in any programming language but especially in C++.
Maybe. Maybe not.
You can write efficient object-oriented code. You can write inefficient structured code.
It depends on the application, how well the code is written, and how heavily the code is optimized. In general, you should write code so that it has a good, clean, modular architecture and is well designed, then if you have problems with performance optimize the hot spots that are causing performance issues.
Use object oriented programming where it makes sense to use it and use structured programming where it makes sense to use it. You don't have to choose between one and the other: you can use both.
I remember back in the early 1990s, when C++ was young, there were studies done about this. If I remember correctly, the guys who took (well-written) C++ programs and recoded them in C got around a 15% increase in speed. The guys who took C programs, recoded them in C++, and changed the imperative style of C to an OO style (but kept the same algorithms) got the same or better performance. The apparent contradiction was explained by the observation that the C programs, in being translated to an object-oriented style, became better organized. Things that you did sloppily in C, because it was too much code and trouble to do better, could more easily be done properly in C++.
Thinking back on this, I wonder about the conclusion somewhat. Writing a program a second time will always result in a better program, so it didn't have to be the imperative-to-OO change of style that made the difference. Today's computer architectures are designed with hardware support for common operations done by OO programs, and compilers have gotten better at using the instructions, so I think it is likely that whatever overhead a virtual function call had in 1992 is far smaller today.
There doesn't have to be, if you are very careful to avoid it. If you just take the most straightforward approach, using dynamic allocation, virtual functions, and (especially) passing objects by value, then yes there will be inefficiency.
It doesn't have to be. The algorithm is what matters. I agree encapsulation can slow you down a little bit, but compilers are there to optimize.
You would say no if this were a question on a computer science exam.
However, in a real development environment this tends to be true if the OOP paradigm is used correctly. The reason is that in a real development process we generally need to maintain our code base, and that is when the OOP paradigm can help us. One strong point of OOP over structured programming like C is that in OOP it is easier to make the code maintainable. When the code is more maintainable, that means fewer bugs, less time spent fixing bugs, and less time needed to implement new features. The bottom line is that we then have more time to focus on the efficiency of the application.
The problem is not technical, it is psychological. It is in what it encourages you to do by making it easy.
To make a mundane analogy, it is like a credit card. It is much more efficient than writing checks or using cash. If that is so, why do people get in so much trouble with credit cards? Because they are so easy to use that they abuse them. It takes great discipline not to over-use a good thing.
The way OO gets abused is by
Creating too many "layers of abstraction"
Creating too much redundant data structure
Encouraging the use of notification-style code, attempting to maintain consistency within redundant data structures.
It is better to minimize data structure, and if it must be redundant, be able to tolerate temporary inconsistency.
ADDED:
As an illustration of the kind of thing that OO encourages, here's what I see sometimes in performance tuning: Somebody sets SomeProperty = true;. That sounds innocent enough, right? Well that can ripple to objects that contain that object, often through polymorphism that's hard to trace. That can mean that some list or dictionary somewhere needs to have things added to it or removed from it. That can mean that some tree or list control needs controls added or removed or shuffled. That can mean windows are being created or destroyed. It can also mean some things need to be changed in a database, which might not be local so there's some I/O or mutex locking to be done.
It can really get crazy. But who cares? It's abstract.
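To make the pattern concrete, here is a caricature of that kind of notification-style code; every name is invented:

#include <iostream>
#include <vector>

// One innocent-looking property write fans out into list maintenance,
// UI churn, and maybe even I/O.
class Item {
public:
    explicit Item(std::vector<Item*>& visibleItems) : visibleItems(visibleItems) {}

    void setVisible(bool v) {      // "SomeProperty = true;"
        visible = v;
        syncVisibleList();         // redundant data structure #1
        refreshTreeControl();      // redundant data structure #2
        persistToDatabase();       // possibly remote I/O and locking
    }

private:
    void syncVisibleList()    { if (visible) visibleItems.push_back(this); }
    void refreshTreeControl() { std::cout << "rebuilding tree control\n"; }
    void persistToDatabase()  { std::cout << "updating database row\n"; }

    bool visible = false;
    std::vector<Item*>& visibleItems;
};

int main() {
    std::vector<Item*> visibleItems;
    Item item(visibleItems);
    item.setVisible(true);  // looks innocent; watch the ripple
}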
There could be: the OO approach tends to be closer to a decoupled approach where different modules don't go poking around inside each other. They are restricted to public interfaces, and there is always a potential cost in that. For example, calling a getter instead of just directly examining a variable; or calling a virtual function by default because the type of an object isn't sufficiently obvious for a direct call.
That said, there are several factors that diminish this as a useful observation.
A well written structured program should have the same modularity (i.e. hiding implementations), and therefore incur the same costs of indirection. The cost of calling a function pointer in C is probably going to be very similar to the cost of calling a virtual function in C++.
Modern JITs, and even the use of inline methods in C++, can remove the indirection cost.
The costs themselves are probably relatively small (typically just a few extra simple operations per function call). This will be insignificant in a program where the real work is done in tight loops.
Finally, a more modular style frees the programmer to tackle more complicated, but hopefully less complex algorithms without the peril of low level bugs.
I would like to start my question by stating that this is a C++ design question, more than anything, limiting the scope of the discussion to what is accomplishable in that language.
Let us pretend that I am working on a vehicle simulator that is intended to model modern highway systems. As part of this simulation, entities will be interacting with each other to avoid accidents, stop at stop lights and perhaps eventually even model traffic enforcement with radar guns and subsequent exciting high speed chases.
Being a spatial simulation written in C++, it seems like it would be ideal to start with some kind of Vehicle hierarchy, with cars and trucks deriving from some common base class. However, a common problem I have run into is that such a hierarchy is usually very rigidly defined, and introducing unexpected changes - modeling a boat, for instance - tends to introduce unexpected complexity that grows over time into something quite unwieldy.
This simple approach seems to suffer from a combinatoric explosion of classes. Imagine if I created a MoveOnWater interface and a MoveOnGround interface, and used them to define Car and Boat. Then let's say I add RadarEquipment. Now I have to do something like add the classes RadarBoat and RadarCar. Add more capabilities using this approach and the whole thing rapidly becomes quite unreasonable.
One approach I have been investigating to address this inflexibility issue is to do away with the inheritance hierarchy altogether. Instead of trying to come up with a type-safe way to define everything that could ever be in this simulation, I defined one class - I will call it 'Entity' - and the capabilities that make up an entity - can it drive, can it fly, can it use radar - are all created as interfaces and added to a kind of capability list that the Entity class contains. At runtime, the proper capabilities are created and attached to the entity, and functions that want to use these interfaces must first query the entity object and check for their existence. This approach seems to be the most obvious alternative, and is working well for the time being. I, however, worry about the maintenance issues that this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might be a problem when someone else starts trying to use and modify the code.
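For reference, the capability-list shape described above might look roughly like this; it is a hedged sketch, and all the names are invented:

#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Base for all capabilities: can it drive, can it fly, can it use radar.
struct Capability { virtual ~Capability() = default; };
struct MoveOnRoad : Capability { void drive() { /* drive along the road */ } };
struct Radar      : Capability { void scan()  { /* sweep for speeders */ } };

class Entity {
public:
    template <typename C>
    void add(std::unique_ptr<C> capability) {
        capabilities[typeid(C)] = std::move(capability);
    }

    // Callers must query for a capability and check for its existence.
    template <typename C>
    C* get() {
        auto it = capabilities.find(typeid(C));
        return it == capabilities.end() ? nullptr
                                        : static_cast<C*>(it->second.get());
    }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Capability>> capabilities;
};

int main() {
    Entity car;
    car.add(std::make_unique<MoveOnRoad>());
    if (auto* radar = car.get<Radar>()) radar->scan();      // skipped: no radar
    if (auto* road = car.get<MoveOnRoad>()) road->drive();  // runs
}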
As one potential alternative, I pondered using the template system to achieve type safety while keeping the same kind of flexibility. I imagine I could create entities that inherit whatever combination of interfaces I want. Using these objects would entail creating a template class or function that used any combination of the interfaces. One example might be simple movement on a road, using just the MoveOnRoad interface, whereas more complex logic, like a "high-speed freeway chase", could use methods from both the MoveOnRoad and Radar interfaces.
Of course, making this approach usable mandates the use of Boost Concept Check just to make debugging feasible. Also, this approach has the unfortunate side effect of making "optional" interfaces all but impossible. It is not simple to write a function that has logic to do one thing if the entity has a RadarEquipment interface, and do something else if it doesn't. In this regard, type safety is somewhat of a curse. I think some trickery with boost::any may be able to pull it off, but I haven't figured out how to make that work, and it seems like way too much complexity for what I am trying to achieve.
Thus we are left with the dynamic "list of capabilities", with which the goal of having decision logic that drives behavior based on what the entity is capable of becomes trivial to achieve.
Now, with that background in mind, I am open to any design gurus telling me where I erred in my reasoning. I am eager to learn of a design pattern or idiom that is commonly used to address this issue, and the sort of tradeoffs I will have to make.
I also want to mention that I have been contemplating perhaps an even more random design. Even though my gut tells me that this should be designed as a high-performance C++ simulation, a part of me wants to do away with the Entity class and the object-oriented foo altogether and use a relational model to define all of these entity states. My initial thought is to treat entities as an in-memory database and use procedural query logic to read and write the various state information, with the necessary behavior logic that drives these queries written in C++. I am somewhat concerned about performance, although it would not surprise me if that was a non-issue. I am perhaps more concerned about the maintenance issues and additional complexity this would introduce, as opposed to the relatively simple list-of-capabilities approach.
"Encapsulate what varies" and "prefer object composition to inheritance" are the two OOAD principles at work here.
Check out the Bridge design pattern. I visualize the Vehicle abstraction as one thing that varies, while the other aspect that varies is the "Medium": Boat/Bus/Car are all Vehicle abstractions, while Water/Road/Rail are all Mediums.
I believe that with such a mechanism, there may be no need to maintain any capability list. For example, if a Bus cannot move on Water, such behavior can be modelled by a NOP in the Vehicle abstraction.
Use the Bridge pattern when:
you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
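A sketch of how that Bridge might look for this example; the Vehicle/Medium split follows the answer, while everything else is a hedged illustration:

#include <iostream>
#include <memory>

// Implementor side of the bridge: the Medium varies on its own.
class Medium {
public:
    virtual ~Medium() = default;
    virtual void traverse(const char* who) = 0;
};
struct Water : Medium {
    void traverse(const char* who) override { std::cout << who << " moves on water\n"; }
};
struct Road : Medium {
    void traverse(const char* who) override { std::cout << who << " moves on road\n"; }
};

// Abstraction side: a Vehicle holds a bridge to its Medium.
class Vehicle {
public:
    explicit Vehicle(std::unique_ptr<Medium> m) : medium(std::move(m)) {}
    virtual ~Vehicle() = default;
    void move() { medium->traverse(name()); }  // a Bus given Water could NOP here
protected:
    virtual const char* name() = 0;
private:
    std::unique_ptr<Medium> medium;
};
struct Boat : Vehicle {
    using Vehicle::Vehicle;
    const char* name() override { return "Boat"; }
};
struct Bus : Vehicle {
    using Vehicle::Vehicle;
    const char* name() override { return "Bus"; }
};

int main() {
    Boat boat(std::make_unique<Water>());
    Bus  bus(std::make_unique<Road>());
    boat.move();
    bus.move();
}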
Now, with that background in mind, I am open to any design gurus telling me where I erred in my reasoning.
You may be erring in using C++ to define a system for which you as yet have no need/no requirements:
This approach seems to be the most obvious alternative, and is working well for the time being. I, however, worry about the maintenance issues that this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might be a problem when someone else starts trying to use and modify the code.
Maybe you should be considering principles like YAGNI as opposed to BDUF.
Some of my personal favourites are from Systemantics:
"15. A complex system that works is invariably found to have evolved from a simple system that works"
"16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."
You're also worrying about performance, when you have no defined performance requirements and no problems with performance:
I am somewhat concerned about performance, although it would not surprise me if that was a non-issue.
Also, I hope you know about double-dispatch, which might be useful for implementing anything-to-anything interactions (it's described in some detail in More Effective C++ by Scott Meyers).
I know my gut reaction to global variables is "bad!", but in the two game development courses I've taken at my college globals were used extensively, and now in the DirectX 9 game programming tutorial I am using (www.directxtutorial.com) I'm being told globals are okay in game programming ...? The site also recommends using only structs if you can when doing game programming, to help keep things simple.
I'm really confused on this issue, and all the research I've been trying to do has been very confusing. I realize there are issues with global variables (threading issues, they make code harder to maintain, their state is hard to track, etc.), but there is also a cost associated with not using globals: I'd have to pass a loooot of information around very often, which can be confusing and, I imagine, costly in time, although I guess pointers would speed the process up (this is my first time writing a game in C++). Anyway, I realize there is probably no "right" or "wrong" answer here since both ways work, but I want my code to be as proper as I can, so any input would be good, thank you very much!
The trouble with games and globals is that games (nowadays) are threaded at engine level. Game developers using an engine use the engine's abstractions rather than directly programming concurrency (IIRC). In many high-level languages such as C++, threads sharing state is complex: when many concurrent processes share a common resource, they have to make sure they don't tread on each other's toes.
To get around this, you use concurrency control such as mutexes and various locks. These in effect make asynchronous critical sections of code access shared state in a synchronous manner for writing. The topic of concurrency control is too big to explain fully here.
Suffice to say, if threads run with global variables, it makes debugging very hard, as concurrency bugs are a nightmare (think, "which thread wrote that? Who holds that lock?").
There are exceptions in game programming APIs such as OpenGL and DX. If your shared data/globals are pointers to DX or OpenGL graphics contexts, then this typically maps down to GPU operations, which don't suffer as much from the same trouble.
Just be careful. Keeping objects representing 'player' or 'zombie' or whatever, and sharing them between threads, can be tricky. Spawn 'player' threads and 'zombie group' threads instead, and have a robust concurrency abstraction between them based on message passing, rather than accessing those objects' state across the thread/critical-section boundary.
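A hedged sketch of that kind of message-passing boundary; the Message type and the thread roles are invented:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Message { std::string what; };

// The only shared state is the queue itself, locked in exactly one place.
class MessageQueue {
public:
    void post(Message m) {
        { std::lock_guard<std::mutex> lock(mtx); q.push(std::move(m)); }
        cv.notify_one();
    }
    Message wait() {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [this] { return !q.empty(); });
        Message m = std::move(q.front());
        q.pop();
        return m;
    }
private:
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<Message> q;
};

int main() {
    MessageQueue toZombies;
    std::thread zombies([&] {
        Message m = toZombies.wait();  // react inside the zombie thread's own state
        (void)m;
    });
    toZombies.post({"player moved"});
    zombies.join();
}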
Saying all that, I do agree with the "Say no to globals" point made below.
For more on the complexities of threads and shared state see:
1 POSIX Threads API - I know it is POSIX, but it provides a good grounding that translates to other APIs
2 Wikipedia's excellent range of articles on concurrency control mechanisms
3 The Dining Philosophers problem (and many others)
4 ThreadMentor tutorials and articles on threading
5 Another Intel article, but more of a marketing thing.
6 An ACM article on building multi-threaded game engines
Having worked on AAA game titles, I can tell you that globals should be eradicated immediately before they spread like a cancer. I've seen them corrupt an I/O subsystem so completely that it had to be thrown out entirely and rewritten.
Say no to globals. Always.
In this respect, there's no difference between games and other programs. While arguably OK in small examples given in elementary courses, global variables are strongly discouraged in real programs.
So if you want to write correct, readable and maintainable code, stay away from global variables as much as possible.
All the answers so far deal with the globals/threads issue, so I will add my 2 cents to the struct/class discussion (understood as all-public attributes versus private attributes plus methods). Preferring structs over classes on the grounds that they are simpler is in the same line of thought as preferring assembler over C++ on the grounds that it is a simpler language.
The fact that you have to think about how your entities are going to be used, and provide methods for it, makes the concrete entity a little more complex, but it greatly simplifies the rest of the code and maintainability. The whole point of encapsulation is that it simplifies the program by providing clear ways in which your data can be modified while maintaining your objects' invariants. You control the entry points and what can happen there. Having all attributes public implies that any part of the code can contain a small innocent error (forgot to check condition X) and break your invariants completely (health below 0, but no 'death' processing being triggered).
The other common argument is performance: if I just need to update a datum, then having to call a method will impact my performance. Not really. If methods are simple and you provide them in the header as inlines (inside the class body, or outside it with the inline keyword), the compiler will be able to copy those instructions to each use site. You get the guarantee that the compiler will not leave out any check by mistake, with no impact on performance.
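A toy example of that; the invariant is that 'death' processing always runs when health hits zero, and all the names are illustrative:

// Defined in the class body, so it is implicitly inline: it costs the
// same as touching the field directly, while guaranteeing the invariant
// check can never be forgotten at a call site.
class Player {
public:
    void takeDamage(int amount) {
        health -= amount;
        if (health <= 0) {  // the check no caller can skip
            health = 0;
            die();
        }
    }
    int currentHealth() const { return health; }

private:
    void die() { /* trigger 'death' processing */ }
    int health = 100;
};

int main() {
    Player p;
    p.takeDamage(120);  // health clamps to 0 and death processing runs
    return p.currentHealth();
}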
Having read a bit more of what you posted, though:
I'm being told globals are okay in game programming ...? The site also recommends using only structs if you can when doing game programming to help keep things simple.
Game code is no different from other code, really. Gratuitous use of globals is bad regardless. And as for 'only use structs', that is just a load of crap. Approach game development on the same principles as any other software; you may find places where you need to bend them, but they should be the exception, typically when dealing with low-level hardware issues.
I would say that the advice about globals and 'keeping things simple' is probably a mixture of being easier to learn and old-fashioned thinking about performance. When I was being taught game programming, I remember being told that C++ wasn't advised for games as it would be too slow, but I've since worked on multiple games using every facet of C++, which proves that isn't true.
I would add to everyone's answers here that globals are to be avoided where possible, but I wouldn't be afraid to use whatever you need from C++ to make your code understandable and easy to use. If you come up against a performance problem, profile that specific issue, and I'll bet that most of the time you won't need to remove the use of a C++ feature but just think about your problem better. There may still be some platforms around that require pure C, but I don't really have experience of them; even the Game Boy Advance seemed to handle C++ quite nicely.
The meta-issue here is state. How much state does any given function depend on, and how much does it change? Then consider how much state is implicit versus explicit, and cross that with inspect versus change.
If you have a function/method/whatever that is called DoStuff(), you have no idea from the outside what it depends on, what it needs, and what's going to happen to the shared state. If this is a class member, you also have no idea how that object's state is going to mutate. This is bad.
Contrast this to something like cosf(2): this function is understood not to change any global state, and any state that it requires (lookup tables, for example) is hidden from view and has no effect on your program at large. This is a function that computes a value based on what you give it, and it returns that value. It changes no state.
Class member functions then have the opportunity to stir up some problems. There's a huge difference between
myObject.hitpoints -= 4;
myObject.UpdateHealth();
and
myObject.TakeDamage(4);
In the first example, an external operation is changing some state that one of the object's member functions implicitly depends upon. The fact that these two lines can be separated by many other lines of code begins to make it non-obvious what's going to happen in the UpdateHealth call, even though, apart from the subtraction, it is the same as the TakeDamage call. Encapsulating the state changes (as in the second example) implies that the specifics of the state changes aren't important to the outside world, and hopefully they're not. In the first example, the state changes are explicitly important to the outside world, and this is really no different from setting some globals and calling a function that uses those globals. E.g. hopefully you'd never see
extern float value_to_sqrt;
value_to_sqrt = 2.0f;
sqrt(); // reads the global value_to_sqrt
extern float sqrt_value; // has the results of the sqrt.
And yet, how many people do exactly this sort of thing in other contexts? (Considering especially that class instance state is "global" in regards to that particular instance.)
So: prefer giving explicit instruction to your function calls, and prefer that they return results directly, rather than having to explicitly set state before calling a function and then check other state after it returns.
The more state dependencies a bit of code has, the harder it'll be to make it multithread-safe, but that has already been covered above. The point I want to make is that the problem isn't so much globals as the visibility of the collection of state required for a bit of code to operate (and, subsequently, how much other code also depends on that state).
Most games aren't multi-threaded, although newer titles are going down that route, and so they've managed to get away with it so far.
Saying that globals are okay in games is like not bothering to fix the brakes on your car because you only drive at 10mph!
It's bad practice whichever way you look at it.
You only have to look at the number of bugs in games to see examples of this.
If class A needs to access data D, instead of making D global, you'd better put a reference to D into A.
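In code, with A and D standing in for whatever your class and data actually are:

struct D { int value = 0; };  // the data you might be tempted to make global

class A {
public:
    explicit A(D& data) : d(data) {}  // the dependency is handed in, not global
    void work() { ++d.value; }
private:
    D& d;
};

int main() {
    D d;
    A a(d);   // the caller decides exactly which D this A sees
    a.work();
}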
Globals are NOT intrinsically bad. In C, for instance, they are part of the language's normal use... and since C++ builds on C, they still have a place.
On an aesthetic level, it's better to avoid them where you can sensibly make them part of a class, but if all you do is wrap a bunch of globals into a singleton, you've made things worse, because at least with globals it's obvious what the point is.
Be careful, but for some things it makes less sense to force OO concepts onto what is actually a global value.
Two specific issues that I've encountered in the past:
First: If you're attempting to separate e.g. render phase (const access to most game state) from logic phase (non-const access to most game state) for whatever reason (decoupling render rate from logic rate, synchronizing game state across a network, recording and playback of gameplay at a fixed point in the frame, etc), globals make it very hard to enforce that.
Problems tend to creep in and become hard to debug and eradicate. This also has implications for threaded renderers separate from game logic, or the like (the other answers cover this topic thoroughly).
Second: The presence of many globals tends to bloat the literal pool, which the compiler typically places after each function.
If you get to your state through either a single "struct GlobalTable" which holds globals or a collection of methods on an object or the like, your literal pool tends to be a lot smaller, decreasing the size of the .text section in your executable.
This is mostly a concern for instruction set architectures that can't embed load targets directly into instructions (see e.g. fixed-width ARM or Thumb version 1 instruction encoding on ARM processors). Even on modern processors I'd wager you'll get slightly smaller codegen.
It also hurts doubly when your instruction and data caches are separate (again, as on some ARM processors); you'll tend to get two cachelines loaded where you might only need one with a unified cache. Since the literal pool may count as data, and won't always start on a cacheline boundary, the same cacheline might have to be loaded both into the i-cache and d-cache.
This may count as a "micro-optimization", but it's a transform that's relatively easily applied to an entire codebase (e.g. extern struct GlobalTable { /* ... */ } the; and then replace all g_foo with the.foo)... And we've seen code size decrease by between 5 and 10 percent in some cases, with a corresponding boost to performance from keeping the caches cleaner.
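Sketched out, with g_foo/g_bar standing in for whatever globals a codebase has:

// Before: int g_foo; float g_bar;  (a separate literal-pool entry each)
// After: one base address plus small constant offsets.
struct GlobalTable {
    int   foo = 0;
    float bar = 0.0f;
};
GlobalTable the;

int useIt() {
    return the.foo;  // was: return g_foo;
}

int main() { return useIt(); }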
I'm going to take a risk and say: it depends, for me, on the scope/scale of your project. Because if you are trying to program an epic game with a boatload of code, then globals could cost you way more time than they save, and easily in the most painful way, in hindsight.
But if you are trying to code, say, something as simple as Super Mario or Metroid or Contra, or even simpler, like Pac-Man, then consider that these games were originally coded in 6502 assembly and used globals for almost everything. Just about all the game data and state was stored in data segments, and that didn't stop the devs from shipping a very competent and popular product, in spite of working with absolutely inferior tools and engineering standards that would probably horrify people today.
So if you are just writing this kind of small and simple game which has a very limited scope and isn't designed to grow and expand far beyond its original design, isn't designed to be maintained for years and years, with a few thousand lines of simple C++ code, then I don't see the big deal of using a global here or a singleton there. Someone obsessed with trying to engineer Super Mario with the soundest engineering techniques with SOLID and a DI framework could end up taking far, far longer to ship than even the devs who wrote it in 6502 asm.
And I'm getting old, and there's something to it when I look at these old simple games and how they were coded: it almost seems like the devs were doing something right, in spite of the hard-coded magic numbers and globals all over the place, while I spend my career fumbling around and trying to figure out the best way to engineer things. That said, this is probably a very unpopular opinion, and not one I would have held either a decade or two ago, but there's something to it. I don't look at the 6502 asm of Metroid and think, "these devs under-engineered their product and their lives would have been so much easier if they had done this or that." It seems like they did things just about right.
But again, this is for small-scale stuff, maybe in the indie category of games by today's standards, and among the smaller of those indie games, and far from doing anything ground-breaking in terms of how much data it can process or using cutting-edge hardware techniques. If in doubt, I'd definitely suggest erring on the side of avoiding globals. It's also a little bit trickier in C++ than in, say, C, since you can have objects with constructors and destructors, and initialization and destruction order isn't well-defined or easily predictable for global objects. There I'd lean even further towards avoiding globals, since they can trip you up in whole new ways when you aren't explicitly initializing and destroying them yourself in a predictable order. And naturally, if you want to multithread a lot, beyond a critical loop here and there, your ability to reason about the thread safety of any particular piece of code will be severely diminished if you cannot minimize the scope/visibility of your game state.