How to make data available to all objects of a class? - c++

This is probably very basic but somehow I cannot figure it out.
Say I have a class A which embeds 42 Things, plus some common data:
class A {
    Thing things[42];
    int common_data[1024];
};
I would like each thing to have access to the common data, but I don't want to copy the data into each Thing object, nor pay the price of a pointer to it in each thing. In other words, I would like Thing to look like this:
class Thing {
    int ident;
    int f() {
        return common_data[ident];
    }
};
Of course here common_data is unbound. What is the canonical way to make this work?
FWIW I am working with a subset of C++ with no dynamic allocation (no "new", no inheritance, basically it's C with the nice syntax to call methods and declare objects); I am ideally looking for a solution that fits in this subset.

You can solve your issue by making the common_data member of class A static. A static member is shared by all instances of class A, and it will be accessible from outside the class if you make it public.
class A
{
private:
    Thing things[42];
public:
    static int common_data[1024];
};

// In exactly one .cpp file you also need the definition:
int A::common_data[1024];
It can be accessed by doing...
A::common_data[index];
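Pulling the pieces together, here is a minimal self-contained sketch (my own illustration, based on the question's names; defining f out of line after A avoids the circular dependency between the two classes):

#include <cstdio>

struct Thing {
    int ident;
    int f();   // defined after A so that A::common_data is visible
};

struct A {
    Thing things[42];
    static int common_data[1024];
};

int A::common_data[1024];   // out-of-class definition of the static member

int Thing::f() {
    return A::common_data[ident];   // shared data, no per-Thing pointer or copy
}

int main() {
    A a;
    a.things[0].ident = 3;
    A::common_data[3] = 7;
    std::printf("%d\n", a.things[0].f());   // prints 7
}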

I am not sure if I understand the question correctly, but maybe this helps:
struct Thing {
    void doSomethingWithCommonData(int* common_data) {
        /* now you have access to common_data */
    }
};

struct A {
    Thing things[42];
    int common_data[1024];

    void foo(int index) {
        things[index].doSomethingWithCommonData(common_data);
    }
};

Your reasons for avoiding pointers/references are based on irrational fears. "Copying" a pointer 42 times is nothing (read that word carefully) for the machine. Moreover, this is definitely not the bottleneck of the application.
So the idiomatic way is to simply use dependency injection, which is indeed a slightly more costly action for you (if passing an array can be considered costly), but allows for a much more decoupled design.
This is therefore the solution I recommend:
#include <array>
#include <memory>

struct Thing {
    using data = std::shared_ptr<std::array<int, 1024>>;

    data common_data;

    Thing(data arg)
        : common_data(arg)
    {}

    // ...
};
If the system is constrained, then you should benchmark your program. I can tell you already with almost absolute certainty that the bottleneck won't be the copying of those 42 pointers.
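As a usage sketch (my own illustration, not part of the answer above; the accessor f is made up), the enclosing code would create the shared array once and hand the same shared_ptr to every Thing:

#include <array>
#include <memory>
#include <vector>

struct Thing {
    using data = std::shared_ptr<std::array<int, 1024>>;
    data common_data;
    explicit Thing(data arg) : common_data(arg) {}
    int f(int ident) const { return (*common_data)[ident]; }   // hypothetical accessor
};

int main() {
    auto shared = std::make_shared<std::array<int, 1024>>();
    (*shared)[3] = 42;

    std::vector<Thing> things;
    for (int i = 0; i < 42; ++i)
        things.emplace_back(shared);   // every Thing refers to the same array

    return things[0].f(3) == 42 ? 0 : 1;
}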

Related

dumb data object holds all common values c++, is this correct

So I am new to c++ and I'm writing for a scientific application.
Data needs to be read in from a few input text files.
At the moment I am storing these input variables in an object (let's call it inputObj).
Is it right that I have to pass this "inputObj" around to all my objects now? It seems like it has just become a complicated version of global variables, so I think I may be missing the point of OOP.
I have created a g++ compilable small example of my program:
#include <iostream>

class InputObj{
    // this is the class that gets all the data
public:
    void getInputs() {
        a = 1;
        b = 2;
    }
    int a;
    int b;
};

class ExtraSolver{
    //some of the work may be done in here
public:
    void doSomething(InputObj* io) {
        eA = io->a;
        eB = io->b;
        int something2 = eA + eB;
        std::cout << something2 << std::endl;
    }
private:
    int eA;
    int eB;
};

class MainSolver{
    // I have most things happening from here
public:
    void start() {
        //get inputs;
        inputObj_ = new InputObj();
        inputObj_->getInputs();
        myA = inputObj_->a;
        myB = inputObj_->b;
        //do some solve:
        int something = myA * myB;
        //do some extrasolve
        extraSolver_ = new ExtraSolver();
        extraSolver_->doSomething(inputObj_);
    }
private:
    InputObj* inputObj_;
    ExtraSolver* extraSolver_;
    int myA;
    int myB;
};

int main() {
    MainSolver mainSolver;
    mainSolver.start();
}
Summary of question: A lot of my objects need to use the same variables. Is my implementation the correct way of achieving this?
Don't use classes when functions will do fine.
Don't use dynamic allocation using new when automatic storage will work fine.
Here's how you could write it:
#include <iostream>

struct inputs {
    int a;
    int b;
};

inputs getInputs() {
    return { 1, 2 };
}

void doSomething(inputs i) {
    int something2 = i.a + i.b;
    std::cout << something2 << std::endl;
}

int main() {
    //get inputs;
    inputs my_inputs = getInputs();
    //do some solve:
    int something = my_inputs.a * my_inputs.b;
    //do some extrasolve
    doSomething(my_inputs);
}
I'd recommend reading a good book: see The Definitive C++ Book Guide and List.
My answer is based on your comment:
"Yea I still haven't got the feel for passing objects around to each other, when it is essentially global variables im looking for"
The feel for passing objects around will come with practice, but I think it's important to remember some of the reasons why we have OO.
The goal (in its simplified form) is to modularise your code so as to increase code reuse: you can create several InputObj instances without redefining or reassigning them each time.
Another goal is data hiding through encapsulation. Sometimes we don't want a variable to be changed by another function, and we don't want to expose those variables globally, in order to protect their internal state.
For instance, if a and b in your InputObj were global variables declared and initialized at the beginning of your code, could you be certain that their values never get changed unless you want them to? For a simple program, yes, but as your program scales, so do the chances of a variable being inadvertently changed (and hence of random unexpected behaviour).
Also, if you want the initial state of a and b to be preserved, you will have to do that yourself (more temporary global variables?).
You get more control over the flow of your code by adding levels of abstraction with classes, inheritance, method overriding, polymorphism, abstract classes and interfaces, and a bunch of other concepts that make it easier to build complex architectures.
Now, while many consider global variables to be evil, I think they are good and useful when used properly... otherwise they are the best way to shoot yourself in the foot.
I hope this helped a bit to clear out that uneasy feeling about passing objects around :)
Whether your approach is good or not depends strongly on the situation.
If you need high-speed calculation, you may not want to provide encapsulation methods for your InputObj class, though they are recommended, because they would reduce the speed of the calculation.
However, there are two rules you can follow to reduce bugs:
1) Carefully use the 'const' keyword every time you really don't want your object to be modified:
void doSomething(InputObj * io) -> void doSomething(const InputObj * io)
2) Move every action related to the initial state of the object (in your case, as far as I can guess, your InputObj is loaded from a file and is thus useless without that file loading) into the constructor:
Instead of:
InputObj() { }
void getInputs(std::string filename) {
    // reading a, b from file
}
use:
InputObj(std::string filename) {
    // reading a, b from file
}
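A combined sketch of both rules (my own illustration, not from the answer; the file format and names are assumed):

#include <fstream>
#include <iostream>
#include <string>

class InputObj {
public:
    // Rule 2: the object is fully initialized from the file in its constructor.
    explicit InputObj(const std::string &filename) {
        std::ifstream in(filename);
        in >> a >> b;   // assumed file format: two integers
    }
    int a = 0;
    int b = 0;
};

// Rule 1: the function promises not to modify the inputs it receives.
void doSomething(const InputObj &io) {
    std::cout << io.a + io.b << std::endl;
}

int main() {
    InputObj inputs("inputs.txt");  // hypothetical input file
    doSomething(inputs);
}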
You are right that this way you have implemented global variables, but I would call your approach structured, and not complicated, as you encapsulate your global values in an object. This will make your program more maintainable, as global values are not spread all over the place.
You can make this even nicer by implementing the global object as a singleton (http://en.wikipedia.org/wiki/Singleton_pattern) thus ensuring there is only one global object.
Further, access the object through a static member or function. That way you don't need to pass it around as a variable, but any part of your program can easily access it.
You should be aware that a global object like this will not work well in, for example, a multithreaded application, but I understand that this is not the case here.
You should also be aware that there is a lot of discussion about whether you should use a singleton for this kind of thing or not. Search SO or the net for "C++ singleton vs. global static object".
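As a rough illustration of that suggestion (my own sketch, not from the answer; the accessor name instance() is made up), access through a static function could look like this:

#include <iostream>

struct InputObj {
    int a = 0;
    int b = 0;

    // One shared instance, reachable from anywhere through a static accessor.
    static InputObj& instance() {
        static InputObj obj;   // constructed on first use
        return obj;
    }
};

void doSomething() {
    // Any part of the program can reach the shared inputs without passing them around.
    const InputObj& in = InputObj::instance();
    std::cout << in.a + in.b << std::endl;
}

int main() {
    InputObj::instance().a = 1;
    InputObj::instance().b = 2;
    doSomething();
}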

What is the proper way to handle a large number of interface implementations?

For one of my current projects I have an interface defined for which I have a large number of implementations. You could think of it as a plugin interface with many plugins.
These "plugins" each handle a different message type in a network protocol.
So when I get a new message, I loop through a list of my plugins, see who can handle it, and call into them via the interface.
The issue I am struggling with is how to allocate, initialize, and "load" all the implementations into my array/vector/whatever.
Currently I am declaring all of the "plugins" in main(), then calling "plugin_manager.add_plugin(&plugin);" for each one. This seems less than ideal.
So, the actual questions:
1. Is there a standardized approach to this sort of thing?
2. Is there any way to define an array (global?) pre-loaded with the plugins?
3. Am I going about this the wrong way entirely? Are there other (better?) architecture options for this sort of problem?
Thanks.
EDIT:
This compiles (please excuse the ugly code)... but it kind of seems like a hack.
On the other hand, it solves the issue of allocation, and cleans up main()... Is this a valid solution?
class intf
{
public:
    virtual void t() = 0;
};

class test : public intf
{
public:
    test(){}
    static test* inst(){ if(!_inst) _inst = new test; return _inst; }
    static test* _inst;
    void t(){}
};

test* test::_inst = NULL;

intf* ints[] =
{
    test::inst(),
    NULL
};
Store some form of smart pointer in a container. Dynamically allocate the plugins and register them in the container so that they can be used later.
One possible approach, if you have some form of message id that the plugin can decode, is to use a map from that id to the plugin that handles it. This approach gives you fast lookup of the plugin for a given input message.
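A rough sketch of that id-to-plugin map (my own illustration; the Message, Plugin, and handle names are made up, not from the question):

#include <map>
#include <memory>

struct Message { int id; /* payload... */ };

struct Plugin {
    virtual ~Plugin() = default;
    virtual void handle(const Message& msg) = 0;
};

struct PingPlugin : Plugin {
    void handle(const Message&) override { /* handle message type 1 */ }
};

int main() {
    // Owning container: message id -> plugin that handles it.
    std::map<int, std::unique_ptr<Plugin>> plugins;
    plugins[1] = std::make_unique<PingPlugin>();

    Message msg{1};
    auto it = plugins.find(msg.id);   // fast lookup instead of a linear scan
    if (it != plugins.end())
        it->second->handle(msg);
}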
One way of writing less code would be to use templates for the instantiation function. Then you only need to write one and put it in the interface, instead of having one function per implementation class.
class intf
{
public:
    virtual void t() = 0;

    template<class T>
    static T* inst()
    {
        static T instance;
        return &instance;
    }
};

class test : public intf { ... };

intf* ints[] =
{
    intf::inst<test>(),
    NULL
};
The above code also works around two bugs in your code: one is a memory leak (in your old inst() function you allocate but never free); the other is that the constructor sets the static member to NULL.
Another tip is to read more about the "singleton" pattern, which is what you have. It can be useful in some situations, but it is generally advised against.

Most effective method of executing functions in an unknown order

Let's say I have a large pool, between 50 and 200, of individual functions whose job it is to operate on a single object and modify it. The pool of functions is selectively put into a single array and arranged in an arbitrary order.
The functions themselves take no arguments outside of the values present within the object it is modifying, and in this way the object's behavior is determined only by which functions are executed and in what order.
A way I have tentatively used so far is this, which might explain better what my goal is:
#include <list>

class Object;   // forward declaration, since Behavior refers to Object below

class Behavior{
public:
    virtual void act(Object * obj) = 0;
};

class SpecificBehavior : public Behavior{
    // many classes like this exist
public:
    void act(Object * obj){ /* do something specific with obj */ }
};

class Object{
public:
    std::list<Behavior*> behavior;
    void behave(){
        std::list<Behavior*>::iterator iter = behavior.begin();
        while(iter != behavior.end()){
            (*iter)->act(this);
            ++iter;
        }
    }
};
My question is: what is the most efficient way in C++ of organizing such a pool of functions, in terms of performance and maintainability? This is for some AI research I am doing, and this methodology is what most closely matches what I am trying to achieve.
Edit: The array itself can be changed at any time by any other part of the code not listed here, but it's guaranteed never to change during the call to behave(). The container it is stored in needs to be able to change and expand to any size.
If the behaviour functions have no state and only take one Object argument, then I'd go with a container of function objects:
#include <functional>
#include <vector>

class Object;   // forward declaration so the typedefs below can refer to it

typedef std::function<void(Object &)> BehaveFun;
typedef std::vector<BehaveFun> BehaviourCollection;

class Object {
public:
    BehaviourCollection b;
    void behave() {
        for (auto it = b.cbegin(); it != b.cend(); ++it) (*it)(*this);
    }
};
Now you just need to load all your functions into the collection.
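For example (my own sketch, with made-up behaviour functions, assuming the Object and BehaveFun definitions above), free functions and lambdas with the right signature can both be loaded:

// Hypothetical behaviours; any callable with signature void(Object&) works.
void grow(Object &obj)   { /* modify obj */ }
void shrink(Object &obj) { /* modify obj */ }

void load(Object &obj) {
    obj.b.push_back(grow);                            // free function
    obj.b.push_back(shrink);
    obj.b.push_back([](Object &o) { /* lambda */ });  // lambdas convert to std::function too
}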
If the main thing you will be doing with this collection is iterating over it, you'll probably want to use a vector, as dereferencing and incrementing its iterators equates to simple pointer arithmetic.
If you want to use all your cores, and your operations do not share any state, you might want to have a look at a library like Intel's TBB (see the parallel_for example)
I'd keep it exactly as you have it.
Performance should be OK (there may be an extra indirection due to the vtable lookup, but that shouldn't matter).
My reasons for keeping it as is are:
You might be able to lift common sub-behaviour into an intermediate class between Behavior and your implementation classes. This is not as easy using function pointers.
struct AlsoWaveArmsBase : public Behavior
{
    void act( Object * obj )
    {
        start_waving_arms(obj);  // Concrete call
        do_other_actions(obj);   // Abstract call
        end_waving_arms(obj);    // Concrete call
    }
    void start_waving_arms(Object* obj);
    void end_waving_arms(Object* obj);
    virtual void do_other_actions(Object * obj) = 0;
};

struct WaveAndWalk : public AlsoWaveArmsBase
{
    void do_other_actions(Object * obj) { walk(obj); }
};

struct WaveAndDance : public AlsoWaveArmsBase
{
    void do_other_actions(Object * obj) { dance(obj); }
};
You might want to use state in your behaviour
struct Count : public Behavior
{
    Count() : i(0) {}
    int i;
    void act(Object * obj)
    {
        count(obj, i);
        ++i;
    }
};
You might want to add helper functions, e.g. a can_act check like this:
void Object::behave(){
    std::list<Behavior*>::iterator iter = behavior.begin();
    while(iter != behavior.end()){
        if( (*iter)->can_act(this) ){
            (*iter)->act(this);
        }
        ++iter;
    }
}
IMO, these flexibilities outweigh the benefits of moving to a pure function approach.
For maintainability, your current approach (virtual functions) is the best. You might get a tiny little gain from using free function pointers, but I doubt it's measurable, and even if so, I don't think it is worth the trouble. The current OO approach is fast enough and maintainable. The little gain I'm talking about comes from the fact that you are dereferencing a pointer to an object and then (behind the scenes) dereferencing a pointer to a function (which is what happens when you call a virtual function).
I wouldn't use std::function, because it's not very performant (though that might differ between implementations). See this and this. Function pointers are as fast as it gets when you need this kind of dynamism at runtime.
If you need to improve the performance, I suggest to look into improving the algorithm, not this implementation.

Should I use global variables?

I have been reading about global variables and how bad they are, but I am stuck in one place because of that. I am going to be very specific about whether I should use global variables in this scenario.
I am working on a game engine. And my engine consists of lots of managers. Managers do certain tasks - they store resources, load them, update them etc.
I have made all my managers singletons because so many classes and functions need access to them. I was thinking of removing the singletons, but I don't know how I can get access to these managers without them.
Here is an example of what I am trying to describe (I'm bad at English, sorry):
Singleton.h
template<class T> class Singleton {
private:
    Singleton( const Singleton& );
    const Singleton& operator=( const Singleton& );
protected:
    Singleton() { instance = static_cast<T*>(this); }
    virtual ~Singleton() {}
protected:
    static T * instance;
public:
    static T &Instance() {
        return *instance;
    }
};
ScriptManager.h
class ScriptManager : public Singleton<ScriptManager> {
public:
    virtual void runLine(const String &line) = 0;
    virtual void runFile(const String &file) = 0;
};
PythonScriptManager.cpp
class PythonScriptManager : public ScriptManager {
public:
    PythonScriptManager() { Py_Initialize(); }
    ~PythonScriptManager() { Py_Finalize(); }
    void runFile(const String &file) {
        FILE * fp = fopen(file.c_str(), "r");
        PyRun_SimpleFile(fp, file.c_str());
        fclose(fp);
        fp = 0;
    }
    void runLine(const String &line) {
        PyRun_SimpleString(line.c_str());
    }
};
Entity ScriptComponent
#include <CoreIncludes.h>
#include <ScriptManager.h>
#include <ScriptComponent.h>

void update() {
    ScriptManager::Instance().runFile("test_script.script");
    // I know it's not a good idea to open the stream on every frame, but that's not the main concern right now.
}
Application
int main(int argc, const char * argv[]) {
    Application * app = new Application(argc, argv);
    ScriptManager * script_manager = new PythonScriptManager;
    //all other managers
    return app->run();
}
As you can see, I am not even including the concrete implementation's header in my ScriptComponent.cpp file, which saves me some compilation time. How can I get that kind of result without globals, while keeping it as easy to integrate as this? The singleton is not thread safe, but adding thread safety won't take long.
I hope I could explain the problem.
Thanks in advance,
Gasim Gasimzada
I won't say you should never use globals, but:
Never use singletons. Here is why. They're horrible, and they're much worse than plain old globals.
"Manager" classes are bad. What do they "manage"? How do they "manage" it? "Manager" classes need to be broken up into something that you can describe. Once you've figured out what it means to "manage" an object, you can define one or more object with better-defined responsibilities.
When you use globals, don't make them mutable. A write-only global can be acceptable (consider a logger: you write to it, but its state never affects the application), and read-only globals can be OK too (consider various constants that are never changed, but which you frequently need to read). Where globals become harmful is when they have mutable state: when you both read from, and write to, them.
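For instance (my own sketch, not from the answer; the names are made up), a write-only logger and a read-only constant as globals:

#include <fstream>

// Read-only global: never mutated after initialization.
const double kGravity = 9.81;

// Write-only global: the program writes to it, but never reads state back from it
// to make decisions.
std::ofstream g_log("app.log");

void simulateStep() {
    g_log << "stepping with g = " << kGravity << '\n';
}

int main() {
    simulateStep();
}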
And finally, the very very simple alternative:
Just pass dependencies as arguments. If an object needs something in order to function, pass it that "something" in its constructor. If a function needs something in order to operate, pass it that "something" as an argument.
This might sound like a lot of work, but it isn't. When your design is cluttered with globals and singletons, you get a huge spaghetti architecture where everything depends on everything else. Because the dependencies are not explicitly visible, you get sloppy, and rather than thinking about the best way to connect two components, you just make them communicate through one or more globals. Once you have to think about which dependencies to explicitly pass around, most of them turn out to be unnecessary, and your design becomes much cleaner, more readable and maintainable, and much easier to reason about. And your number of dependencies will drop dramatically, so that you only actually need to pass an extra argument or two to a small number of objects or functions.
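A minimal sketch of that last point (my own illustration, not from the answer; DummyScriptManager and ScriptComponent's contents are made up, and ScriptManager is assumed to be a plain interface without the Singleton base):

#include <iostream>
#include <string>

// Assumed interface, as in the question but without the Singleton base.
class ScriptManager {
public:
    virtual ~ScriptManager() {}
    virtual void runLine(const std::string &line) = 0;
    virtual void runFile(const std::string &file) = 0;
};

// Stand-in implementation so the sketch is self-contained.
class DummyScriptManager : public ScriptManager {
public:
    void runLine(const std::string &line) { std::cout << "run line: " << line << "\n"; }
    void runFile(const std::string &file) { std::cout << "run file: " << file << "\n"; }
};

// The component states its dependency explicitly instead of using a singleton.
class ScriptComponent {
public:
    explicit ScriptComponent(ScriptManager &scripts) : scripts_(scripts) {}
    void update() { scripts_.runFile("test_script.script"); }
private:
    ScriptManager &scripts_;
};

int main() {
    DummyScriptManager script_manager;          // created once, near main()
    ScriptComponent component(script_manager);  // dependency injected explicitly
    component.update();
}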
How about removing the ScriptManager base class and using static methods in the specialization classes? It looks like there is no state involved in any of the ScriptManagers, and no real inheritance other than the pure virtual functions.
I could not figure out from your code samples if you actually use polymorphism here. If not, static member functions look OK to me.
Do not ever use global variables. If you need an object of a type, then you pass it in, by reference if necessary.

Good practice for choosing an algorithm randomly with c++

Setting:
A pseudo-random pattern has to be generated. There are several ways / algorithms available to create different content. All algorithms will generate a list of chars (but it could be anything else)... the important part is that all of them return the same type of values and need the same type of input arguments.
It has to be possible to call a method GetRandomPattern(), which will use a random one of the algorithms every time it is called.
My first approach was to put each algorithm in its own function and select a random one of them each time GetRandomPattern() is called. But I couldn't come up with any way of choosing between them other than a switch-case statement, which is unwieldy, ugly and inflexible.
class PatternGenerator{
public:
    list<char> GetRandomPattern();
private:
    list<char> GeneratePatternA(foo bar);
    list<char> GeneratePatternB(foo bar);
    ........
    list<char> GeneratePatternX(foo bar);
};
What would be a good way to select a random GeneratePattern function every time the GetRandomPattern() method is called ?
Or should the whole class be designed differently ?
Thanks a lot
Create a single class for each algorithm, each one subclassing a generator class. Put instances of those objects into a list. Pick one randomly and use it!
More generically, if you start creating several alternative methods with the same signature, something's screaming "put us into sibling classes" at you :)
Update
Can't resist arguing a bit more for an object-oriented solution after the pointer suggestion came up.
Imagine at some point you want to print which method created which random thing. With objects it's easy: just add a "name" method or something. How do you want to achieve this if all you have is a pointer? (Yes, you could create a dictionary from pointers to strings, hm...)
Imagine you find out that you've got ten methods, five of which only differ by a parameter. So you write five functions "just to keep the code clean from OOP garbage"? Or wouldn't you rather have a function which happens to be able to store some state with it (also known as an object)?
What I'm trying to say is that this is a textbook application for some OOP design. The above points are just trying to flesh that out a bit and argue that even if it works with pointers now, it's not the future-proof solution. And you shouldn't be afraid to produce code that talks to the reader (i.e. your future you, in four weeks or so), telling that person what it's doing.
You can make an array of function pointers. This avoids having to create a whole bunch of different classes, although you still have to assign the function pointers to the elements of the array. Any way you do this, there are going to be a lot of repetitive-looking lines. In your example, it's in the GetRandomPattern method. In mine, it's in the PatternGenerator constructor.
#define FUNCTION_COUNT 24

typedef list<char> (*generatorFunc)(foo);

class PatternGenerator{
public:
    PatternGenerator() {
        functions[0] = &GeneratePatternA;
        functions[1] = &GeneratePatternB;
        ...
        functions[23] = &GeneratePatternX;
    }
    list<char> GetRandomPattern() {
        foo bar = value;
        int funcToUse = rand() % FUNCTION_COUNT;
        return functions[funcToUse](bar);
    }
private:
    generatorFunc functions[FUNCTION_COUNT];
};
One way to avoid switch-like coding is to use the Strategy design pattern. As an example:
class IRandomPatternGenerator
{
public:
    virtual ~IRandomPatternGenerator() {}
    virtual list<int> makePattern(foo bar) = 0;
};

class ARandomPatternGenerator : public IRandomPatternGenerator
{
public:
    virtual list<int> makePattern(foo bar)
    {
        ...
    }
};

class BRandomPatternGenerator : public IRandomPatternGenerator
{
public:
    virtual list<int> makePattern(foo bar)
    {
        ...
    }
};
Then you can choose a particular algorithm depending on the runtime type of your IRandomPatternGenerator instance (for example by putting the generators in a list, as nicolas78 suggested).
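Putting the strategy objects and the random pick together, here is a minimal self-contained sketch (my own illustration, not from the answer; for brevity it drops the foo parameter and uses list<char> as in the question):

#include <cstdlib>
#include <list>
#include <memory>
#include <vector>

struct IRandomPatternGenerator {
    virtual ~IRandomPatternGenerator() {}
    virtual std::list<char> makePattern() = 0;   // input arguments omitted for brevity
};

struct ARandomPatternGenerator : IRandomPatternGenerator {
    std::list<char> makePattern() { return {'a', 'a', 'a'}; }
};

struct BRandomPatternGenerator : IRandomPatternGenerator {
    std::list<char> makePattern() { return {'b', 'b', 'b'}; }
};

std::list<char> GetRandomPattern() {
    // Build the pool of strategies once; it persists across calls.
    static std::vector<std::unique_ptr<IRandomPatternGenerator>> generators;
    if (generators.empty()) {
        generators.push_back(std::unique_ptr<IRandomPatternGenerator>(new ARandomPatternGenerator));
        generators.push_back(std::unique_ptr<IRandomPatternGenerator>(new BRandomPatternGenerator));
    }
    // Pick one strategy at random and delegate to it.
    return generators[std::rand() % generators.size()]->makePattern();
}

int main() {
    std::list<char> pattern = GetRandomPattern();
    return pattern.empty() ? 1 : 0;
}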
Thank you for all your great input.
I decided to go with function pointers, mainly because I didn't know them before and they seem to be very powerful, and it was a good chance to get to know them, but also because it saves me a lot of lines of code.
If I'd be using Ruby / Java / C# I'd have decided for the suggested Strategy Design pattern ;-)
class PatternGenerator{
    typedef list<char> (PatternGenerator::*createPatternFunctionPtr)();
public:
    PatternGenerator(){
        Initialize();
    }
    list<char> GetRandomPattern(){
        int randomMethod = (rand() % functionPointerVector.size());
        createPatternFunctionPtr randomFunction = functionPointerVector.at( randomMethod );
        list<char> pattern = (this->*randomFunction)();
        return pattern;
    }
private:
    void Initialize(){
        createPatternFunctionPtr methodA = &PatternGenerator::GeneratePatternA;
        createPatternFunctionPtr methodB = &PatternGenerator::GeneratePatternB;
        ...
        functionPointerVector.push_back( methodA );
        functionPointerVector.push_back( methodB );
    }
    list<char> GeneratePatternA(){
        ...}
    list<char> GeneratePatternB(){
        ...}
    vector< createPatternFunctionPtr > functionPointerVector;
};
The readability is not much worse than it would have been with the design pattern solution, it's easy to add new algorithms, the pointer handling is encapsulated within a class, it avoids memory leaks, and it's very fast and effective...