My application uses a large number of Panda objects. Each Panda has a list of Bamboo objects. This list does not change once the Panda is initialized (no Bamboo objects are added or removed). Currently, my class is implemented as follows:
class Panda
{
public:
    int a;
    int b;
    int _bambooCount;
    Bamboo* _bamboo;

    Panda(int count, Bamboo* bamboo)
    {
        _bambooCount = count;
        _bamboo = new Bamboo[count];
        // ... copy bamboo into the array ...
    }
};
To alleviate the overhead of allocating an array of Bamboo objects, I could implement this class as follows -- basically, instead of creating objects via the regular constructor, a construction method allocates a single memory block to hold both the Panda object and its Bamboo array:
class Panda
{
public:
    int a;
    int b;

    Panda()
    {
        // ... other initializations here ...
    }

    static Panda* createPanda(int count, Bamboo* bamboo)
    {
        char* p = new char[sizeof(Panda) + sizeof(Bamboo) * count];
        new (p) Panda();
        Bamboo* dest = (Bamboo*)(p + sizeof(Panda));
        // ... copy the bamboo objects into the memory
        // behind the object ...
        return (Panda*)p;
    }
};
Can you foresee any problems with the second design, other than the increased maintenance effort? Is this an acceptable design pattern, or simply a premature optimization that could come back to bite me later?
C++ gives you another option. You should consider using std::vector.
class Panda
{
public:
    int a;
    int b;
    std::vector<Bamboo> bamboo;
    // if you do not want to store by value:
    //std::vector< std::shared_ptr<Bamboo> > bamboo;

    Panda(int count, Bamboo* bamb) : bamboo(bamb, bamb + count) {}
};
If you want to store the Panda and its Bamboos in contiguous memory, you could use the solution from this article. The main idea is to overload operator new and operator delete.
How do we convince people that in programming simplicity and clarity --in short: what mathematicians call 'elegance'-- are not a dispensable luxury, but a crucial matter that decides between success and failure?
-- Edsger W. Dijkstra
You'll be bitten if someone takes a Panda by value, e.g.
// the compiler allocates only sizeof(Panda) bytes on the stack for this local
// variable -- the Bamboo array living behind the object is silently left out
Panda panda = *createPanda(15, bamboo);
It may be acceptable (but is very probably a premature and horrible optimization) if you only ever refer to things by pointer and never by value, and if you beware the copy constructor and assignment operator.
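One way to guard against that, assuming you keep the packed design, is to make the class non-copyable and force creation through the factory function; a minimal sketch (C++11 "= delete" syntax, names taken from the question):

class Bamboo;

class Panda
{
public:
    static Panda* createPanda(int count, const Bamboo* bamboo);

    // Non-copyable and non-assignable, so nobody can copy a Panda by value
    // and silently leave the trailing Bamboo array behind.
    // (In C++03, declare these private and leave them undefined instead.)
    Panda(const Panda&) = delete;
    Panda& operator=(const Panda&) = delete;

private:
    Panda() : a(0), b(0) {}   // only reachable through createPanda
    int a;
    int b;
};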
In my experience, premature optimization is almost always just that: premature. You should profile your code and determine whether there is a real need for the optimization, or whether you are just creating more work for yourself in the long run.
Also, whether the optimization is worth it depends a lot on the size of the Bamboo class and the average number of Bamboo objects per Panda.
This was fine in C, but in C++ there is no real need.
The real question is: why do you want to do this?
It is a premature optimization: just use a std::vector<> internally and all your problems will disappear.
Because you are using a raw pointer internally that the class owns, you would need to provide your own versions of the following (see the sketch after this list):
Default Constructor
Destructor
Copy Constructor
Assignment operator
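For reference, a minimal sketch of what those hand-written members look like if you keep the raw pointer (assuming Bamboo is default-constructible and copyable; the a and b members are omitted for brevity):

#include <algorithm>

struct Bamboo { /* ... */ };

class Panda
{
public:
    Panda() : _bambooCount(0), _bamboo(0) {}

    Panda(int count, const Bamboo* bamboo)
        : _bambooCount(count), _bamboo(new Bamboo[count])
    {
        std::copy(bamboo, bamboo + count, _bamboo);
    }

    ~Panda() { delete[] _bamboo; }

    Panda(const Panda& other)
        : _bambooCount(other._bambooCount),
          _bamboo(new Bamboo[other._bambooCount])
    {
        std::copy(other._bamboo, other._bamboo + _bambooCount, _bamboo);
    }

    Panda& operator=(const Panda& other)
    {
        if (this != &other)
        {
            Bamboo* fresh = new Bamboo[other._bambooCount];
            std::copy(other._bamboo, other._bamboo + other._bambooCount, fresh);
            delete[] _bamboo;
            _bamboo = fresh;
            _bambooCount = other._bambooCount;
        }
        return *this;
    }

private:
    int     _bambooCount;
    Bamboo* _bamboo;
};

With std::vector all of this boilerplate disappears, which is exactly the point.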
If you're that desperate, you can probably do something like this:
template<std::size_t N>
class Panda_with_bamboo : public Panda_without_bamboo
{
    int a;
    int b;
    Bamboo bamboo[N];
};
But I believe you're not desperate, but optimizing prematurely.
You are already using the placement form of operator new. That part is correct for the Panda itself, but why don't you also placement-construct the Bamboo objects instead of copying raw bytes behind the object?
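To make that concrete, here is a sketch of how the construction method from the question could placement-construct each Bamboo (assuming Bamboo is copy-constructible; alignment of the Bamboo array is ignored here):

#include <new>   // for placement new

static Panda* createPanda(int count, const Bamboo* bamboo)
{
    // One block holding the Panda followed by its Bamboo array.
    char* p = new char[sizeof(Panda) + sizeof(Bamboo) * count];
    Panda* panda = new (p) Panda();

    // Placement-construct each Bamboo as a copy of the caller's,
    // instead of copying raw bytes behind the object.
    Bamboo* dest = reinterpret_cast<Bamboo*>(p + sizeof(Panda));
    for (int i = 0; i < count; ++i)
        new (&dest[i]) Bamboo(bamboo[i]);

    // Whoever destroys this Panda must call the Bamboo and Panda
    // destructors explicitly and then delete[] the char buffer.
    return panda;
}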
Related
I have a scenario where I need to create different objects in each iteration of a 'for' loop.
The catch here is that the synthesizer I am working with does not support the "new" keyword. The synthesizer I am using translates C/C++ code to RTL code (hardware), so many C++ constructs are not supported by the compiler.
I want to implement something like this:
test inst[5];

for (int i = 0; i < 5; i++)
    inst[i].test_func();
I googled this problem, but all the solutions I have come across use the "new" keyword.
I need a way to create different objects on every iteration of the loop without the "new" keyword. Is there a way to do so?
Essentially I am trying to emulate the behavior of 'For-generate' construct in VHDL.
Any help or suggestions are greatly appreciated.
If you can't allocate memory dynamically, you'd have to resort to redefining operator new and new[] to use memory from a statically allocated pool. You will also have to implement operator delete and delete[] as well. Quite a daunting task, I'd say, unless you have something that relaxes the usual requirements on such allocators.
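For illustration, a minimal sketch of such a replacement (the pool size is an arbitrary assumption; a real allocator would also need alignment handling, an out-of-memory policy and a way to reuse freed blocks, none of which this bump allocator provides):

#include <cstddef>

static unsigned char g_pool[64 * 1024];   // statically allocated pool
static std::size_t   g_poolOffset = 0;

void* operator new(std::size_t size)
{
    void* p = &g_pool[g_poolOffset];
    g_poolOffset += size;                 // no overflow check in this sketch
    return p;
}

void* operator new[](std::size_t size)
{
    return operator new(size);
}

void operator delete(void* /*ptr*/) throw()
{
    // Nothing to reclaim: this allocator never frees individual blocks.
}

void operator delete[](void* /*ptr*/) throw()
{
}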
I have a suspicion you may be better off forgetting about strange subsets of C++ as a means of generating hardware, and simply writing what you want in VHDL, which, being a hardware description language, has the tools for the job.
While VHDL supports new for simulation, naturally new cannot be used for synthesis, as it implies the dynamic allocation of hardware resources ... not supported by any ASIC or FPGA toolchain in existence today.
So as far as I can see, you simply want an array of 488 objects of whatever type test is, and to operate on all of them simultaneously with the test_func() operation (whatever that is). For which you probably want a for ... generate statement.
I'm not sure if this is what you are looking for, but you could do something like this:
class Test {};
class Test0 : public Test {};
class Test1 : public Test {};
class Test2 : public Test {};
class Test3 : public Test {};
class Test4 : public Test {};
static Test0 test0;
static Test1 test1;
static Test2 test2;
static Test3 test3;
static Test4 test4;
int main(int, char **)
{
    Test* tests[5] = { &test0, &test1, &test2, &test3, &test4 };

    for (int i = 0; i < 5; i++)
    {
        Test* t = tests[i];
        // t->test_func(); // or whatever operation you need
    }
    return 0;
}
You could have all objects preallocated and reusable. Suppose you know you will only ever need at most 10 objects alive concurrently: create those 10 objects up front and push them onto a list of unused objects.
Whenever you need to "create" an object, take it from the unused list; when you no longer need it, push it back onto that list, as sketched below.
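A minimal sketch of that idea (the pool size, the test placeholder and the member names are assumptions, not taken from your code; no dynamic allocation is used anywhere):

struct test { void test_func() {} };   // placeholder for the real class

const int kMaxTests = 10;

class TestPool
{
public:
    TestPool() : _freeCount(kMaxTests)
    {
        for (int i = 0; i < kMaxTests; ++i)
            _free[i] = &_storage[i];
    }

    test* acquire()           // "create" an object
    {
        if (_freeCount == 0)
            return 0;         // pool exhausted
        return _free[--_freeCount];
    }

    void release(test* t)     // give the object back when done with it
    {
        _free[_freeCount++] = t;
    }

private:
    test  _storage[kMaxTests];   // all objects preallocated up front
    test* _free[kMaxTests];      // currently unused objects
    int   _freeCount;
};

Usage would then look like test* t = pool.acquire(); t->test_func(); pool.release(t); instead of new and delete.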
If you know the constant size of each object, you could just allocate an array of chars, and then when you need object #i, take the pointer.
int const size_of_obj_in_bytes = 20;        // assumed to match sizeof(obj)
int const num_of_objects_to_allocate = 488;

char c[num_of_objects_to_allocate * size_of_obj_in_bytes];

obj* get_ptr_to_obj_at_index(int i) {
    return (obj*)(&c[i * size_of_obj_in_bytes]);
}
If the object is to live in the context of a function, you might be able to use stack allocation (alloca) to handle it. Stack allocations should be supported in your subset. You can override the 'new' operator to use this function (or whatever is available for stack manipulation).
Just remember, as soon as you leave the parent function, all will be destroyed. You will need to take extra care to call a destructor, if needed.
I know this has been asked a lot; I googled but couldn't put everything together. Maybe that's because what I want is not possible?
I have
struct Universe
{
};
and
struct Atom : Universe
{
};

struct Molecule : Universe
{
};
Universe U;
Atom A;
Molecule M;
_atoms = vector<Universe*>(3);
_atoms.push_back(&U);
_atoms.push_back(dynamic_cast<Universe*>(&A));
_atoms.push_back(dynamic_cast<Universe*>(&M));
auto THIS_IS_ATOM = _atoms[1];
This code is most likely wrong in many ways, but my idea was to store different derived structs like this and later access them from an array or list without any data loss or object slicing. I want to be able to get an element from the array, like _atoms[1], and know what type the struct is (Universe, Atom, etc.).
How should I do it properly in C++?
Your code has several problems.
Universe needs a virtual destructor.
You must create your instances on the heap.
You are using the wrong std::vector constructor.
Here is a solution that should work:
struct Universe {
    virtual ~Universe() {} // otherwise Atom and Molecule will not be deleted properly
};

struct Atom : Universe {
};

struct Molecule : Universe {
};
std::vector<Universe*> _atoms; // you don't need to pass anything to the constructor
_atoms.reserve(3); // but if you already know the final size, you can reserve the capacity up front

_atoms.push_back(new Universe());
_atoms.push_back(new Atom());
_atoms.push_back(new Molecule());

auto this_is_atom = _atoms[1]; // equivalent to: Universe* this_is_atom = _atoms[1];

// finally you must delete all the instances which you created on the heap
while (!_atoms.empty()) { delete _atoms.back(); _atoms.pop_back(); }
Addendum: If you need to treat the objects in the vector non-polymorphically, you can cast them to the appropriate types with a static cast:
Atom* a = static_cast<Atom*>(_atoms[1]);
Edit: Instead of a vector of raw pointers, it is advisable to use a vector of smart pointers, for example std::unique_ptr or std::shared_ptr, depending on the ownership semantics you are trying to model.
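For example, with std::unique_ptr the example above could look like this (a sketch assuming the struct definitions given earlier; std::make_unique requires C++14):

#include <memory>
#include <vector>

int main()
{
    std::vector<std::unique_ptr<Universe>> atoms;
    atoms.push_back(std::make_unique<Universe>());
    atoms.push_back(std::make_unique<Atom>());
    atoms.push_back(std::make_unique<Molecule>());

    Atom* a = static_cast<Atom*>(atoms[1].get());  // non-owning access, as before
    (void)a;

    // No manual delete loop: everything is released when atoms goes out of scope.
}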
I'm not quite sure that I need an object pool, yet it seems the most viable solution, though it has some unwanted cons associated with it. I am making a game where entities are stored within an object pool. These entities are not allocated directly with new; instead, a std::deque handles the memory for them.
This is what my object pool more or less looks like:
struct Pool
{
    Pool()
        : _pool(DEFAULT_SIZE)
    {}

    Entity* create()
    {
        int index;
        if (!_destroyedEntitiesIndicies.empty())
        {
            // reuse the slot of a previously destroyed entity
            index = _destroyedEntitiesIndicies.front();
            _destroyedEntitiesIndicies.pop();
        }
        else
        {
            // otherwise hand out the next unused slot
            index = _nextIndex++;
        }

        Entity* entity = &_pool[index];
        entity->id = index;
        return entity;
    }

    void destroy(Entity* x)
    {
        _destroyedEntitiesIndicies.emplace(x->id);
        x->id = 0;
    }

private:
    std::deque<Entity> _pool;
    std::queue<int> _destroyedEntitiesIndicies;
    int _nextIndex = 0;
};
If I destroy an entity, its ID will be added to the _destroyedEntitiesIndicies queue so that the ID can be re-used, and finally its ID will be set to 0. Now the only pitfall of this is: if I destroy an entity and then immediately create a new one, the Entity that was previously destroyed will be updated to be the same entity that was just created.
i.e.
Entity* object1 = pool.create(); // create an object
pool.destroy(object1); // destroy it
Entity* object2 = pool.create(); // create another object
// now object1 will be the same as object2
std::cout << (object1 == object2) << '\n'; // this will print out 1
This doesn't seem right to me. How do I avoid this? Obviously the above will probably not happen (as I'll delay object destruction until the next frame). But this may cause some disturbance whilst saving entity states to a file, or something along those lines.
EDIT:
Let's say I did set entities to NULL to destroy them. What if I was able to get an Entity from the pool, or store a copy of a pointer to the actual entity? How would I NULL all the other duplicate pointers when the entity is destroyed?
i.e.
Pool pool;
Entity* entity = pool.create();
Entity* theSameEntity = pool.get(entity->getId());
pool.destroy(entity);
// now entity == nullptr, but theSameEntity still points to the original entity
If you want an Entity instance only to be reachable via create, you will have to hide the get function (which did not exist in your original code anyway :) ).
I think adding this kind of security to your game is quite a bit of overkill, but if you really need a mechanism to control access to certain parts of memory, I would consider returning something like a handle or a weak pointer instead of a raw pointer. This weak pointer would contain an index into a vector/map (that you store somewhere unreachable to anything but that weak pointer), which in turn contains the actual Entity pointer, and a small hash value indicating whether the weak pointer is still valid or not.
Here's a bit of code so you see what I mean:
struct WeakEntityPtr; // Forward declaration.

struct WeakRefIndex { unsigned int m_index; unsigned int m_hash; }; // Small helper struct.

class Entity {
    friend struct WeakEntityPtr;

private:
    static std::vector< Entity* > s_weakTable;   // sized to 100 in the .cpp file
    static std::vector< char >    s_hashTable;   // sized to 100 in the .cpp file

    static WeakRefIndex findFreeWeakRefIndex(); // find the next free index and change the hash value in the hash table at that index
};

struct WeakEntityPtr {
private:
    WeakRefIndex m_refIndex;

public:
    Entity* get() {
        Entity* result = nullptr;
        // Check if the weak pointer is still valid by comparing the hash values.
        if ( m_refIndex.m_hash == Entity::s_hashTable[ m_refIndex.m_index ] )
        {
            result = Entity::s_weakTable[ m_refIndex.m_index ];
        }
        return result;
    }
};
This is not a complete example though (you will have to take care of proper (copy) constructors, assignment operations etc etc...) but it should give you the idea what I am talking about.
However, I want to stress that I still think a simple pool is sufficient for what you are trying to do in that context. You will have to make the rest of your code play nicely with the entities so they don't reuse objects they're not supposed to reuse, but I think that is easier to do and to maintain than the whole handle/weak pointer story above.
This question seems to have various parts. Let's see:
(...) If I destroy an entity and then immediately create a new one,
the Entity that was previously destroyed will be updated to be the
same entity that was just created. This doesn't seem right to me. How
do I avoid this?
You could modify this method:
void destroy(Entity* x)
{
    _destroyedEntitiesIndicies.emplace(x->id);
    x->id = 0;
}
To be:
void destroy(Entity*& x)
{
    _destroyedEntitiesIndicies.emplace(x->id);
    x->id = 0;
    x = NULL;
}
This way, you will avoid the specific problem you are experiencing. However, it won't solve the whole problem: you can always have copies which are not going to be updated to NULL.
Another way is to use auto_ptr<> (in C++98; unique_ptr<> in C++11), which guarantees that its inner pointer is set to NULL when released. If you combine this with overloading operators new and delete in your Entity class (see below), you can have a quite powerful mechanism. There are some variations, such as shared_ptr<>, in the new version of the standard, C++11, which can also be useful to you. Your specific example:
auto_ptr<Entity> object1( new Entity ); // calls pool.create() via the overloaded operator new
object1.reset();                        // destroys the entity; note that release() would only leak it
auto_ptr<Entity> object2( new Entity ); // create another object

// now object1 will NOT be the same as object2
std::cout << (object1.get() == object2.get()) << '\n'; // this will print out 0
You have various possible sources of information, such as cplusplus.com, Wikipedia, and a very interesting article by Herb Sutter.
Alternatives to an Object Pool?
Object pools are created in order to avoid repeated memory allocation and deallocation, which is expensive, in situations where the maximum number of objects is known. There are no alternatives to an object pool that I can think of for your case; I think you are trying the correct design. However, if you have a lot of creations and destructions, maybe the best approach is not an object pool. It is impossible to say without experimenting and measuring times.
About the implementation, there are various options.
In the first place, it is not clear whether you're gaining any performance advantage by avoiding memory allocation, since you are using _destroyedEntitiesIndicies (you are potentially allocating memory each time you destroy an object anyway). You'll have to experiment with your code to see whether this gives you enough of a performance gain compared to plain allocation. You can try removing _destroyedEntitiesIndicies altogether and searching for an empty slot only when you are running out of them (_nextIndex >= DEFAULT_SIZE). Another thing to try is to discard the memory wasted in those free slots and allocate another chunk (DEFAULT_SIZE) instead.
Again, it all depends on the real usage you are experiencing. The only way to find out is to experiment and measure.
Finally, remember that you can modify class Entity in order to transparently support the object pool or not. A benefit of this is that you can experiment whether it is a really better approach or not.
class Entity {
public:
    // more things...

    void* operator new(size_t size)
    {
        return pool.create();
    }

    void operator delete(void* entity)
    {
        pool.destroy( static_cast<Entity*>(entity) );
    }

private:
    static Pool pool;   // operator new/delete are implicitly static, so the pool
                        // they use must be static too (and defined in a source file)
};
Hope this helps.
So I am new to C++ and I'm writing a scientific application.
Data needs to be read in from a few input text files.
At the moment I am storing these input variables in an object. (lets call it inputObj).
Is it right that I have to pass this "inputObj" around to all my objects now? It seems like it has just become a complicated version of global variables, so I think I may be missing the point of OOP.
I have created a g++ compilable small example of my program:
#include <iostream>

class InputObj {
    // this is the class that gets all the data
public:
    void getInputs() {
        a = 1;
        b = 2;
    }

    int a;
    int b;
};

class ExtraSolver {
    // some of the work may be done in here
public:
    void doSomething(InputObj* io) {
        eA = io->a;
        eB = io->b;
        int something2 = eA + eB;
        std::cout << something2 << std::endl;
    }

private:
    int eA;
    int eB;
};

class MainSolver {
    // I have most things happening from here
public:
    void start() {
        //get inputs;
        inputObj_ = new InputObj();
        inputObj_->getInputs();
        myA = inputObj_->a;
        myB = inputObj_->b;
        //do some solve:
        int something = myA * myB;
        //do some extrasolve
        extraSolver_ = new ExtraSolver();
        extraSolver_->doSomething(inputObj_);
    }

private:
    InputObj* inputObj_;
    ExtraSolver* extraSolver_;
    int myA;
    int myB;
};

int main() {
    MainSolver mainSolver;
    mainSolver.start();
}
Summary of question: a lot of my objects need to use the same variables. Is my implementation the correct way of achieving this?
Don't use classes when functions will do fine.
Don't use dynamic allocation using new when automatic storage will work fine.
Here's how you could write it:
#include <iostream>

struct inputs {
    int a;
    int b;
};

inputs getInputs() {
    return { 1, 2 };
}

void doSomething(inputs i) {
    int something2 = i.a + i.b;
    std::cout << something2 << std::endl;
}

int main() {
    //get inputs;
    inputs my_inputs = getInputs();
    //do some solve:
    int something = my_inputs.a * my_inputs.b;
    //do some extrasolve
    doSomething(my_inputs);
}
I'll recommend reading a good book: The Definitive C++ Book Guide and List
My answer is based on your comment:
"Yea I still haven't got the feel for passing objects around to each other, when it is essentially global variables im looking for"
The feel for passing objects around will come with practice, but I think it's important to remember some of the reasons we have OO in the first place.
The goal (in its simplified form) is to modularise your code so as to increase the reuse of segments of code. For example, you can create several InputObj instances without redefining or reassigning them each time.
Another goal is data hiding through encapsulation. Sometimes we don't want a variable to be changed by another function, and we don't want to expose those variables globally, so that their internal state is protected.
For instance, if a and b in your InputObj were global variables declared and initialized at the beginning of your code, could you be certain that their values never change except when you want them to? For a simple program, yes; but as your program scales, so do the chances of a variable being inadvertently changed (and hence some random, unexpected behaviour).
Also, if you want the initial state of a and b to be preserved, you will have to do that yourself (more temporary global variables?).
You also get more control over the flow of your code by adding levels of abstraction with classes, inheritance, operator overriding, polymorphism, abstract classes and interfaces, and a bunch of other concepts that make it easier to build complex architectures.
Now, while many consider global variables to be evil, I think they are good and useful when used properly... otherwise they are a great way to shoot yourself in the foot.
I hope this helped a bit to clear out that uneasy feeling about passing objects around :)
Whether your approach is good or not strongly depends on the situation.
If you need some high-speed calculation, you may not be able to provide encapsulation methods for your InputObj class, even though they are recommended, because they can reduce the speed of the calculation.
However, there are two rules you can follow to reduce bugs:
1) Carefully use the 'const' keyword every time you really don't want your object to be modified:
void doSomething(InputObj * io) -> void doSomething(const InputObj * io)
2) Move every action related to the initial state of the object into the constructor (in your case, as far as I can guess, your InputObj is loaded from a file and is thus useless without that file being loaded):
Instead of:
InputObj() { }

void getInputs(const std::string& filename) {
    // read a and b from the file
}
use:
InputObj(const std::string& filename) {
    // read a and b from the file
}
You are right that this way you have implemented global variables, but I would call your approach structured, and not complicated, as you encapsulate your global values in an object. This will make your program more maintainable, as global values are not spread all over the place.
You can make this even nicer by implementing the global object as a singleton (http://en.wikipedia.org/wiki/Singleton_pattern) thus ensuring there is only one global object.
Further, access the object through a static member or function. That way you don't need to pass it around as a variable, but any part of your program can easily access it.
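For illustration, a minimal sketch of that idea applied to the InputObj from the question (a so-called Meyers singleton; the member values are placeholders):

class InputObj
{
public:
    static InputObj& instance()    // the single global access point
    {
        static InputObj theOne;    // constructed on first use
        return theOne;
    }

    int a;
    int b;

private:
    InputObj() : a(1), b(2) {}     // nobody else can construct one
    InputObj(const InputObj&);     // non-copyable (C++03 style)
    InputObj& operator=(const InputObj&);
};

// Anywhere in the program:
// int something = InputObj::instance().a * InputObj::instance().b;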
You should be aware that a global object like this will, for example, not work well in a multithreaded application, but I understand that this is not the case here.
You should also be aware that there is a lot of discussions if you should use a singleton for this kind of stuff or not. Search SO or the net for "C++ singleton vs. global static object"
In a system where the current object is operated on by other contained objects, when a reference to the current object is passed around, it appears that the chain goes on and on without any end (for the code below: Car->myCurrentComponent->myCar_Brake->myCurrentComponent->myCar_Brake->myCurrentComponent ...).
Car and Car->myCurrentComponent->myCar_Brake refer to the same address and point to the same object. It's like Car contains a Brake which refers back to the Car.
In fact, Car is the only real object; myCar_Brake and myCar_Speed just refer (point) to it. Is this kind of use of references and pointers normal? Are there any potential problems with this approach?
Sample Code
class Car;       // forward declarations
class Brake;
class Speed;

class Component
{
};

class Car
{
public:
    Car();

    // Components of the car.
    Brake* myBrake;
    Speed* mySpeed;

    // Current component under action.
    Component* myCurrentComponent;
};

/******************************/
class Brake : public Component
{
public:
    Brake(Car&);

    // Needs to operate on the Car.
    Car* myCar_Brake;
};

// Constructor
Brake::Brake(Car& car)
{
    myCar_Brake = &car;
}

/******************************/
class Speed
{
public:
    Speed(Car&);

    // Needs to operate on the Car.
    Car* myCar_Speed;
};

// Constructor
Speed::Speed(Car& car)
{
    myCar_Speed = &car;
}

/******************************/
// Constructor: defined after Brake and Speed are complete types.
Car::Car()
{
    myBrake = new Brake(*this);
    mySpeed = new Speed(*this);
    myCurrentComponent = myBrake;
}
/****************************/
There's no fundamental problem with having circular references in your object graph, so long as you understand that and don't try to traverse your object graph without keeping track of which objects you've encountered. To specifically answer your question, having circular references between objects is relatively common; it's the way a doubly-linked list works, for example.
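To make the comparison concrete, a doubly-linked list node is nothing more than a pair of mutually referring pointers (a minimal sketch):

struct Node
{
    Node* prev;   // points back at the node whose next points at us
    Node* next;
    int   value;
};

// Linking two nodes creates exactly the kind of cycle discussed here:
// a.next->prev brings you straight back to a.
void link(Node& a, Node& b)
{
    a.next = &b;
    b.prev = &a;
}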
Although, as Paul mentions, there is no problem with having circular references, the above code example is totally missing encapsulation and is not memory leak safe.
Does it make sense to allow something like this?
Speed::Speed(Car& value)
{
    myCar_Speed = &value;
    // WTF code below: nothing stops a component from reaching back
    // into the Car and mutating its other components.
    value.myBrake->myCar_Brake = NULL;
}
Also,
Car::Car()
{
    myBrake = new Brake(*this);
    mySpeed = new Speed(*this);
    // if Speed::Speed(Car&) throws an exception, the memory allocated for myBrake will leak
    myCurrentComponent = myBrake;
}
Never use raw pointers without some kind of a resource manager.
Without debating the validity of the actual object structure of the relation between Car, Brake and Speed, this approach has one minor problem: it can end up in invalid states.
If - something - goes wrong, it is possible in this setup that a Car instance #1 has a Brake instance #2 that belongs to a Car instance #3. This is a general problem with doubly-linked lists too: the architecture itself enables invalid states. Of course, careful choice of visibility modifiers and a good implementation of the functions can guarantee that this never happens. And once it's done and safe, you stop modifying it, treat it as a 'black box', and just use it, thus eliminating the probability of screwing it up.
But I'd personally recommend avoiding architectures that allow invalid states in high-level, constantly maintained code. A doubly-linked list is low-level black-box code that will most likely never need any changes. Can you say that about your Car, Brake and Speed?
If a Car had a Brake and a Speed, and Brake and Speed did not know about their "owning Car", it would be impossible to create an invalid state. Of course, that might not suit the concrete situation.
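To make that last alternative concrete, here is a minimal sketch in which the Car owns its components by value and nothing points back at the Car, so no cycle (and no invalid state of that kind) can exist; the member and function names are illustrative, not taken from the original code:

class Brake { /* ... */ };
class Speed { /* ... */ };

class Car
{
public:
    void applyBrake()
    {
        // operate on brake here, handing it whatever Car state it needs
        // as function arguments instead of a stored back-pointer
    }

private:
    Brake brake;   // components do not know about their owning Car
    Speed speed;
};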