This is the question:
How do I do IT right?
IT = add objects dynamically (meaning: create class structures that support that)
class Branch
{
    Leaves lv; // it should have many leaves!
};

class Tree
{
    Branch br; // it should have many branches!
};
Now a non-working example (it isn't valid C++, but I'm trying to sketch the idea):
class Branch
{
    static int lv_count;
    Leaves lv; // it should have many leaves! (and should be some pointer)
public:
    void add(Leaf leaf)
    {
        lv[lv_count] = leaf;
        lv_count++;
    }
};
class Tree
{
    static int br_count;
    Branch br; // it should have many branches! (and should be some pointer)
public:
    void add(Branch branch)
    {
        br[br_count] = branch;
        br_count++;
    }
};
This, for example, leads to a clumsy approach:
class Branch
{
    static int count;
    Leaves l[1000]; // mmm, I don't like this
    //...
};

class Tree
{
    static int count;
    Branch b[1000]; // mmm, I don't like this
    //...
};
I would like to know the formal, normal way of doing this. Thanks!
std::vector is the thing you are looking for, I guess...
class Tree
{
    std::vector<Branch> branches;
};
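For instance, the whole hierarchy from the question could look like this (a minimal sketch; Leaf stands in for your Leaves type, and the add methods are just illustrative):

#include <vector>

class Leaf { };

class Branch
{
public:
    void add(const Leaf& leaf) { leaves.push_back(leaf); } // the vector grows as needed
private:
    std::vector<Leaf> leaves;
};

class Tree
{
public:
    void add(const Branch& branch) { branches.push_back(branch); }
private:
    std::vector<Branch> branches;
};

No counters and no fixed-size arrays: the vector tracks its own size and grows on demand.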
Vectors are the generic solution. However, you should look at memory allocation before you start using library code such as vectors -- e.g., C++ new, calloc, malloc, thread-local memory, etc. Each STL container has its own algorithmic complexities, and studying these will aid you in picking the right one.
Discussion:
If you want something to grow and you don't have the space for it, you have to realloc() or do the equivalent algorithmically: acquire a bigger memory buffer and copy the old buffer into it at the correct index offsets. This is what vector does behind the scenes; of course, vector just does this very well by (this is implementation-specific) growing geometrically, typically by a factor of 2. Growing in this way means it has more memory than it currently needs, so data added to the vector in the future won't cause an immediate reallocation.
However, I must add that this can be very inefficient; you can almost always avoid the cost of copying things around. Vector's main use is for contiguous memory regions; you can almost always improve on vector by using linked data structures, perhaps in a binary tree for searching on a key :D
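To see that growth policy in action, here is a small sketch that logs capacity() each time the vector reallocates (the exact numbers are implementation-specific):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_capacity = 0;
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) { // a reallocation just happened
            last_capacity = v.capacity();
            std::cout << "size " << v.size() << " -> capacity " << last_capacity << '\n';
        }
    }
}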
Related
I want to improve the performance of the following code. What aspects might affect its performance when it's executed?
Also, considering that there is no limit to how many objects you can add to the container, what improvements could be made to "Object" or "addToContainer" to improve the performance of the program?
I was wondering whether std::vector::push_back in C++ affects the performance of the code in any way, especially if there is no limit on how much is added to the list.
#include <string>
#include <vector>

using std::string;
using std::vector;

struct Object {
    string name;
    string description;
};

vector<Object> container;

void addToContainer(Object object) {
    container.push_back(object);
}

int main() {
    addToContainer({ "Fira", "+5 ATTACK" });
    addToContainer({ "Potion", "+10 HP" });
}
Before you do ANYTHING, profile the code and get a benchmark. After you make a change, profile the code and get a benchmark. Compare the benchmarks. If you do not do this, you're rolling dice. Is it faster? Who knows.
Profile profile profile.
With push_back you have two main concerns:
Resizing the vector when it fills up, and
Copying the object into the vector.
There are a number of improvements you can make to the resizing cost of push_back depending on how items are being added.
Strategic use of reserve to minimize the amount of resizing, for example. If you know how many items are about to be added, you can check the capacity and size to see if it's worth your time to reserve to avoid multiple resizes. Note this requires knowledge of vector's expansion strategy and that is implementation-specific. An optimization for one vector implementation could be a terribly bad mistake on another.
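A minimal sketch of that idea (make_room is a hypothetical helper, not a standard function):

#include <vector>

// Ensure room for 'incoming' more items before a burst of push_backs,
// so at most one reallocation happens for the whole burst.
template <typename T>
void make_room(std::vector<T>& v, std::size_t incoming) {
    if (v.capacity() - v.size() < incoming)
        v.reserve(v.size() + incoming);
}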
You can use insert to add multiple items at a time. Of course this is close to useless if you need to add another container into the code in order to bulk-insert.
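For example, if the incoming items already live in some range, a single range insert grows the vector at most once (a sketch; items is assumed to be another container of Objects):

container.insert(container.end(), items.begin(), items.end());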
If you have no idea how many items are incoming, you might as well let vector do its job and optimize HOW the items are added.
For example
void addToContainer(Object object) // pass by value. Possible copy
{
container.push_back(object); // copy
}
Those copies can be expensive. Get rid of them.
void addToContainer(Object && object) //no copy and can still handle temporaries
{
container.push_back(std::move(object)); // moves rather than copies
}
std::string is often very cheap to move.
This variant of addToContainer can be used with
addToContainer({ "Fira", "+5 ATTACK" });
addToContainer({ "Potion", "+10 HP" });
and might just migrate a pointer and a few book-keeping variables per string. They are temporaries, so no one cares if it rips their guts out and throws away the corpses.
As for existing Objects
Object o{"Pizza pop", "+5 food"};
addToContainer(std::move(o));
If they are expendable, they get moved as well. If they aren't expendable...
void addToContainer(const Object & object) // no copy at the call site
{
    container.push_back(object); // but push_back still copies into the vector
}
You have an overload that does it the hard way.
Tossing this one out there
If you already have a number of items you know are going to be in the list, rather than appending them all one at a time, use an initializer list:
vector<Object> container{
{"Vorpal Cheese Grater", "Many little pieces"},
{"Holy Hand Grenade", "OMG Damage"}
};
push_back can be extremely expensive, but as with everything, it depends on the context. Take for example this terrible code:
std::vector<float> slow_func(const float* ptr)
{
std::vector<float> v;
for(size_t i = 0; i < 256; ++i)
v.push_back(ptr[i]);
return v;
}
each call to push_back has to do the following:
Check to see if there is enough space in the vector
If not, allocate new memory, and copy the old values into the new vector
Copy the new item to the end of the vector
Increment end
Now there are two big problems here with regard to performance. Firstly, each push_back operation depends upon the previous operation (since the previous operation modified end, and possibly the entire contents of the array if it had to be resized). This pretty much destroys any vectorisation possibilities in the code. Take a look here:
https://godbolt.org/z/RU2tM0
The func that uses push_back does not make for very pretty asm. It's effectively hamstrung into being forced to copy a single float at a time. Now if you compare that to an alternative approach where you resize first, and then assign; the compiler just replaces the whole lot with a call to new, and a call to memcpy. This will be a few orders of magnitude faster than the previous method.
std::vector<float> fast_func(const float* ptr)
{
std::vector<float> v(256);
for(size_t i = 0; i < 256; ++i)
v[i] = ptr[i];
return v;
}
BUT, and it's a big but, the relative performance of push_back very much depends on whether the items in the array can be trivially copied (or moved). If, for example, you do something silly like:
struct Vec3 {
float x = 0;
float y = 0;
float z = 0;
};
Well, now when we do this:
std::vector<Vec3> v(256);
The compiler will allocate memory, but also be forced to set all the values to zero (which is pointless if you are about to overwrite them again!). The obvious way around this is to use a different constructor:
std::vector<Vec3> v(ptr, ptr + 256);
So really, only use push_back (well, really you should prefer emplace_back in most cases) when either:
additional elements are added to your vector occasionally
or the objects you are adding are complex to construct (in which case, use emplace_back!)
Without any other requirements, unfortunately this is the most efficient:
void addToContainer(Object) { }
To answer the rest of your question: in general push_back will just add to the end of the allocated vector, which is O(1), but occasionally it will need to grow the vector, which is O(N); amortized across insertions this still comes out to O(1) per push_back.
Also, it would likely be more efficient not to use string but to keep char*, although memory management might be tricky unless it is always a literal being added.
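A sketch of that idea; note this is only safe if the strings really are literals (static storage duration), otherwise you are back to manual memory management. LiteObject and addToLiteContainer are hypothetical names:

#include <vector>

struct LiteObject {
    const char* name;        // points at a string literal, no allocation
    const char* description;
};

std::vector<LiteObject> lite_container;

void addToLiteContainer(LiteObject object) {
    lite_container.push_back(object); // copies two pointers, trivially cheap
}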
This question is about owning pointers, consuming pointers, smart pointers, vectors, and allocators.
I am a little bit lost in my thoughts about code architecture. Furthermore, if this question already has an answer somewhere: 1. sorry, but I haven't found a satisfying answer so far, and 2. please point me to it.
My problem is the following:
I have several "things" stored in a vector and several "consumers" of those "things". So, my first try was like follows:
std::vector<thing> i_am_the_owner_of_things;
thing* get_thing_for_consumer() {
// some thing-selection logic
return &i_am_the_owner_of_things[5]; // 5 is just an example
}
...
// somewhere else in the code:
class consumer {
consumer() {
m_thing = get_thing_for_consumer();
}
thing* m_thing;
};
In my application, this would be safe because the "things" outlive the "consumers" in any case. However, more "things" can be added during runtime and that can become a problem because if the std::vector<thing> i_am_the_owner_of_things; gets reallocated, all the thing* m_thing pointers become invalid.
A fix to this scenario would be to store unique pointers to "things" instead of "things" directly, i.e. like follows:
std::vector<std::unique_ptr<thing>> i_am_the_owner_of_things;
thing* get_thing_for_consumer() {
// some thing-selection logic
return i_am_the_owner_of_things[5].get(); // 5 is just an example
}
...
// somewhere else in the code:
class consumer {
consumer() {
m_thing = get_thing_for_consumer();
}
thing* m_thing;
};
The downside here is that memory coherency between "things" is lost. Can this memory coherency be re-established by using custom allocators somehow? I am thinking of something like an allocator which would always allocate memory for, e.g., 10 elements at a time and, whenever required, add another 10-element-sized chunk of memory.
Example:
initially:
v = ☐☐☐☐☐☐☐☐☐☐
more elements:
v = ☐☐☐☐☐☐☐☐☐☐ 🡒 ☐☐☐☐☐☐☐☐☐☐
and again:
v = ☐☐☐☐☐☐☐☐☐☐ 🡒 ☐☐☐☐☐☐☐☐☐☐ 🡒 ☐☐☐☐☐☐☐☐☐☐
Using such an allocator, I wouldn't even have to use std::unique_ptrs of "things" because at std::vector's reallocation time, the memory addresses of the already existing elements would not change.
As an alternative, I can only think of referencing the "thing" in "consumer" via a std::shared_ptr<thing> m_thing, as opposed to the current thing* m_thing, but that seems like the worst approach to me, because a "thing" shall not own a "consumer", and with shared pointers I would create shared ownership.
So, is the allocator-approach a good one? And if so, how can it be done? Do I have to implement the allocator by myself or is there an existing one?
If you are able to treat thing as a value type, do so. It simplifies things; you don't need a smart pointer to circumvent the pointer/reference invalidation issue. The latter can be tackled differently:
If new thing instances are inserted via push_front and push_back during the program, use std::deque instead of std::vector. Then, no pointers or references to elements in this container are invalidated (iterators are invalidated, though - thanks to #odyss-jii for pointing that out). If you fear that you heavily rely on the performance benefit of the completely contiguous memory layout of std::vector: create a benchmark and profile.
If new thing instances are inserted in the middle of the container during the program, consider using std::list. No pointers/iterators/references are invalidated when inserting or removing container elements. Iteration over a std::list is much slower than a std::vector, but make sure this is an actual issue in your scenario before worrying too much about that.
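Picking up the first option, a small sketch using the question's thing type; the pointer handed out stays valid across later push_back/push_front calls, which std::vector would not guarantee:

#include <deque>

std::deque<thing> i_am_the_owner_of_things;

thing* get_thing_for_consumer() {
    // some thing-selection logic
    return &i_am_the_owner_of_things[5]; // 5 is just an example
}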
There is no single right answer to this question, since it depends a lot on the exact access patterns and desired performance characteristics.
Having said that, here is my recommendation:
Continue storing the data contiguously as you are, but do not store aliasing pointers to that data. Instead, consider a safer alternative (this is a proven method) where you fetch the pointer based on an ID right before using it -- as a side-note, in a multi-threaded application you can lock attempts to resize the underlying store whilst such a weak reference lives.
So your consumer will store an ID, and will fetch a pointer to the data from the "store" on demand. This also gives you control over all "fetches", so that you can track them, implement safety measures, etc.
void consumer::foo() {
thing *t = m_thing_store.get(m_thing_id);
if (t) {
// do something with t
}
}
Or, a more advanced alternative to help with synchronization in multi-threaded scenarios:
void consumer::foo() {
reference<thing> t = m_thing_store.get(m_thing_id);
if (!t.empty()) {
// do something with t
}
}
Where reference would be some thread-safe RAII "weak pointer".
There are multiple ways of implementing this. One is to use an open-addressing hash table with the ID as the key; this will give you roughly O(1) access time if you balance it properly.
Another alternative (best-case O(1), worst-case O(N)) is to use a "reference" structure, with a 32-bit ID and a 32-bit index (so same size as 64-bit pointer) -- the index serves as a sort-of cache. When you fetch, you first try the index, if the element in the index has the expected ID you are done. Otherwise, you get a "cache miss" and you do a linear scan of the store to find the element based on ID, and then you store the last-known index value in your reference.
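A condensed, single-threaded sketch of that reference scheme (thing is the question's type; the store layout is just illustrative):

#include <cstdint>
#include <vector>

struct thing_store {
    struct entry { std::uint32_t id; thing value; };
    std::vector<entry> entries;

    // 'index_cache' is the caller's last-known index from its reference.
    thing* get(std::uint32_t id, std::uint32_t& index_cache) {
        // best case O(1): the cached index still holds the expected ID
        if (index_cache < entries.size() && entries[index_cache].id == id)
            return &entries[index_cache].value;
        // cache miss, worst case O(N): scan, then refresh the cache
        for (std::uint32_t i = 0; i < entries.size(); ++i) {
            if (entries[i].id == id) {
                index_cache = i;
                return &entries[i].value;
            }
        }
        return nullptr; // the thing no longer exists
    }
};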
IMO the best approach would be to create a new container which behaves in a safe way.
Pros:
the change is done on a separate level of abstraction
changes to old code will be minimal (just replace std::vector with the new container)
it is the "clean code" way to do it
Cons:
it may look like there is a bit more work to do
The other answer proposes use of std::list, which will do the job, but with a larger number of allocations and slower random access. So IMO it is better to compose your own container from a couple of std::vectors.
So it may start to look more or less like this (minimal example):
#include <stdexcept> // std::out_of_range
#include <utility>   // std::forward
#include <vector>

template<typename T>
class cluster_vector
{
public:
    static constexpr size_t cluster_size = 16;

    cluster_vector() {
        clusters.reserve(1024);
        add_cluster();
    }

    ...

    size_t size() const {
        if (clusters.empty()) return 0;
        return (clusters.size() - 1) * cluster_size + clusters.back().size();
    }

    T& operator[](size_t index) {
        throwIfIndexTooBig(index);
        return clusters[index / cluster_size][index % cluster_size];
    }

    void push_back(T&& x) {
        if_last_is_full_add_cluster();
        clusters.back().push_back(std::forward<T>(x));
    }

private:
    void throwIfIndexTooBig(size_t index) const {
        if (index >= size()) {
            throw std::out_of_range("cluster_vector out of range");
        }
    }

    void add_cluster() {
        clusters.push_back({});
        clusters.back().reserve(cluster_size);
    }

    void if_last_is_full_add_cluster() {
        if (clusters.back().size() == cluster_size) {
            add_cluster();
        }
    }

private:
    std::vector<std::vector<T>> clusters;
};
This way you provide a container which never reallocates its items. It doesn't matter what T does.
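For the operations shown, usage stays vector-like, e.g. (assuming thing is default-constructible):

cluster_vector<thing> things;
things.push_back(thing{});   // rvalue, matches push_back(T&&)
thing* stable = &things[0];  // not invalidated by later push_backs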
[A shared pointer] seems like the worst approach to me, because a "thing" shall not own a "consumer" and with shared pointers I would create shared ownership.
So what? Maybe the code is a little less self-documenting, but it will solve all your problems.
(And by the way you are muddling things by using the word "consumer", which in a traditional producer/consumer paradigm would take ownership.)
Also, returning a raw pointer in your current code is already entirely ambiguous as to ownership. In general, I'd say it's good practice to avoid raw pointers if you can (e.g., so you never need to call delete). If you go with unique_ptr, I would return a reference:
std::vector<std::unique_ptr<thing>> i_am_the_owner_of_things;
thing& get_thing_for_consumer() {
// some thing-selection logic
return *i_am_the_owner_of_things[5]; // 5 is just an example
}
Although not new to programming, I'm relatively new to C++ and struggling to remember and understand it. I created a k-d tree class which is already working, but while writing it I struggled with the program crashing with an access reading violation 0x00000000 error.
Basically, the tree class has two member vectors: one to hold the items stored in the tree and one to hold the children of the current branch. As far as I've understood, you usually don't have to grow vectors manually. However, here, if I remove the line with this->ownObjects.reserve(86);, the program crashes when I try to push items into the vector. The same happens with the children vector. Another weird thing is that the vectors have to be reserved for 86 items or more; otherwise the program crashes with the same message.
It's probably something very simple (or I'm doing something wrong), but I'm not used to C++ enough to get it. It's an ugly workaround that I would like to resolve, and I'm also interested in the reason behind that behavior. And for a k-d tree class, reserving that much is very expensive and unacceptable in terms of memory usage.
I tried to keep the code as short as possible, but nothing relevant should be missing. What I've been speculating is that it might be a problem with a vector of objects and memory allocation, or a feature related to C++ constructors that I don't know of. Is it something about C++ that I don't understand, or something else?
header file
[...]
class KDTree {
private:
Vector3 maxBounds;
Vector3 minBounds;
std::vector<Triangle> ownObjects;
std::vector<KDTree> children;
public:
KDTree();
KDTree(const Vector3 &maxBounds, const Vector3 &minBounds, AlignedArray<Triangle> &objects);
};
.cpp file
[...]
KDTree::KDTree() { }
KDTree::KDTree(const Vector3 &min, const Vector3 &max, AlignedArray<Triangle> &objects) :
maxBounds(max),
minBounds(min),
ownObjects(),
children()
{
this->children.reserve(128);
this->ownObjects.reserve(85); // No errors if these values are changed to 86 or more.
for (size_t i = 0; i < objects.size(); i++)
this->ownObjects.push_back(objects[i]); //The program crashes here.
[...]
for (unsigned i = 0; i < 8; i++) {
// Another crash here if the this->children.reserve(128); earlier is not called. The groups[i] is a filtered vector of items from the objects array.
children.push_back(KDTree([.. tested valid parameters ..]));
}
}
The following code compiles and runs fine:
#include <memory>
struct MyTree {
std::shared_ptr <MyTree> left;
std::shared_ptr <MyTree> right;
int val;
MyTree(
std::shared_ptr <MyTree> left_,
std::shared_ptr <MyTree> right_,
int val_
) : left(left_), right(right_), val(val_) {};
};
int main() {
std::shared_ptr <MyTree> t(
new MyTree( std::shared_ptr <MyTree>(),
std::shared_ptr <MyTree>(),
0)
);
for(int i=0;i<10000;i++) {
t.reset(new MyTree(t,t,0));
}
}
However, when the for loop is changed from 10000 to 100000, I receive a segfault. Looking at the result in gdb, it looks like the destructors being called as a result of the garbage collection in std::shared_ptr create a backtrace that's thousands of levels deep. As such, I think the segfault is due to running out of room on the stack from the function calls. I have two questions. First, is this a correct assessment of the segfault? Second, if so, is there a good way to manage custom data structures such as trees that need to be garbage collected but may be extremely large? Thanks.
This isn't usually a problem, because normally you keep a tree balanced and the depth is O(lg N).
You've instead got a weird singly-linked list, with a duplicate copy of every pointer. That's... odd.
A real singly-linked list would be very deep recursion, but might benefit from tail call optimization and not exhaust the stack.
The problem you're having is really quite unique to your mixing of the two data structures. Which has no benefits that I can see.
Your assessment looks totally correct to me. It looks like the recursive calls to delete child subtrees are exceeding your stack size. This is unrelated to shared_ptr, though, as I would expect any recursive algorithm on the data structure to fail in the same way.
If possible on your platform, the simplest way to deal with the need for large structures like that is simply to increase the size of your stack (for example ulimit) to allow the natural recursive algorithm to function.
If that's not possible, you're going to have to traverse the nodes yourself, storing the result of the traversal in a container of some sort, so you can chop off subnodes and not require a full-depth traversal of the tree structure.
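A sketch of that idea for the MyTree from the question: an explicit, heap-allocated stack detaches children before each node is released, so no destructor chain ever recurses (assuming the shared_ptr passed in is the only outside owner):

#include <memory>
#include <vector>

void destroy_iteratively(std::shared_ptr<MyTree> root) {
    std::vector<std::shared_ptr<MyTree>> pending;
    pending.push_back(std::move(root));
    while (!pending.empty()) {
        std::shared_ptr<MyTree> node = std::move(pending.back());
        pending.pop_back();
        if (!node) continue;
        // Detach the children first; when 'node' dies at the end of this
        // iteration, its destructor sees null children and cannot recurse.
        pending.push_back(std::move(node->left));
        pending.push_back(std::move(node->right));
    }
}

Called as destroy_iteratively(std::move(t)); before t would otherwise go out of scope.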
This looks to me like a misuse of std::shared_ptr. And some very poor naming: your class MyTree isn't a tree, but simply a node. The tree should be a separate class, and should delete all of the nodes in its destructor.
Having said that, this won't change much with regard to the problem at hand. You're visiting the nodes of the tree recursively (about the only way that makes sense), and if you let the tree get too deep, the stack will overflow, regardless of whether the visiting is implicit (through the destructor calls in std::shared_ptr) or explicit. Creating such trees to begin with makes no sense, since there's no point in creating a tree whose nodes you cannot visit before you start destructing it.
EDIT:
To take into account the discussion of garbage collection in the comments: using the Boehm collector, or some other garbage collector, will solve the problem of deallocating the elements. But it still won't allow you to visit them before deallocation, so such a tree remains useless. (I think there are very strong arguments in favor of garbage collection in C++, but this isn't one of them.)
I think this type of error can be reproduced without shared_ptr; it is basically just a very deep chain of recursive calls. OK, I only create and delete half of the tree, but...
struct MyTree {
    MyTree *left;
    int val;

    MyTree() : left(nullptr) {}
    MyTree(MyTree *left_) : left(left_) {}
    ~MyTree() { delete left; }
};

int main() {
    MyTree *t = new MyTree();
    for (int i = 0; i < 100000; i++) {
        t = new MyTree(t);
    }
    delete t;
}
Just add one more zero after 100000 and you get the same error message.
#include <stdio.h>

class XObject
{
    int id;
    char *type;
};

class XSubObject : public XObject
{
    int remark;
    char *place;
};
(Sorry for my bad example, but the data looks more or less like this.)
std::vector<XObject> objects;
The data stored in objects looks like this:
#1=XObject(1001,"chair"), #2=XObject(1002,"spoon"), #3=XSubObject(1004,"table",2,"center"), #4=XSubObject(1005,"item",0,"left") and so on.
We can have different XObjects with the same type.
class XStructure
{
    XObject parent;
};

class XStructureRow
{
    XObject child;
    XStructure parentStruct;
};

std::vector<XStructure> structures;
The data stored in structures looks like this:
#5=XStructure(NULL), #7=XStructure(#1),#8=XStructure(#2),#9=XStructure(#3),#10=XStructure(#4) and so..on
std::vector<XStructureRow> structurerows;
The data stored in structurerows looks like this:
XStructureRow(#4,#5), XStructureRow(#2,#1),XStructureRow(#2,#7),XStructureRow(#3,#10),XStructureRow(#4,#8) and so..on
How can I write a fast algorithm that starts with an XObject, finds the XStructureRows it appears in, fetches their XStructures, and fetches their parents? For example, I want to retrieve all the parents of the object with name "table" and retrieve its parents with name "chair".
My algorithm so far is:
std::vector<XObject> getParents(const std::string& name) // e.g. "chair"
{
    std::vector<XObject> objs;
    for (size_t i = 0; i < structurerows.size(); i++)
    {
        XStructureRow sr = structurerows[i];
        XObject* parent = sr.fetchParent(); // the parent of this row's structure
        if (parent != NULL)
        {
            if (parent->fetchName() == name)
                objs.push_back(*parent);
        }
    }
    return objs;
}
If I have to fetch all the objects' parents, it takes too much time with huge data. Is there any solution that finds the parent objects in O(1) instead of iterating through the complete loop? I want to fetch these parents with minimal iteration. Here the complexity is O(n), which I am not satisfied with. I hope I made some valid points. Suggestions, please.
A few suggestions.
First, your getParents() function makes multiple copies of objects and arrays. It constructs a new vector called objs and fills it up with copies of items in the rows, then returns a copy of the array (which creates a new copy of each object, which in turn copies its members). That's likely the root cause of your performance problems.
Second, your class hierarchy has classes with "child" and "parent" objects, but they store copies of these XObject instances. So if you were to update one of these objects independently, all the parent and child objects you think refer to it would actually hold different copies. (And hence you will get some strange bugs later, especially since the base classes contain pointers.) Your object relationships in the class declarations should be via pointers, not instance copies.
Third, string comparisons during a lookup algorithm are also harsh on performance. You should represent your objects' unique keys as integers if at all possible.
Not knowing anything else about your problem set: if you addressed those three things, you'd likely have better performance and wouldn't care about finding the O(1) solution.
Now to actually answer your question:
I would keep a map (or hash_map) of arrays to track all the objects of a certain type. That is:
std::map<std::string, std::vector<XObject*>> lookupmap;
Then, as each object is created, you can look up its type in lookupmap and add it:
void OnObjectCreated(XObject* pObj)
{
std::string strType(pObj->type);
lookupmap[strType].push_back(pObj);
}
I'll leave the part where you use std::map or std::hash_map as an exercise for you.
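Retrieval is then a single average-case O(1) lookup with a hash map (or O(log n) with std::map), e.g.:

const std::vector<XObject*>& tables = lookupmap["table"]; // every object of type "table"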
The only way to "find" something with O(1) complexity is to use a hash table. The process of creating a hash value from a key value and then accessing the object indexed into the table by that hash value has O(1) complexity. Any other search algorithm will at best be O(log n), for a sorted list or sorted tree-type structure.