I'm running a single-threaded system in FreeRTOS with limited resources.
I already preallocate buffers for the RapidJSON allocators as so:
char valueBuffer[2048];
char parseBuffer[1024];
rapidjson::MemoryPoolAllocator<FreeRTOSRapidJSONAllocator> valueAllocator (valueBuffer, sizeof(valueBuffer));
rapidjson::MemoryPoolAllocator<FreeRTOSRapidJSONAllocator> parseAllocator (parseBuffer, sizeof(parseBuffer));
The issue I have is that every time one of the allocators is used, its size keeps on increasing (and allocating new memory if necessary) unless they are cleared. The problem with calling Clear() on the allocators is that Malloc is called again when the allocators are next resized, which I want to avoid.
Is there a way to simply reuse the existing preallocated memory, such as by setting the allocators' size back to zero, for example?
I resolved this by creating a custom allocator. Essentially a copy of rapidjson::MemoryPoolAllocator with the addition of the following method:
void Reset()
{
    chunkHead_->size = 0;
    chunkHead_->next = 0;
}
This should be called every time you're done with the last string that was parsed.
I needed to do the same, and thought I'd note what from reading the allocator code seems to be an alternative:
While I again create a char buffer[2048], I do not create and keep an allocator alongside it.
Rather, I create and destroy an allocator anew when needed, while re-using the same memory block. From reading the allocator code there is no Malloc on construction, so as long as the document fits in the buffer, everything should stay within the preallocated memory, no?
Edit - code example:
class MyClass
{
public:
MyClass() :
_json_log_string_buffer(&_json_string_buffer_allocator,
_json_buffer_size) {}
(...)
private:
static constexpr int _json_buffer_size {4096};
char _json_logger_buffer[_json_buffer_size];
rapidjson::CrtAllocator _json_string_buffer_allocator;
rapidjson::StringBuffer _json_log_string_buffer;
};
void MyClass::log_to_json()
{
rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator>
json_logger_allocator {_json_logger_buffer,
sizeof(_json_logger_buffer)};
rapidjson::Document json_doc(&json_logger_allocator);
auto& allocator = json_doc.GetAllocator();
json_doc.SetObject();
// using it:
json_doc.AddMember("id", id(), allocator);
json_doc.AddMember("KEY", "VALUE",
allocator);
(...)
// Here, monitoring the size of allocator, it never goes
// beyond the 4096 on repeat invocations.
// If I had it as a member, only creating once, it would grow.
std::cout << "JSON allocator size: " << allocator.Size() <<
", capacity: " << allocator.Capacity() << std::endl;
// Bonus point: I'm also using a rapidjson::StringBuffer.
// From a first read it doesn't seem to re-allocate.
// It doesn't use a MemoryPoolAllocator so I cannot do the
// same for it as I did for rapidjson::Document.
_json_log_string_buffer.Clear();
rapidjson::Writer<rapidjson::StringBuffer>
writer(_json_log_string_buffer);
json_doc.Accept(writer);
auto as_string = _json_log_string_buffer.GetString();
// Using string with logger...
}
Related
I want to create my own game engine, so I bought a few books, one being Game Engine Architecture Second Edition by Jason Gregory, and in it he suggests implementing a few custom allocators. One type of allocator the book talked about was a stack-based allocator, but I got confused when reading it. How do you store data in it? What data type do you use? For example, do you use a void*, a void**, an array of char[]? The book says you're meant to allocate one big block of memory using malloc in the beginning and free it in the end, and "allocate" memory by incrementing a pointer. If you could help explain this more that would be great, because I can't seem to find a tutorial that doesn't use std::allocator. I also thought this might help others interested in a custom allocator, so I posted the question here.
This is the header file example they give in the book:
class StackAllocator
{
public:
// Represents the current top of the stack.
// You can only roll back to the marker not to arbitrary locations within the stack
typedef U32 Marker;
explicit StackAllocator(U32 stackSize_bytes);
void* alloc(U32 size_bytes); // Allocates a new block of the given size from stack top
Marker getMarker(); // Returns a Marker to the current stack top
void freeToMarker(Marker marker); // Rolls the stack back to a previous marker
void clear(); // Clears the entire stack(rolls the stack back to zero)
private:
// ...
};
EDIT:
After a while I got this working but I don't know if I'm doing it right
Header File
typedef std::uint32_t U32;
struct Marker {
size_t currentSize;
};
class StackAllocator
{
private:
void* m_buffer; // Buffer of memory
size_t m_currSize = 0;
size_t m_maxSize;
public:
void init(size_t stackSize_bytes); // allocates size of memory
void shutDown();
void* allocUnaligned(U32 size_bytes);
Marker getMarker();
void freeToMarker(Marker marker);
void clear();
};
.cpp File
void StackAllocator::init(size_t stackSize_bytes) {
this->m_buffer = malloc(stackSize_bytes);
this->m_maxSize = stackSize_bytes;
}
void StackAllocator::shutDown() {
this->clear();
free(m_buffer);
m_buffer = nullptr;
}
void* StackAllocator::allocUnaligned(U32 size_bytes) {
    assert(m_maxSize - m_currSize >= size_bytes);
    // Return the current top, then bump the offset. m_buffer itself never
    // moves, so shutDown() can safely free() the original pointer.
    void* ptr = static_cast<char*>(m_buffer) + m_currSize;
    m_currSize += size_bytes;
    return ptr;
}
Marker StackAllocator::getMarker() {
    Marker marker;
    marker.currentSize = m_currSize;
    return marker;
}
void StackAllocator::freeToMarker(Marker marker) {
    m_currSize = marker.currentSize;
}
void StackAllocator::clear() {
    m_currSize = 0;
}
Okay, for simplicity let's say you're tracking a collection of MyFunClass for your engine. It could be anything, and your linear allocator doesn't necessarily have to track objects of a homogeneous type, but often that's how it's done. In general, when using custom allocators you're trying to "shape" your memory allocations: separating static data from dynamic, and infrequently accessed from frequently accessed, with a view towards optimizing your working set and achieving locality of reference.
Given the code you provided, first, you'd allocate your memory pool. For simplicity, assume you want enough space to pool 1000 objects of type MyFunClass.
StackAllocator sa;
sa.init( 1000 * sizeof(MyFunClass) );
Then each time you need to "allocate" a new block of memory for a FunClass, you might do it like this:
void* mem = sa.allocUnaligned( sizeof(MyFunClass) );
Of course, this doesn't actually allocate anything. All the allocation already happened in Step 1. It just marks some of your already-allocated memory as in-use.
It also doesn't construct a MyFunClass. Your allocator isn't strongly typed, so the memory it returns can be interpreted however you want: as a stream of bytes; as a backing representation of a C++ class object; etc.
Now, how would you use a buffer allocated in this fashion? One common way is with placement new:
auto myObj = new (mem) MyFunClass();
So now you're constructing your C++ object in the memory space you reserved with the call to allocUnaligned.
(Note that the allocUnaligned bit gives you some insight into why we don't usually write our own custom allocators: because they're hard as heck to get right! We haven't even mentioned alignment issues yet.)
For extra credit, take a look at scope stacks which take the linear allocator approach to the next level.
I am creating a message queue which is used by two processes. One of them is putting something in it and the other is reading it.
The message queue is following struct I created.
struct MSGQueue {
Action actions_[256];
int count;
MSGQueue() { count = 0; }
interprocess_mutex mutex;
Action Pop() {
--count;
return actions_[count];
}
void Put(Action act) {
actions_[count] = act;
++count;
}
};
Action is a custom class I created.
class Action {
public:
// Getter functions for the member
private:
std::string name_;
ActionFn action_fn_; // this is an enum
void* additional_data_;
};
I am creating a shared memory like this in the main program
shm_messages_ = shared_memory_object(create_only, "MySharedMemory", read_write);
shm_messages_.truncate(sizeof(MSGQueue));
region_ = mapped_region(shm_messages_, read_write);
In my other program I am opening it and putting an action in the queue's array of actions.
boost::interprocess::shared_memory_object shm_messages_;
boost::interprocess::mapped_region region_;
shm_messages_ = shared_memory_object(open_only, "MySharedMemory", read_write);
shm_messages_.truncate(sizeof(MSGQueue));
region_ = mapped_region(shm_messages_, read_write);
//Get the address of the mapped region
void * addr = region_.get_address();
//Construct the shared structure in memory
MSGQueue * data = static_cast<MSGQueue*>(addr);
Action open_roof("OpenRoof", ActionFn::AFN_ON, NULL);
{ // Code block for scoped_lock. Mutex will automatically unlock after block.
// even if an exception occurs
scoped_lock<interprocess_mutex> lock(data->mutex);
// Put the action in the shared memory object
data->Put(open_roof);
}
The main program checks whether new messages have arrived; if there is one, it reads it and puts it in a list.
std::vector<ghpi::Action> actions;
//Get the address of the mapped region
void * addr = region_.get_address();
//Construct the shared structure in memory
MSGQueue * data = static_cast<ghpi::Operator::MSGQueue*>(addr);
if (!data) {
std::cout << " Error while reading shared memory" << std::endl;
return actions;
}
{
scoped_lock<interprocess_mutex> lock(data->mutex);
while (data->count > 0) {
actions.push_back(data->Pop()); // memory access violation here
std::cout << " Read action from shm" << std::endl;
}
}
The second program, which puts the action, works fine. But after it runs, the main program sees that the count has increased, tries to read, and throws a memory access violation.
I don't know why I am getting this violation error. Is there something special about sharing class objects or structs?
Let's take a look at the objects you're trying to pass between processes:
class Action {
// ...
std::string name_;
};
Well, looky here. What do we have here? We have here a std::string.
Did you know that sizeof(x), where x is a std::string, will always give you the same answer, whether the string is empty or holds the entire contents of "War And Peace"? That's because your std::string does a lot of work that you don't really have to think about. It takes care of allocating the required memory for the string, and deallocating it when it is no longer used. When a std::string gets copied or moved, the class takes care of handling these details correctly. It does its own memory allocation and deallocation. You can think of your std::string as consisting of something like this:
namespace std {
class string {
char *data;
size_t length;
// More stuff
};
}
Usually there's a little bit more to this, in your typical garden-variety std::string, but this gives you the basic idea of what's going on.
Now try to think of what happens when you put your std::string into shared memory. Where does that char pointer point? Of course, it still points to somewhere, someplace, in your process's memory where your std::string allocated the storage for whatever string it represents. You have no idea where, because all that information is hidden inside the string.
So, you placed this std::string in your shared memory region. You did place the std::string itself but, of course, not the actual string that it contains. There's no way you can possibly do this, because you have no means of accessing std::string's internal pointers and data. So, you've done that, and you're now trying to access this std::string from some other process.
This is not going to end well.
Your only realistic option is to replace the std::string with a plain char array, and then go through the extra work of making sure that it's initialized properly, doesn't overflow, etc...
Generally, in the context of IPC, shared memory, etc..., using any kind of a non-trivial class is a non-starter.
First off, this is not a duplicate. My question is how to do it with dynamic memory. The reason this is distinct is because my delete[] is hanging.
So, here's what I have:
class PacketStrRet {
public:
PacketStrRet(char p_data[], int p_len) : len(p_len) {
data = new char[p_len];
memcpy(data, p_data, p_len * sizeof(char));
}
~PacketStrRet() {
delete[] data;
data = nullptr;
}
char* data;
int len;
};
And yes, I'm aware that my code is not using the best practices. I'll clean it up later.
The problem I'm having is in the DTOR. That delete is hanging forever. The data being passed in to the CTOR is not dynamic memory, so I need to make it dynamic so things don't go out of scope. p_len holds the correct amount of data, so there's no problem there.
From what I've read, memcpy seems to be the most likely culprit here. So how do I copy a string that is not null-terminated to dynamic memory, and then still be able to delete it later?
Thanks.
The problem is not the delete itself, but everything that comes before it; and even that would be OK if no problems occurred along the way.
class PacketStrRet {
// Use RAII
std::unique_ptr<char[]> data; // I own this data and will destroy it.
// now the parent class is also only movable; use shared_ptr if you can't live with that.
int len;
public:
PacketStrRet(
// <RED ALERT>
char p_data[], int p_len // user can lie to us.
// </RED ALERT>
) try : // function try block, see 1)
data(new char[p_len]), len(p_len) {
memcpy(data.get(), p_data, p_len * sizeof(char));
} catch(const std::exception& e) {
std::cerr << "PacketStrRet(len=" << p_len << ") failed: " << e.what() << '\n';
// the exception is rethrown automatically at the end of this handler
}
~PacketStrRet() {
// unique_ptr takes care of memory management and garbage collection.
}
// access functions
};
Now the possible errors you could make to blow the code up.
You could have copied the object, essentially making two owning raw pointers to the same data. This would blow up at delete; you could use memory-sanitizer / valgrind to confirm this happens. Use smart pointers to save you the trouble: the unique pointer causes a compiler error if you try to copy, unless you memcpy the entire structure, ignoring the copy/assignment constructors.
You could give the wrong len to the constructor, what is the source of the data and len? Valgrind / memory-sanitizer can save you.
The memory corruption could happen in a totally different place. Valgrind / memory-sanitizer can save you.
In case valgrind / memory-sanitizer are too much, you can add a check for double delete: keep a counter in the c'tor and d'tor, and if it ever goes negative you have found your error.
In this class you're at least missing a copy constructor. Check up on the rule of 3, 5, and 0 (zero) to find out how many special member functions you need.
1) http://en.cppreference.com/w/cpp/language/function-try-block
Try to use std::copy(). It will be like this:
std::copy(p_data, p_data + p_len, data);
I am currently working on a threadsafe circular buffer for storing pointers. As basis I used the code from this thread: Thread safe implementation of circular buffer. My code is shown below.
Because I want to store pointers in the circular buffer, I need to make sure that allocated memory is deleted, in case the buffer is full and the first element is overwritten, as mentioned in the boost documentation:
When a circular_buffer becomes full, further insertion will overwrite the stored pointers - resulting in a memory leak.
So I tried to delete the first element in the add method, in case the buffer is full and the type T of the template is actually a pointer. This leads to a C2541-error, because I try to delete an object, which is not seen as a pointer.
Is my basic approach correct? How can I solve the above issue?
#pragma once
#include <boost/thread/condition.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <boost/circular_buffer.hpp>
#include <type_traits>
#include "Settings.hpp"
template <typename T>
class thread_safe_circular_buffer : private boost::noncopyable
{
public:
typedef boost::mutex::scoped_lock lock;
thread_safe_circular_buffer(bool *stop) : stop(stop) {}
thread_safe_circular_buffer(int n, bool *stop) : stop(stop) {cb.set_capacity(n);}
void add (T imdata) {
monitor.lock();
std::cout << "Buffer - Add Enter, Size: " << cb.size() << "\n";
if(cb.full())
{
std::cout << "Buffer - Add Full.\n";
T temp = cb.front();
if(std::is_pointer<T>::value)
delete[] temp;
}
std::cout << "Buffer - Push.\n";
cb.push_back(imdata);
monitor.unlock();
std::cout << "Buffer - Add Exit.\n";
}
T get() {
std::cout << "Buffer - Get Enter, Size: " << cb.size() << "\n";
monitor.lock();
while (cb.empty())
{
std::cout << "Buffer - Get Empty, Size: " << cb.size() << "\n";
monitor.unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
if(*stop)
return NULL;
monitor.lock();
}
T imdata = cb.front();
// Remove first element of buffer
std::cout << "Buffer - Pop.\n";
cb.pop_front();
monitor.unlock();
std::cout << "Buffer - Get Exit.\n";
return imdata;
}
void clear() {
lock lk(monitor);
cb.clear();
}
int size() {
lock lk(monitor);
return cb.size();
}
void set_capacity(int capacity) {
lock lk(monitor);
cb.set_capacity(capacity);
}
bool *stop;
private:
boost::condition buffer_not_empty;
boost::mutex monitor;
boost::circular_buffer<T> cb;
};
The error tells you the problem: you can't delete things that aren't pointers. When T isn't a pointer type, delete[] temp; doesn't make sense. It's also a bad idea if your buffer is storing things that weren't allocated with new[], or when your circular buffer doesn't conceptually 'own' the pointers.
I think you maybe misunderstand the whole problem. The warning from the boost documentation only applies to situations where you can't afford to "lose" any of the data stored in the buffer. One example where this is a problem — the one they highlight specifically — is if you were storing raw pointers in the buffer that are your only references to some dynamically allocated memory.
There are, I think, only three reasonable designs:
Don't use a circular buffer when you can't afford to lose data. This can mean modifying your data so that you can afford to lose it (e.g. circular_buffer<unique_ptr<T[]>> for storing dynamically allocated arrays). Consequently, your class need not worry about what to do with lost data.
Make your class take a 'deleter'; i.e. a function object that specifies what to do with a data element that is about to be overwritten. (and you probably want to have a default parameter of "do nothing")
Change the functionality of the buffer to do something other than overwriting when full (e.g. block)
Do as boost does: let the user of your code handle this. If objects are stored, their destructors should handle any memory management; if pointers are stored, you have no way of knowing what they actually point to: arrays, objects, memory that needs to be freed, memory that is managed elsewhere, or non-dynamic memory.
I have a strong use case for pre-allocating all the memory I need upfront and releasing it upon completion.
I have come up with this really simple buffer pool C++ implementation, which I have yet to test, but I am not sure that the pointer arithmetic I am trying to use will do what I intend; basically the bits where I do next and release. I would prefer some trick built around this idea rather than relying on any sort of memory handler, which just makes the client code more convoluted.
#include <stdio.h>
#include <queue>
#include "utils_mem.h"
using namespace std;
template <class T>
class tbufferpool {
private:
const int m_initial;
const int m_size;
const int m_total;
T* m_buffer;
vector<T*> m_queue;
public:
// constructor
tbufferpool(int initial, int size) : m_initial(initial), m_size(size), m_total(initial*size*sizeof(T)) {
m_buffer = (T*) malloc(m_total);
T* next_buffer = m_buffer;
for (int i=0; i < initial; ++i, next_buffer += i*size) {
m_queue.push_back(next_buffer);
}
}
// get next buffer element from the pool
T* next() {
// check for pool overflow
if (m_queue.empty()) {
printf("Illegal bufferpool state, our bufferpool has %d buffers only.", m_initial);
exit(EXIT_FAILURE);
}
T* next_buffer = m_queue.back();
m_queue.pop_back();
return next_buffer;
}
// release element, make it available back in the pool
void release(T* buffer) {
assert(m_buffer <= buffer && buffer < (buffer + m_total/sizeof(T)));
m_queue.push_back(buffer);
}
void ensure_size(int size) {
if (size >= m_size) {
printf("Illegal bufferpool state, maximum buffer size is %d.", m_size);
exit(EXIT_FAILURE);
}
}
// destructor
virtual ~tbufferpool() {
free(m_buffer);
}
};
First, when you increment a pointer to T, it already points to the next element of T in memory; the loop increment next_buffer += i*size therefore skips a growing number of whole elements each iteration instead of a fixed stride. The constructor loop should be like
m_buffer = (T*) malloc(m_total);
T* next = m_buffer;
for (int i = 0; i < initial; ++i) {
    m_queue.push_back(next);
    next += size; // each buffer is `size` elements wide
}
Second, the assert in release() compares the pointer against the wrong bound. It should be like
assert(m_buffer <= buffer && buffer < m_buffer + m_total/sizeof(T));
Hope it helps!
I don't understand why you're "wrapping" the STL queue<> container. Just put your "buffers" in the queue, and pull the addresses as you need them. When you're done with a "segment" in the buffer, just pop it off of the queue and it's released automatically. So instead of pointers to buffers, you just have the actual buffer classes.
It just strikes me as re-inventing the wheel. Now since you need the whole thing allocated at once, I'd use vector not queue, because the vector<> type can be allocated all at once on construction, and the push_back() method doesn't re-allocate unless it needs to, the same with pop_back(). See here for the methods used.
Basically, though, here's my back-of-the-envelope idea:
#include <myType.h> // Defines BufferType
const int NUMBUFFERS = 30;
int main()
{
vector<BufferType> myBuffers(NUMBUFFERS);
BufferType* segment = &(myBuffers[0]); // Gets first segment
myBuffers.pop_back(); // Reduces size by one
return 0;
}
I hope that gives you the general idea. You can just use the buffers in the vector that way, and there's only one allocation or de-allocation, and you can use stack-like logic if you wish. The deque type may also be worth looking at, or other standard containers, but if it's just "I only want one alloc or de-alloc" I'd just use vector, or even a smart pointer to an array possibly.
Some stuff I've found out using object pools:
I'm not sure about allocating all the objects at once. I like to descend all my pooled objects from a 'pooledObject' class that contains a private reference to its own pool, so allowing a simple, parameterless 'release' method and I'm always absolutely sure that an object is always released back to its own pool. I'm not sure how to load up every instance with the pool reference with a static array ctor - I've always constructed the objects one-by-one in a loop.
Another useful private member is an 'allocated' boolean, set when an object is depooled and cleared when released. This allows the pool class to detect a double release and raise an exception immediately. 'Released twice' errors can be insanely nasty if not immediately detected - weird behaviour or a crash happens minutes later and, often, in another thread in another module. Best to detect double-releases ASAP!
I find it useful and reassuring to dump the level of my pools to a status bar on a 1s timer. If a leak occurs, I can see it happening and, often, get an idea of where the leak is by the activity I'm on when a number drops alarmingly. Who needs Valgrind:)
On the subject of threads, if you have to make your pools thread-safe, it helps to use a blocking queue. If the pool runs out, threads trying to get objects can wait until they are released and the app just slows down instead of crashing/deadlocking. Also, be careful re. false sharing. You may have to use a 'filler' array data member to ensure that no two objects share a cache line.