I am creating a message queue which is used by two processes. One of them is putting something in it and the other is reading it.
The message queue is the following struct I created.
struct MSGQueue {
    Action actions_[256];
    int count;
    interprocess_mutex mutex;

    MSGQueue() { count = 0; }

    Action Pop() {
        --count;
        return actions_[count];
    }

    void Put(Action act) {
        actions_[count] = act;
        ++count;
    }
};
Action is a custom class I created.
class Action {
public:
    // Getter functions for the members
private:
    std::string name_;
    ActionFn action_fn_; // this is an enum
    void* additional_data_;
};
I am creating the shared memory like this in the main program:
shm_messages_ = shared_memory_object(create_only, "MySharedMemory", read_write);
shm_messages_.truncate(sizeof(MSGQueue));
region_ = mapped_region(shm_messages_, read_write);
In my other program I am opening it and putting an action into the queue's array of actions.
boost::interprocess::shared_memory_object shm_messages_;
boost::interprocess::mapped_region region_;
shm_messages_ = shared_memory_object(open_only, "MySharedMemory", read_write);
shm_messages_.truncate(sizeof(MSGQueue));
region_ = mapped_region(shm_messages_, read_write);
//Get the address of the mapped region
void * addr = region_.get_address();
//Construct the shared structure in memory
MSGQueue * data = static_cast<MSGQueue*>(addr);
Action open_roof("OpenRoof", ActionFn::AFN_ON, NULL);
{ // Code block for scoped_lock. Mutex will automatically unlock after block.
// even if an exception occurs
scoped_lock<interprocess_mutex> lock(data->mutex);
// Put the action in the shared memory object
data->Put(open_roof);
}
The main program checks whether new messages have arrived; if there is one, it reads it and puts it in a list.
std::vector<ghpi::Action> actions;
//Get the address of the mapped region
void * addr = region_.get_address();
//Construct the shared structure in memory
MSGQueue * data = static_cast<ghpi::Operator::MSGQueue*>(addr);
if (!data) {
std::cout << " Error while reading shared memory" << std::endl;
return actions;
}
{
scoped_lock<interprocess_mutex> lock(data->mutex);
while (data->count > 0) {
actions.push_back(data->Pop()); // memory access violation here
std::cout << " Read action from shm" << std::endl;
}
}
The second program, which puts the action, works fine. But after it runs, the main program sees that the count has increased, tries to read, and throws a memory access violation at me.
I don't know why I am getting this violation error. Is there something special about sharing class objects or structs?
Let's take a look at the objects you're trying to pass between processes:
class Action {
// ...
std::string name_;
}
Well, looky here. What do we have here? A std::string.
Did you know that sizeof(x), where x is a std::string, will always give you the same answer, whether the string is empty or holds the entire contents of "War And Peace"? That's because std::string does a lot of work that you don't really have to think about. It takes care of allocating the required memory for the string, and deallocating it when it is no longer used; when a std::string gets copied or moved, the class handles these details correctly, doing its own memory allocation and deallocation. You can think of your std::string as consisting of something like this:
namespace std {
    class string {
        char *data;
        size_t length;
        // More stuff
    };
}
Usually there's a little bit more to this, in your typical garden-variety std::string, but this gives you the basic idea of what's going on.
Now try to think of what happens when you put your std::string into shared memory. Where do you think that char pointer still points? Of course, it still points to somewhere, someplace, in your own process's memory, wherever your std::string allocated storage for whatever string it represents. You have no idea where, because all that information is hidden inside the string.
So, you placed this std::string in your shared memory region. You did place the std::string itself but, of course, not the actual string that it contains. There's no way you can possibly do this, because you have no means of accessing std::string's internal pointers and data. So, you've done that, and you're now trying to access this std::string from some other process.
This is not going to end well.
Your only realistic option is to replace the std::string with a plain char array, and then go through the extra work of making sure that it's initialized properly, doesn't overflow, etc...
Generally, in the context of IPC, shared memory, etc..., using any kind of a non-trivial class is a non-starter.
I'm running a single-threaded system in FreeRTOS with limited resources.
I already preallocate buffers for the RapidJSON allocators as so:
char valueBuffer[2048];
char parseBuffer[1024];
rapidjson::MemoryPoolAllocator<FreeRTOSRapidJSONAllocator> valueAllocator (valueBuffer, sizeof(valueBuffer));
rapidjson::MemoryPoolAllocator<FreeRTOSRapidJSONAllocator> parseAllocator (parseBuffer, sizeof(parseBuffer));
The issue I have is that every time one of the allocators is used, its size keeps on increasing (and allocating new memory if necessary) unless they are cleared. The problem with calling Clear() on the allocators is that Malloc is called again when the allocators are next resized, which I want to avoid.
Is there a way to simply reuse the existing preallocated memory, such as by setting the allocators' size back to zero, for example?
I resolved this by creating a custom allocator. Essentially a copy of rapidjson::MemoryPoolAllocator with the addition of the following method:
void Reset()
{
    chunkHead_->size = 0;
    chunkHead_->next = 0;
}
This should be called every time you're done with the last string that was parsed.
I needed to do the same, and thought I'd note what, from reading the allocator code, seems to be an alternative:
While I again create a char buffer[2048], I do not create and keep an allocator alongside it.
Rather, I create and delete an allocator anew when needed, while re-using the memory block. From reading the allocator's code I see no Malloc, so that should all be on the stack, no?
Edit - code example:
class MyClass
{
public:
    MyClass() :
        _json_log_string_buffer(&_json_string_buffer_allocator,
                                _json_buffer_size) {}
    (...)
private:
    static constexpr int _json_buffer_size {4096};
    char _json_logger_buffer[_json_buffer_size];
    rapidjson::CrtAllocator _json_string_buffer_allocator;
    rapidjson::StringBuffer _json_log_string_buffer;
};
void MyClass::log_to_json()
{
rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator>
json_logger_allocator {_json_logger_buffer,
sizeof(_json_logger_buffer)};
rapidjson::Document json_doc(&json_logger_allocator);
auto& allocator = json_doc.GetAllocator();
json_doc.SetObject();
// using it:
json_doc.AddMember("id", id(), allocator);
json_doc.AddMember("KEY", "VALUE",
allocator);
(...)
// Here, monitoring the size of allocator, it never goes
// beyond the 4096 on repeat invocations.
// If I had it as a member, only creating once, it would grow.
std::cout << "JSON allocator size: " << allocator.Size() <<
", capacity: " << allocator.Capacity() << std::endl;
// Bonus point: I'm also using a rapidjson::StringBuffer.
// From a first read it doesn't seem to re-allocate.
// It doesn't use a MemoryPoolAllocator so I cannot do the
// same for it as I did for rapidjson::Document.
_json_log_string_buffer.Clear();
rapidjson::Writer<rapidjson::StringBuffer>
writer(_json_log_string_buffer);
json_doc.Accept(writer);
auto as_string = _json_log_string_buffer.GetString();
// Using string with logger...
}
I am currently optimizing a memory-intensive application to occupy less memory. What I am trying to do in the following code is to allocate the file stream objects ifstream and ofstream dynamically, in order to free them exactly when they are no longer needed. The code functions perfectly for the allocation/de-allocation of the ofstream, but crashes at runtime, with what looks like a segmentation fault, when the ifstream is de-allocated. The following is a snippet of the original code:
#include <fstream>
#include <iostream>
#include <cstdlib>
using namespace std;
// Dummy class to emulate the issue at hand
class dummy {
private:
int randINT;
static bool isSeeded;
public:
dummy() { randINT=rand(); }
int getVal() { return randINT; }
};
bool dummy::isSeeded=false;
int main(int argc, const char* argv[]) {
// Binary file I/O starts here
dummy * obj;
ofstream * outputFile;
ifstream * inputFile;
outputFile=new ofstream("bFile.bin",ios::binary);
if (!(*outputFile).fail()) {
obj=new dummy;
cout << "Value to be stored: " << (*obj).getVal() << "\n";
(*outputFile).write((char *) obj, sizeof(*obj)); // Save object to file
(*outputFile).close();
delete obj;
// don't assign NULL to obj; obj MUST retain the address of the previous object it pointed to
} else {
cout << "Error in opening bFile.bin for writing data.\n";
exit(1);
}
delete outputFile; // This line throws no errors!
inputFile=new ifstream("bFile.bin",ios::binary);
if (!(*inputFile).fail()) {
(*inputFile).read((char *) obj,sizeof(dummy)); // Read the object of type 'dummy' from the binary file and allocate the object at the address pointed by 'obj' i.e. the address of the previously de-allocated object of type 'dummy'
cout << "Stored Value: " << (*obj).getVal() << "\n";
(*inputFile).close();
} else {
cout << "Error in opening bFile.bin for reading data.\n";
exit(1);
}
delete inputFile; // Runtime error is thrown here for no reason!
cout << "\n-----END OF PROGRAM-----\n";
}
The above code saves an object to a binary file when the user invokes the save function. As stated in the code above, the de-allocation of the inputFile pointer throws a runtime error.
Please note that I am using clang (more specifically, clang++) to compile the project.
Thanks in advance.
std::basic_istream::read(char_type* address, streamsize count) reads up to count characters and places them at address. It does not create any objects at address, as you seem to believe. The address must point to a valid chunk of memory at least count*sizeof(char_type) in size. By passing the address of a deleted object, you violate that condition: your code is broken and a segmentation fault is not unexpected.
EDIT
What you're doing is undefined behaviour, UB for short. When you do that, nothing is guaranteed, and any reasoning about what happens in which order is invalid. The program is allowed to do anything, including immediately crashing, running for a while and then crashing, or "making demons fly out of your nose".
In your case, I suspect that std::basic_istream::read() writing to unprotected memory causes some data to be overwritten, for example the address of another object, which later causes the segmentation fault. But this is pure speculation and not really worth pursuing.
In your case no object is created. The binary file just contains some bytes. The read() copies them to the address provided, which was not reserved for that purpose. To avoid the UB, simply add
obj = new dummy;
before the read to create an object.
If you want to re-use the memory from the previous object, you could use placement new (see points 9 and 10 of that link). For example
char* buffer = nullptr;                   // pointer to potential memory buffer
if(writing) {
    if(!buffer)
        buffer = new char[sizeof(dummy)]; // reserve memory buffer
    auto pobj = new(buffer) dummy(args);  // create object in buffer
    write(buffer, sizeof(dummy));         // write bytes of object
    pobj->~dummy();                       // destruct object, but don't free buffer
}
if(reading) {
    if(!buffer)
        buffer = new char[sizeof(dummy)]; // not required if writing earlier
    read(buffer, sizeof(dummy));          // read bytes into buffer
    auto pobj = reinterpret_cast<dummy*>(buffer); // no guarantees here
    use(pobj);                            // do something with the object read
    pobj->~dummy();                       // destruct object
}
delete[] buffer;                          // free reserved memory
Note that if the reading does not generate a valid object, the later usage of that object, i.e. the call to its destructor, may crash.
However, all this micro-optimisation is pointless anyway (it's only worth doing if you can avoid many calls to new and delete). Don't waste your time with that.
First off, this is not a duplicate. My question is how to do it with dynamic memory. The reason this is distinct is because my delete[] is hanging.
So, here's what I have:
class PacketStrRet {
public:
PacketStrRet(char p_data[], int p_len) : len(p_len) {
data = new char[p_len];
memcpy(data, p_data, p_len * sizeof(char));
}
~PacketStrRet() {
delete[] data;
data = nullptr;
}
char* data;
int len;
};
And yes, I'm aware that my code is not using the best practices. I'll clean it up later.
The problem I'm having is in the DTOR. That delete is hanging forever. The data being passed in to the CTOR is not dynamic memory, so I need to make it dynamic so things don't go out of scope. p_len holds the correct amount of data, so there's no problem there.
From what I've read, memcpy seems to be the most likely culprit here. So how do I copy a string that is not null-terminated to dynamic memory, and then still be able to delete it later?
Thanks.
The problem is not the delete itself, but everything that comes before it; even that would be OK if no problems occurred along the way.
class PacketStrRet {
    // Use RAII
    std::unique_ptr<char[]> data; // I own this data and will destroy it.
    // Now the class is move-only; use shared_ptr if you can't live with that.
    int len;
public:
    PacketStrRet(
        // <RED ALERT>
        char p_data[], int p_len // user can lie to us.
        // </RED ALERT>
    ) try : // function try block, see 1)
        data(new char[p_len]), len(p_len) {
        memcpy(data.get(), p_data, p_len * sizeof(char));
    } catch(const std::exception& e) {
        std::cerr << "construction failed: " << e.what() << '\n';
    }
    ~PacketStrRet() {
        // unique_ptr takes care of the delete[]; nothing to do here.
    }
    // access functions
};
Now the possible errors you could make to blow the code up.
You could have copied the object, essentially making two owning raw pointers to the same data. This would blow up at delete; you could use memory sanitizer / valgrind to confirm this happens. Use smart pointers to save you the trouble; the unique pointer should cause a compiler error if you try to copy, unless you memcpy the entire structure, ignoring the copy/assignment constructors.
You could give the wrong len to the constructor, what is the source of the data and len? Valgrind / memory-sanitizer can save you.
The memory corruption could happen in a totally different place. Valgrind / memory-sanitizer can save you.
In case valgrind / memory sanitizer are too much, you can add your own check for double deletes: keep a counter, incremented in the c'tor and decremented in the d'tor, and if it ever goes negative you have found your error.
In this class you're at least missing a copy constructor. Read up on the rule of 3, 5, and 0 (zero) to find out how many special member functions you need.
1) http://en.cppreference.com/w/cpp/language/function-try-block
Try to use std::copy(). It will look like this:
std::copy(p_data, p_data + p_len, data);
I would like an expert review of the following dynamic memory allocation process, and suggestions on whether there are any memory leaks. The following code is not real code in use; I am trying to understand the concept of memory allocation and de-allocation in different ways.
class ObjMapData
{
private:
    int* itsMapData;
    ........
public:
    ObjMapData();
    ~ObjMapData() { if(itsMapData != NULL) delete[] itsMapData; }
    void ClearMemory() { if(itsMapData != NULL) { delete[] itsMapData; itsMapData = NULL; } }
    .......
    void SetMapData(int* ptrData) { itsMapData = ptrData; } // or should I use int*& ptrData??
    int* GetMapData() const { return itsMapData; }
};
Now can I do the following without any memory leaks?
bool Function1(ObjMapData& objMyMap)
{
//populate the ptrData with some data values using many different ways
int* ptrData = new int[1000]; // usually a variable from binary file header
......................
objMyMap.SetMapData(ptrData);
//don't want to delete the memory yet
return true;
}
bool Function2(ObjMapData& objMyMap)
{
int* ptrData = objMyMap.GetMapData();
//do some work such as writing updated data into a binary file
}
bool Function3(ObjMapData& objMyMap)
{
//populate the data
bool bStatus = Function1(objMyMap);
int* ptrData = objMyMap.GetMapData();
//update the map data using ptrData variable
..........
bStatus = Function2(objMyMap); // write data to a binary file
objMyMap.ClearMemory(); // not real code in use, but for understanding the concept
bStatus = Function1(objMyMap); // re-allocate memory
ptrData = objMyMap.GetMapData();
//update the map data using ptrData variable
objMyMap.SetMapData(ptrData); // Do I need to set again or member pointer get updated automatically?
return true;
}
int main()
{
ObjMapData objMyMap;
bool bStatus = Function3(objMyMap);
//default destructor of objMyMap can take care of the allocated memory cleanup
return 0;
}
Thank you for taking the time to confirm the dynamic memory allocation.
Although this may seem to be more about style than your question about memory leaks, I would handle the data privately within the class:
class ObjMapData
{
private:
    int* itsMapData;
    // consider adding a 'datasize' member variable
    ........
public:
    ObjMapData() { itsMapData = NULL; } // initialise member variable(s)!
    ~ObjMapData() { delete[] itsMapData; }
    void ClearMemory() { delete[] itsMapData; itsMapData = NULL; }
    .......
    void CreateMapData(int size) { ClearMemory(); itsMapData = new int[size]; }
    void FillDataFrom( ???? ) { ???? }
    int* GetMapData() const { return itsMapData; }
};
You are then in a better position to improve the class by adding copy constructor and assignment methods which will prevent memory leaks when you use the class.
EDIT
You ask:
My concern here is which of the following is right: void SetMapData(int* ptrData) vs void SetMapData(int*& ptrData)
Both are 'right' in the sense that both allow the external (to the class) pointer to be copied and used within your class - with respect to 'memory leaks' it depends on which part of your code you want to manage the memory you allocated. You could:
Have a class handle allocation/deallocation internally
Allocate memory, use some class to manipulate it, deallocate memory outside class
Have a class allocate memory and later deallocate it outside the class
Allocate memory and have some class manipulate and deallocate it.
Usually I find 1 and 2 make more sense than 3 or 4. i.e. it is easier to follow what is going on, less likely to hide errors and so on.
However, as far as 'leaking memory' is concerned: it does not matter where the pointer to an allocated memory block is, or how it has been copied, assigned or referenced - it is its value as a memory address which is important. So, as long as you new and delete that memory address correctly, you will not leak memory (whether those actions happen inside a class or not).
If, in your application, you need to allocate/deallocate the int array external to your class, it does make some sense for the member functions to take the pointer by reference, as a hint to the reader that the class is not responsible for its deallocation - but some decent comments should make that clear anyway :)
Over the years I've come across umpteen bugs due to the mishandling of the "passing of ownership" of allocated memory (more so with good ol 'C') where some piece of code has been written assuming either that it has to free a block or someone else will do it.
Does that answer your question or have I missed the point?
I have a strong use case for pre-allocating all the memory I need upfront and releasing it upon completion.
I have come up with this really simple buffer pool C++ implementation, which I still have to test, but I am not sure the pointer arithmetic I am using will let me do that - basically the bits where I do next and release. I would prefer some trick around this idea, rather than relying on any sort of memory handler, which just makes the client code more convoluted.
#include <stdio.h>
#include <assert.h>
#include <queue>
#include <vector>
#include "utils_mem.h"
using namespace std;
template <class T>
class tbufferpool {
private:
const int m_initial;
const int m_size;
const int m_total;
T* m_buffer;
vector<T*> m_queue;
public:
// constructor
tbufferpool(int initial, int size) : m_initial(initial), m_size(size), m_total(initial*size*sizeof(T)) {
m_buffer = (T*) malloc(m_total);
T* next_buffer = m_buffer;
for (int i=0; i < initial; ++i, next_buffer += i*size) {
m_queue.push_back(next_buffer);
}
}
// get next buffer element from the pool
T* next() {
// check for pool overflow
if (m_queue.empty()) {
printf("Illegal bufferpool state, our bufferpool has %d buffers only.", m_initial);
exit(EXIT_FAILURE);
}
T* next_buffer = m_queue.back();
m_queue.pop_back();
return next_buffer;
}
// release element, make it available back in the pool
void release(T* buffer) {
assert(m_buffer <= buffer && buffer < (buffer + m_total/sizeof(T)));
m_queue.push_back(buffer);
}
void ensure_size(int size) {
if (size >= m_size) {
printf("Illegal bufferpool state, maximum buffer size is %d.", m_size);
exit(EXIT_FAILURE);
}
}
// destructor
virtual ~tbufferpool() {
free(m_buffer);
}
};
First, when you increment a pointer to T, it will point to the next element of T in memory.
m_queue.push(m_buffer + (i*size*sizeof(T)));
This should be like
m_buffer = (T*) malloc(m_total);
T* next = m_buffer;
for (int i=0; i < initial; ++i) {
m_queue.push(next++);
}
Second,
assert(m_buffer <= buffer && buffer < m_total);
It should be like
assert(m_buffer <= buffer && buffer <= m_buffer + m_total/sizeof(T));
Hope it helps!
I don't understand why you're "wrapping" the STL queue<> container. Just put your "buffers" in the queue, and pull the addresses as you need them. When you're done with a "segment" in the buffer, just pop it off of the queue and it's released automatically. So instead of pointers to buffers, you just have the actual buffer classes.
It just strikes me as re-inventing the wheel. Now, since you need the whole thing allocated at once, I'd use vector, not queue, because a vector<> can be allocated all at once on construction, and the push_back() method doesn't re-allocate unless it needs to, the same with pop_back(). See here for the methods used.
Basically, though, here's my back-of-the-envelope idea:
#include <myType.h> // Defines BufferType
const int NUMBUFFERS = 30;
int main()
{
vector<BufferType> myBuffers(NUMBUFFERS);
BufferType* segment = &(myBuffers[0]); // Gets first segment
myBuffers.pop_back(); // Reduces size by one
return 0;
}
I hope that gives you the general idea. You can just use the buffers in the vector that way, and there's only one allocation and one de-allocation, and you can use stack-like logic if you wish. The deque type may also be worth looking at, or other standard containers, but if it's just "I only want one alloc or de-alloc" I'd use vector, or possibly even a smart pointer to an array.
Some stuff I've found out using object pools:
I'm not sure about allocating all the objects at once. I like to derive all my pooled objects from a 'pooledObject' class that contains a private reference to its own pool, allowing a simple, parameterless 'release' method, so I'm always absolutely sure that an object is released back to its own pool. I'm not sure how to load up every instance with the pool reference using a static array ctor - I've always constructed the objects one-by-one in a loop.
Another useful private member is an 'allocated' boolean, set when an object is depooled and cleared when released. This allows the pool class to detect, and raise an error immediately, if an object is released twice. 'Released twice' errors can be insanely nasty if not detected immediately - the weird behaviour or crash happens minutes later and, often, in another thread in another module. Best to detect double-releases ASAP!
I find it useful and reassuring to dump the level of my pools to a status bar on a 1s timer. If a leak occurs, I can see it happening and, often, get an idea of where the leak is from the activity I'm on when a number drops alarmingly. Who needs Valgrind? :)
On the subject of threads, if you have to make your pools thread-safe, it helps to use a blocking queue. If the pool runs out, threads trying to get objects can wait until they are released and the app just slows down instead of crashing/deadlocking. Also, be careful re. false sharing. You may have to use a 'filler' array data member to ensure that no two objects share a cache line.