C API: Error allocating / deallocating memory for array - c++

I'm in the process of implementing an API for C. The code base itself is purely written in C++ and I only plan to offer said interface for any consumer using C. The interface is defined in a .h file, whereas the implementation itself is written in C++. I've read multiple times that using C++ to implement a C interface is not the best idea, but it works great in my case.
Anyway the header definition looks similar to this:
#include <stdint.h>
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct Person {
    const char *name;
    uint32_t age;
    uint32_t post_code;
} Person;

typedef struct PersonArray {
    Person *person;
    size_t size;
} PersonArray;

PersonArray *create(size_t size);
void destroy(PersonArray *array);
int fillArray(PersonArray *array);

#ifdef __cplusplus
}
#endif
I'd like the consumer to retrieve a handle for PersonArray, which contains an array of Person structure, allocated with the size passed to the create() function.
Since the implementation is in C++ I've tried to allocate the memory the following way:
static inline Person convert(const otherNamespace::Person &o_person) {
    Person p{};
    p.name = o_person.name;
    p.age = o_person.age;
    p.post_code = o_person.post_code;
    return p;
}
PersonArray *create(size_t size) {
    if (size == 0) {
        return nullptr;
    }
    PersonArray *array = new PersonArray();
    array->size = size;
    array->person = new Person[size];
    return array;
}
void destroy(PersonArray *array) {
    delete array;
}
int fillArray(PersonArray *array) {
    if (array == nullptr) {
        return 1;
    }
    auto data = // retrieve std::vector<otherNamespace::Person> via RPC
    for (auto i{0U}; i < array->size; i++) {
        array->person[i] = convert(data.at(i));
    }
    return 0;
}
Unfortunately, this approach does not seem to work correctly: when I run it under a memory checker like Valgrind, there are still blocks on the heap that are not deallocated. I suppose the allocation from new Person[size] never gets freed.
Any idea how to fix this memory leak? Or is there another design that would be better suited for this specific use case? If possible, I would really like to keep the implementation in C++.

You must delete person before deleting array, and since it was allocated with new[] you must release it with delete[].
void destroy(PersonArray *array) {
    if (array) {
        if (array->person) {
            delete [] array->person;
        }
        delete array;
    }
}
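Since delete and delete[] are no-ops on a null pointer, the inner null check above is optional; the only hard requirement is that every allocation is released with the matching form. As a sanity check, the two halves of the API should mirror each other. A minimal sketch of the paired implementation, assuming nothing beyond the structs declared in the header:
extern "C" PersonArray *create(size_t size) {
    if (size == 0) {
        return nullptr;
    }
    PersonArray *array = new PersonArray();
    array->size = size;
    array->person = new Person[size];   // allocated with new[] ...
    return array;
}
extern "C" void destroy(PersonArray *array) {
    if (array == nullptr) {
        return;
    }
    delete[] array->person;   // ... so it must be released with delete[]
    delete array;             // then release the handle itself
}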

Related

How to use shared_ptr to manage an object placed with placement new?

A fairly common thing I need to do is allocate an object together with some extra memory it needs, in one strictly contiguous region of memory:
class Thing {
    static_assert(alignof(Thing) == alignof(uint32), "party's over");
public:
    ~Thing() {
        //// if only, but this would result in the equivalent of `free(danglingPtr)` being called
        //// as the second stage of shared_ptr calling `delete this->get()`, which can't be skipped I believe?
        // delete [] (char*)this;
    }
    static Thing * create(uint32 count) {
        uint32 size = sizeof(Thing) + sizeof(uint32) * count; // no alignment concerns
        char * data = new char[size];
        return new (data) Thing(count);
    }
    static void destroy(Thing *& p) {
        delete [] (char*)p;
        p = NULL;
    }
    uint32 & operator[](uint32 index) {
        assert(index < m_count);
        return ((uint32*)((char*)this + sizeof(Thing)))[index];
    }
private:
    Thing(uint32 count) : m_count(count) {};
    uint32 m_count;
};
int main() {
    {
        auto p = shared_ptr<Thing>(Thing::create(1));
        // how can I tell p how to kill the Thing?
    }
    return 0;
}
In Thing::create() this is done with placement new into one contiguous block of memory.
I'd also like to have a shared pointer manage it in this case, using auto p = shared_ptr<Thing>(Thing::create(1)). But if it calls the equivalent of delete p.get() when the ref count empties, that would be undefined behaviour, since it mismatches the type and, more importantly, pairs an array new[] with a scalar delete. I need it to delete in a special way.
Is there a way to easily set that up without defining an outside function? Perhaps by having the shared pointer call Thing::destroy() when the ref count empties? I know that a shared pointer can accept a "deleter", but I'm unsure how to use it, or whether it's even the proper way to address this.
std::shared_ptr accepts a deleter as a second constructor argument, so you can use that to define how the managed object will be destroyed.
Here's a simplified example:
#include <iostream>
#include <memory>
#include <new>

class Thing
{
public:
    ~Thing()
    {
        std::cout << "~Thing\n";
    }
    static std::shared_ptr<Thing> create() {
        char * data = new char[sizeof(Thing)];
        Thing* thing = new (data) Thing{};
        return std::shared_ptr<Thing>{thing, &Thing::destroy};
    }
    static void destroy(Thing* p) {
        p->~Thing();          // run the destructor explicitly ...
        delete [] (char*)p;   // ... then release the raw buffer
    }
};

int main()
{
    auto p = Thing::create();
}
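The same approach extends to the original count-based layout. The following is only a sketch under a couple of assumptions not present in the question (uint32 is spelled out as std::uint32_t, and a lambda deleter replaces &Thing::destroy); it shows the deleter running the destructor and then freeing the raw buffer:
#include <cassert>
#include <cstdint>
#include <memory>
#include <new>

using uint32 = std::uint32_t;

class Thing {
    static_assert(alignof(Thing) == alignof(uint32), "party's over");
public:
    static std::shared_ptr<Thing> create(uint32 count) {
        char *data = new char[sizeof(Thing) + sizeof(uint32) * count];
        Thing *thing = new (data) Thing(count);
        // The deleter runs the destructor, then frees the raw char buffer.
        return std::shared_ptr<Thing>(thing, [](Thing *p) {
            p->~Thing();
            delete[] reinterpret_cast<char *>(p);
        });
    }
    uint32 &operator[](uint32 index) {
        assert(index < m_count);
        return reinterpret_cast<uint32 *>(
            reinterpret_cast<char *>(this) + sizeof(Thing))[index];
    }
private:
    explicit Thing(uint32 count) : m_count(count) {}
    uint32 m_count;
};

int main() {
    auto p = Thing::create(4);   // ref-counted; the lambda deleter cleans up
    (*p)[0] = 42;
}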

Memory leak after pointing to NEW object

struct StructA {
    StructA(parameters) { ... } // StructA constructor
};
struct StructB {
    StructA *pObjectA;
    int counter = 0;
    void function() {
        if (counter < 1) { pObjectA = new StructA[100]; }
        pObjectA[counter] = *new StructA(parameters); // Memory leak here
        counter++;
    }
};
struct StructC {
    StructB objectB;
    ~StructC() { // StructC destructor
        delete[] objectB.pObjectA;
        objectB.pObjectA = NULL;
    }
};
int main() {
    StructC objectC;
    for (int i = 0; i < 900; i++) {
        objectC.objectB.function();
    }
    return 0;
} // StructC destructor runs here
I need to create an object array and then, with each call to objectB.function(), to pass specific parameters to the constructor of StructA. The code above works perfectly, except for the memory leak, which I am unable to get rid of.
My guess is that the StructC destructor deletes only the object array, not each object created with new StructA(parameters). I tried to play around with pointers and delete[] a little bit, but all I got was memory access violation errors. This is the only way I can think of that works. All help appreciated.
A class destructor should release resources that were acquired in its constructor. It seems like you wanted to defer deleting an array allocated in one class to the destructor of a second class. That's never a good idea. In the best case you don't have to do anything in the destructor because you use automatic storage (which means what the name suggests: memory is managed automatically).
Your code could look like this:
#include <vector>

struct StructA {
    StructA(parameters) { ... } // StructA constructor
};
struct StructB {
    std::vector<StructA> pObjectA;
    int counter = 0;
    void function() {
        if (counter < 1) { pObjectA.reserve(100); }
        pObjectA.emplace_back(parameters);
        counter++;
    }
};
struct StructC {
    StructB objectB;
};
int main() {
    StructC objectC;
    for (int i = 0; i < 900; i++) {
        objectC.objectB.function();
    }
    return 0;
}
Note that I tried to keep the structure as is; maybe there are other things to change. For example, you don't need counter, as you can use std::vector::size to query the number of elements in the vector.
PS: As you already noticed, this is a memory leak:
pObjectA[counter] = *new StructA(parameters); //Memory leak here
It is not really clear why you wrote that code in the first place. The idiomatic way to create an object of type StructA is StructA a; (no new!).
As you correctly assumed, memory leaks are caused by not pairing every new with a corresponding delete. However, in idiomatic C++ there is rarely a reason to use new and delete directly.
Use std::vector, std::shared_ptr and std::unique_ptr to let RAII keep track of dynamically created objects, references to them and when to clean up. Not only is it more robust, it's also a lot shorter and easier to read.
With your code's general overall structure:
#include <memory>
#include <vector>
struct StructA {
};
struct StructB {
std::vector<std::shared_ptr<StructA>> objectAs;
void function() {
objectAs.push_back(
std::make_shared<StructA>( /*parameters*/ )
);
}
};
struct StructC {
StructB objectB;
};
int main() {
StructC objectC;
for (int i = 0; i < 900; i++) {
objectC.objectB.function();
}
return 0;
}
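If the StructA objects do not need shared ownership, the same pattern works with std::unique_ptr. A sketch, assuming (as above) that StructA takes no constructor parameters:
#include <memory>
#include <vector>

struct StructA {
};

struct StructB {
    // Each element is owned by exactly one unique_ptr; destroying the
    // vector destroys every StructA automatically.
    std::vector<std::unique_ptr<StructA>> objectAs;
    void function() {
        objectAs.push_back(std::make_unique<StructA>( /*parameters*/ ));
    }
};

int main() {
    StructB objectB;
    for (int i = 0; i < 900; i++) {
        objectB.function();
    }
    return 0;
}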

How to create pool of contiguous memory of non-POD type?

I have a scenario where I have multiple operations represented in the following way:
struct Op {
    virtual void Run() = 0;
};
struct FooOp : public Op {
    const std::vector<char> v;
    const std::string s;
    FooOp(const std::vector<char> &v, const std::string &s) : v(v), s(s) {}
    void Run() { std::cout << "FooOp::Run" << '\n'; }
};
// (...)
My application works in several passes. In each pass, I want to create many of these operations and at the end of the pass I can discard them all at the same time. So I would like to preallocate some chunk of memory for these operations and allocate new operations from this memory. I came up with the following code:
class FooPool {
public:
    FooPool(int size) {
        foo_pool = new char[size * sizeof(FooOp)]; // what about FooOp alignment?
        cur = 0;
    }
    ~FooPool() { delete [] foo_pool; }
    FooOp *New(const std::vector<char> &v, const std::string &s) {
        return new (reinterpret_cast<FooOp*>(foo_pool) + cur++) FooOp(v, s);
    }
    void Release() {
        for (int i = 0; i < cur; ++i) {
            (reinterpret_cast<FooOp*>(foo_pool) + i)->~FooOp();
        }
        cur = 0;
    }
private:
    char *foo_pool;
    int cur;
};
This seems to work, but I'm pretty sure I need to take care somehow of the alignment of FooOp. Moreover, I'm not even sure this approach is viable since the operations are not PODs.
Is my approach flawed? (most likely)
What's a better way of doing this?
Is there a way to reclaim the existing memory using unique_ptrs?
Thanks!
I think this code will have similar performance characteristics without requiring you to mess around with placement new and aligned storage:
class FooPool {
public:
    FooPool(int size) {
        pool.reserve(size);
    }
    FooOp* New(const std::vector<char>& v, const std::string& s) {
        pool.emplace_back(v, s); // in C++17: return pool.emplace_back(v, s);
        return &pool.back();
    }
    void Release() {
        pool.clear();
    }
private:
    std::vector<FooOp> pool;
};
The key idea here being that your FooPool is essentially doing what std::vector does.
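If placement new is genuinely required, one way to address the alignment worry from the question is to obtain the raw block from ::operator new, which returns memory aligned for any type with fundamental alignment (over-aligned types would additionally need std::align_val_t). The following is only a sketch of that idea, reusing the names from the question:
#include <iostream>
#include <new>
#include <string>
#include <vector>

struct Op {
    virtual void Run() = 0;
    virtual ~Op() = default;
};

struct FooOp : public Op {
    const std::vector<char> v;
    const std::string s;
    FooOp(const std::vector<char> &v, const std::string &s) : v(v), s(s) {}
    void Run() override { std::cout << "FooOp::Run\n"; }
};

class FooPool {
public:
    explicit FooPool(int size) : capacity(size), cur(0) {
        // Raw, suitably aligned storage for `capacity` FooOp objects.
        foo_pool = static_cast<FooOp *>(::operator new(sizeof(FooOp) * capacity));
    }
    ~FooPool() {
        Release();
        ::operator delete(foo_pool);
    }
    FooOp *New(const std::vector<char> &v, const std::string &s) {
        return new (foo_pool + cur++) FooOp(v, s); // placement new, no bounds check
    }
    void Release() {
        for (int i = 0; i < cur; ++i) {
            foo_pool[i].~FooOp(); // non-POD members are destroyed explicitly
        }
        cur = 0;
    }
private:
    FooOp *foo_pool;
    int capacity;
    int cur;
};

int main() {
    FooPool pool(8);
    FooOp *op = pool.New({'a', 'b'}, "hello");
    op->Run();
    pool.Release(); // all operations from this pass are destroyed together
}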

C++ dynamically increase array size for object-array?

I'm making a game with cocos2d-x. I want to store the different objects in the game in a class (I don't know if this is a good idea, but this way I can give every object a lot of attributes). Then I make an array out of the objects, and for that I need my own data structure where I can push and pop my objects. I tried to write this data structure, but I think I'm doing something wrong in my push function (I want to dynamically increase the array size), especially the delete []. Doesn't that destroy the object pointers stored in it?
ObjectArray.h:
#pragma once
#include "C:\Cocos\Projects\FirstGame\proj.win32\anObject.h"

class ObjectArrayList
{
public:
    ObjectArrayList(int c);
    ObjectArrayList();
    virtual ~ObjectArrayList(void);
    void push(anObject *obj);
    void pop(int id);
    int findIndex(int id);
    int getSize();
    int getCapacity();
private:
    int capacity;
    int size;
    anObject **objectList;
};
ObjectArray.cpp:
#include "ObjectArrayList.h"
#include <iostream>
using namespace std;

ObjectArrayList::ObjectArrayList(int c)
{
    size = 0;
    capacity = c;
    objectList = new anObject*[capacity];
}

ObjectArrayList::ObjectArrayList() {
}

ObjectArrayList::~ObjectArrayList(void) {
}

void ObjectArrayList::push(anObject *obj) {
    if (size < capacity) {
    } else {
        int newCap = 2 * capacity;
        anObject **tmpObjectList = new anObject*[newCap];
        for (int i = 0; i < capacity; i++) {
            tmpObjectList[i] = objectList[i];
        }
        delete [] objectList;
        objectList = tmpObjectList;
        capacity = newCap;
    }
    objectList[size] = obj;
    size++;
}

void ObjectArrayList::pop(int id) { // not finished yet
    if (size != 0) {
        size--;
    }
}

int ObjectArrayList::findIndex(int id) {
    return id;
}

int ObjectArrayList::getSize() {
    return size;
}

int ObjectArrayList::getCapacity() {
    return capacity;
}
anObject.h:
#pragma once
#include "cocos2d.h"

class anObject
{
public:
    anObject(int hp_init, int x, int y);
    anObject();
    virtual ~anObject(void);
    void decreaseHp();
    int getHp();
    void setMyPosition(int x, int y);
    cocos2d::Sprite *getMySprite();
private:
    cocos2d::Sprite *mySprite;
    int hp;
    int midX;
    int midY;
    int isX;
    int isY;
};
As the comments pointed out... you should save some time and energy and try to use the Standard Template Library (STL).
If you insist on fixing this code, I think you should try referencing objectList after the delete to see if it's still there... maybe this assignment...
objectList = tmpObjectList;
...is not tolerated.
Instead... try to build a copy constructor "public Object(Object copiedObject){}", make a new Object*[] of twice the size, populate it, then get rid of the old one and... without deleting objectList, assign your new Object*[] to it...
The rest seems fine to me... hope this helps.
delete [] objectList;
This only frees the array which objectList points to. It does not free any of the objects in that array.
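Note that the delete [] in push() is fine as written: the anObject pointers have already been copied into the new array, so only the old array of pointers is released, not the objects they point to. If the list is also meant to own the objects pushed into it (an assumption; in the posted code the caller appears to own them), the destructor would have to release both levels, roughly like this sketch:
ObjectArrayList::~ObjectArrayList(void) {
    for (int i = 0; i < size; i++) {
        delete objectList[i];   // frees each anObject
    }
    delete [] objectList;       // frees the array of pointers
}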

Valgrind complains about a memory leak but I'm calling new and delete

I have used pointers to create an array, and I wrote a matching delete in the destructor:
class cBuffer {
private:
    struct entry {
        uint64_t key;
        uint64_t pc;
    };
    entry *en;
public:
    cBuffer(int a, int b, int mode)
    {
        limit = a;
        dist = b;
        md = mode;
        en = new entry[ limit ];
        for (int i = 0; i < limit; i++) {
            en[i].key = 0;
            en[i].pc = 0;
        }
    };
    ~cBuffer() { delete [] en; }
    ...
};
In another class I use cBuffer like this:
class foo {
    cBuffer *buf;
    foo()
    {
        buf = new cBuffer(gSize, oDist, Mode);
    }
};
However, valgrind complains about the new operator
==20381== 16,906,240 bytes in 32 blocks are possibly lost in loss record 11,217 of 11,221
==20381== at 0x4A0674C: operator new[](unsigned long) (vg_replace_malloc.c:305)
==20381== by 0x166D92F8: cBuffer::cBuffer(int, int, int)
cBuffer *buf;
foo()
{
    buf = new cBuffer(gSize, oDist, Mode);
}
You need to call
delete buf;
since you explicitly called new.
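In other words (a sketch only; the rest of the class is abbreviated in the question), foo needs a destructor that releases the buffer allocated in its constructor:
class foo {
    cBuffer *buf;
public:
    foo()
    {
        buf = new cBuffer(gSize, oDist, Mode);
    }
    ~foo()
    {
        delete buf;   // pairs with the new in the constructor
    }
    // Rule of three: copying foo would now double-delete buf,
    // so copying should be disabled or handled explicitly.
    foo(const foo &) = delete;
    foo &operator=(const foo &) = delete;
};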
Your class foo will cause your leak, since you never delete the dynamically allocated cBuffer. The solution is simple: there's no need for dynamic allocation at all here.
class foo {
    cBuffer buf; // An object, not a pointer
    foo() : buf(gSize, oDist, Mode) {}
};
More generally, when you do need dynamic allocation, be careful that you always delete what you new. The most reliable way to do this is to use RAII types such as containers and smart pointers to manage all dynamic resources for you.
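Where dynamic allocation really is needed, a std::unique_ptr member gives the same automatic cleanup. A sketch, assuming the same constructor arguments as in the question:
#include <memory>

class foo {
    std::unique_ptr<cBuffer> buf;
public:
    foo() : buf(std::make_unique<cBuffer>(gSize, oDist, Mode)) {}
    // No destructor needed: unique_ptr deletes the cBuffer automatically.
};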