I currently have a C++ class as follows:
template<class T>
class MyQueue {
T** m_pBuffer;
unsigned int m_uSize;
unsigned int m_uPendingCount;
unsigned int m_uAvailableIdx;
unsigned int m_uPendingndex;
public:
MyQueue(): m_pBuffer(NULL), m_uSize(0), m_uPendingCount(0), m_uAvailableIdx(0),
m_uPendingndex(0)
{
}
~MyQueue()
{
delete[] m_pBuffer;
}
bool Initialize(T *pItems, unsigned int uSize)
{
m_uSize = uSize;
m_uPendingCount = 0;
m_uAvailableIdx = 0;
m_uPendingndex = 0;
m_pBuffer = new T *[m_uSize];
for (unsigned int i = 0; i < m_uSize; i++)
{
m_pBuffer[i] = &pItems[i];
}
return true;
}
};
So, I have this pointer to arrays m_pBuffer object and I was wondering if it is possible to replace this way of doing things with the c++ smart pointer perhaps? I know I can do things like:
std::unique_ptr<T[]> buffer(new T[size]);
Is using a vector of smart pointers the way to go? Is this recommended and safe?
[EDIT]
Based on the answers and the comments, I have tried to make a thread-safe buffer array. Here it is. Please comment.
#ifndef __BUFFER_ARRAY_H__
#define __BUFFER_ARRAY_H__
#include <memory>
#include <vector>
#include <mutex>
#include <thread>
#include "macros.h"
template<class T>
class BufferArray
{
public:
BufferArray()
:num_pending_items(0), pending_index(0), available_index(0)
{}
// This method is not thread-safe.
// Add an item to our buffer list
void add(T * buffer)
{
buffer_array.push_back(std::unique_ptr<T>(buffer));
}
// Returns a naked pointer to an available buffer. Should not be
// deleted by the caller.
T * get_available()
{
std::lock_guard<std::mutex> lock(buffer_array_mutex);
if (num_pending_items == buffer_array.size()) {
return NULL;
}
T * buffer = buffer_array[available_index].get();
// Update the indexes.
available_index = (available_index + 1) % buffer_array.size();
num_pending_items += 1;
return buffer;
}
T * get_pending()
{
std::lock_guard<std::mutex> lock(buffer_array_mutex);
if (num_pending_items == 0) {
return NULL;
}
T * buffer = buffer_array[pending_index].get();
pending_index = (pending_index + 1) % buffer_array.size();
num_pending_items -= 1;
return buffer;
}
private:
std::vector<std::unique_ptr<T> > buffer_array;
std::mutex buffer_array_mutex;
unsigned int num_pending_items;
unsigned int pending_index;
unsigned int available_index;
// No copy semantics
BufferArray(const BufferArray &) = delete;
void operator=(const BufferArray &) = delete;
};
#endif
A vector of smart pointers is a good idea. It is safe enough inside your class, since automatic memory deallocation is provided.
It is not thread-safe, though, and it is not safe with regard to handling external memory handed to you through plain pointers.
Note that your current implementation does not delete the pItems memory in the destructor, so if you mimic this class after refactoring, you should not use a vector of smart pointers, because they will delete the memory their pointers reference.
On the other hand, you cannot guarantee that no one outside will deallocate the memory for the pItems supplied to Initialize. If you want to use a vector of smart pointers, you should formulate a contract for this function that clearly states that your class claims ownership of this memory, and then rework the outside code that calls your class to fit the new contract.
If you don't want to change the memory handling, a vector of plain pointers is the way to go. That said, this piece of code is so simple that a vector brings little real benefit.
Note that the overhead here is the creation of a smart-pointer object for each buffer plus the vector itself. Reallocation of the vector can use more memory and happens outside your direct control.
The code has two issues:
1) Violation of the rule of zero/three/five:
To fix that you do not need a smart pointer here. To represent a dynamic array with variable size, use a std::vector<T*>. That allows you to drop m_pBuffer, m_uSize, and the destructor, too.
2) Taking the addresses of elements of a possibly local array
In Initialize you take the addresses of the elements of the array pItems passed as argument to the function. Hence the queue does not take ownership of the elements. It seems the queue is a utility class, which should not be copyable at all:
template<class T>
class MyQueue
{
std::vector<T*> m_buffer;
unsigned int m_uPendingCount;
unsigned int m_uAvailableIdx;
unsigned int m_uPendingndex;
public:
MyQueue(T *pItems, unsigned int uSize)
: m_buffer(uSize, nullptr), m_uPendingCount(0), m_uAvailableIdx(0), m_uPendingndex(0)
{
for (unsigned int i = 0; i < uSize; i++)
{
m_buffer[i] = &pItems[i];
}
}
private:
MyQueue(const MyQueue&); // no copy (C++11 use: = delete)
MyQueue& operator = (const MyQueue&); // no copy (C++11 use: = delete)
};
Note:
The red herring is the local array.
You may consider a smart pointer for that, but that is another question.
Related
I have two programs. The first allocates a Shared-Memory file and the second reads from it. I am using placement new to place objects into this memory, guaranteeing that the objects do NOT use new or allocate any memory outside of the Shared-Memory file.
My Array structure:
template<typename T, size_t Size>
struct SHMArray {
SHMArray() : ptr(elements) {}
SHMArray(const SHMArray& other) { std::copy(other.begin(), other.end(), begin()); }
SHMArray(SHMArray&& other)
{
std::swap(other.ptr, ptr);
std::fill_n(ptr.get(), Size, T());
}
~SHMArray()
{
std::fill_n(ptr.get(), Size, T());
}
constexpr bool empty() const noexcept
{
return Size == 0;
}
constexpr std::size_t size() const noexcept
{
return Size;
}
T& operator[](std::size_t pos)
{
return *(ptr.get() + pos);
}
constexpr const T& operator[](std::size_t pos) const
{
return *(ptr.get() + pos);
}
T* data() noexcept
{
return ptr.get();
}
constexpr const T* data() const noexcept
{
return ptr.get();
}
private:
offset_ptr<T> ptr;
T elements[];
};
Program 1:
int main()
{
//Allocate a shared memory file of 1mb..
auto memory_map = SharedMemoryFile("./memory.map", 1024 * 1024, std::ios::in | std::ios::out);
memory_map.lock();
//Pointer to the shared memory.
void* data = memory_map.data();
//Place the object in the memory..
SHMArray<int, 3>* array = ::new(data) SHMArray<int, 3>();
(*array)[0] = 500;
(*array)[1] = 300;
(*array)[2] = 200;
memory_map.unlock(); //signals other program it's okay to read..
}
Program 2:
int main()
{
//Open the file..
auto memory_map = SharedMemoryFile("./memory.map", 1024 * 1024, std::ios::in | std::ios::out);
memory_map.lock();
//Pointer to the shared memory.
void* data = memory_map.data();
//Place the object in the memory..
//I already understand that I could just cast the `data` to an SHMArray..
SHMArray<int, 3>* array = ::new(data) SHMArray<int, 3>();
for (std::size_t i = 0; i < array->size(); ++i)
{
std::cout<<(*array)[i]<<"\n";
}
memory_map.unlock(); //signals other program it's okay to read..
}
Program One placed the SHMArray in memory with placement new. Program Two does the same thing on top of program one's already placed object (overwriting it). Is this undefined behaviour? I don't think it is but I want to confirm.
Neither program calls the destructor array->~SHMArray(); I also don't think this leaks: as long as I close the memory-mapped file, it should all be fine, but I want to make sure. If I ran the programs again on the same file, it shouldn't be a problem.
I am essentially making the assumption that placement new is working as if I placed a C struct in memory in this particular scenario via: struct SHMArray* array = (struct SHMArray*)data;.. Is this correct?
I am essentially making the assumption that placement new is working
as if I placed a C struct in memory in this particular scenario via:
struct SHMArray* array = (struct SHMArray*)data;.. Is this correct?
No, this is not correct. Placement new also invokes the object's appropriate constructor. "struct SHMArray* array = (struct SHMArray*)data;" does not invoke any constructor; it is just a pointer conversion cast, which constructs nothing. That is the key difference.
In your sample code, you do actually want to invoke the templated object's constructor. Although the shown example has other issues, as already mentioned in the comments, this does appear to be what needs to be done in this particular situation.
But as far as the equivalence of placement new versus a pointer cast goes, no, they are not the same. One invokes a constructor, one does not. new always invokes the constructor, whether it is placement new or not. This is a very important detail that should not be overlooked.
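To make the distinction concrete, here is a minimal sketch (a hypothetical Widget type, not the SHMArray above) contrasting a plain cast with placement new:
#include <iostream>
#include <new>

struct Widget {
    int value;
    Widget() : value(42) { std::cout << "constructor ran\n"; }
};

int main() {
    alignas(Widget) unsigned char storage[sizeof(Widget)];

    // A plain cast only reinterprets the bytes: no constructor runs, and
    // reading w1->value here would be reading indeterminate data.
    Widget* w1 = reinterpret_cast<Widget*>(storage);
    (void)w1;

    // Placement new constructs a Widget in the same storage, so
    // w2->value is a well-defined 42 afterwards.
    Widget* w2 = ::new (storage) Widget();
    std::cout << w2->value << "\n";

    // Placement new requires a matching explicit destructor call.
    w2->~Widget();
    return 0;
}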
I am sorry if this has already been covered before. I know how to do this in C and Java but not in C++. Without using a pre-existing class (which rules out Vector), how would you increase the size of an array given the code below?
The array expansion and assignment to the array take place in push(), noted with the all-caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
Main.cpp
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
The simple answer is that you should always use std::vector in this case. However, it might be useful to explain just why that is, so let's consider how you would implement this without std::vector, so you can see why you would want to use it:
// Naive approach
void Line::push(const Point& p)
{
Point* new_points = new Point[index + 1];
std::copy(std::make_move_iterator(points), std::make_move_iterator(points+index), new_points);
new_points[index] = p;
delete[] points;
points = new_points;
index += 1;
}
This approach has many problems. We are forced to reallocate and move the entire array every time an entry is inserted. A vector, however, pre-allocates a reserve and uses space from that reserve for each insert, only reallocating once the reserve is exhausted. This means a vector will far outperform this code, as less time is spent allocating and moving data unnecessarily. Next is the issue of exceptions: this implementation has no exception guarantees, whereas std::vector provides you with a strong exception guarantee: https://en.wikipedia.org/wiki/Exception_safety. Implementing a strong exception guarantee for your class is non-trivial, but you would have got it automatically had you implemented this in terms of std::vector, as such:
void Line::push(const Point& p)
{
points.push_back(p);
}
There are also other, more subtle problems with your approach: your class does not define a copy constructor or assignment operator, so the compiler generates shallow-copying versions, which means that if someone copies your class the allocated members will get deleted twice. To resolve this you need to follow the rule-of-three paradigm pre C++11, and the rule of five from C++11 onwards: https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming). Had you used a vector, none of this would be needed, as you would benefit from the rule of zero and be able to rely on the compiler-generated defaults: https://blog.rmf.io/cxx11/rule-of-zero
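For illustration, here is a rough sketch of the copy machinery a raw-pointer Line would need under the rule of three (push() and the rest are omitted; this assumes Point is default-constructible and copyable, and with std::vector<Point> none of it is necessary):
#include <algorithm>
#include <utility>
#include "Point.h"   // the Point type from the question

class Line {
public:
    Line() : index(0), points(nullptr) {}
    ~Line() { delete[] points; }

    // Copy constructor: deep-copies the array instead of sharing the pointer.
    Line(const Line& other)
        : index(other.index), points(new Point[other.index])
    {
        std::copy(other.points, other.points + index, points);
    }

    // Copy assignment via copy-and-swap.
    Line& operator=(Line other)
    {
        std::swap(index, other.index);
        std::swap(points, other.points);
        return *this;
    }

private:
    unsigned int index;   // number of elements in "points"
    Point* points;
};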
Essentially the only way is to use a dynamic array (one created using new[]) and to create an entirely new dynamic array and copy (or move) the objects from the old array to the new one.
Something like this:
class Line {
public:
Line(): index(0), points(nullptr) {} // initialize
virtual ~Line() { delete[] points; } // Clean up!
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = new Point[index + 1];
// copy old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = points[i];
new_points[index] = p; // then add our new Point to the end
++index; // increase the recorded number of elements
delete[] points; // out with the old
points = new_points; // in with the new
}
private:
unsigned int index; // size of "points" array
Point* points;
};
But this approach is very inefficient. To do this well is quite complex. The main problems with doing things this way are:
Exception safety - avoiding a memory leak if an exception is thrown.
Allocation - avoiding having to reallocate (and re-copy) every single time.
Move semantics - taking advantage of some objects ability to be moved much more efficiently than they are copied.
A (slightly) better version:
class Line {
public:
Line(): index(0) {} // initialize
virtual ~Line() { } // No need to clean up because of `std::unique_ptr`
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = std::unique_ptr<Point[]>(new Point[index + 1]);
// first add our new Point to the end (in case of an exception)
new_points[index] = p;
// then copy/move old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = std::move(points[i]); // try to move else copy
++index; // increase the recorded number of elements
std::swap(points, new_points); // swap the pointers
}
private:
unsigned int index; // size of "points" array
std::unique_ptr<Point[]> points; // Exception safer
};
That takes care of exception safety and (to some degree - but not entirely) move semantics. However it must be pointed out that exception safety is only going to be complete if the elements stored in the array (type Point) are themselves exception safe when being copied or moved.
But this does not deal with efficient allocation. A std::vector will over allocate so it doesn't have to do it with every new element. This code also misses a few other tricks that a std::vector would employ (like allocating uninitialized memory and constructing/destructing the elements manually as and when they are needed/discarded).
You basically have no choice but to allocate a new array, copy the existing values into it, and delete[] the old one. That's why vector performs reallocation by a multiplicative factor (say, each reallocation doubles the capacity). This is one of the reasons you want to use the standard library structures instead of reimplementing them.
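To make the multiplicative-growth point concrete, here is a minimal sketch (an illustrative IntBuffer, not the question's class) that keeps a separate size and capacity and only reallocates when the reserve runs out:
#include <algorithm>
#include <cstddef>

// Minimal sketch of geometric growth: reallocate only when the reserve is
// exhausted, doubling the capacity, so that push() is amortized O(1).
// Copy/move members are omitted; a real container also needs the rule of three/five.
class IntBuffer {
public:
    void push(int value)
    {
        if (size_ == capacity_) {
            std::size_t new_capacity = (capacity_ == 0) ? 4 : capacity_ * 2;
            int* new_data = new int[new_capacity];
            std::copy(data_, data_ + size_, new_data);   // move the old elements over
            delete[] data_;
            data_ = new_data;
            capacity_ = new_capacity;
        }
        data_[size_++] = value;   // most pushes land here without any allocation
    }

    std::size_t size() const { return size_; }

    ~IntBuffer() { delete[] data_; }

private:
    int* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};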
Keep It Simple
In my opinion, in this case, it's better to use a Linked-List of CPoint in CLine:
struct CPoint
{
int m_x = 0, m_y = 0;
CPoint * m_next = nullptr;
};
class CLine
{
public:
CLine() {};
virtual ~CLine()
{
// Free Linked-List:
while (m_points != nullptr) {
m_current = m_points->m_next;
delete m_points;
m_points = m_current;
}
};
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = new CPoint);
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
CPoint * m_points = nullptr, * m_current = nullptr;
};
Or, even better, with smart pointers:
#include <memory>
struct CPoint
{
int m_x = 0, m_y = 0;
std::shared_ptr<CPoint> m_next;
};
class CLine
{
public:
CLine() {};
virtual ~CLine() {}
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = std::make_shared<CPoint>());
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
std::shared_ptr<CPoint> m_points, m_current;
};
So I have a struct as shown below; I would like to create an array of that structure and allocate memory for it (using malloc).
typedef struct {
float *Dxx;
float *Dxy;
float *Dyy;
} Hessian;
My first instinct was to allocate memory for the whole structure array, but then the internal arrays (Dxx, Dxy, Dyy) would not be allocated. If I allocate the internal arrays one by one instead, the structure array itself would still be undefined. Now I think I should allocate memory for the internal arrays and then for the structure array, but that seems just wrong to me. How should I solve this issue?
I require a way of using malloc in this situation instead of new/delete because I have to do this in CUDA, and memory allocation in CUDA is done using cudaMalloc, which is somewhat similar to malloc.
In C++ you should not use malloc at all; use new and delete instead, if they are actually necessary. From the information you've provided they are not, because in C++ you would rather use std::vector (or std::array) than C-style arrays. The typedef is not needed either.
So I'd suggest rewriting your struct to use vectors and then generate a vector of this struct, i.e.:
struct Hessian {
std::vector<float> Dxx;
std::vector<float> Dxy;
std::vector<float> Dyy;
};
std::vector<Hessian> hessianArray(2); // vector containing two instances of your struct
hessianArray[0].Dxx.push_back(1.0); // example accessing the members
Using vectors you do not have to worry about allocation most of the time, since the class handles that for you. Every Hessian contained in hessianArray is automatically allocated for you, stored on the heap and destroyed when hessianArray goes out of scope.
It seems like a problem which could be solved using an STL container. Given that you won't know the sizes of the arrays, you may use std::vector.
It's less error-prone, easier to maintain and work with, and standard containers free their resources themselves (RAII). @muXXmit2X has already shown how to use them.
But if you have to (or want to) use dynamic allocation, you first have to allocate space for an array of X structures:
Hessian *h = new Hessian[X];
Then allocate space for all the arrays in all the structures:
for (int i = 0; i < X; i++)
{
h[i].Dxx = new float[Y];
// Same for Dxy & Dyy
}
Now you can access and modify them. Also, don't forget to free the resources:
for (int i = 0; i < X; i++)
{
delete[] h[i].Dxx;
// Same for Dxy & Dyy
}
delete[] h;
You should never use malloc in C++.
Why?
new ensures that your type has its constructor called, while malloc does not call any constructor. The new keyword is also more type-safe, whereas malloc is not type-safe at all.
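A small sketch of that difference, using std::string as an example of a type with a non-trivial constructor:
#include <cstdlib>
#include <string>

int main() {
    // new allocates storage AND runs std::string's constructor,
    // so s1 points at a fully constructed, usable object.
    std::string* s1 = new std::string("hello");
    delete s1;   // runs the destructor, then releases the storage

    // malloc only hands back raw bytes; no constructor has run, so
    // treating *s2 as a string here would be undefined behaviour.
    std::string* s2 = static_cast<std::string*>(std::malloc(sizeof(std::string)));
    std::free(s2);   // and no destructor ever runs either
    return 0;
}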
As other answers point out, the use of malloc (or even new) should be avoided in C++. Anyway, as you requested:
I require a logic for using malloc in this situation instead of new / delete because I have to do this in cuda...
In this case you have to allocate memory for the Hessian instances first, then iterate through them and allocate memory for each Dxx, Dxy and Dyy. I would create a function for this, as follows:
Hessian* create(size_t length) {
Hessian* obj = (Hessian*)malloc(length * sizeof(Hessian));
for(size_t i = 0; i < length; ++i) {
obj[i].Dxx = (float*)malloc(sizeof(float));
obj[i].Dxy = (float*)malloc(sizeof(float));
obj[i].Dyy = (float*)malloc(sizeof(float));
}
return obj;
}
To deallocate the memory you allocated with the create function above, you have to iterate through the Hessian instances and deallocate each Dxx, Dxy and Dyy first, then deallocate the block which stores the Hessian instances:
void destroy(Hessian* obj, size_t length) {
for(size_t i = 0; i < length; ++i) {
free(obj[i].Dxx);
free(obj[i].Dxy);
free(obj[i].Dyy);
}
free(obj);
}
Note: using the presented method will pass the responsibility of preventing memory leaks to you.
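A possible usage of the create/destroy pair above (as written, each Dxx, Dxy and Dyy points at a single float; the sizes here are illustrative):
int main() {
    const size_t length = 4;
    Hessian* h = create(length);      // an array of 4 Hessian instances
    h[0].Dxx[0] = 1.0f;               // each member points at its own malloc'ed storage
    h[3].Dyy[0] = 2.0f;
    destroy(h, length);               // free the members first, then the array
    return 0;
}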
If you wish to use std::vector instead of manual allocation and deallocation (which is highly recommended), you can write a custom allocator for it that uses cudaMalloc and cudaFree, as follows:
#include <cstddef>
#include <limits>
#include <new>
#include <cuda_runtime.h>
template<typename T> struct cuda_allocator {
using value_type = T;
cuda_allocator() = default;
template<typename U> cuda_allocator(const cuda_allocator<U>&) {
}
T* allocate(std::size_t count) {
if(count <= max_size()) {
void* raw_ptr = nullptr;
if(cudaMalloc(&raw_ptr, count * sizeof(T)) == cudaSuccess)
return static_cast<T*>(raw_ptr);
}
throw std::bad_alloc();
}
void deallocate(T* raw_ptr, std::size_t) {
cudaFree(raw_ptr);
}
static std::size_t max_size() {
return std::numeric_limits<std::size_t>::max() / sizeof(T);
}
};
template<typename T, typename U>
inline bool operator==(const cuda_allocator<T>&, const cuda_allocator<U>&) {
return true;
}
template<typename T, typename U>
inline bool operator!=(const cuda_allocator<T>& a, const cuda_allocator<U>& b) {
return !(a == b);
}
The usage of a custom allocator is very simple: you just have to specify it as the second template parameter of std::vector:
struct Hessian {
std::vector<float, cuda_allocator<float>> Dxx;
std::vector<float, cuda_allocator<float>> Dxy;
std::vector<float, cuda_allocator<float>> Dyy;
};
/* ... */
std::vector<Hessian, cuda_allocator<Hessian>> hessian;
Following my understanding of C++ convention, I have:
class BlockRepresentation : public FPRepresentation
{
private:
class Block
{
public:
int id;
int fpDimensions;
int* position; // pointers in question
int* blockDimensions; // pointers in question
~Block();
};
std::vector<Block> all_blocks;
public:
BlockRepresentation( int count, int dimensions, int volumn[] );
void AddBlock( int id, int position[], int dimensions[] );
std::string ToGPL();
};
where new blocks are created in AddBlock:
void BlockRepresentation::AddBlock( int id, int position[],
int dimensions[] )
{
Block newBlock;
newBlock.id = id;
newBlock.fpDimensions = fpDimensions;
newBlock.position = new int[fpDimensions]; // pointers in question
newBlock.blockDimensions = new int[fpDimensions]; // pointers in question
for (int i = 0; i < fpDimensions; ++i)
{
newBlock.position[i] = position[i];
newBlock.blockDimensions[i] = dimensions[i];
}
all_blocks.push_back( newBlock );
}
so I have the following destructor:
BlockRepresentation::Block::~Block()
{
delete[] position;
delete[] blockDimensions;
}
but then I get:
rep_tests(11039,0x7fff71390000) malloc: *** error for object 0x7fe4fad00240: pointer being freed was not allocated
Why should I not delete[] the 2 pointers here?
As was pointed out in the comments, you violated the rule of three, and the violation is very obvious:
{
Block newBlock;
// snip
all_blocks.push_back( newBlock );
}
When this function returns, the newBlock object goes out of scope, and its destructor will delete all the newed arrays.
But you push_back()ed this object. This constructs a copy of the object into the vector. Because your Block does not define a copy constructor, the default copy-constructor simply makes a copy of all the pointers to the newed arrays.
Even if you somehow manage to avoid dereferencing the no-longer-valid pointers, or you survive that experience, you're not out of the woods yet. That's because when the vector gets destroyed, and its Blocks get destroyed, their destructors will once again attempt to delete the same newed arrays that were already deleted once before.
Instant crash.
There is nothing wrong with your Block destructor. It is doing its job, which is releasing the memory that is pointed to by your two int * member variables. The problem is that the destructor is being called on the same pointer value multiple times, which results in a double-free error.
The entity that causes this is the std::vector<Block>, since a std::vector will make copies of your Block object, and your Block object is not safely copyable.
Since the member variables of Block that are pointers are position and blockDimensions, the most painless way to alleviate this issue is to use std::vector<int> instead of int *.
However, if you really wanted to use int *, you would need to implement a user-defined copy constructor. In addition, a user-defined assignment operator would complement the copy constructor. This is what is called the Rule of Three.
#include <algorithm>
//...
class Block
{
public:
int id;
int fpDimensions;
int *position;
int *blockDimensions;
Block() : id(0), fpDimensions(0),
position(nullptr), blockDimensions(nullptr) {}
~Block()
{
delete [] position;
delete [] blockDimensions;
}
Block(const Block& rhs) : id(rhs.id), fpDimensions(rhs.fpDimensions),
position(new int[rhs.fpDimensions]),
blockDimensions(new int[rhs.fpDimensions])
{
std::copy(rhs.position, rhs.position + fpDimensions, position);
std::copy(rhs.blockDimensions, rhs.blockDimensions + fpDimensions,
blockDimensions);
}
Block& operator=(const Block& rhs)
{
Block temp(rhs);
std::swap(temp.position, position);
std::swap(temp.blockDimensions, blockDimensions);
std::swap(temp.id, id);
std::swap(temp.fpDimensions, fpDimensions);
return *this;
}
};
See all of the hoops we had to jump through to get the Block class to behave correctly when used within a std::vector, as opposed to simply using std::vector<int>?
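For comparison, here is a rough sketch of the std::vector<int> variant alluded to above; it needs no user-defined destructor, copy constructor or assignment operator at all (the rule of zero):
#include <vector>

class Block
{
public:
    int id = 0;
    int fpDimensions = 0;
    std::vector<int> position;          // deep-copied automatically
    std::vector<int> blockDimensions;   // freed automatically
};
AddBlock could then fill the two vectors with, for example, newBlock.position.assign(position, position + fpDimensions) instead of new[].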
I have a class that contains several arrays whose sizes can be determined by parameters to its constructor. My problem is that instances of this class have sizes that can't be determined at compile time, and I don't know how to tell a new method at run time how big I need my object to be. Each object will be of a fixed size, but different instances may be different sizes.
There are several ways around the problem:
- use a factory
- use a placement constructor
- allocate arrays in the constructor and store pointers to them in my object.
I am adapting some legacy code from an old application written in C. In the original code, the program figures out how much memory will be needed for the entire object, calls malloc() for that amount, and proceeds to initialize the various fields.
For the C++ version, I'd like to be able to make a (fairly) normal constructor for my object. It will be a descendant of a parent class, and some of the code will be depending on polymorphism to call the right method. Other classes descended from the same parent have sizes known at compile time, and thus present no problem.
I'd like to avoid some of the special considerations necessary when using placement new, and I'd like to be able to delete the objects in a normal way.
I'd like to avoid carrying pointers within the body of my object, partially to avoid ownership problems associated with copying the object, and partially because I would like to re-use as much of the existing C code as possible. If ownership were the only issue, I could probably just use shared pointers and not worry.
Here's a very trimmed-down version of the C code that creates the objects:
typedef struct
{
int controls;
int coords;
} myobject;
myobject* create_obj(int controls, int coords)
{
size_t size = sizeof(myobject) + (controls + coords*2) * sizeof(double);
char* mem = malloc(size);
myobject* p = (myobject *) mem;
p->controls = controls;
p->coords = coords;
return p;
}
The arrays within the object maintain a fixed size over the life of the object. In the code above, the memory following the myobject structure will be used to hold the array elements.
I feel like I may be missing something obvious. Is there some way that I don't know about to write a (fairly) normal constructor in C++ but be able to tell it how much memory the object will require at run time, without resorting to a "placement new" scenario?
How about a pragmatic approach: keep the structure as is (if compatibility with C is important) and wrap it into a C++ class?
typedef struct
{
int controls;
int coords;
} myobject;
myobject* create_obj(int controls, int coords);
void dispose_obj(myobject* obj);
class MyObject
{
public:
MyObject(int controls, int coords) {_data = create_obj(controls, coords);}
~MyObject() {dispose_obj(_data);}
const myobject* data() const
{
return _data;
}
myobject* data()
{
return _data;
}
int controls() const {return _data->controls;}
int coords() const {return _data->coords;}
double* array() { return (double*)(_data+1); }
private:
myobject* _data;
};
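A possible usage sketch of that wrapper (assuming create_obj and dispose_obj behave like the C code above; note the class as written should probably also be made non-copyable to avoid double disposal):
int main()
{
    MyObject obj(4, 8);          // create_obj allocates the header plus the doubles
    obj.array()[0] = 3.14;       // the doubles live right after the struct
    int n = obj.controls();      // accessors forward to the wrapped struct
    (void)n;
    return 0;                    // dispose_obj runs in ~MyObject
}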
While I understand the desire to limit the changes to the existing C code, it would be better to do it correctly now rather than fight with bugs in the future. I suggest the following structure and changes to your code to deal with it (which I suspect would mostly be pulling out code that calculates offsets).
struct spots
{
double x;
double y;
};
struct myobject
{
std::vector<double> m_controls;
std::vector<spots> m_coordinates;
myobject( int controls, int coordinates ) :
m_controls( controls ),
m_coordinates( coordinates )
{ }
};
To maintain the semantics of the original code, where the struct and array are in a single contiguous block of memory, you can simply replace malloc(size) with new char[size] instead:
myobject* create_obj(int controls, int coords)
{
size_t size = sizeof(myobject) + (controls + coords*2) * sizeof(double);
char* mem = new char[size];
myobject* p = new(mem) myobject;
p->controls = controls;
p->coords = coords;
return p;
}
You will have to use a type-cast when freeing the memory with delete[], though:
myobject *p = create_obj(...);
...
p->~myobject();
delete[] (char*) p;
In this case, I would suggest wrapping that logic in another function:
void free_obj(myobject *p)
{
p->~myobject();
delete[] (char*) p;
}
myobject *p = create_obj(...);
...
free_obj(p);
That being said, if you are allowed to, it would be better to re-write the code to follow C++ semantics instead, eg:
struct myobject
{
int controls;
int coords;
std::vector<double> values;
myobject(int acontrols, int acoords) :
controls(acontrols),
coords(acoords),
values(acontrols + acoords*2)
{
}
};
And then you can do this:
std::unique_ptr<myobject> p = std::make_unique<myobject>(...); // C++14
...
std::unique_ptr<myobject> p(new myobject(...)); // C++11
...
std::auto_ptr<myobject> p(new myobject(...)); // pre C++11
...
New Answer (given comment from OP):
Allocate a std::vector<byte> of the correct size. The array allocated to back the vector will be contiguous memory. This vector size can be calculated and the vector will manage your memory correctly. You will still need to be very careful about how you manage your access to that byte array obviously, but you can use iterators and the like at least (if you want).
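A minimal sketch of that idea, using std::vector<unsigned char> as the backing storage and the same header-plus-doubles layout as the question (the Header name is illustrative, and the informal aliasing caveats of the original C code still apply):
#include <cstddef>
#include <new>
#include <vector>

struct Header {      // stands in for the fixed part of myobject
    int controls;
    int coords;
};

int main() {
    const int controls = 3, coords = 5;
    const std::size_t size =
        sizeof(Header) + (controls + coords * 2) * sizeof(double);

    std::vector<unsigned char> buffer(size);   // the vector owns the bytes
    Header* h = ::new (buffer.data()) Header{controls, coords};

    // The doubles live immediately after the header, as in the C code.
    double* values = reinterpret_cast<double*>(buffer.data() + sizeof(Header));
    values[0] = 1.0;

    h->~Header();   // trivial here, but keeps the placement-new pattern explicit
    return 0;       // the buffer releases itself when it goes out of scope
}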
By the way here is a little template thing I use to move along byte blobs with a little more grace (note this has aliasing issues as pointed out by Sergey in the comments below, I'm leaving it here because it seems to be a good example of what not to do... :-) ) :
template<typename T>
T readFromBuf(byte*& ptr) {
T * const p = reinterpret_cast<T*>(ptr);
ptr += sizeof(T);
return *p;
}
Old Answer:
As the comments suggest, you can easily use a std::vector to do what you want. Also I would like to make another suggestion.
size_t size = sizeof(myobject) + (controls + coords*2) * sizeof(double);
The above line of code suggests to me that you have some "hidden structure" in your code. Your myobject struct has two int values from which you are calculating the size of what you actually need. What you actually need is this:
struct ControlCoord {
double control;
std::pair<double, double> coordinate;
};
std::vector<ControlCoord> controlCoords;
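Continuing that snippet, usage might look like this (the values are illustrative):
controlCoords.push_back({ 0.5, { 1.0, 2.0 } });   // one control value with its (x, y) coordinate
double x = controlCoords[0].coordinate.first;     // no manual size or offset arithmetic needed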
When the comments finally shed some light on the actual requirements, the solution would be the following:
- allocate a buffer large enough to hold your object and the array
- use placement new at the beginning of the buffer
Here is how:
class myobject {
myobject(int controls, int coords) : controls(controls), coords(coords) {}
~myobject() {};
public:
const int controls;
const int coords;
static myobject* create(int controls, int coords) {
std::unique_ptr<char[]> buffer(new char[sizeof(myobject) + (controls + coords*2) * sizeof(double)]);
myobject* obj = new (buffer.get()) myobject(controls, coords);
buffer.release();
return obj;
}
void dispose() {
this->~myobject();
char* p = (char*)this;
delete[] p;
}
};
myobject *p = myobject::create(...);
...
p->dispose();
(or suitably wrapped inside a deleter for a smart pointer)