I am used to Java and PHP and now I need to write some C++ code. I'm having difficulty creating a BYTE array with dynamic size. How can I achieve this?
int byteSize = shm.getMemorySize();
BYTE byte[44]; // replace 44 by byteSize
You should use std::vector unless you have a very specific reason to use a raw array. In C++, in contexts where you would have used arrays in other languages, the default choice should be std::vector.
Never use a naked owning pointer; it is an open door for bugs and memory leaks. Instead, here are some alternatives:
int len = something;
std::vector<char> buffer(len,0);
or a C++11 smart pointer:
std::unique_ptr<char[]> buffer{ new char[len] };
or, in C++14, std::make_unique:
auto buffer = std::make_unique<char[]>(len);
Use a vector, unless you absolutely need to handle the memory yourself. Also, this is more of a preference thing, but I prefer to use uint8_t instead of BYTE. It's a bit more portable, since it comes from the standard <cstdint> header rather than a platform-specific typedef.
#include <vector>
#include <cstdint>
...
std::vector<uint8_t> foo;
or
#include <cstdint>
...
uint8_t* data;
data = new uint8_t[10];
...
delete[] data;
You can use the standard container class std::vector to get a dynamically resizable array:
#include <vector> // For std::vector
....
int byteSize = shm.getMemorySize();
// Create a vector of 'byteSize' bytes.
std::vector<BYTE> bytes(byteSize);
You can access vector elements using the usual [] syntax (e.g. bytes[0], bytes[1], ...bytes[i]).
vector's destructor will automatically release the memory allocated by the vector, so you don't have to pay attention to freeing memory.
You can use vector's push_back() method to add items at the end of the vector (with automatic resizing).
You can use vector's clear() method to empty it.
And vector's size() method returns the current number of items in the vector.
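For example, a minimal sketch of those member functions (assuming BYTE is the usual unsigned 8-bit typedef, e.g. from <windows.h>):
#include <vector>
std::vector<BYTE> bytes;    // starts empty
bytes.push_back(0x42);      // append one byte, resizing automatically
bytes.push_back(0xFF);
auto count = bytes.size();  // count == 2
bytes.clear();              // empty again, size() == 0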
If you want to learn more about std::vector, you can watch this great introductory video by Stephan T. Lavavej.
One way to do it is to use dynamic memory allocation with new:
BYTE* data = new BYTE[byteSize];
data[0] = 0;
delete [] data;
However this approach in the hands of a beginner can lead to memory corruptions, crashes, memory leaks, and all sorts of behaviors that are hard to find and fix.
A better way is to use std::vector, which is largely immune to the problems the first approach has:
std::vector<BYTE> data;
data.resize(byteSize);
data[0] = 0;
You can create the dynamic array as follows:
int byteSize = shm.getMemorySize();
BYTE* byte = new BYTE[byteSize];
//Use the byte
delete [] byte;
But if you want to prevent the 5000 copies (or more) of some potentially heavy objects, purely for performance and ignoring code readability, you can do this:
#include <iostream>
#include <vector>
int main(int argc, char* argv[])
{
std::vector<int> v;
int* a = (int*) malloc(10*sizeof(int));
//int a[10];
for( int i = 0; i<10;++i )
{
*(a + i) = i;
}
delete v._Myfirst; // ? not sure
v._Myfirst = a;
for( int i = 0; i<10;++i )
{
std::cout << v[i] << std::endl;
}
system("PAUSE");
return 0;
}
Simply replace the vector's underlying _Myfirst "array". But be very careful: this pointer will be deleted by the vector.
Does my delete/free of my first fail in some cases?
Does this depend on the allocator?
Is there a swap for that? There is the brand new swap for vectors, but how about from an array?
I had to interface with a code I cannot modify (political reasons), which provides me with arrays of objects and have to make them vector of objects.
The pointer seems the right solution, but it's a lot of memory jumps and I have to perform the loop anyway. I was just wondering if there was some way to avoid the copy and tell the vector, "now this is your underlying array".
I agree that the above is highly ugly and this is why I actually reserve and push in my real code. I just wanted to know if there was a way to avoid the loop.
Please don't answer with a loop and a push back, as obviously that's what my "real" code is already doing. If there is no function to perform this swap can we implement a safe one?
Question 1: does my delete/free of my first fail in some cases ?
Yes. You do not know how the memory for _Myfirst is allocated (apart from the fact that it uses the allocator). The standard does not specify that the default allocator should use malloc to allocate the memory, so you do not know if delete will work.
Also, you are mixing allocation schemes.
You are allocating with malloc(), but expecting the std::vector<> to have allocated with new (because you are calling delete).
Also, even if the allocator did use new, it would be using the array version of new, and thus you would need to use the array version of delete to reclaim the memory.
Question 2: does this depends on allocator ?
Yes.
Question 3: is there a swap for that ? there is the brand new swap for vectors, but from an array ?
No.
To prevent expensive copies, use emplace_back():
std::vector<int> v;
v.reserve(10); // reserve space for 10 objects (min).
for( int i = 0; i<10;++i )
{
v.emplace_back(i); // construct in place in the array (no copy).
}
std::vector<SomeUglyHeavyObject*> vect(5000, NULL);
for (int i=0; i<5000; i++)
{
vect[i] = &arr[i];
}
There, avoids copying and doesn't muck with std::vector internals.
If you don't like the way the vector behaves, don't use a vector. Your code depends on implementation details that you MUST not rely upon.
Yes it's ugly as you said. Make it pretty by using the correct container.
Which container you use depends on what operations you need on the container. It may be something as simple as a C-style array will suit your needs. It may be a vector of pointers, or shared pointers if you want the vector to manage the lifetime of the HeavyObjects.
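For instance, a rough sketch of the shared-pointer variant (HeavyObject is just a stand-in name here), where the vector owns the objects and they are destroyed when the last reference goes away:
#include <memory>
#include <vector>
struct HeavyObject { /* ... */ };  // stand-in for your real class
std::vector<std::shared_ptr<HeavyObject>> makeObjects()
{
    std::vector<std::shared_ptr<HeavyObject>> objects;
    objects.reserve(5000);
    for (int i = 0; i < 5000; ++i)
        objects.push_back(std::make_shared<HeavyObject>()); // no HeavyObject copies, no manual delete
    return objects; // the vector and its shared_ptrs manage the lifetimes
}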
In response to the need to increase the size of collection (see comments below):
std::vector guarantees that the underlying storage is contiguous, so if you increase the size, it will allocate a new larger buffer and copy the old one into the new (using the copy constructor for the objects if one exists.) Then swap the new buffer into place -- freeing the old one using its own allocator.
This will trash your memory unless the buffer you brute forced into the vector was allocated in a manner consistent with the vector's allocator.
So if your original array is big enough, all you need is a "maxUsed" size_t to tell you how big the collection is and where to put new entries. If it's not big enough, then your technique will either fail horribly, or incur the copy costs you are trying to avoid.
int * a;
a = new int[10];
cout << sizeof(a)/sizeof(int);
If I used a normal array, the answer would be 10;
alas, the number printed was 1, because sizeof(int) is 4 and sizeof(int*) is 4 too. How do I overcome this? In my case, keeping the size in memory is a complicated option. How do I get the size in code?
My best guess would be to iterate through the array and search for its end, and the end is 0, right? Any suggestions?
Edit:
What I fear about vectors is that they reallocate while pushing back. Well, you got the point: I can just preallocate the memory. However, I can't change the structure; too much of the code depends on it. Thanks for the answers, I see there's no way around it, so I'll just look for a way to store the size in memory.
What I asked was not what kind of structure to use.
Simple.
Use std::vector<int> or std::array<int, N> (where N is a compile-time constant).
If you know the size of your array at compile time, and it doesn't need to grow at runtime, then use std::array. Otherwise, use std::vector.
These are called sequence-container classes which define a member function called size() which returns the number of elements in the container. You can use that whenever you need to know the size. :-)
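A minimal sketch of both (names are illustrative):
#include <array>
#include <vector>
#include <iostream>
int main()
{
    std::array<int, 10> fixed{};   // size fixed at compile time
    std::vector<int> dynamic(10);  // size chosen at runtime, can grow later
    std::cout << fixed.size() << " " << dynamic.size() << "\n"; // prints "10 10"
}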
Read the documentation:
std::array with example
std::vector with example
When you use std::vector, you should consider using reserve() if you've some vague idea of the number of elements the container is going to hold. That will give you performance benefit.
If you worry about performance of std::vector vs raw-arrays, then read the accepted answer here:
Is std::vector so much slower than plain arrays?
It explains why the code in the question is slow, which has nothing to do with std::vector itself, rather its incorrect usage.
If you cannot use either of them, and are forced to use int*, then I would suggest these two alternatives. Choose whatever suits your need.
struct array
{
int *elements; //elements
size_t size; //number of elements
};
That is self-explanatory.
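A rough usage sketch (howManyElements() stands for whatever gives you the element count, as in the next example):
array a;
a.size = howManyElements();
a.elements = new int[a.size];
// ... use a.elements[0] .. a.elements[a.size - 1]; passing 'a' around keeps the size with the pointer
delete [] a.elements;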
The second one is this: allocate memory for one more element and store the size in the first element as:
int N = howManyElements();
int *array = new int[N+1]; //allocate memory for size storage also!
array[0] = N; //store N in the first element!
//your code : iterate i=1 to i<=N
//must delete it once done
delete []array;
sizeof(a) is going to be the size of the pointer, not the size of the allocated array.
There is no way to get the size of the array after you've allocated it. The sizeof operator has to be able to be evaluated at compile time.
How would the compiler know how big the array was in this function?
void foo(int size)
{
int * a;
a = new int[size];
cout << sizeof(a)/sizeof(int);
delete[] a;
}
It couldn't. So it's not possible for the sizeof operator to return the size of an allocated array. And, in fact, there is no reliable way to get the size of an array you've allocated with new. Let me repeat that: there is no reliable way to get the size of an array you've allocated with new. You have to store the size someplace.
Luckily, this problem has already been solved for you, and it's guaranteed to be there in any implementation of C++. If you want a nice array that stores the size along with the array, use ::std::vector. Particularly if you're using new to allocate your array.
#include <iostream>
#include <vector>
void foo(int size)
{
    ::std::vector<int> a(size);
    ::std::cout << a.size();
}
There you go. Notice how you no longer have to remember to delete it. As a further note, using ::std::vector in this way has no performance penalty over using new in the way you were using it.
If you are unable to use std::vector and std::array as you have stated, then your only remaining option is to keep track of the size of the array yourself.
I still suspect that your reasons for avoiding std::vector are misguided. Even for performance monitoring software, intelligent uses of vector are reasonable. If you are concerned about resizing you can preallocate the vector to be reasonably large.
I'm reading in values from a file which I will store in memory as I read them in. I've read on here that the correct way to handle memory location in C++ is to always use new/delete, but if I do:
DataType* foo = new DataType[sizeof(DataType) * numDataTypes];
Then that's going to call the default constructor for each instance created, and I don't want that. I was going to do this:
DataType* foo;
char* tempBuffer=new char[sizeof(DataType) * numDataTypes];
foo=(DataType*) tempBuffer;
But I figured that would be something poo-poo'd for some kind of type-unsafeness. So what should I do?
And in researching for this question now I've seen that some people are saying arrays are bad and vectors are good. I was trying to use arrays more because I thought I was being a bad boy by filling my programs with (what I thought were) slower vectors. What should I be using???
Use vectors!!! Since you know the number of elements, make sure that you reserve the memory first (by calling myVector.reserve(numObjects)) before you insert the elements.
By doing this, you will not call the default constructors of your class.
So use
std::vector<DataType> myVector; // does not reserve anything
...
myVector.reserve(numObjects); // tells vector to reserve memory
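Then add the elements; only the elements you actually insert get constructed, and no reallocation happens because the memory was reserved up front (readNextDataType() is just a placeholder for however you read one record):
for (int i = 0; i < numObjects; ++i)
{
    DataType d = readNextDataType(); // placeholder: read one record from the file
    myVector.push_back(d);           // copy-constructs into the reserved storage
}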
You can use ::operator new to allocate an arbitrarily sized hunk of memory.
DataType* foo = static_cast<DataType*>(::operator new(sizeof(DataType) * numDataTypes));
The main advantage of using ::operator new over malloc here is that it throws on failure and will integrate with any new_handlers etc. You'll need to clean up the memory with ::operator delete
::operator delete(foo);
Regular new Something will of course invoke the constructor, that's the point of new after all.
It is one thing to avoid extra constructions (e.g. default constructor) or to defer them for performance reasons, it is another to skip any constructor altogether. I get the impression you have code like
DataType dt;
read(fd, &dt, sizeof(dt));
If you're doing that, you're already throwing type safety out the window anyway.
What are you trying to accomplish by not invoking the constructor?
You can allocate memory with new char[], call the constructor you want for each element in the array, and then everything will be type-safe. Read What are uses of the C++ construct "placement new"?
That's how std::vector works underneath, since it allocates a little extra memory for efficiency, but doesn't construct any objects in the extra memory until they're actually needed.
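A minimal sketch of that technique (DataType and numDataTypes are from the question); this is placement new, and you must destroy what you constructed before freeing the raw storage:
#include <new>  // placement new
char* raw = new char[sizeof(DataType) * numDataTypes]; // raw bytes, no constructors run
DataType* items = reinterpret_cast<DataType*>(raw);
new (&items[0]) DataType( /* constructor arguments of your choice */ ); // construct element 0 in place
// ... use items[0] ...
items[0].~DataType(); // destroy what you constructed
delete [] raw;        // then release the raw storage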
You should be using a vector. It will allow you to construct its contents one-by-one (via push_back or the like), which sounds like what you're wanting to do.
I think you shouldn't care about efficiency using vector if you will not insert new elements anywhere but at the end of the vector (since elements of vector are stored in a contiguous memory block).
vector<DataType> dataTypeVec(numDataTypes);
And as you've been told, your first line there contains a bug (no need to multiply by sizeof).
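In other words, if you do stay with new for now, the corrected allocation would just be:
DataType* foo = new DataType[numDataTypes]; // new[] takes an element count, not a byte count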
Building on what others have said, if you ran this program while piping in a text file of integers that would fill the data field of the below class, like:
./allocate < ints.txt
Then you can do:
#include <vector>
#include <iostream>
using namespace std;
class MyDataType {
public:
int dataField;
};
int main() {
const int TO_RESERVE = 10;
vector<MyDataType> everything;
everything.reserve( TO_RESERVE );
MyDataType temp;
while( cin >> temp.dataField ) {
everything.push_back( temp );
}
for( unsigned i = 0; i < everything.size(); i++ ) {
cout << everything[i].dataField;
if( i < everything.size() - 1 ) {
cout << ", ";
}
}
}
Which, for me with a list of 4 integers, gives:
5, 6, 2, 6
I'm trying to think of a way to demonstrate a kind of memory error using arrays in C++ that is hard to detect. The purpose is to motivate the usage of STL vector<> in combination with iterators.
Edit: The accepted answer is the answer I used to explain the advantages / disadvantages. I also used: this
Improperly pairing new/delete and new[]/delete[].
For example, using:
int *array = new int[5];
delete array;
instead of:
int *array = new int[5];
delete [] array;
And while the C++ standard doesn't allow for it, some compilers support stack-allocating an array with a runtime size:
int stack_allocated_buffer[size_at_runtime];
This could be the unintended side effect of scoping rules (e.g constant shadowed by a member variable)... and it works until someone passes 'size_at_runtime' too large and blows out the stack. Then lame errors ensue.
A memory leak? IMO, vector in combination with iterators doesn't particularly protect you from errors, such as going out of bounds or generally using an invalidated iterator (unless you have VC++ with iterator debugging); rather it is convenient because it implements a dynamically resizable array for you and takes care of memory management (NB! helps make your code more exception-safe).
void foo(const char* zzz, int size)
{
int* arr = new int[size];
std::string s = zzz;
//...
delete[] arr;
}
Above can leak if an exception occurs (e.g when creating the string). Not with a vector.
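For comparison, a sketch of the same function with a vector (size is passed in here just to keep the snippet self-contained):
#include <string>
#include <vector>
void foo(const char* zzz, int size)
{
    std::vector<int> arr(size);
    std::string s = zzz;
    //...
    // no delete needed: even if the string construction throws, arr cleans up after itself
}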
Vector also makes it easier to reason about code because of its value semantics.
int* arr = new int[size];
int* second_ref = arr;
//...
delete [] arr;
arr = 0; //play it safe :)
//...
second_ref[x] = y;    // oops: second_ref still points at the deleted block (use after free)
//...
delete [] second_ref; // oops again: double delete
But perhaps a vector doesn't automatically satisfy 100% of dynamic array use cases. (For example, there's also boost::shared_array and the upcoming std::unique_ptr<T[]>.)
I think the utility of std::vector really shows when you need dynamic arrays.
Make one example using std::vector, then one example using an array you grow with realloc. I think it speaks for itself.
One obvious:
for (i = 0; i < NUMBER_OF_ELEMENTS; ++i)
destination_array[i] = whatever(i);
versus
for (i = 0; i < NUMBER_OF_ELEMENTS; ++i)
destination_vector.push_back(whatever(i));
pointing out that you know the second works, but whether the first works depends on how destination_array was defined.
void Fn()
{
int *p = new int[256];
if ( p != NULL )
{
if ( !InitIntArray( p, 256 ) )
{
// Log error
return; // leak: p is never deleted on this path
}
delete[] p;
}
}
You wouldn't BELIEVE how often I see that. A classic example of where any form of RAII is useful ...
I would think the basic simplicity of using vectors instead of dynamic arrays is already convincing.
You don't have to remember to delete your memory...which is not so simple since attempts to delete it might be bypassed by exceptions and whatnot.
If you want to do dynamic arrays yourself, the safest way to do it in C++ is to wrap them in a class and use RAII. But vectors do that for you. That's kind of the point, actually.
Resizing is done for you.
If you need to support arbitrary types, you don't have to do any extra work.
A lot of algorithms are provided which are designed to handle containers, both included and by other users.
You can still use functions that need arrays by passing the vector's underlying array if necessary (see the sketch at the end of this answer); the memory is guaranteed to be contiguous by the standard, except for vector<bool> (as of the 2003 standard, see 23.2.4/1 of the spec).
Using an array yourself is probably bad practice in general, since you will be re-inventing the wheel...and your implementation will almost definitely be much worse than the existing one...and harder for other people to use, since they know about vector but not your weird thing.
With a dynamic array, you need to keep track of the size yourself, grow it when you want to insert new elements, delete it when it is no longer needed...this is extra work.
Oh, and a warning: vector<bool> is a dirty rotten hack and a classic example of premature optimization.
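For the point above about passing the vector's storage to functions that expect a raw array, a small sketch (legacy_api is a hypothetical existing function):
#include <cstddef>
#include <vector>
void legacy_api(const int* data, std::size_t count); // hypothetical function that wants a raw array
void caller()
{
    std::vector<int> v(100);
    legacy_api(v.data(), v.size()); // C++11; in C++03 use &v[0] (only when v is non-empty)
}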
Why don't you motivate it based on the algorithms that the STL provides?
With a raw array, operator[] (if I may call it that) is susceptible to index-out-of-bounds problems with no diagnostic at all. With a vector you can use at(), which at least throws a runtime exception on an out-of-range index (vector's operator[] is still unchecked).
Sorry, I did not read the question carefully enough. Index-out-of-bounds is a problem, but not strictly a memory error.
I am doing a project converting some Pascal (Delphi) code to C++ and would like to write a function that is roughly equivalent to the Pascal "SetLength" method. This takes a reference to a dynamic array, as well as a length and allocates the memory and returns the reference.
In C++ I was thinking of something along the lines of
void* setlength(void* pp, int array_size, int pointer_size, int target_size, ....) {
void * p;
// Code to allocate memory here via malloc/new
// something like: p = reinterpret_cast<typeid(pp)>(p);
// p=(target_size) malloc(array_size);
return p;
}
My question is this: is there a way to pass the pointer type to a function like this and to successfully allocate the memory (perhaps via a typeid parameter?)? Can I use
reinterpret_cast
somehow? The ultimate aim would be something like the following in terms of usage:
float*** p;
p=setlength(100,sizeof(float***),sizeof(float**),.....);
class B;
B** cp;
cp=setlength(100,sizeof(B**),sizeof(B*),.....);
Any help would be most welcome. I am aware my suggested code is all wrong, but wanted to convey the general idea. Thanks.
Use std::vector instead of raw arrays.
Then you can simply call its resize() member method.
And make the function a template to handle arbitrary types:
If you want to use your function, it could look something like this:
template <typename T>
std::vector<T>& setlength(std::vector<T>& v, int new_size) {
v.resize(new_size);
return v;
}
But now it's so simple you might want to eliminate the function entirely and just call resize to begin with.
I'm not entirely sure what you're trying to do with the triple pointers in your example, but it looks like you don't want to resize; you want to initialize to a certain size, which can be done with the vector constructor:
std::vector<float> v(100);
If you wanted to do it literally, you would do it like this:
template <typename T>
T* SetLength(T* arr, size_t len) {
return static_cast<T*>(realloc(arr, sizeof(T) * len));
}
Note that the array must have been allocated with malloc or calloc. Also note that realloc may move the data: if it cannot resize the block in place, it allocates memory of the appropriate size, copies the contents, and frees the old block. If there were any other pointers to the array being passed in, they may be invalid afterwards.
You're really better off using a more idiomatic C++ solution, like std::vector.
For a multidimensional array, probably the best option would be to use boost's multi_array library:
typedef boost::multi_array<float, 3> array_type;
array_type p(boost::extents[100][100][100]); // make an 100x100x100 array of floats
p[1][2][3] = 4.2;
This lets you completely abstract away the allocation and details of setting up the multidimensional array. Plus, because it uses linear storage, you get the efficiency benefits of linear storage with the ease of access of indirections.
Failing that, you have three other major options.
The most C++-y option without using external libraries would be to use a STL container:
std::vector<float **> p;
p.resize(100);
As with multi_array, p will then automatically be freed when it goes out of scope. You can get the vector bounds with p.size(). However the vector will only handle one dimension for you, so you'll end up doing nested vectors (ick!).
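If you do go the nested-vector route, a sketch of a 100x100x100 structure looks like this (C++11 syntax; storage is no longer one contiguous block, but all the memory management is done for you):
#include <vector>
std::vector<std::vector<std::vector<float>>> cube(
    100, std::vector<std::vector<float>>(
        100, std::vector<float>(100, 0.0f)));
// cube[1][2][3] = 4.2f;  cube.size(), cube[0].size(), cube[0][0].size() give the bounds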
You can also use new directly:
float ***p = new float**[100];
To deallocate:
delete [] p;
This has all the disadvantages of the std::vector approach, plus it won't free the memory for you, and you can't get the size back later.
The above three methods will all throw an exception of type std::bad_alloc if they fail to allocate enough memory.
Finally, for completeness, there's the C route, with calloc():
float ***p = (float ***)calloc(100, sizeof(*p));
To free:
free((void*)p);
This comes from C and is a bit uglier with all the casts. For C++ classes it will not call the constructors for you, either. Also, there's no checking that the sizeof in the argument is consistent with the cast.
If calloc() fails to allocate memory it will return NULL; you'll need to check for this and handle it.
To do this the C++ way:
1) As jalf stated, prefer std::vector if you can
2) Don't do void* p. Prefer instead to make your function a template of type T.
The new operator itself is essentially what you are asking for, with the exception that to appropriately allocate for double/triple pointers you must do something along the following lines:
float** data = new float*[size_of_dimension_1];
for ( size_t i=0 ; i<size_of_dimension_1 ; ++i )
data[i] = new float[size_of_dimension_2];
...
// to delete:
for ( size_t i=0 ; i<size_of_dimension_1 ; ++i )
delete [] data[i];
delete [] data;
Edit: I would suggest using one of the many C++ math/matrix libraries out there, for example uBlas.