Malloc on a struct containing a std::vector - c++

Here is the situation:
I use malloc to allocate memory for a struct.
The struct contains various items such as pointers, string variables and vectors.
The problem is that when malloc is used, no constructors are called. Using code similar to the following, I've run into situations where some members worked while others didn't.
Note: the following code doesn't compile. Its purpose is only to illustrate the situation.
struct MyStruct
{
    MyClass* mFirstClass;
    bool mBool;
    std::string mString;
    std::vector<MyClass> mVector;
};

int main()
{
    MyStruct* wMyStructure;
    wMyStructure = (MyStruct*) malloc(sizeof(MyStruct));

    MyClass wMyClassObject;
    wMyStructure->mFirstClass = new MyClass();
    wMyStructure->mFirstClass->func();
    wMyStructure->mBool = false;
    wMyStructure->mString = "aString";
    wMyStructure->mVector.push_back(wMyClassObject);
    return 0;
}
By using pointers instead of those members (std::string* mString), followed by a call to the object's constructor (mString = new std::string;), exceptions are not thrown.
However, I've experienced a situation where mString could be used without problems even though no constructor had been called, but when it came to the vector, the application exited on its own.
This left me with many questions:
When will an object throw an exception if no constructor was called?
In the situation I experienced, only the vector caused a problem. Could mString be left as it is, or should I call its constructor?
What would be the safest way, using malloc, to do the whole thing?

Using an object without constructing it is undefined behaviour. Anything may happen at any moment. If you do this, you must not rely on any part of your code running smoothly, because the language doesn't guarantee anything in this case.

Your code causes undefined behaviour, because your wMyStructure does not point to an object, so you may not use the accessor operator -> on it.
An object only commences its life after its constructor has completed. Since you don't call any constructor, you do not have an object.
(If your struct were a POD, i.e. just consisting of primitive types and PODs, then this would be OK, because PODs have trivial constructors, which do nothing.)
The concrete problem you're facing is that the string and vector members of your struct never got to call their constructors, so those members don't exist, and hence neither does the entire object.
If you want to decouple memory management from object construction, you can use placement syntax:
// get some memory
const std::size_t ARENA_SIZE = 100000;   // some sufficiently large size (HUGE_VAL is a floating-point macro and won't do here)
char arena[ARENA_SIZE];
void * morespace = malloc(ARENA_SIZE);
// construct some objects
MyClass * px = new (arena + 2000) MyClass; // default constructor
YourClass * py = new (static_cast<char*>(morespace) + 5000) YourClass(1, -.5, 'x'); // non-default constructor
(You have to destroy those objects manually, px->~MyClass(); etc., when you're done with them.)
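For the struct from the question, a minimal sketch of the placement-new approach might look like this (simplified to two members; the exact types are assumptions, not the asker's real layout):

#include <cstdlib>   // malloc, free
#include <new>       // placement new
#include <string>
#include <vector>

struct MyStruct
{
    std::string mString;
    std::vector<int> mVector;
};

int main()
{
    // allocate raw, uninitialized memory
    void* raw = std::malloc(sizeof(MyStruct));
    if (raw == NULL) return 1;

    // run the constructors in that memory
    MyStruct* p = new (raw) MyStruct;

    // the members are now real, constructed objects
    p->mString = "aString";
    p->mVector.push_back(42);

    // destroy manually, then release the raw memory
    p->~MyStruct();
    std::free(raw);
    return 0;
}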

It is undefined behaviour to use a non-initialized object. Exceptions may be thrown at any time, or not at all.

1) When will an object throw an exception if no constructor was called?
If you don't call the constructor, there is no object. You have just allocated some space.
2) In the situation I experienced, only the vector caused a problem. Could mString be left as it is, or should I call its constructor?
This is all undefined behaviour; just about anything could happen. There are no rules.
3) What would be the safest way, using malloc, to do the whole thing?
The safest way is not to use malloc at all, but to allocate with new, which will call the constructors. It is as simple as this:
MyStruct* wMyStructure = new MyStruct;
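Spelled out for the struct in the question (MyClass here is a hypothetical stand-in, since its definition isn't shown):

#include <string>
#include <vector>

struct MyClass { void func() {} };   // stand-in for the question's MyClass

struct MyStruct
{
    MyClass* mFirstClass;
    bool mBool;
    std::string mString;
    std::vector<MyClass> mVector;
};

int main()
{
    MyStruct* wMyStructure = new MyStruct;   // all members are properly constructed
    wMyStructure->mFirstClass = new MyClass();
    wMyStructure->mFirstClass->func();
    wMyStructure->mBool = false;
    wMyStructure->mString = "aString";
    wMyStructure->mVector.push_back(MyClass());

    delete wMyStructure->mFirstClass;        // clean up what was allocated with new
    delete wMyStructure;                     // runs the members' destructors
    return 0;
}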

None of the other answers appear to explain what the compiler is doing. I'll try to explain.
When you call malloc, the program reserves some memory space for the struct. That space is filled with garbage (i.e. arbitrary values in place of the struct fields).
Now consider this code:
// (tested on g++ 5.1.0 on linux)
#include <iostream>
#include <stdlib.h>

struct S {
    int a;
};

int main() {
    S* s = (S*)malloc(sizeof(S));
    s->a = 10;
    *((int*)s) = 20;
    std::cout << s->a << std::endl; // 20
}
So when accessing a member of a struct you are actually accessing a memory location, and for a plain int member like this there is no unexpected behaviour when writing to it.
But in C++ you can overload operators. Now imagine what would happen if the overloaded assignment operator needs the class to be initialized, like in the code below:
// (tested on g++ 5.1.0 on linux)
#include <iostream>
#include <stdlib.h>

class C {
public:
    int answer;
    int n;
    C(int n) { this->n = n; this->answer = 42; }
    C& operator=(const C& rhs) {
        if (answer != 42) throw "ERROR";
        this->n = rhs.n;
        return *this;
    }
};

struct S {
    int a;
    C c;
};

int main() {
    S* s = (S*)malloc(sizeof(S));
    C c(10);
    C c2(20);
    c = c2;                           // OK
    std::cout << c.n << std::endl;    // 20
    s->c = c;                         // Not OK: answer is garbage, so this throws "ERROR"
    std::cout << s->c.n << std::endl; // never reached if the exception is thrown
}
When s->c = c is executed, the assignment operator checks whether s->c.answer is 42; if it is not, it throws an error.
So you can only do what you did in your example if you know that the overloaded assignment operator of std::vector does not expect an already-initialized vector. I have never read the source code of that class, but I would bet it does.
So it's not advisable to do this, but it's not impossible to do safely if you really need to. You just need to be sure you know the behaviour of every assignment operator you are using.
In your example, if you really need a std::vector in the struct, you can use a pointer to a vector:
class MyClass { ... };

struct S {
    std::vector<MyClass>* vec;
};

int main() {
    S s;
    MyClass c;
    s.vec = new std::vector<MyClass>();
    s.vec->push_back(c);
    delete s.vec; // don't forget to free the vector yourself
}

Related

C++ struct with dynamically allocated char arrays

I'm trying to store structs in a vector. The struct needs to dynamically allocate memory for a char* of a given size.
But as soon as I add the struct to a vector, its destructor gets called, as if I lost the pointer to it.
I've made this little demo for the sake of example.
#include "stdafx.h"
#include <iostream>
#include <vector>
struct Classroom
{
char* chairs;
Classroom() {} // default constructor
Classroom(size_t size)
{
std::cout << "Creating " << size << " chairs in a classroom" << std::endl;
chairs = new char[size];
}
~Classroom()
{
std::cout << "Destroyng chairs in a classroom" << std::endl;
delete[] chairs;
}
};
std::vector<Classroom> m_classrooms;
int main()
{
m_classrooms.push_back(Classroom(29));
//m_classrooms.push_back(Classroom(30));
//m_classrooms.push_back(Classroom(30));
system("Pause");
return 0;
}
The output is
Creating 29 chairs in a classroom
Destroying chairs in a classroom
Press any key to continue . . .
Destroying chairs in a classroom
Yes, it seems the destructor gets called twice: once upon adding to the vector, and a second time when the program finishes executing.
The exact same thing happens when I try to use a class instead of a struct.
Can someone explain why this happens and what are the possible ways to accomplish my task correctly?
The Classroom class cannot be used in a std::vector<Classroom> safely because it has incorrect copy semantics. A std::vector will make copies of your object, and if the copy semantics have bugs, then you will see all of those bugs manifest themselves when you start using the class in containers such as vector.
For your class to have correct copy semantics, it needs to be able to construct, assign, and destruct copies of itself without error (those errors being things like memory leaks, double deletion calls on the same pointer, etc.)
The other thing missing from your code is that the size argument needs to be known within the class. Right now, all you've posted is an allocation of memory, but there is nothing that saves the size. Without knowing how many characters were allocated, proper implementation of the user-defined copy constructor and assignment operator won't be possible, unless that char * is a null-terminated string.
Having said that, there are multiple ways to fix your class. The easiest way is to simply use types that have correct copy semantics built into them, instead of handling raw dynamically allocated memory yourself. Those classes include std::vector<char> and std::string. Not only do they clean up after themselves, these classes know their own size without having to carry a size member variable.
struct Classroom
{
    std::vector<char> chairs;

    Classroom() {} // default constructor

    Classroom(size_t size) : chairs(size)
    {
        std::cout << "Creating " << size << " chairs in a classroom" << std::endl;
    }
};
The above class will work without any further adjustments to it, since std::vector<char> has correct copy semantics already. Note that there is no longer a need for the destructor, since std::vector knows how to destroy itself.
If for some reason you had to use raw dynamically allocated memory, then your class has to implement a user-defined copy constructor, assignment operator, and destructor.
#include <algorithm>

struct Classroom
{
    size_t m_size;
    char* chairs;

    // Note we initialize all the members here. This was a bug in your original code
    Classroom() : m_size(0), chairs(nullptr)
    {}

    Classroom(size_t size) : m_size(size), chairs(new char[size])
    {}

    Classroom(const Classroom& cRoom) : m_size(cRoom.m_size),
                                        chairs(new char[cRoom.m_size])
    {
        std::copy(cRoom.chairs, cRoom.chairs + cRoom.m_size, chairs);
    }

    Classroom& operator=(const Classroom& cRoom)
    {
        if ( this != &cRoom )
        {
            Classroom temp(cRoom);
            std::swap(temp.m_size, m_size);
            std::swap(temp.chairs, chairs);
        }
        return *this;
    }

    ~Classroom() { delete [] chairs; }
};
Note the usage of the member-initialization list when initializing the members of the class. Also note the usage of the copy / swap idiom when implementing the assignment operator.
The other issue that was corrected is that your default constructor was not initializing all of the members. Thus in your original class a simple one line program such as:
int main()
{
    Classroom cr;
}
would have caused issues, since in the destructor, you would have deleted an uninitialized chairs pointer.
After this, a std::vector<Classroom> should now be able to be safely used.
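For illustration, a small usage sketch (assuming the rule-of-three Classroom above is in scope); every copy now manages its own buffer:

#include <vector>

int main()
{
    std::vector<Classroom> rooms;
    rooms.push_back(Classroom(29));   // the temporary is copied, then destroyed safely
    rooms.push_back(Classroom(30));   // a reallocation copies the existing element correctly

    Classroom a(10);
    Classroom b = a;                  // copy constructor: independent buffer
    b = rooms[0];                     // assignment operator via copy-and-swap
    return 0;
}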
@LPVOID
Using emplace_back(..) to create the object in place can help you avoid the double free or corruption error you are facing here.
m_classrooms.emplace_back(29);
However, it is a better practice to always follow the rule of 3/5/0 to not end up with a dangling pointer.
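For reference, a minimal sketch of what a rule-of-five version of the class might look like (assuming C++11; the member names follow the answer above):

#include <algorithm>   // std::copy
#include <cstddef>     // size_t
#include <utility>     // std::swap

struct Classroom
{
    size_t m_size;
    char*  chairs;

    Classroom() : m_size(0), chairs(nullptr) {}
    explicit Classroom(size_t size) : m_size(size), chairs(new char[size]) {}

    Classroom(const Classroom& other)              // copy constructor: deep copy
        : m_size(other.m_size), chairs(new char[other.m_size])
    {
        std::copy(other.chairs, other.chairs + other.m_size, chairs);
    }

    Classroom(Classroom&& other) noexcept          // move constructor: steal the buffer
        : m_size(other.m_size), chairs(other.chairs)
    {
        other.m_size = 0;
        other.chairs = nullptr;                    // leave the source destructible
    }

    Classroom& operator=(Classroom other) noexcept // unified copy/move assignment via swap
    {
        std::swap(m_size, other.m_size);
        std::swap(chairs, other.chairs);
        return *this;
    }

    ~Classroom() { delete[] chairs; }
};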

How to avoid deletion of object given as a parameter in C++

I am new to C++ and I do not know how to solve the following problem.
The class Foo has a constructor which creates an array of doubles of a given size. The destructor deletes this array. The print method prints the array.
#include <iostream>

class Foo {
private:
    int size;
    double* d;
public:
    Foo(int size);
    ~Foo();
    void print();
};

Foo::Foo(int size)
{
    this->size = size;
    d = new double[size];
    for (int i = 0; i < size; i++)
    {
        d[i] = size * i;
    }
}

Foo::~Foo()
{
    delete[] d;
}

void Foo::print()
{
    for (int i = 0; i < size; i++)
    {
        std::cout << d[i] << " ";
    }
    std::cout << std::endl;
}
Now I have a function func(Foo f) which does nothing.
void func(Foo f){}
int main()
{
    Foo f(3);
    f.print();
    func(f);
    Foo g(5);
    f.print();
    return 0;
}
Executing this code gives the following output:
0 3 6
0 5 10
Although I am printing f both times, somehow the values inside the array have changed.
I guess that the destructor of Foo is called on parameter Foo f after the execution of func(Foo f) and this frees the allocated memory for d, which is reallocated for Foo g(5). But how can I avoid this without using vectors or smart pointers?
The problem is with the design of the class. The default copy constructor will create a new instance of Foo when it is passed by value into the free-standing function named func.
When that copy (the parameter, also named f) exits scope at the end of func, the code invokes the user-provided destructor, which deletes the array of doubles. This opens the code to the unfortunate situation of deleting the same array twice when the original instance of Foo named f exits scope at the end of the program.
When run on my machine, the code does not produce the same output. Instead I see two output lines of 0 3 6 followed by a fault indicating the double free operation.
The solution is to avoid the copy by passing by reference (or by const reference): void func(Foo const &f) { } or to supply a valid copy constructor that makes a deep copy of the underlying array. Passing by reference is just a bandaid that avoids exercising the bug.
Using std::vector<double> fixes the problem because the default copy constructor will perform a deep copy and avoid double deallocation. This is absolutely the best approach in this small example, but it avoids having to understand the root of the problem. Most C++ developers will learn these techniques then promptly do what they can to avoid having to write code that manually allocates and deallocates memory.
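A sketch of what the question's Foo could look like with a std::vector<double> member (assuming C++11 for the range-based loop); passing it by value then becomes harmless without any user-defined special member functions:

#include <iostream>
#include <vector>

class Foo {
    std::vector<double> d;
public:
    explicit Foo(int size) : d(size)
    {
        for (int i = 0; i < size; i++)
            d[i] = size * i;
    }
    void print() const
    {
        for (double x : d)
            std::cout << x << " ";
        std::cout << std::endl;
    }
};

void func(Foo f) {}   // the copy is now deep, so destroying it cannot affect the original

int main()
{
    Foo f(3);
    f.print();        // 0 3 6
    func(f);
    Foo g(5);
    f.print();        // still 0 3 6
    return 0;
}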
You should probably pass the object as a reference func(Foo& f) or - if you do not want to modify it at all - as a constant reference func(const Foo& f). This will not create or delete any objects during the function call.
Aside from that, as others have already mentioned, your class should better implement the Rule of Three.
When you pass a value to a function, it is supposed to be copied. The destructor is run on the copy and should have no effect on the original object. Foo fails to implement a copy constructor, so the compiler provides the default one, which simply performs a member-wise copy of the struct. As a result, the "copy" of Foo inside func contains the same pointer as the original, and its destructor frees the data pointed to by both.
In order to be usable by idiomatic C++ code, Foo must implement at least a copy constructor and an assignment operator in addition to the destructor. The rule that these three come together is sometimes referred to as "the rule of three", and is mentioned in other answers.
Here is an (untested) example of what the constructors could look like:
// std::copy requires <algorithm>
Foo::Foo(const Foo& other) {
    // copy constructor: construct Foo given another Foo
    size = other.size;
    d = new double[size];
    std::copy(other.d, other.d + size, d);
}

Foo& Foo::operator=(const Foo& other) {
    // assignment: reinitialize Foo with another Foo
    if (this != &other) {
        delete[] d;
        size = other.size;
        d = new double[size];
        std::copy(other.d, other.d + size, d);
    }
    return *this;
}
Additionally, you can also modify functions like func to accept a reference to Foo or a constant reference to Foo to avoid unnecessary copying. Doing this alone would also fix the immediate problem you are having, but it would not help other issues, so you should definitely implement a proper copy constructor before doing anything else.
It's a good idea to get a good book on C++ where the rule of three and other C++ pitfalls are explained. Also, look into using STL containers such as std::vector as members. Since they implement the rule of three themselves, your class wouldn't need to.
One problem is that calling func creates a shallow, member-wise copy. When that copy goes out of scope the destructor is called, which deletes your array.
You should change void func(Foo f){} to void func(Foo& f){}.
But better yet, you can add a proper copy constructor, or declare the copy operations private (C++03) or deleted (C++11) to stop them being called unexpectedly.
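A sketch of that second option, disabling copying so an accidental pass-by-value fails at compile time instead of double-deleting (C++11 = delete shown, with the C++03 private-declaration equivalent in comments; the class name is illustrative):

class NonCopyable {
    double* d;
public:
    NonCopyable() : d(new double[3]) {}
    ~NonCopyable() { delete[] d; }

    // C++11: any attempt to copy now fails to compile
    NonCopyable(const NonCopyable&) = delete;
    NonCopyable& operator=(const NonCopyable&) = delete;

    // C++03 equivalent: declare them private and never define them
    // private:
    //     NonCopyable(const NonCopyable&);
    //     NonCopyable& operator=(const NonCopyable&);
};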

Destructors messing up in Dev-C++ and Code::Blocks

#include <iostream>
using namespace std;

class Exem {
    int *a;
public:
    Exem() { a = new int; *a = 0; };
    Exem (int x) { a = new int; *a = x; };
    ~Exem () { delete a; };
    int f (void);
    Exem operator+ (Exem);
};

int Exem::f (void) {
    return *a * 2;
}

Exem Exem::operator+ (Exem nimda) {
    Exem aux;
    *aux.a = *a + *nimda.a;
    return aux;
}

int main() {
    Exem adabo(1);
    Exem inakos(2);
    adabo = adabo + inakos;
    cout << adabo.f();
    cin.get();
}
This is my code, an example class made to showcase the problem. The output of main() would, in theory, be '6', but all that actually shows up are nonsensical numbers.
This apparently has to do with the class' destructor, which, from what I understood, gets called too early at the end of the operator+ function: aux is lost before its value is actually passed back. I came to this conclusion because ~Exem(), when commented out, allows the program to execute as expected.
I'm guessing this has to do with these two compilers, because when I tried to compile the exact same code in Embarcadero RAD Studio it would work.
You need to explicitly define a copy constructor and assignment operator for Exem as you have a dynamically allocated member variable.
If a copy constructor and assignment operator are not explicitly defined for a class, the compiler generates default versions of these, which are not suitable for a class that has dynamically allocated members. The reason the default generated versions are unsuitable is that they perform a shallow copy of the members. In the case of Exem, when an instance of it is copied, more than one instance of Exem ends up pointing to the same dynamically allocated int member named a. When one of the instances is destroyed, that a is deleted, leaving the other instance with a dangling pointer and undefined behaviour.
See the rule of three.
A simple fix for Exem would be to change a from an int* to an int. The default copy constructor, assignment operator and destructor would then be correct.
Note that Exem::operator+() should take a const Exem& parameter as it does not change its argument.
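For reference, a minimal sketch of a rule-of-three Exem, with operator+ taking a const reference as suggested (one possible way to do it, not the only one):

class Exem {
    int *a;
public:
    Exem() : a(new int(0)) {}
    Exem(int x) : a(new int(x)) {}

    Exem(const Exem& other) : a(new int(*other.a)) {}   // deep copy
    Exem& operator=(const Exem& other)
    {
        if (this != &other)
            *a = *other.a;                              // reuse our own allocation
        return *this;
    }
    ~Exem() { delete a; }

    int f() const { return *a * 2; }

    Exem operator+(const Exem& rhs) const               // does not modify its argument
    {
        return Exem(*a + *rhs.a);
    }
};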

Visual Studio 2010 C++ runtime error

I came across strange behavior in Visual Studio 2010 C++ compiler.
The following code compiles, but at runtime it throws "Debug assertion failed" with the message:
"_BLOCK_TYPE_IS_VALID(pHead->nBlockUse)"
It compiles and runs smoothly under GCC. Is it my fault?
#include <iostream>
#include <vector>
using namespace std;

typedef unsigned int uint;

class Foo {
    vector<int*> coll;
public:
    void add(int* item) {
        coll.push_back(item);
    }
    ~Foo() {
        for (uint i = 0; i < coll.size(); ++i) {
            delete coll[i];
            coll[i] = NULL;
        }
    }
};

int main()
{
    Foo foo;
    foo.add(new int(4));
    Foo bar = foo;
    return 0;
}
You didn't implement a copy constructor and copy assignment operator (see rule of three). This results in a shallow copy of the pointers in your vector, causing a double delete and the assertion. EDIT: Double delete is undefined behavior so both VS and gcc are correct here, they're allowed to do whatever they want.
Typically when you implement a destructor with non-trivial behavior you'll also need to write or disable copy construction and copy assignment.
However in your case do you really need to store the items by pointer? If not, just store them by value and that would fix the problem. Otherwise if you do need pointers use shared_ptr (from your compiler or boost) instead of raw pointers to save you from needing to write your own destructor/copy methods.
EDIT: A further note about your interface: Interfaces like this that transfer ownership of passed in pointers can cause confusion by people using your class. If someone passed in the address of an int not allocated on the heap then your destructor will still fail. Better is to either accept by value if possible, or clone the passed in item making your own call to new in the add function.
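A sketch of the shared_ptr variant mentioned above (std::shared_ptr assuming C++11; on 2010-era compilers it would be std::tr1::shared_ptr or boost::shared_ptr). No user-defined destructor or copy operations are needed:

#include <memory>
#include <vector>

class Foo {
    std::vector<std::shared_ptr<int> > coll;
public:
    void add(const std::shared_ptr<int>& item) {
        coll.push_back(item);
    }
    // no destructor needed: each int is freed when its last owner goes away
};

int main()
{
    Foo foo;
    foo.add(std::shared_ptr<int>(new int(4)));
    Foo bar = foo;   // copying Foo just bumps reference counts, no double delete
    return 0;
}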
You're deleting item twice, because the line
Foo bar = foo;
invokes the default copy constructor, which duplicates the item pointer rather than allocating and copying the data.
The problem is that both bar's and foo's vectors hold the same element. When foo goes out of scope, its destructor is called, which deallocates the pointer and leaves bar's vector element dangling. bar's destructor then tries to deallocate that dangling element, and that is what causes the runtime error. You should write a copy constructor.
Foo bar = foo; // Invokes default copy constructor.
Edit 1: Look at this thread to know about Rule of three
The simpler solution here is not to use int* in the first place.
#include <iostream>
#include <vector>
using namespace std;

typedef unsigned int uint;

class Foo {
    vector<int> coll; // remove *
public:
    void add(int item) { // remove *
        coll.push_back(item);
    }
    // remove ~Foo
};

int main()
{
    Foo foo;
    foo.add(4); // remove `new` call
    Foo bar = foo;
    return 0;
}
In general, try to avoid new.
If you cannot, use a smart manager (like std::unique_ptr) to handle the memory clean-up for you.
In any case, if you're calling delete manually, you're doing it wrong. Note: not calling delete and letting the memory leak is wrong too.
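As a sketch of that advice applied to the original class (assuming C++11), std::unique_ptr expresses the ownership and removes the need for a hand-written destructor; it also makes the class non-copyable by default, so the Foo bar = foo; bug cannot compile silently:

#include <memory>
#include <utility>
#include <vector>

class Foo {
    std::vector<std::unique_ptr<int> > coll;
public:
    void add(std::unique_ptr<int> item) {
        coll.push_back(std::move(item));
    }
    // no destructor: each unique_ptr deletes its int automatically
};

int main()
{
    Foo foo;
    foo.add(std::unique_ptr<int>(new int(4)));
    Foo bar = std::move(foo);   // ownership is transferred instead of copied
    return 0;
}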

Accessing an object in operator new

My professor in C++ has shown us this as an example of overloading operator new (which I believe is wrong):
class test {
    // code
    int *a;
    int n;
public:
    void* operator new(size_t);
};

void* test::operator new(size_t size) {
    test *p;
    p = (test*)malloc(size);
    cout << "Input the size of array = ?";
    cin >> p->n;
    p->a = new int[p->n];
    return p;
}
Is this right?
It's definitely "not right", in the sense that it's giving me the creeps.
Since test has no user-declared constructors, I think it could work provided that the instance of test isn't value-initialized (which would clear the pointer). And provided that you write the corresponding operator delete.
It's clearly a silly example, though - user interaction inside an overloaded operator new? And what if an instance of test is created on the stack? Or copied? Or created with test *tp = new test(); in C++03? Or placement new? Hardly user-friendly.
It's constructors which must be used to establish class invariants (such as "I have an array to use"), because that's the only way to cover all those cases. So allocating an array like that is the kind of thing that should be done in a constructor, not in operator new. Or better yet, use a vector instead.
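As a sketch of that advice (names kept close to the professor's example; the interactive prompt moves to the caller, where it belongs):

#include <iostream>
#include <vector>

class test {
    std::vector<int> a;   // owns its storage; no custom operator new needed
public:
    explicit test(int n) : a(n) {}   // the constructor establishes the invariant
    std::vector<int>::size_type size() const { return a.size(); }
};

int main()
{
    int n;
    std::cout << "Input the size of array = ?";
    std::cin >> n;

    test t(n);            // "I have an array to use" is guaranteed from here on
    std::cout << t.size() << std::endl;
    return 0;
}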
As far as the standard is concerned - I think that since the class is non-POD the implementation is allowed to scribble all over the data in between calling operator new and returning it to the user, so this is not guaranteed to work even when used carefully. I'm not entirely sure, though. Conceivably your professor has run it (perhaps many years ago when he first wrote the course), and if so it worked on his machine. There's no obvious reason why an implementation would want to do anything to the memory in the specific case of this class.
I believe that is "wrong" because he access the object before the constructor.
I think you're correct on this point too - casting the pointer returned from malloc to test* and accessing members is UB, since the class test is non-POD (because it has private non-static data members) and the memory does not contain a constructed instance of the class. Again, though, there's no reason I can immediately think of why an implementation would want to do anything that stops it working, so I'm not surprised if in practice it stores the intended value in the intended location on my machine.
Did some Standard checking. Since test has private non-static members, it is not POD. So new test default-initializes the object, and new test() value-initializes it. As others have pointed out, value-initialization sets members to zero, which could come as a surprise here.
Default-initialization uses the implicitly defined default constructor, which omits initializers for members a and n.
12.6.2p4: After the call to a constructor for class X has completed, if a member of X is neither specified in the constructor's mem-initializers, nor default-initialized, nor value-initialized, nor given a value during execution of the body of the constructor, the member has indeterminate value.
Not "the value its memory had before the constructor, which is usually indeterminate." The Standard directly says the members have indeterminate value if the constructor doesn't do anything about them.
So given test* p = new test;, p->a and p->n have indeterminate value and any rvalue use of them results in Undefined Behavior.
The creation/destruction of objects in C++ is divided into two tasks: memory allocation/deallocation and object initialization/deinitialization. Memory allocation/deallocation is done very differently depending on an object's storage class (automatic, static, dynamic), object initialization/deinitialization is done using the object's type's constructor/destructor.
You can customize object initialization/deinitialization by providing your own constructors/destructor. You can customize the allocation of dynamically allocated objects by overloading operator new and operator delete for this type. You can provide different versions of these operators for single objects and arrays (plus any number of additional overloads).
When you want to fine-tune the construction/destruction of objects of a specific type, you first need to decide whether you want to fiddle with allocation/deallocation (of dynamically allocated objects) or with initialization/deinitialization. Your code mixes the two, violating one of C++'s most fundamental design principles, all established practice, every known C++ coding standard on this planet, and your fellow workers' assumptions.
Your professor is completely misunderstanding the purpose of operator new whose only task is to allocate as much memory as was asked and to return a void* to it.
After that the constructor is called to initialize the object at that memory location. This is not up to the programmer to avoid.
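For contrast, a sketch of what an overloaded operator new/delete pair normally looks like: it only allocates and frees raw memory, and all initialization stays in the constructor:

#include <cstdlib>   // malloc, free
#include <new>       // std::bad_alloc

class test {
    int n;
public:
    test() : n(0) {}                     // initialization belongs here, not in operator new

    void* operator new(std::size_t size)
    {
        if (void* p = std::malloc(size))
            return p;
        throw std::bad_alloc();          // the throwing form of new must not return null
    }
    void operator delete(void* p)
    {
        std::free(p);                    // must match the allocation above
    }
};

int main()
{
    test* t = new test;   // our operator new allocates, then the constructor runs
    delete t;             // the destructor runs, then our operator delete frees
    return 0;
}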
As the class doesn't have a user-defined constructor, the fields are supposed to be uninitialized, and in such a case the compiler probably has the freedom to initialize them to some magic value in order to help find uses of uninitialized values (e.g. in debug builds). That would defeat the extra work done by the overloaded operator.
Another case where the extra work will be wasted is when using value-initialization: new test();
This is very bad code because it takes initialization code that should be part of a constructor and puts it in operator new which should only allocate new memory.
The expression new test may leak memory (that allocated by p->a = new int[p->n];) and the expression new test() definitely will leak memory. There is nothing in the standard that prevents the implementation zeroing, or setting to an alternate value, the memory returned by a custom operator new before that memory is initialized with an object even if the subsequent initialization wouldn't ordinarily touch the memory again. If the test object is value-initialized the leak is guaranteed.
There is also no easy way to correctly deallocate a test allocated with new test. There is no matching operator delete, so the expression delete t; will do the wrong thing: it causes the global operator delete to be called on memory allocated with malloc.
This does not work.
Your professor's code will fail to initialize correctly in 3/4 of the cases below.
It does not initialize objects correctly (the overloaded new only affects dynamically allocated objects).
The default constructor generated for test has two modes:
Zero initialization (used for new test(): after the allocation, the POD members are set to zero)
Default initialization (used for new test: the POD members are left uninitialized)
Running Code (comments added by hand)
$ ./a.exe
Using Test::new
Using Test::new
A Count( 0) // zero initialized: pointer leaked.
A Pointer(0)
B Count( 10) // Works as expected because of default init.
B Pointer(0xd20388)
C Count( 1628884611) // Uninitialized as new not used.
C Pointer(0x611f0108)
D Count( 0) // Zero initialized because it is global (static storage duration)
D Pointer(0)
The Code
#include <new>
#include <iostream>
#include <stdlib.h>
class test
{
// code
int *a;
int n;
public:
void* operator new(size_t);
// Added dredded getter so we can print the values. (Quick Hack).
int* getA() const { return a;}
int getN() const { return n;}
};
void* test::operator new(size_t size)
{
std::cout << "Using Test::new\n";
test *p;
p=(test*)malloc(size);
p->n = 10; // Fixed size for simple test.
p->a = new int[p->n];
return p;
}
// Objects that have static storage duration are zero initialized.
// So here 'a' and 'n' will be set to 0
test d;
int main()
{
// Here a is zero initialized. Resulting in a and n being reset to 0
// Thus you have memory leaks as the reset happens after new has completed.
test* a = new test();
// Here b is default initialized.
// So the POD values are undefined (so the results are what you prof expects).
// But the standard does not gurantee this (though it will usually work because
// of the it should work as a side effect of the 'zero cost principle`)
test* b = new test;
// Here is a normal object.
// New is not called so its members are random.
test c;
// Print out values
std::cout << "A Count( " << a->getN() << ")\n";
std::cout << "A Pointer(" << a->getA() << ")\n";
std::cout << "B Count( " << b->getN() << ")\n";
std::cout << "B Pointer(" << b->getA() << ")\n";
std::cout << "C Count( " << c.getN() << ")\n";
std::cout << "C Pointer(" << c.getA() << ")\n";
std::cout << "D Count( " << d.getN() << ")\n";
std::cout << "D Pointer(" << d.getA() << ")\n";
}
A valid example of what the professor failed to do:
class test
{
    // code
    int n;
    int a[1]; // Notice the size-one array used like a C flexible array member.
              // The new will allocate enough memory for n elements.
public:
    void* operator new(size_t);

    // Added dreaded getters so we can print the values. (Quick hack.)
    const int* getA() const { return a; }
    int getN() const { return n; }
};

void* test::operator new(size_t size)
{
    std::cout << "Using Test::new\n";

    int tmp;
    std::cout << "How big?\n";
    std::cin >> tmp;

    // This is a half-arsed trick from the C days.
    // It should probably still work.
    // Note: this may be what the professor should have written (if he were using C).
    // It is totally horrible and should not be used.
    // std::vector is a much, much better solution.
    // If anybody tries to convince you that an array is faster than a vector,
    // then please read the linked question below, where that myth is nailed into
    // its oversized coffin.
    test *p = (test*)malloc(size + sizeof(int) * tmp);
    p->n = tmp;
    // p->a = You can now overflow a up to n places.
    return p;
}
Is std::vector so much slower than plain arrays?
As shown, this is wrong, and you can also see how easy it is to get wrong.
There usually isn't any reason to do this unless you are trying to manage your own memory allocation, and in a C++ environment you would be better off learning the STL and writing custom allocators.