I want to have a pointer to a randomly generated data with a certain size.
I don't know what happens when I do char* data = new char[fileSize] or char* data = (char*)malloc(fileSize). I don't know whether it initializes the allocated memory randomly, or whether all the bytes have the same value (or are zeroed).
Does new or malloc do the job or do I need something else?
Your calls to new and malloc do not initialize the data they return. You'll get whatever values happen to be at the memory address allocated to you.
You're better off using std::vector:
std::vector<char> data(fileSize);
It will initialize the data to '\0' for you, and will take care of freeing up the underlying buffer memory for you when the vector goes out of scope.
Both only allocate memory. For the C approach you can either use memset afterwards or allocate with calloc (which zeroes the memory for you). If you allocate with new, you have to zero it yourself.
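To make that concrete, here's a minimal sketch of the three options, reusing the fileSize name from the question (error checking omitted):
#include <algorithm>  // std::fill
#include <cstdlib>    // std::calloc, std::malloc, std::free
#include <cstring>    // std::memset

int main()
{
    const std::size_t fileSize = 1024;

    // C style: calloc zero-fills the allocation for you
    char* a = static_cast<char*>(std::calloc(fileSize, 1));

    // C style: malloc gives uninitialized bytes, so zero them with memset
    char* b = static_cast<char*>(std::malloc(fileSize));
    std::memset(b, 0, fileSize);

    // C++ style: new[] also gives uninitialized bytes, so zero them yourself
    char* c = new char[fileSize];
    std::fill(c, c + fileSize, '\0');

    delete[] c;
    std::free(b);
    std::free(a);
}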
Both allocate memory, but when you use malloc to allocate memory for a class object in C++ it does not call the class's constructor, whereas the new operator does. That is one of the major differences between malloc and new.
Per the language specification: no, new and malloc do not initialize data of primitive types. The compiler/runtime is free to give you memory it hasn't done anything with.
However, in practice, the memory you allocate is quite probably zeroed out by the kernel. Modern operating systems do this as a security measure: not zeroing out memory that was once owned by another process could leak that other process's data into your memory space. What if that other process had stored a password? Note this is not always true and shouldn't be relied on. Always initialize your memory.
If you actually want random data, you need to generate it yourself. What random source you pull from depends on your needs. If you need some "random" data for testing, or for, say, generating monsters in a game, use one of the pseudo-random generators available in the standard library. If you need cryptographically secure numbers, it's more involved to be sure you get it right.
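For instance, a minimal sketch of filling a buffer with pseudo-random bytes using <random> (the fileSize name is borrowed from the question; this is not suitable for cryptographic use):
#include <cstddef>
#include <random>
#include <vector>

std::vector<char> makeRandomData(std::size_t fileSize)
{
    std::vector<char> data(fileSize);

    // Seed a pseudo-random engine; fine for test data, not for cryptography
    std::mt19937 engine(std::random_device{}());
    std::uniform_int_distribution<int> byteValue(0, 255);

    for (char& c : data)
        c = static_cast<char>(byteValue(engine));

    return data;
}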
Related
I'm writing a C++14 JSON library as an exercise and to use it in my personal projects.
By using callgrind I've discovered that the current bottleneck during a continuous value creation from string stress test is an std::string dynamic memory allocation. Precisely, the bottleneck is the call to malloc(...) made from std::string::reserve.
I've read that many existing JSON libraries such as rapidjson use custom allocators to avoid malloc(...) calls during string memory allocations.
I tried to analyze rapidjson's source code but the large amount of additional code and comments, plus the fact that I'm not really sure what I'm looking for, didn't help me much.
How do custom allocators help in this situation?
Is a memory buffer preallocated somewhere (where? statically?) and std::strings take available memory from it?
Are strings using custom allocators "compatible" with normal strings?
They have different types. Do they have to be "converted"? (And does that result in a performance hit?)
Code notes:
Str is an alias for std::string.
By default, std::string allocates memory as needed from the same heap as anything that you allocate with malloc or new. To get a performance gain from providing your own custom allocator, you will need to be managing your own "chunk" of memory in such a way that your allocator can deal out the amounts of memory that your strings ask for faster than malloc does. Your memory manager will make relatively few calls to malloc, (or new, depending on your approach) under the hood, requesting "large" amounts of memory at once, then deal out sections of this (these) memory block(s) through the custom allocator. To actually achieve better performance than malloc, your memory manager will usually have to be tuned based on known allocation patterns of your use cases.
This kind of thing often comes down to the age-old trade-off of memory use versus execution speed. For example: if you have a known upper bound on your string sizes in practice, you can pull tricks with over-allocating to always accommodate the largest case. While this is wasteful of your memory resources, it can alleviate the performance overhead that more generalized allocation runs into with memory fragmentation, as well as making any calls to realloc essentially constant time for your purposes.
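To make the "manage your own chunk of memory" idea concrete, here is a minimal, non-thread-safe sketch of a bump ("arena") allocator that meets the C++11 minimal allocator requirements, so a std::basic_string can draw its storage from one pre-allocated block. The Arena and ArenaAllocator names are made up for illustration; alignment handling is omitted and deallocation is deliberately a no-op:
#include <cstddef>
#include <new>
#include <string>

// One big block obtained up-front; allocation just bumps an offset.
class Arena {
public:
    explicit Arena(std::size_t size) : buf_(new char[size]), size_(size), used_(0) {}
    ~Arena() { delete[] buf_; }

    void* allocate(std::size_t n)
    {
        if (used_ + n > size_) throw std::bad_alloc();
        void* p = buf_ + used_;
        used_ += n;
        return p;
    }

private:
    char*       buf_;
    std::size_t size_;
    std::size_t used_;
};

// Minimal allocator adapter; deallocate is a no-op because the arena
// releases everything at once when it is destroyed.
template <typename T>
struct ArenaAllocator {
    using value_type = T;

    explicit ArenaAllocator(Arena& a) : arena(&a) {}
    template <typename U>
    ArenaAllocator(const ArenaAllocator<U>& other) : arena(other.arena) {}

    T*   allocate(std::size_t n)     { return static_cast<T*>(arena->allocate(n * sizeof(T))); }
    void deallocate(T*, std::size_t) {}

    Arena* arena;
};

template <typename T, typename U>
bool operator==(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) { return a.arena == b.arena; }
template <typename T, typename U>
bool operator!=(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) { return !(a == b); }

using ArenaString = std::basic_string<char, std::char_traits<char>, ArenaAllocator<char>>;

int main()
{
    Arena arena(1 << 20);                 // malloc/new is hit once, for the whole pool
    ArenaAllocator<char> alloc(arena);

    ArenaString s("parsed value", alloc); // the string's buffer comes out of the arena
    s += " grows without touching malloc";
}
Whether something like this actually beats the system allocator depends entirely on measuring it against your real parsing workload.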
@sehe is exactly right. There are many ways.
EDIT:
To finally address your second question, strings using different allocators can play nicely together, and usage should be transparent.
For example:
#include <iostream>
#include <string>

class myalloc : public std::allocator<char> {};
myalloc customAllocator;

int main(void)
{
    // Construct a string with the custom allocator instance
    std::string mystring(customAllocator);
    std::string regularString = "test string";

    // Assignment between the two "just works"
    mystring = regularString;
    std::cout << mystring;
    return 0;
}
This is a fairly silly example and, of course, uses the same workhorse code under the hood. However, it shows assignment between strings using allocator classes of "different types". Implementing a useful allocator that supplies the full interface required by the STL, without just disguising the default std::allocator, is not as trivial. This seems to be a decent write-up covering the concepts involved. The key to why this works, in the context of your question at least, is that using different allocators doesn't cause the strings to be of different type. Notice that the custom allocator is given as an argument to the constructor, not a template parameter. The STL still does fun things with templates (such as rebind and Traits) to homogenize allocator interfaces and tracking.
What often helps is the creation of a GlobalStringTable.
See if you can find portions of the old NiMain library from the now defunct NetImmerse software stack. It contains an example implementation.
Lifetime
What is important to note is that this string table needs to be accessible between different DLL spaces, and that it is not a static object. R. Martinho Fernandes already warned that the object needs to be created when the application or DLL thread is created / attached, and disposed when the thread is destroyed or the DLL is detached, and preferably before any string object is actually used. This sounds easier than it actually is.
Memory allocation
Once you have a single point of access that exports correctly, you can have it allocate a memory buffer up-front. If the memory is not enough, you have to resize it and move the existing strings over. Strings essentially become handles to regions of memory in this buffer.
Placement new
Something that often works well is the placement new operator, where you can specify exactly where in memory your new string object is to be constructed. However, instead of allocating, the operator can simply grab the memory location that is passed in as an argument, zero the memory at that location, and return it. You can also keep track of the allocation, the actual size of the string, etc. in the GlobalStringTable object.
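As a bare-bones illustration of placement new itself (the slot buffer here just stands in for space handed out by the GlobalStringTable, and Str is the alias from the question):
#include <new>      // placement new
#include <string>

using Str = std::string;

int main()
{
    // Pretend this storage was handed out by the GlobalStringTable.
    alignas(Str) char slot[sizeof(Str)];

    // Construct the string *at* that address; no allocation for the object itself
    // (its character buffer may still come from its own allocator).
    Str* s = new (slot) Str("hello");

    // ... use *s ...

    // Placement new has no matching delete: run the destructor by hand.
    s->~Str();
}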
SOA
Handling the actual memory scheduling is something that is up to you, but there are many possible ways to approach this. Often, the allocated space is partitioned in several regions so that you have several blocks per possible string size. A block for strings <= 4 bytes, one for <= 8 bytes, and so on. This is called a Small Object Allocator, and can be implemented for any type and buffer.
If you expect many string operations where small strings are incremented repeatedly, you may change your strategy and allocate larger buffers from the start, so that the number of memmove operations are reduced. Or you can opt for a different approach and use string streams for those.
String operations
It is not a bad idea to derive from std::basic_string, so that most of the operations still work while the internal storage actually lives in the GlobalStringTable, and you can keep using the same STL conventions. This way you also make sure that all allocations happen within a single DLL, so there can be no heap corruption from passing different kinds of strings between different libraries, since all the allocation operations are essentially in your DLL (and are rerouted to the GlobalStringTable object).
Custom allocators can help because most malloc()/new implementations are designed for maximum flexibility, thread safety, and bullet-proof behaviour. For instance, they must gracefully handle the case where one thread keeps allocating memory and sends the pointers to another thread that deallocates them. Things like these are difficult to handle in a performant way and drive up the cost of malloc() calls.
However, if you know that some things cannot happen in your application (like one thread deallocating stuff another thread allocated, etc.), you can optimize your allocator further than the standard implementation. This can yield significant results, especially when you don't need thread safety.
Also, the standard implementation is not necessarily well optimized: Implementing void* operator new(size_t size) and void operator delete(void* pointer) by simply calling through to malloc() and free() gives an average performance gain of 100 CPU cycles on my machine, which proves that the default implementation is suboptimal.
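For reference, the pass-through replacement described above looks roughly like this (a sketch; a real replacement would also provide the sized and nothrow variants):
#include <cstdlib>
#include <new>

// Global replacements that simply forward to the C heap.
void* operator new(std::size_t size)
{
    if (size == 0) size = 1;              // new must return a unique, non-null pointer
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* pointer) noexcept
{
    std::free(pointer);
}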
I think you'd be best served by reading up on EASTL.
It has a section on allocators, and you might find fixed_string useful.
The best way to avoid a memory allocation is don't do it!
BUT if I remember JSON correctly, all the readStr values get used either as keys or as identifiers, so you will have to allocate them eventually. std::string's move semantics should ensure that the allocated array is not copied around but reused until its final use. The default NRVO/RVO/move should reduce any copying of the data, if not of the string header itself.
Method 1:
Pass result as a ref from the caller, which has reserved SomeReasonableLargeValue chars, then clear it at the start of readStr. This is only usable if the caller actually can reuse the string.
Method 2:
Use the stack.
// Reserve memory for the string (BOTTLENECK)
if (end - idx < SomeReasonableValue) { // 32?
    // Feel free to use std::array if you want bounds checking,
    // but the preceding "if" should ensure it's not a problem.
    char result[SomeReasonableValue] = {0};
    int ridx = 0;
    for (; idx < end; ++idx) {
        // Not an escape sequence
        if (!isC('\\')) { result[ridx++] = getC(); continue; }
        // Escape sequence: skip '\'
        ++idx;
        // Convert escape sequence
        result[ridx++] = getEscapeSequence(getC());
    }
    // Skip closing '"'
    ++idx;
    result[ridx] = 0; // 0-terminated
    // Optional assert here to ensure nothing went wrong.
    return result; // the bottleneck might now move here as the data is copied into the receiving string
}
// Fallback code only if the string is long:
// your original code here.
Method 3:
If your string can by default allocate some size to fill its 32/64-byte boundary, you might want to take advantage of that: construct result like this instead, in case the constructor can optimize it.
Str result(end - idx, 0);
Method 4:
Most systems already have an optimized allocator that likes specific block sizes: 16, 32, 64, etc.
size_t siz = ((end - idx) & ~0xf) + 16; // round up if the allocator already works in 16-byte chunks
Str result(siz, 0);
Method 5:
Use either the allocator made by Google or the one from Facebook as a global new/delete replacement.
To understand how a custom allocator can help you, you need to understand what malloc and the heap do and why they are quite slow in comparison to the stack.
The Stack
The stack is a large block of memory allocated for your current scope. You can think of it as this
([] means a byte of memory)
[P][][][][][][][][][][][][][][][]
(P is a pointer that points to a specific byte of memory, in this case its pointing at the first byte)
So the stack is a block with only one pointer. When you allocate memory, all it does is perform pointer arithmetic on P, which takes constant time.
So declaring int i = 0; would mean this:
P + sizeof(int)
[i][i][i][i][P][][][][][][][][][][][]
(i in [] is a block of memory occupied by an integer)
This is blazing fast and as soon as you go out of scope, the entire chunk of memory is emptied simply by moving P back to the first position.
The Heap
The heap allocates memory from a pool of bytes reserved by the C++ runtime. When you call malloc, the heap finds a stretch of contiguous memory that fits your request, marks it as used so nothing else can use it, and returns it to you as a void*.
So a theoretical heap with little optimization, when you call new(sizeof(int)), would do this.
Heap chunk
At first : [][][][][][][][][][][][][][][][][][][][][][][][][]
Allocate 4 bytes (sizeof(int)):
A pointer goes through memory, finds a free stretch that is of the correct length, and returns a pointer to you.
After : [i][i][i][i][][][][][][][][][][][][][][][][][][][][][]
This is not an accurate representation of the heap, but from this you can already see numerous reasons for being slow relative to the stack.
The heap is required to keep track of all already allocated memory and their respective lengths. In our test case above, the heap was already empty and did not require much, but in worst case scenarios, the heap will be populated with multiple objects with gaps in between (heap fragmentation), and this will be much slower.
The heap is required to cycle through all the bytes to find one that fits your length.
The heap can suffer from fragmentation since it will never completely clean itself unless you specify it. So if you allocated an int, a char, and another int, your heap would look like this
[i][i][i][i][c][i2][i2][i2][i2]
(i stands for bytes occupied by an int and c stands for the byte occupied by a char.) When you de-allocate the char, it will look like this:
[i][i][i][i][empty][i2][i2][i2][i2]
So when you want to allocate another object into the heap,
[i][i][i][i][empty][i2][i2][i2][i2][i3][i3][i3][i3]
unless the new object is exactly the size of 1 char, it cannot reuse the gap left behind, so that byte is effectively wasted. In more complex programs with millions of allocations and deallocations, the fragmentation issue becomes severe: memory is wasted, allocations slow down, and they can eventually start to fail.
The heap also has to worry about things like thread safety (as another answer already mentioned).
Custom Heap/Allocator
So, a custom allocator usually needs to address these problems while providing the benefits of the heap, such as personalized memory management and object permanence.
These are usually accomplished with specialized allocators. If you know you don't need to worry about thread safety, or you know exactly how long your string will be, or you have a predictable usage pattern, you can make your allocator quite a lot faster than malloc and new.
For example, if your program requires a lot of allocations as fast as possible without lots of deallocations, you could implement a stack allocator, in which you allocate a huge chunk of memory with malloc at startup,
e.g.:
typedef char* buffer;

// Super simple example; error handling and alignment are omitted.
// (In a real code base this might derive from a common Allocator interface.)
struct StackAllocator {
    buffer stack;
    char*  pointer;

    StackAllocator(int expectedSize) { stack = new char[expectedSize]; pointer = stack; }
    ~StackAllocator()                { delete[] stack; }

    char* allocate(int size) { char* returnedPointer = pointer; pointer += size; return returnedPointer; }
    void  empty()            { pointer = stack; }
};
Get expected size, get a chunk of memory from the heap.
Assign a pointer to the beginning.
[P][][][][][][][][][] ..... [].
Then have one pointer that moves for each allocation. When you no longer need the memory, you simply move the pointer back to the beginning of your buffer. This gives you O(1) allocations and deallocations, as well as object permanence, at the cost of no flexible per-object deallocation and a large initial memory requirement.
For strings, you could try a chunk allocator. For every allocation, the allocator gives a set chunk of memory.
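A minimal sketch of such a chunk allocator, with freed chunks threaded onto a free list so they can be reused (the names are made up; growth and alignment are ignored, and chunkSize must be at least sizeof(void*)):
#include <cstddef>
#include <new>
#include <vector>

// Hands out fixed-size chunks from one big block; freed chunks are
// linked together and reused, so same-size allocations never fragment.
class ChunkAllocator {
public:
    ChunkAllocator(std::size_t chunkSize, std::size_t chunkCount)
        : chunkSize_(chunkSize), pool_(chunkSize * chunkCount), freeList_(nullptr)
    {
        for (std::size_t i = 0; i < chunkCount; ++i)   // thread every chunk onto the free list
            release(pool_.data() + i * chunkSize_);
    }

    void* acquire()
    {
        if (!freeList_) return nullptr;                // pool exhausted; a real allocator would grow
        Node* n = freeList_;
        freeList_ = n->next;
        return n;
    }

    void release(void* p)
    {
        freeList_ = new (p) Node{freeList_};           // reuse the chunk itself as a list node
    }

private:
    struct Node { Node* next; };

    std::size_t       chunkSize_;
    std::vector<char> pool_;
    Node*             freeList_;
};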
Compatibility
Compatibility with other strings is almost guaranteed. As long as you are allocating a contiguous chunk of memory and preventing anything else from using that block of memory, it will work.
I know that dynamic memory has advantages over setting up a fixed-size array and using a portion of it. But with dynamic memory you have to specify the amount of data you want to store in the array. When using strings you can type as many letters as you want (you can even use strings for numbers and then use a function to convert them). This makes me think that dynamic memory for character arrays is obsolete compared to strings.
So I want to know: what are the advantages and disadvantages of using strings? When is the space occupied by strings freed? Is the option to free your dynamically allocated memory with delete perhaps an advantage over strings? Please explain.
The short answer is "no, there are no drawbacks, only advantages" with std::string over character arrays.
Of course, strings do USE dynamic memory; the class just hides that fact behind the scenes so you don't have to worry about it.
In answer to your question "When is the space occupied by strings freed?", this post may be helpful. Basically, std::strings are freed once they go out of scope. Often the compiler can decide when to allocate and release the memory.
std::string usually contains an internal dynamically allocated buffer. When you assign data, or if you push back new data, and the current buffer size is not sufficient, a new buffer is allocated with an increased size and the old data is copied or moved to the new buffer. The old buffer is then deallocated.
The main buffer is deallocated when the string goes out of scope. If the string object is a local variable in a function (on the stack), it will deallocate at the end of the current code block. If it's a function parameter, when the function exits. If it's a class member, whenever the class is destroyed.
The advantage of strings is flexibility (increases in size automatically) and safety (harder to go over the bounds of an array). A fixed-size char array on the stack is faster as no dynamic allocation is required. But you should worry about that if you have a performance problem, and not before.
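As a tiny illustration of that trade-off (the sizes here are arbitrary):
#include <cstdio>
#include <string>

int main()
{
    // Fixed-size buffer on the stack: fast, but the size is a hard limit
    // and longer input is silently truncated here.
    char fixed[16];
    std::snprintf(fixed, sizeof(fixed), "%s", "text that is too long for the buffer");

    // std::string: grows as needed and cleans up after itself at end of scope.
    std::string s = "text that ";
    s += "can keep growing without any manual bookkeeping";
}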
Well, your question got me thinking, and then I understood that you are talking about syntax differences, because both ways dynamically allocate char arrays. The only difference is in the need:
if you need to create a string containing a sentence, then you can, and that's fine, without using malloc
if you want an array to "play" with, meaning changing or setting the cells according to some method, or changing its size, then initializing it with malloc would be the appropriate way
the only reason I see for statically allocating char a[17] (for example) is a single-purpose string, meaning only when you know the exact size you'll need and it won't change
And one important point that I found:
In dynamic memory allocation, if memory is continually allocated but the memory allocated for objects that are no longer in use is never released, it can lead to out-of-memory conditions or memory leaks, which is a big disadvantage.
This is part of an assignment however I just asking for clarification:
Load data from ATM.txt and store them in a dynamic array (ATM type,
not STL) when the program starts up.
How exactly do I do dynamic arrays without STL? I thought perhaps the assignment means using pointers, the "ATM Type" threw me off.
It's mentioned again:
file accounts.txt into a dynamic array (Account type, not STL)
--not part of Assignment
I've never understood the use of memory-unsafe operations, e.g. pulling the number of items in a file from the first line:
eg.
5
abc
def
hij
kml
mno
Wouldn't it be smarter to use STL (vectors, or C++11 arrays) and not rely on the number in the file as it may not be accurate causing buffer overflows etc?
//Edit
Define a class Account in a file Account.h which contains the data members: customer
id, BSB number, etc.
I assume Account and ATM types are those classes.
The most basic form of dynamic array is one created using new[], and destroyed using delete[]:
ATM * atms = new ATM[count];
// do stuff with the array
delete [] atms;
However, this brings the danger that the code using the array might throw an exception, return from the function, or otherwise prevent the delete[] from happening. If that happens, then you will lose the only pointer to the allocated memory, and it will remain allocated but inaccessible; this is known as a memory leak. For this reason, it's better to wrap the array in a class, with:
member variables to store a pointer to the array, and (optionally) its size
constructors and/or functions to allocate the array
a destructor to delete the array
(optionally) function(s) to resize the array
Deleting the allocation in an object's destructor uses the principle of RAII to ensure that the array is deleted once it is no longer needed.
This leaves one more danger: if you copy this array object, then you will end up with two objects that both try to delete the same array, which is disastrous. To prevent this, you'll need to consider the Rule of Three. Either write a copy constructor and copy-assignment operator to allocate a new array and copy the contents, or delete them. (If you're learning old-fashioned C++, then you can't delete member functions, so you'll have to declare them private and not implement them instead.)
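A bare-bones sketch of such a wrapper might look like this (the AtmArray name is made up, a small struct stands in for the assignment's ATM class, and resizing and error handling are left out):
#include <cstddef>
#include <utility>   // std::swap

struct ATM { int id; /* ...fields from the assignment... */ };   // stand-in for the assignment's ATM class

class AtmArray {
public:
    explicit AtmArray(std::size_t size) : data_(new ATM[size]), size_(size) {}

    ~AtmArray() { delete[] data_; }                      // RAII: the array is always freed

    // Rule of Three: copying must duplicate the buffer, not share it.
    AtmArray(const AtmArray& other) : data_(new ATM[other.size_]), size_(other.size_)
    {
        for (std::size_t i = 0; i < size_; ++i)
            data_[i] = other.data_[i];
    }

    AtmArray& operator=(AtmArray other)                  // copy-and-swap
    {
        std::swap(data_, other.data_);
        std::swap(size_, other.size_);
        return *this;
    }

    ATM&        operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const              { return size_; }

private:
    ATM*        data_;
    std::size_t size_;
};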
Wouldn't it be smarter to use STL?
Usually, yes. But if you're learning C++, it's a good idea to understand how memory management works, as well as how to get the library to handle it for you. That's probably part of the point of this exercise.
A common approach for this kind of assignment would be to simulate the auto-expanding behavior of a vector on your own. Allocate an array for your data on the heap, and keep track of its length and the number of items you've stored in the array.
Once you've filled the array with your data, expand the maximum size by some amount and allocate a new array. This allows you to copy the data from the old array into the new array, and then continue adding items until you run out of space again.
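A sketch of that grow-when-full step (again a stand-in struct is used for the assignment's ATM type, and error handling is omitted):
#include <cstddef>

struct ATM { int id; };   // stand-in for the assignment's ATM class

// Append one item, doubling the capacity whenever the array is full.
void push(ATM*& data, std::size_t& size, std::size_t& capacity, const ATM& value)
{
    if (size == capacity) {
        std::size_t newCapacity = (capacity == 0) ? 4 : capacity * 2;
        ATM* bigger = new ATM[newCapacity];
        for (std::size_t i = 0; i < size; ++i)   // copy the old items over
            bigger[i] = data[i];
        delete[] data;                           // free the old block
        data = bigger;
        capacity = newCapacity;
    }
    data[size++] = value;
}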
Basically, if you need to implement a dynamic array without using the STL, you have to deal explicitly with memory allocation/deallocation.
Basically you have to:
Allocate space with malloc the first time array is constructed or used
Keep track of inserted/removed elements
Use realloc when you have used up the allocated space
Free the allocated space when the array is destroyed
Of course, implementing a good STL-like container such as std::vector isn't an easy task (but possibly a nice homework!).
What I can suggest is the following:
Avoid reallocation as much as possible. When the space runs out, allocate some extra space in order to avoid calling realloc continuously (see std::vector::reserve).
Avoid reallocating space when elements are removed. Once you have allocated, unless memory usage is too high, leave the allocated space as is, in order to avoid future reallocations.
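A sketch of that scheme for a plain int array (note that realloc is only safe for trivially copyable element types; for class types like the assignment's Account or ATM you'd grow with new[]/delete[] and copy instead, as in the other answers):
#include <cstddef>
#include <cstdlib>   // malloc, realloc, free

struct IntArray {
    int*        data;
    std::size_t size;       // elements in use
    std::size_t capacity;   // elements allocated
};

void init(IntArray& a)
{
    a.capacity = 8;                                   // allocate some slack up front
    a.size = 0;
    a.data = static_cast<int*>(std::malloc(a.capacity * sizeof(int)));
}

void push(IntArray& a, int value)                     // error checking omitted
{
    if (a.size == a.capacity) {                       // space is used up: grow geometrically
        a.capacity *= 2;
        a.data = static_cast<int*>(std::realloc(a.data, a.capacity * sizeof(int)));
    }
    a.data[a.size++] = value;
}

void destroy(IntArray& a)
{
    std::free(a.data);
    a.data = nullptr;
    a.size = a.capacity = 0;
}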
Wouldn't it be smarter to use STL (vectors, or C++11 arrays) and not
rely on the number in the file as it may not be accurate causing
buffer overflows etc?
The internals of std::vector aren't magic. It's possible to manually do yourself what std::vector does for you.
It sounds like that's what you're supposed to do for this assignment; make your own 'ATM type' that can manage reading data safely from the ATM.txt file, and an 'Account type' that can hold data from the accounts.txt file. You should probably get clarification from whoever wrote the assignment on how exactly they expect these types to be designed and used. Also, looking back at whatever class materials you have should tell you what you need to know about using dynamic arrays.
Since this is homework we don't want to give answers directly, but in general what I suggest is:
Make a class myDynamicArray
Make the class contain an int or long to store array size
Allocate memory for your array using "new". From the assignment, it looks like it might be an array of strings or, if the professor is strictly banning STL (string is now considered STL), it will be an array of character arrays.
Write an insert method which, before inserting, checks the size (see #2) of your array and, if it isn't large enough, makes it bigger. One way to do this without using pre-C++ functions, which I assume is best since this is a C++ class, is to allocate a new array of larger size --> copy data from old array --> Insert any new data. How much larger? You might pick, say, 20% larger with each new allocation. Microsoft's C# allocates "the next largest prime number" of elements, though they have really fast memory allocation routines.
Don't forget to delete[] the dynamic array when finished with it (what "finished" means depends on the assignment). Note that once the program exits, technically the memory should be freed up automatically, but it's very bad practice not to free it yourself (think of larger, non-academic programs that aren't regularly shut down).
I'd avoid the template code another user provided. I'm sure it's great code, but it's going to raise an eyebrow to use something that advanced in an early C++ class.
Suggested reading:
http://www.cplusplus.com/doc/tutorial/dynamic/
For example, I have an array of 200 integers. What I want to do is convert it to two arrays of 80 integers, removing the 40 integers in between. The goal, of course, is to use the existing memory block without allocating two new arrays of 80 integers and copying from the first array; what I want is to cut the initial array from index 80 to 120 and treat what is left as two separate arrays.
Move semantics uses a similar low level approach to avoid unnecessary copy of rvalues, so my question is if there is a low level approach to achieve a similar effect but assign the already existing data to multiple objects.
For example assigning pointers to the raw memory addresses of where the cuts are and the new elements begin, forcing them to act as arrays which use the same data, already allocated and filled in by the initial array?
Naturally, I could also delete the initial array, take its address, and use that to assign its memory area to a new element, but is it possible to tell C++ at which exact address to allocate a new object? And also, is there a way to guarantee the data of the initial array won't be corrupted between its deletion and the reallocation of that same memory area to a new object?
Such an approach is absent from any of the books on C++ I've read, but I get the feeling there might very well be some low-level trickery to achieve the desired result.
Yes, this is possible using placement-new. However, there is no way to guarantee that the content of a memory-location will not be changed between delete and a reallocation. Anyway, placement-new only enables you to construct an object in memory that you already own. So you'd have to allocate a pool of memory and then do your own memory management inside that pool.
You can use placement new but it's generally not a good idea to go this low-level, unless you are really constrained.
Using it is a good idea when:
You implement a memory pool
You need to write to a precise memory location because the platform requires that (for example, memory mapped I/O on some embedded device).
Disadvantages of using it:
You need to make sure that the allocated memory is large enough to hold the object.
You need to explicitly call the destructor when the object is not needed anymore.
is there a way to guarantee the data of the initial array won't be corrupted between its deletion and the reallocation of that same memory area to a new object
No, that's up to the OS. It might be possible with a platform-specific API if it exists.
What I'd simply do is pass pointers to the relevant slices:
void i_need_80(int arr[80]); // same as "int*"
std::array<int, 200> the_array;
i_need_80(the_array.data());
i_need_80(the_array.data() + 120);
In place of std::array<int, 200> you can also have a dynamic std::vector<int> v, in which case you'd say v.data() and v.data() + 120.
Don't use new.
The only thing you can do is call realloc() to resize (and possibly move) a block allocated with malloc() or realloc(). If the block was allocated using new[], even that isn't possible. The things you want would require a rewrite (re-invention) of the memory management functions malloc(), free() and realloc().
This may be a very simple question, but please help me.
I wanted to know what exactly happens when I call new and delete. For example, in the code below:
char * ptr=new char [10];
delete [] ptr;
The call to new returns a memory address. Does it allocate exactly 10 bytes on the heap? Where is the information about the size stored? When I call delete on the same pointer, I see in the debugger that a lot of bytes get changed before and after the 10 bytes.
Is there any header for each new which contains information about the number of bytes allocated by new?
Thanks a lot
Does it allocate exactly 10 bytes?
That's implementation-dependent. The guarantee is "at least 10 chars".
Where is the information about size stored?
That's implementation-dependent.
Is there any header for each new which contains information about the number of bytes allocated by new?
That's implementation-dependent.
By "that's implementation-dependent" I mean it's not defined in the standard.
That's all up to the compiler and your runtime library. Only the effects that new and delete have on your program are exactly defined; how these are achieved is not specified.
In your case it seems like a little more memory than requested is allocated and it will probably store management information like the size of the current chunk of memory, information about adjacent areas of free space or information to help the debugger try to detect buffer overflows and similar problems.
It is completely implementation-dependent. In the general case you have to store the number of elements elsewhere. The implementation must allocate enough space for at least the number of elements specified, but it can allocate more.
Is there any header for each new which contains information about the number of bytes allocated by new?
That's platform-dependent, but yes, on many platforms there is.
Precisely: according to the standard, new char[10] will allocate at least 10 bytes on the heap.
The internals of new and delete are implementation dependent. So it will vary from compiler to compiler, and platform to platform. Additionally, you can find a variety of allocator algorithms (e.g: TCMalloc).
I'll give you an overview of how it could work internally, but don't take it as absolute truth. It's written for the sole purpose of this explanation.
In short, the new operator internally invokes malloc. malloc uses a really long linked list of available memory blocks, a.k.a. the free chain. When malloc is invoked, it searches this list for the first block that's big enough to hold the requested size. It then splits that block into two parts: one with the size you requested, and the other with the rest, which is added back to the free chain. Finally, it returns the block with the requested size.
The inverse occurs in a free call, which is invoked by delete/delete[]. In short, it puts the provided block back onto the free chain.
There could be fancy tricks during the process I described above, like sorting the free chain, rounding the requested size to the next power of two to reduce memory fragmentation, and so on.
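A toy first-fit free chain, just to illustrate the bookkeeping described above (real allocators store per-block headers, split blocks, coalesce neighbours, and much more; the names here are made up):
#include <cstddef>
#include <new>

// Toy first-fit allocator state: a singly linked list of available blocks.
struct FreeBlock {
    std::size_t size;
    FreeBlock*  next;
};

FreeBlock* freeChain = nullptr;

void* toyMalloc(std::size_t wanted)
{
    FreeBlock** link = &freeChain;
    for (FreeBlock* block = freeChain; block != nullptr; link = &block->next, block = block->next) {
        if (block->size >= wanted) {      // first block that is big enough
            *link = block->next;          // unlink it from the chain and hand it out
            // (a real malloc would split off the unused tail and keep it on the chain)
            return block;
        }
    }
    return nullptr;                       // nothing fits: a real malloc asks the OS for more memory
}

void toyFree(void* p, std::size_t size)
{
    // Reuse the block's own memory as the list node and push it onto the chain.
    freeChain = new (p) FreeBlock{size, freeChain};
}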
char * ptr=new char [10];
You are creating an array of 10 characters on the heap and storing the address of the 0th element in a pointer. This is similar to doing a malloc in C.
delete [] ptr;
You are deleting (freeing) the heap memory which was allocated by the earlier statement. This is similar to doing a free in C.
It is implementation-dependent, but the metadata for a block of memory is usually stored in the area just before the returned address. The change you observed before the 10 bytes was likely metadata being updated for this block (probably the block size being written into the metadata), and the change after the 10 bytes was likely metadata being updated for the next block (still unallocated, probably the pointer to the next chunk on the free list).
It is not a good idea to mess with the heap as it is not portable. However, if you want to do such heap magic, I suggest you implement your own memory pools (just get a large chunk of memory from the heap and manage it yourself). A possible place to start would be to look at libmm.
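To illustrate the "metadata just before the returned address" layout, here's a sketch of a wrapper that hides the size in front of the bytes it hands out (real heaps store more than just the size, and error checking is omitted):
#include <cstddef>
#include <cstdlib>

// Allocate a little extra and stash the size just in front of the user's bytes.
void* allocWithHeader(std::size_t n)
{
    std::size_t* raw = static_cast<std::size_t*>(std::malloc(sizeof(std::size_t) + n));
    raw[0] = n;          // the "header": metadata that lives before the returned address
    return raw + 1;      // the caller only ever sees this address
}

void freeWithHeader(void* p)
{
    std::size_t* raw = static_cast<std::size_t*>(p) - 1;   // step back onto the header
    // raw[0] tells the allocator how big the block was; here we just release it.
    std::free(raw);
}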
While the specifics are implementation dependent, one piece of information the implementation will need to store is the number of elements in the array. Or if it does not store it directly, it will need to accurately derive it from the block size allocated.
The reason for this because if an array of objects is allocated with new[], when they are deleted with delete[], the destructor of each object in the array will need to be called. delete[] will need to know how many objects to destruct. This is why it is necessary to match new with delete and new[] with delete[].