I'm using auto_ptr<> with an array of class-pointer type, so how do I assign a value to it?
e.g.
auto_ptr<class*> arr[10];
How can I assign a value to the arr array?
You cannot use auto_ptr with arrays, because its destructor calls delete p, not delete [] p.
You want boost::scoped_array or one of the other Boost smart-array types :)
If you have C++0x (e.g. MSVC10, GCC >= 4.3), I'd strongly advise using either a std::vector<T> or a std::array<T, n> as your base object type (depending on whether the size is fixed or variable), and if you allocate this guy on the heap and need to pass it around, put it in a std::shared_ptr:
typedef std::array<T, n> mybox_t;
typedef std::shared_ptr<mybox_t> mybox_p;
mybox_p makeBox() { auto bp = std::make_shared<mybox_t>(...); ...; return bp; }
Arrays and auto_ptr<> don't mix.
From the GotW site:
Every delete must match the form of its new. If you use single-object new, you must use single-object delete; if you use the array form of new, you must use the array form of delete. Doing otherwise yields undefined behaviour.
I'm not going to copy the GotW site verbatim; however, I will summarize your options to solve your problem:
Roll your own auto array
1a. Derive from auto_ptr. Few advantages, too difficult.
1b. Clone auto_ptr code. Easy to implement, no significant space/overhead. Hard to maintain.
Use the Adapter Pattern. Easy to implement; hard to use, maintain, and understand. Adds time/space overhead.
Replace auto_ptr With Hand-Coded EH Logic. Easy to use, no significant space/time overhead. Hard to implement and read; brittle.
Use a vector<> Instead Of an Array. Easy to implement and read, less brittle, no significant space/time overhead. Requires syntactic and sometimes usability changes.
So the bottom line is to use a vector<> instead of C-style arrays.
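For example, a minimal sketch of the vector<> replacement (MyClass is a hypothetical stand-in for the asker's class):

#include <vector>

struct MyClass { int value; };

int main()
{
    std::vector<MyClass> arr(10);  // 10 default-constructed elements
    arr[0].value = 42;             // assign through normal indexing
}                                  // storage released automatically; no delete needed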
As everyone said here, don't mix arrays with auto_ptr. auto_ptr should be used only when you have multiple return paths that make it difficult to release memory, or when you receive an allocated pointer from somewhere else and are responsible for cleaning it up before exiting the function.
The other thing is that the destructor of auto_ptr calls the delete operator on the stored pointer. What you're passing is a single element of an array, so the memory manager will try to find and free a memory block allocated starting at the address you pass. That address probably doesn't correspond to any allocation the heap knows about, so you may get undefined behaviour: a crash, memory corruption, and so on.
I have an array of (for example) uint8_ts.
Is std::unique_ptr the wrong tool to use to manage this object in memory?
For example,
std::unique_ptr<uint8_t> data(new uint8_t[100]);
Will this produce undefined behaviour?
I want a smart-pointer object to manage some allocated memory for me. std::vector isn't ideal, because it is a dynamic object. std::array is no good either, because the size of allocation is not known at compile time. I cannot use the [currently, 2016-03-06] experimental std::dynarray, as this is not yet available on Visual Studio 2013.
Unfortunately I have to conform to VS2013, because, rules.
The way you're using the unique_ptr will indeed result in undefined behavior because it'll delete the managed pointer, but you want it to be delete[]d instead. unique_ptr has a partial specialization for array types to handle such situations. What you need is
std::unique_ptr<uint8_t[]> data(new uint8_t[100]);
You can also use make_unique for this
auto data = std::make_unique<uint8_t[]>(100);
There is a subtle difference between the two, however. Using make_unique will zero initialize the array, while the first method won't.
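If you want the first form to be zeroed as well, you can value-initialize the array explicitly; a small sketch:

std::unique_ptr<uint8_t[]> data(new uint8_t[100]());  // trailing () value-initializes, so all bytes are zero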
Certainly unique_ptr as you've used it is incorrect because it will call delete rather than delete[] to deallocate the memory.
The normal solution to this situation is std::vector: it allocates and manages your memory as well as providing a fairly robust interface. You say that you don't want to use vector because "True, but would be risky to expose that functionality to the user", from which I infer you're exposing the implementation detail of your container type in the interface of your class. That, then, is your real problem.
So use vector to manage the memory but don't expose vector directly in your class interface.
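A minimal sketch of that separation (Buffer is a hypothetical name): the vector stays a private implementation detail, and the interface exposes only a pointer and a size.

#include <cstddef>
#include <cstdint>
#include <vector>

class Buffer {
public:
    explicit Buffer(std::size_t n) : bytes_(n) {}
    uint8_t* data() { return bytes_.data(); }              // raw view for clients
    const uint8_t* data() const { return bytes_.data(); }
    std::size_t size() const { return bytes_.size(); }
private:
    std::vector<uint8_t> bytes_;  // manages the allocation; never exposed directly
};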
I'm working with a code base that is poorly written and has a lot of memory leaks.
It uses a lot of structs that contains raw pointers, which are mostly used as dynamic arrays.
Although the structs are often passed between functions, the allocations and deallocations of those pointers are scattered in seemingly random places and cannot easily be tracked, reasoned about, or understood.
I changed some of them to classes, with the pointers RAII-managed by the classes themselves. They work well and don't look very ugly, except that I banned copy construction and copy assignment of those classes simply because I didn't want to spend time implementing them.
Now I'm thinking: am I re-inventing the wheel? Why don't I replace the C-style arrays with std::array or std::valarray?
I would prefer std::valarray because it uses heap memory and is RAII-managed. And std::array is not (yet) available in my development environment.
Edit1: Another plus of std::valarray is that the majority of those dynamic arrays are POD (mostly int16_t, int32_t, and float) arrays, and its numeric API could possibly make life easier.
Is there anything that I need to be aware of before I start?
One thing I can think of is that there might not be an easy way to convert std::valarray or std::array back to a C-style array, and part of our code does use pointer arithmetic and needs the data presented as a plain C-style array.
Anything else?
EDIT 2
I came across this question recently. A VERY BAD thing about std::valarray is that it's not safely copy-assignable until C++11.
As is quoted in that answer, in C++03 and earlier, it's UB if source and destination are of different sizes.
The standard replacement of C-style array would be std::vector. std::valarray is some "weird" math-vector for doing number-calculation-like stuff. It is not really designed to store an array of arbitrary objects.
That being said, using std::vector is most likely a very good idea. It would fix your leaks, use the heap, is resizable, has great exception-safety and so on.
It also guarantees that the data is stored in one contiguous block of memory. You can get a pointer to said block with the data() member function or, if you are pre-C++11, with &v[0] for a non-empty vector v. You can then do your pointer business with it as usual.
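A short sketch of that pointer access:

std::vector<int> v(100);
int* p = v.data();   // C++11; pre-C++11 use &v[0] (v must be non-empty)
p[10] = 7;           // ordinary pointer arithmetic over the contiguous block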
std::unique_ptr<int[]> is close to a drop-in replacement for an owning int*. It has the nice property that it will not implicitly copy itself, but it will implicitly move.
An operation that copies will generate compile time errors, instead of run time inefficiency.
It also has next to no run time overhead over that owning int* other than a null-check at destruction. It uses no more space than an int*.
std::vector<int> stores 3 pointers and implicitly copies (which can be expensive, and does not match your existing code behavior).
I would start with std::unique_ptr<int[]> as a first pass and get it working. I might transition some code over to std::vector<int> after I decide that intelligent buffer management is worth it.
Actually, as a first pass, I'd look for memcpy and memset and similar functions and make sure they aren't operating on the structures in question before I start adding RAII members.
A std::unique_ptr<int[]> means that the default created destructor for a struct will do the RAII cleanup for you without having to write any new code.
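A sketch of what such a struct might look like (the field names are made up):

#include <cstddef>
#include <cstdint>
#include <memory>

struct Samples {
    std::unique_ptr<int16_t[]> data;  // owns the buffer; the default destructor calls delete[]
    std::size_t count = 0;
};
// Samples is movable but not copyable, so accidental copies of the
// old raw-pointer kind become compile-time errors.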
I would prefer std::vector as the replacement for C-style arrays. You can have direct access to the underlying data (something like bare pointers) via .data():
Returns pointer to the underlying array serving as element storage.
Title says it.
Sample of bad practice:
std::vector<Point>* FindPoints()
{
std::vector<Point>* result = new std::vector<Point>();
//...
return result;
}
What's wrong with it if I delete that vector later?
I mostly program in C#, so this problem is not very clear for me in C++ context.
As a rule of thumb, you don't do this because the less you allocate on the heap, the less you risk leaking memory. :)
std::vector is useful also because it automatically manages the memory used for the vector in RAII fashion; by allocating it on the heap you now require an explicit deallocation (with delete result) to avoid leaking its memory. Things are made complicated by exceptions, which can alter your return path and skip any delete you put along the way. (In C# you don't have such problems because inaccessible memory is simply reclaimed periodically by the garbage collector.)
If you want to return an STL container you have several choices:
just return it by value; in theory you incur a copy penalty because of the temporaries created in the process of returning result, but newer compilers should be able to elide the copy using NRVO [1]. There may also be std::vector implementations that implement a copy-on-write optimization like many std::string implementations do, but I've never heard of one.
On C++0x compilers, instead, the move semantics should trigger, avoiding any copy.
Store the pointer of result in an ownership-transferring smart pointer like std::auto_ptr (or std::unique_ptr in C++0x), and also change the return type of your function to std::auto_ptr<std::vector<Point> >; that way, your pointer is always encapsulated in a stack object that is automatically destroyed when the function exits (in any way) and destroys the vector if it's still owned by it. Also, it's completely clear who owns the returned object.
Make the result vector a parameter passed by reference by the caller, and fill that one instead of returning a new vector.
Hardcore STL option: you would instead provide your data as iterators; the client code would then use std::copy+std::back_inserter or whatever to store such data in whichever container it wants. Not seen much (it can be tricky to code right) but it's worth mentioning.
[1] As #Steve Jessop pointed out in the comments, NRVO works fully only if the return value is used directly to initialize a variable in the calling method; otherwise, the compiler can still elide the construction of the temporary return value, but the assignment operator for the variable to which the return value is assigned may still be called (see #Steve Jessop's comments for details).
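To illustrate option 2, a minimal sketch using the era-appropriate std::auto_ptr (FindPoints and Point are the asker's names):

std::auto_ptr<std::vector<Point> > FindPoints()
{
    std::auto_ptr<std::vector<Point> > result(new std::vector<Point>());
    //...
    return result;  // ownership transfers to the caller; no leak even on exceptions
}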
Creating anything dynamically is bad practice unless it's really necessary. There's rarely a good reason to create a container dynamically, so it's usually not a good idea.
Edit: Usually, instead of worrying about things like how fast or slow returning a container is, most of the code should deal only with an iterator (or two) into the container.
Creating objects dynamically in general is considered a bad practice in C++. What if an exception is thrown from your "//..." code? You'll never be able to delete the object. It is easier and safer to simply do:
std::vector<Point> FindPoints()
{
std::vector<Point> result;
//...
return result;
}
Shorter, safer, more straightforward... As for performance, modern compilers will optimize away the copy on return, and if they can't, move constructors will be executed, so this is still a cheap operation.
Perhaps you're referring to this recent question: C++: vector<string> *args = new vector<string>(); causes SIGABRT
One liner: It's bad practice because it's a pattern that's prone to memory leaks.
You're forcing the caller to accept dynamic allocation and take charge of its lifetime. It's ambiguous from the declaration whether the pointer returned is a static buffer, a buffer owned by some other API (or object), or a buffer that's now owned by the caller. You should avoid this pattern in any language (including plain C) unless it's clear from the function name what's going on (e.g strdup, malloc).
The usual way is to instead do this:
void FindPoints(std::vector<Point>* ret) {
std::vector<Point> result;
//...
ret->swap(result);
}
void caller() {
//...
std::vector<Point> foo;
FindPoints(&foo);
// foo deletes itself
}
All objects are on the stack, and all the deletion is taken care of by the compiler. Or just return by value, if you're running a C++0x compiler+STL, or don't mind the copy.
I like Jerry Coffin's answer. Additionally, if you want to avoid returning a copy, consider having the caller pass the result container by reference; sometimes the swap() method comes in handy:
void FindPoints(std::vector<Point> &points)
{
std::vector<Point> result;
//...
result.swap(points);
}
Programming is the art of finding good compromises. Dynamically allocated memory can have its place of course, and I can even think of problems where a good compromise between code complexity and efficiency is obtained using std::vector<std::vector<T>*>.
However, std::vector does a great job of hiding most needs for dynamically allocated arrays, and managed pointers are often a perfect solution for dynamically allocated single instances. This means it's just not that common to find cases where an unmanaged dynamically allocated container (or dynamically allocated anything, actually) is the best compromise in C++.
This in my opinion doesn't make dynamic allocation "bad", just "suspect" when you see it in code, because there's a high probability that a better solution is possible.
In your case, for example, I see no reason to use dynamic allocation; just making the function return an std::vector would be efficient and safe. With any decent compiler, Return Value Optimization will be used when the result initializes a newly declared vector, and if you need to assign the result to an existing vector you can still do something like:
FindPoints().swap(myvector);
which will not copy any of the data but just do some pointer twiddling (note that you cannot use the apparently more natural myvector.swap(FindPoints()) because of a sometimes-annoying C++ rule that forbids passing temporaries as non-const references).
In my experience, the biggest source of need for dynamically allocated objects is complex data structures where the same instance can be reached through multiple access paths (e.g. instances are simultaneously in a doubly linked list and indexed by a map). In the standard library, containers always have sole ownership of the contained objects (C++ is a copy-semantics language), so it may be difficult to implement those solutions efficiently without pointers and dynamic allocation.
Often you can still find reasonable-enough compromises that just use standard containers (maybe paying for some extra O(log N) lookups you could have avoided), and that, considering the much simpler code, is IMO the best compromise in most cases.
I am pretty proficient with C, and freeing memory in C is a must.
However, I'm starting my first C++ project, and I've heard some things about how you don't need to free memory, by using shared pointers and other things.
Where should I read about this? Is this a valuable replacement for proper delete C++ functionality? How does it work?
EDIT
I'm confused, some people are saying that I should allocate using new and use smart pointers for the deallocation process.
Other people are saying that I shouldn't allocate dynamic memory in the first place.
Others are saying that if I use new I also have to use delete just like C.
So which method is considered more standard and more-often used?
Where should I read about this?
Herb Sutter's Exceptional C++ and Scott Meyers's More Effective C++ are both excellent books that cover the subject in detail.
There is also a lot of discussion on the web (Google or StackOverflow searches for "RAII" or "smart pointer" will no doubt yield many good results).
Is this a valuable replacement for proper delete C++ functionality?
Absolutely. The ability not to worry about cleaning up resources, especially when an exception is thrown, is one of the most valuable aspects of using RAII and smart pointers.
What I meant in my comment (sorry for being terse - I had to run out to the shops) is that you should be using:
std::string s = "foobar";
rather than:
std::string * s = new std::string( "foobar" );
...
delete s;
and:
vector <Person> p;
p.push_back( Person( "fred" ) );
rather than:
vector <Person *> p;
p.push_back( new Person( "fred" ) );
You should always be using classes that manage memory for you. In C++ the main reason for creating an object using new is that you don't know its type at compile-time. If that isn't the reason, think long and hard before using new and delete, or even smart pointers.
If you allocate dynamic memory (with new), you need to free it (with delete), just like using malloc/free in C. The power of C++ is that it gives you lots of ways of NOT calling new, in which case you don't need to call delete.
You still have to worry about freeing memory in C++, it's just that there are better methods/tools for doing so. One can argue that attention to memory management in C++ is more difficult as well due to the added requirement of writing exception safe code. This makes things such as:
MyClass *y = new MyClass;
doSomething(y);
delete y;
Look completely harmless until you find that doSomething() throws an exception and now you have a memory leak. This becomes even more dangerous as code is maintained as the code above could have been safe prior to someone changing the doSomething() function in a later release.
Following the RAII methodology is a big part of fixing memory management challenges, and using auto_ptr or the shared pointers provided by libraries such as Boost makes it easier to incorporate these methods into your code.
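For example, the leaky snippet above could be made exception-safe with a scope-owning pointer; a sketch using the era-appropriate std::auto_ptr (assuming doSomething takes a raw MyClass*):

std::auto_ptr<MyClass> y(new MyClass);
doSomething(y.get());  // if this throws, y's destructor still deletes the object
// no explicit delete needed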
Note that auto_ptr is not a "shared" pointer. It is an object that takes ownership of the dynamically allocated object and gives that ownership away on assignment and copy. It doesn't count references to the memory. This makes it unsuitable for use within standard containers and many in general prefer the shared_ptr of Boost to the auto_ptr provided by the standard.
It is never safe to put auto_ptrs into standard containers. Some people will tell you that their compiler and library compiles this fine, and others will tell you that they've seen exactly this example recommended in the documentation of a certain popular compiler; don't listen to them.
The problem is that auto_ptr does not quite meet the requirements of a type you can put into containers, because copies of auto_ptrs are not equivalent. For one thing, there's nothing that says a vector can't just decide to up and make an "extra" internal copy of some object it contains. For another, when you call generic functions that will copy elements, like sort() does, the functions have to be able to assume that copies are going to be equivalent. At least one popular sort internally takes a copy of a "pivot" element, and if you try to make it work on auto_ptrs it will merrily take a copy of the pivot auto_ptr object (thereby taking ownership and putting it in a temporary auto_ptr on the side), do the rest of its work on the sequence (including taking further copies of the now-non-owning auto_ptr that was picked as a pivot value), and when the sort is over the pivot is destroyed and you have a problem: at least one auto_ptr in the sequence (the one that was the pivot value) no longer owns the pointer it once held, and in fact the pointer it held has already been deleted!
Taken From: Using auto_ptr Effectively
Well, of course you need to delete. I would rephrase this as 'what libraries can I use to automate the deletion of allocated memory?'. I'd recommend you start by reading the Boost smart pointers page.
The best answer I can give you is: something needs to call delete for each object created with new. Whether you do it manually, or using a scope-based smart pointer, or a reference-counted smart pointer, or even a non-deterministic garbage collector, it still needs to be done.
Having said that, I have not manually called delete in 10 years or so. Whenever I can I create an automatic object (on the stack); when I need to create an object on the heap for some reason I try using a scope-based smart pointer, and in rare cases when there is a legitimate reason to have shared ownership, I use a reference counted smart pointer.
This is a great question, and actually several in one:
Do I need to worry about Managing Memory?
Yes! There is no garbage collection in C++. Anytime you allocate something with new you need to either call delete in your own code, or delegate that responsibility to something like a smart pointer.
When Should I use dynamic memory allocation?
There are many reasons you'd want to use dynamic memory allocation (allocating with new). Some of these include:
You don't know the size of the thing you are allocating at compile time
You don't know the type of the thing you are allocating at compile time
You are reusing the same data in different contexts and don't want to pay the performance overhead of copying that data around.
There are lots of other reasons, and these are gross over generalizations, but you get the idea.
What tools can I use to help me with memory management?
Smart pointers are the way to go here. A smart pointer will take ownership of memory that you allocate and then release that memory automatically for you, at a time determined by the smart pointer's policy.
For example, a boost::scoped_ptr will deallocate the memory for you when it goes out of scope:
{
scoped_ptr<MyClass> myVar( new MyClass() );
// do Something with myVar
} // myVar goes out of scope and calls delete on its MyClass
In general you should use smart pointers over raw pointers anytime you can. It will save you years of tracking down memory leaks.
Smart pointers come in many forms including:
std::auto_ptr
Boost Smart Pointers
If you can use Boost smart pointers I would. They rock!
Since C++ does not have a garbage collector built into the language, you need to be aware of what memory you have dynamically allocated and how that memory is being freed.
That said, you can use smart pointers to alleviate the problem of having to manually free memory via delete - for example, see Smart Pointers (Boost).
First and foremost, before you get into the business of using auto_ptr's and writing your own RAII classes, learn to use the Standard Template Library. It provides many common container classes that automatically allocate their internal memory when you instantiate them and free it up when they go out of scope - things like vectors, lists, maps, and so forth. When you employ STL, using the new-operator and delete (or malloc and free) is rarely necessary.
Freeing memory in C++ is just as much a must as in C.
What you may be thinking of is a smart pointer library; for example, Boost's shared_ptr will do reference counting for you (the standard library's auto_ptr, by contrast, transfers ownership rather than counting references).
I'm confused, some people are saying that I should allocate using new and use smart pointers for the deallocation process.
They're right. Just like in C, you still need to manage all your memory one way or another; however, there are ways to use the language to automate delete.
Smart pointers are basically scope-bound wrapper objects for pointers: the wrapper's destructor deletes the corresponding pointer once the smart pointer, which lives on the stack like any other object, goes out of scope.
The beauty of C++ is that you have explicit control over when things are created and when things are destroyed. Do it right and you will not have issues with memory leaks etc.
Depending on your environment, you may want to create objects on the stack or you may want to dynamically allocated (create them on the 'heap' - heap in quotes because its an overused term but is good enough for now).
Foo x; // created on the stack - automatically destroyed when the program exits that block of code it was created in.
Foo *y = new Foo; // created on the heap - its O.K. to pass this one around since you control when its destroyed
Whenever you use 'new', you should use the corresponding version of delete... somewhere, somehow. If you use new to initialize a smart pointer like:
std::auto_ptr<Foo> x( new Foo );
You are actually creating two items. An instance of auto_ptr and an instance of Foo. auto_ptr is created on the stack, Foo on the heap.
When the stack 'unwinds', it will automatically call delete on that instance of Foo. Automatically cleaning it up for you.
So, general rule of thumb, use the stack version whenever possible/practical. In most instances it will be faster as well.
In order of preference, you should:
Avoid handling allocation yourself at all. C++'s STL (standard template library) comes with a lot of containers that handle allocation for you. Use vector instead of dynamically allocated arrays. Use string instead of char * for arrays of characters. Try to seek out an appropriate container from the STL rather than designing your own.
If you are designing your own class and honestly need dynamic allocation (and you usually won't if you compose your class using members of the STL), place all instances of new (new[]) in your constructor and all instances of delete (delete[]) in your destructor. You shouldn't need malloc and free, generally.
If you are unable to keep your allocations paired within constructors and destructors, use smart pointers. Really this is not so different from #2; smart pointers are basically just special classes which use destructors to ensure deallocation happens.
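A minimal sketch of point 2, C++03-style (IntBuffer is a hypothetical name):

#include <cstddef>

class IntBuffer {
public:
    explicit IntBuffer(std::size_t n) : data_(new int[n]), size_(n) {}
    ~IntBuffer() { delete [] data_; }  // delete[] matches the new[] in the constructor
private:
    IntBuffer(const IntBuffer&);             // copying disabled (declared, never defined)
    IntBuffer& operator=(const IntBuffer&);
    int* data_;
    std::size_t size_;
};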
I know that delete [] destroys all the array elements and then releases the memory.
I initially thought the compiler requires it just so it can call the destructor for every element of the array, but I also have a counter-argument: the heap memory allocator must know the number of bytes allocated, and using sizeof(Type) it's possible to find the number of elements and call the appropriate number of destructors, preventing resource leaks.
So, is my assumption correct or not? Please clear up my doubt.
In short, I don't get the purpose of the [] in delete [].
Scott Meyers says in his Effective C++ book: Item 5: Use the same form in corresponding uses of new and delete.
The big question for delete is this: how many objects reside in the memory being deleted? The answer to that determines how many destructors must be called.
Does the pointer being deleted point to a single object or to an array of objects? The only way for delete to know is for you to tell it. If you don't use brackets in your use of delete, delete assumes a single object is pointed to.
Also, the memory allocator might allocate more space than required to store your objects, and in that case dividing the size of the memory block returned by the size of each object won't work.
Depending on the platform, the _msize (windows), malloc_usable_size (linux) or malloc_size (osx) functions will tell you the real length of the block that was allocated. This information can be exploited when designing growing containers.
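For example, on Linux with glibc (an assumption; the other platforms use the analogous calls named above):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // malloc_usable_size (glibc)

int main()
{
    void* p = std::malloc(100);
    // typically prints a value >= 100: the allocator rounded the block up
    std::printf("usable size: %zu\n", malloc_usable_size(p));
    std::free(p);
}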
Another reason why it won't work is that Foo* foo = new Foo[10] calls operator new[] to allocate the memory, and delete [] foo; then calls operator delete[] to deallocate it. As those operators can be overloaded, you have to adhere to the convention; otherwise delete foo; calls operator delete, which may have an implementation incompatible with operator delete []. It's a matter of semantics, not just of keeping track of the number of allocated objects to later issue the right number of destructor calls.
See also:
[16.14] After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p?
Short answer: Magic.
Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both these techniques are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are:
Over-allocate the array and put n just to the left of the first Fred object.
Use an associative array with p as the key and n as the value.
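A rough sketch of technique 1 (purely illustrative; real implementations differ, and this ignores alignment):

#include <cstddef>
#include <cstdlib>
#include <new>

// new[] might over-allocate and stash the element count in front:
void* allocate_array(std::size_t n, std::size_t elem_size)
{
    void* raw = std::malloc(sizeof(std::size_t) + n * elem_size);
    if (!raw) throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = n;         // store n "just to the left"
    return static_cast<std::size_t*>(raw) + 1;   // hand back the element region
}

// delete[] might then recover n to know how many destructors to run:
std::size_t stored_count(void* p)
{
    return *(static_cast<std::size_t*>(p) - 1);
}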
EDIT: after reading #AndreyT's comments, I dug into my copy of Stroustrup's "The Design and Evolution of C++" and excerpted the following:
How do we ensure that an array is correctly deleted? In particular, how do we ensure that the destructor is called for all elements of an array?
...
Plain delete isn't required to handle both individual objects and arrays. This avoids complicating the common case of allocating and deallocating individual objects. It also avoids encumbering individual objects with the information necessary for array deallocation.
An intermediate version of delete[] required the programmer to specify the number of elements of the array.
...
That proved too error prone, so the burden of keeping track of the number of elements was placed on the implementation instead.
As #Marcus mentioned, the rationale may have been "you don't pay for what you don't use".
EDIT2:
In "The C++ Programming Language, 3rd edition", §10.4.7, Bjarne Stroustrup writes:
Exactly how arrays and individual objects are allocated is implementation-dependent. Therefore, different implementations will react differently to incorrect uses of the delete and delete[] operators. In simple and uninteresting cases like the previous one, a compiler can detect the problem, but generally something nasty will happen at run time.
The special destruction operator for arrays, delete[], isn’t logically necessary. However, suppose the implementation of the free store had been required to hold sufficient information for every object to tell if it was an individual or an array. The user could have been relieved of a burden, but that obligation would have imposed significant time and space overheads on some C++ implementations.
The main reason why it was decided to keep separate delete and delete[] is that these two entities are not as similar as it might seem at the first sight. For a naive observer they might seem to be almost the same: just destruct and deallocate, with the only difference in the potential number of objects to process. In reality, the difference is much more significant.
The most important difference between the two is that delete might perform polymorphic deletion of objects, i.e. the static type of the object in question might be different from its dynamic type. delete[] on the other hand must deal with strictly non-polymorphic deletion of arrays. So, internally these two entities implement logic that is significantly different and non-intersecting between the two. Because of the possibility of polymorphic deletion, the functionality of delete is not even remotely the same as the functionality of delete[] on an array of 1 element, as a naive observer might incorrectly assume initially.
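A small illustration of that difference (hypothetical Base/Derived):

struct Base { virtual ~Base() {} };
struct Derived : Base { int extra; };

Base* single = new Derived;
delete single;                 // fine: the virtual destructor resolves the dynamic type

Base* many = new Derived[10];  // compiles, but...
delete [] many;                // undefined behaviour: delete[] is strictly non-polymorphic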
Contrary to the strange claims made in some other answers, it is, of course, perfectly possible to replace delete and delete[] with a single construct that branches at a very early stage, i.e. it would determine the type of the memory block (array or not) using housekeeping information stored by new/new[], and then jump to the appropriate functionality, equivalent to either delete or delete[]. However, this would be a rather poor design decision, since, once again, the functionality of the two is too different. Forcing both into a single construct would be akin to creating a Swiss Army Knife of a deallocation function. Also, in order to be able to tell an array from a non-array we'd have to introduce an additional piece of housekeeping information even into single-object memory allocations done with plain new. This might easily result in noticeable memory overhead in single-object allocations.
But, once again, the main reason here is the functional difference between delete and delete[]. These language entities possess only apparent skin-deep similarity that exists only at the level of naive specification ("destruct and free memory"), but once one gets to understand in detail what these entities really have to do one realizes that they are too different to be merged into one.
P.S. This is BTW one of the problems with the suggestion about sizeof(type) you made in the question. Because of the potentially polymorphic nature of delete, you don't know the type in delete, which is why you can't obtain any sizeof(type). There are more problems with this idea, but that one is already enough to explain why it won't fly.
The heap itself knows the size of an allocated block; you only need the address. Look at how free() works: you pass only the address and it frees the memory.
The difference between delete (and delete[]) and free() is that the former two first call destructors and then free the memory (possibly using free()). The problem is that delete[] also takes only one argument, the address, and with only that address it needs to know the number of objects to run destructors on. So new[] uses some implementation-defined way of recording the number of elements; usually it prepends the array with the count. delete[] then relies on that implementation-specific data to run the destructors and free the memory (again, using only the block address).
delete[] just calls a different implementation (function);
There's no reason an allocator couldn't track it (in fact, it would be easy enough to write your own).
I don't know the reason they did not manage it, or the history of the implementation, if I were to guess: Many of these 'well, why wasn't this slightly simpler?' questions (in C++) came down to one or more of:
compatibility with C
performance
In this case, performance. Using delete vs delete[] is easy enough; I believe it could all be abstracted from the programmer and still be reasonably fast (for general use). delete[] requires only a few additional function calls and operations (omitting destructor calls), but that cost would be paid on every call to delete, and it's unnecessary because the programmer generally knows the type he/she is dealing with (if not, there's likely a bigger problem at hand). So the split just avoids calling through the allocator. Additionally, single allocations may not need to be tracked by the allocator in as much detail; treating every allocation as an array would require extra count entries for trivial allocations. That is several levels of allocator-implementation simplification, which actually matter to many people, considering this is a very low-level domain.
This is more complicated.
The keyword, and the convention of using it to delete an array, were invented for the convenience of implementations, and some implementations do use it (I don't know which ones, though; MS VC++ does not).
The convenience is this:
In all other cases, you know the exact size to be freed by other means. When you delete a single object, you can get the size from compile-time sizeof(). When you delete a polymorphic object through a base pointer and you have a virtual destructor, you can get the size from a separate entry in the vtbl. If you delete an array, how would you know the size of the memory to be freed, unless you track it separately?
The special syntax would allow tracking such a size only for arrays - for instance, by putting it before the address that is returned to the user. This takes up additional resources and is not needed for non-arrays.