Is it safe to do something like this?
char* charArray = new char[10];
strcat(charArray, "qwertyuiop");
charArray[3] = '\0';
delete [] charArray;
Will everything be deleted? Or will whatever comes after the \0 be left behind? I don't know if I'm leaving garbage.
EDIT: should be strcpy
Apart from the fact that new[] for POD types does not zero-initialize the array, and that strcat would write the terminating '\0' past the end of the allocated area, everything looks fine: in particular, delete[] will release the entire block.
The reason for writing past the end of the allocated block is that the 10-character string "qwertyuiop" requires 11 bytes to store.
If you meant to write strcpy instead of strcat, then that is safe and correct. But it seems you have a misconception about delete [] charArray: it doesn't delete characters, it deletes the memory pointed to by charArray. Even after delete [] charArray the memory might still contain those characters, though that is not guaranteed.
However, if you really wanted to write strcat, and it is not a typo, then your code invokes undefined behavior, because charArray contains garbage to which strcat will attempt to concatenate the second string.
The delete[] releases the allocated memory after destroying the objects within (which doesn't do anything for char). It doesn't care about the content, i.e. it will deallocate as many objects as were allocated.
Note that strcat() depends on a null character to find the end of the string, and that the memory returned from new char[n] is uninitialized. You want to start off with
*charArray = 0;
... and you might want to consider strncat() or, better yet, not use this at all and use std::string instead.
The delete[] statement does not know anything about what is stored in the buffer (including whether it is a string or not), so it will delete all 10 characters. However, your strcat call is overflowing the end of the array (since C strings have a zero byte as terminator), which might break deletion on your platform and is not safe in general.
No, this is fine, the whole array is deleted. delete doesn't look at what the pointer you give it points to. As long as you match new with delete, and new[] with delete[], the right amount of memory will be freed.
(But do consider using std::string instead of char arrays, that'll avoid a lot of bugs like the one you have there writing past the end of your array.)
I'm using something like this:
char* s = new char;
sprintf(s, "%d", 300);
This works, but my question is why?
When I try to delete it I get an error so this produces a memory leak.
It "works" because sprintf expects a char* as its first argument, and that's what you are giving it.
However, you really just allocated one char so writing more than one char to it is undefined behavior. It could be that more than one byte gets allocated depending on... the compiler, the host architecture, and so on but you can't rely on it. Actually, anything could happen and you don't want to base your code on such assumptions.
Either allocate more space for your string buffer, or better use a more "modern" approach. In your case it could be something like:
std::string s = std::to_string(300);
// Or, C++03
std::string s = boost::lexical_cast<std::string>(300);
(We are not talking about performance here, but the initial code being incorrect, we could hardly compare anyway).
As an added bonus, this code doesn't leak any memory because std::string takes care of freeing its internal memory upon destruction (s being allocated on the stack, this will happen automatically when it goes out of scope).
You're creating a one-char buffer, then writing four chars ("300\0") to it [don't forget the string terminator, as I did at first], so the '3' goes into your allocated memory and the '0', '0', and '\0' go into the next memory positions... which belong to someone else.
C++ does not do buffer overrun checking... so it works until it doesn't...
I have the following code:
#include <iostream>
using namespace std;
int main() {
char* data = new char;
cin >> data;
cout << data << endl;
return 1;
}
When I type in a string of 26 ones, it compiles and prints it. But when I enter 27 ones, it aborts. I want to know why.
Why is it 27?
Does it have a special meaning to it?
You're only allocating one character's worth of space. So, reading in any data more than that is overwriting memory you don't own, so that's undefined behavior. Which is what you're seeing in the result.
You'd have to look into specific details under the hood of your C++ implementation. Probably the implementation of malloc, and so on. Your code writes past the end of your buffer, which is UB according to the C++ standard. To get any idea at all of why it behaves as it does, you'd need to know what is supposed to be stored in the 27 or 28 bytes you overwrote, that you shouldn't have done.
Most likely, 27 ones just so happens to be the point at which you started damaging the data structures used by the memory allocator to track allocated and free blocks. But with UB you might find that the behavior isn't as consistent as it first appears. As a C++ programmer you aren't really "entitled" to know about such details, because if you knew about them then you might start relying on them, and then they might change without notice.
You're dynamically allocating one byte of storage. To allocate more, do this:
char* data = new char[how_many_bytes];
When you use a string literal, enough space for it is allocated automatically. When you allocate dynamically, you have to get the number of bytes right or you will get undefined behavior, often a segfault.
This is just Undefined Behavior, a.k.a. "UB". The program can do anything or nothing. Any effect you see is non-reproducable.
Why is it UB?
Because you allocate space for a single char value, and you treat that as a zero-terminated string. Since the zero takes up one char value there is no (guaranteed) space for real data. However, since C++ implementations generally do not add inefficient checking of things, you can get away with storing data in parts of memory that you don't own – until it crashes or produces invalid results or has other ungood effects, because of the UB.
To do this correctly, use std::string instead of char*, and don't new or delete (a std::string does that automatically for you).
Then use std::getline to read one line of input into the string.
Possible Duplicate:
delete[] supplied a modified new-ed pointer. Undefined Behaviour?
Let's say I've allocated a handful of characters using new char[number].
Will it be possible to delete only a few end characters (something like delete[] (charArray + 4);, which will supposedly de-allocate all of characters except for the first four)?
I read that some implementations' new[] store the number of objects allocated before the array of objects so that delete[] knows how many objects to de-allocate, so it's probably unsafe to do what I'm asking...
Thanks.
EDIT:
Is manually deleting the unwanted end bytes using separate delete statements a safe way to do what I'm asking?
No, you have to delete[] exactly the same pointer you get back with new[].
You can't delete a specific segment of an array.
If you're just operating on C strings, you can set a character to \0 and any functions that operate on the string will stop there, as C strings are null-terminated.
If you actually need to free up that memory, the closest you can get would be to make a new, smaller array, copy the elements over, and delete[] the old array.
char* TenCharactersLong = new char[10];
// ...
char* FirstFiveCharacters = new char[5];
for (std::size_t Index = 0; Index < 5; Index++)
{
FirstFiveCharacters[Index] = TenCharactersLong[Index];
}
delete[] TenCharactersLong;
The only "safe" way to do it is to allocate a new array, copy the data, and delete the old one. Or you could just follow the std::vector way and differentiate between "capacity" (the size of the array) and "size" (the amount of elements in it).
Nope, sorry. Memory allocation is complex enough with fragmentation, alignment, padding, overheads, whatnot. A feature like this would only further amplify these problems.
Is manually deleting the unwanted end bytes using separate delete statements a safe way to do what I'm asking?
Don't even try. This can be dangerous. Always deallocate arrays with delete [].
I highly recommend reading the C++ FAQ Lite on this topic: 16.13, 16.14
No, you can only delete the entire array.
No, you can't do that. I'd suggest using an std::vector for this. This way you can pinpoint the indexes that you want to remove and just let the libraries handle it for you.
Is it possible to incrementally increase the amount of memory on the free store that a pointer points to? For example, I know that this is possible.
char* p = new char; // allocates one char to free store
char* p = new char[10]; // allocates 10 chars to free store
but what if I wanted to do something like increase the amount of memory that a pointer points to. Something like...
char input;
char*p = 0;
while(cin >> input) // store input chars into an array in the free store
char* p = new char(input);
obviously this will just make p point to the new input allocated, but hopefully you understand that the objective is to add a new char allocation to the address that p points to, and store the latest input there. Is this possible? Or am I just stuck with allocating a set number.
The C solution is to use malloc instead of new -- this makes realloc available. The C++ solution is to use std::vector and other nice containers that take care of these low-level problems and let you work at a much higher, much nicer level of abstraction!-)
You can do this using the function realloc(), though that only works for memory allocated with malloc() rather than new.
having said that, you probably don't want to allocate more memory a byte at a time. For efficiency's sake you should allocate in blocks substantially larger than a single byte and keep track of how much you've actually used.
realloc
You appear to be using C++. While you can use realloc, C++ makes it possible to avoid explicit memory management, which is safer, easier, and likely more efficient than doing it yourself.
In your example, you want to use std::vector as a container class for chars. std::vector will automatically grow as needed.
In fact, in your case you could use a std::istreambuf_iterator with std::copy and a std::back_insert_iterator to copy the input into a std::vector.
Is there any difference between the below two snippets?
One is a char array, whereas the other is a character array pointer, but they do behave the same, don't they?
Example 1:
// Memory allocation for char * - allocate memory for a 2 character string
char * transport_layer_header = (char *)malloc(2 * sizeof(char));
sprintf(transport_layer_header,"%d%d",1,2);
Example 2:
char transport_layer_header[2];
sprintf(transport_layer_header,"%d%d",1,2);
Yes, there is a difference. In the first example, you dynamically allocate a two-element char array on the heap. In the second example you have a local two-element char array on the stack.
In the first example, since you don't free the pointer returned by malloc, you also have a memory leak.
They can often be used in the same way, for example using sprintf as you demonstrate, but they are fundamentally different under the hood.
The other difference is that your first example will corrupt data on the heap, while the second will corrupt data on the stack. Neither allocates room for the trailing \0.
The most important difference, IMO, is that in the second option transport_layer_header is an array, so it behaves like a constant pointer (you can't make it point to a different place), whereas in the first option you can.
This is of course in addition to the previous answers.
Assuming you correct the "no room for the null" problem, i.e. allocate 3 bytes instead of 2, you would normally only use malloc() if you need dynamic memory. For example, if you don't know how big the array is going to be, you might use malloc.
As pointed out, if you malloc() and don't later free the memory then you have a memory leak.
One more point: you really should check the return value of malloc() to ensure you got the memory. I know that on Solaris malloc() never fails (though it may sleep -- a good reason not to call it if you don't want your process to go to sleep, as noted above). I assume that on Linux malloc() can fail (i.e. if there is not enough memory available). [Please correct me if I'm wrong.]