I have a piece of code that deals with a C++ array.
#include <iostream>
using namespace std;

int main() {
    int *p;
    p = new int[3];
    for (int i = 0; i < 3; i++) {
        p[i] = i;
    }
    //delete[] p;
    for (int i = 0; i < 3; i++) {
        std::cout << *(p + i) << std::endl;
    }
}
How does this code work? How does the expression *(p+i) work, and how is it different from using p[i]? What changes in the code if we uncomment the line delete[] p;?
1) When you do this:
p = new int[3];
Now, p points to the first element of the dynamically allocated array.
When you write *(p + i), simple pointer arithmetic takes place. It boils down to: value at (<address stored in p> + <size of the type pointed to by p> * i), which is equivalent to p[i].
That's why it works.
2) In C++, unlike Java, you have to explicitly release dynamically allocated memory using delete, as there is no garbage collector in C++ (and never will be, according to Bjarne Stroustrup). Otherwise, the memory remains acquired for the application's lifetime, causing a memory leak.
Suggestion:
Place your delete[] at the end of the program. If you uncomment it where it is now, the loop below it reads freed memory, which is undefined behavior and may result in SIGSEGV.
Also, avoid using new and delete as much as you can.
Related
I have a function like this:
int fun(){
    int* arr = new int[10];
    for(int i = 0; i < 10; i++){
        arr[i] = 5;
    }
    delete[] arr;
    return arr[6];
}
int main(){
    std::cout << fun();
    return 0;
}
What I am trying to do is free the memory which is pointed to by the pointer arr. But the function does not return the pointer arr, so I tried to free it inside the function.
It won't print anything if delete[] arr is above return arr[6] (using Visual Studio 2019).
But if return arr[6] is above delete[] arr, would the memory be freed, or would that statement be skipped?
Or should I declare arr inside main() and then free it in main()?
Unless it's for academic purposes, you rarely see a C++ program using manual memory allocation; you don't need it, since the containers in the STL containers library do this memory management reliably for you. In your particular example, std::vector is recommended.
That said, to answer your question:
It won't print anything if delete[] arr is above return arr[6] (Using Visual Studio 2019).
If you delete the array before accessing the data stored in it, the behavior is undefined. It's only natural that it doesn't print anything. It could also print the expected result; that's one of the features of undefined behavior. Visual Studio or not, it's the same.
But if return arr[6] is above delete[] arr, would the memory be freed or this sentence be skipped?
Yes, it would be skipped; more accurately, no code after the return statement is executed. The memory would not be freed.
Or should I declare arr inside main() then free it in main()?
If the data should belong in the main's scope you should definitely declare it there, you can pass it to the function as an argument:
#include <cassert>
#include <iostream>
int fun(int* arr) {
    assert(arr);
    for (int i = 0; i < 10; i++) {
        arr[i] = 5;
    }
    return arr[6];
}
int main() {
    int* arr = new int[10];
    std::cout << fun(arr);
    delete[] arr;
    return 0;
}
The following code causes a core dump during deletion. But if the memset line is commented out, it runs fine.
So it looks like memset does something wrong. What is the problem with the following code?
#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    int n = 5;
    int **p = new int* [n];
    for (int i = 0; i < n; i++) {
        p[i] = new int [n];
    }
    // if this line is commented out, it runs fine
    memset(&p[0][0], 0, n*n*sizeof(int));
    // core dump here
    for (int i = 0; i < n; i++) {
        delete [] p[i];
    }
    delete [] p;
    return 0;
}
The memory blocks allocated for the p[i]s are not necessarily contiguous. So calling a single memset to clear all of them at once will touch memory that is not yours (the main reason for the segmentation fault). If you want to set them all to zero, you have to iterate over them:
for (int i = 0; i < n; i++) {
    memset(p[i], 0, n*sizeof(int));
}
What you have created is an array of pointers to non-contiguous areas of memory:
int **p = new int* [n];
for (int i = 0; i < n; i++) {
    p[i] = new int [n];
}
Here p[0] points at the first area, p[1] at the second, and p[n-1] at the last. They are not the same object, so from a language-lawyer point of view such a memset call is undefined behavior:
memset(&p[0][0], 0, n*n*sizeof(int)); // out of bounds
&p[0][0] points at the first element of an array object of n elements (size n*sizeof(int)). Anything odd is allowed to happen once you have broken the rules; the broken delete[] call is a typical reaction to such "memory corruption".
Note that you don't need memset for zero-initialization of arrays in C++; all you need is to value-initialize them on creation:
int **p = new int* [n];
for (int i = 0; i < n; i++) {
    p[i] = new int [n]();
}
If you want your array to be a contiguous two-dimensional array, where every sub-array is adjacent to the next one (none of the standard tools offer this directly), you can use the placement-new approach:
int **p = new int* [n];
int *pool = new int [n*n]; // the whole array lives here
for (int i = 0; i < n; i++) {
    p[i] = new (pool + i*n) int [n]();
    // Creating subobject arrays with a placement parameter:
    // the parameter is a pointer to the storage where the
    // new-expression constructs the objects.
    // No memory allocation happens here.
}
....
delete [] p;    // delete the array of pointers
delete [] pool; // delete the memory pool
Or even better, avoid naked pointers where possible, and avoid exposing users to such code. Use encapsulation, either standard library types or your own types, to hide that "code gore". The problem with such exposed code is that there is no mechanism to deallocate the memory if something interrupts execution, e.g. an exception.
memset
Converts the value ch to unsigned char and copies it into each of the first count characters of the object pointed to by dest. If the object is a potentially-overlapping subobject or is not TriviallyCopyable (e.g., scalar, C-compatible struct, or an array of trivially copyable type), the behavior is undefined. If count is greater than the size of the object pointed to by dest, the behavior is undefined.
Every time new is called, a new block of memory is returned. It does not have to sit directly behind the previously allocated block.
So the memory does not have to be contiguous, as that memset call assumes, and memset ends up writing to memory not assigned to the program, so a crash occurs.
To correctly zero-initialize the memory, add parentheses after new:
int **p = new int* [n] (); // with C++11 and later, {} also works
for (int i = 0; i < n; i++)
{
    p[i] = new int [n] (); // with C++11 and later, {} also works
}
Here, whether I use delete or delete[], the output is still 70. Can I know why?
#include <iostream>
using namespace std;

int main()
{
    int* c = new int[100];
    for (int i = 0; i < 98; i++)
    {
        c[i] = i;
    }
    cout << c[70] << endl;
    delete[] c;              // or: delete c;
    cout << c[70] << endl;   // outputs 70 even after delete[] or delete
    return 0;
}
Accessing deleted memory is undefined behavior. Deleting with the wrong delete is also UB. Any further discussion is pointless in the sense that you cannot reliably expect any outcome.
In many cases, UB will just do the "correct" thing, but you need to be aware that this is completely "by chance" and could change with another compiler, another version of the same compiler, the weather... To get correct code, you need to avoid all cases of UB, even those that seemingly work.
Using new allocates some memory for your program and returns a pointer to that memory address, reserving as much memory as the data type needs. When you later use delete, the memory is "freed", but its content is not erased. If you had an int with the value 70 stored at that address, it will often still contain 70 until something else is allocated at that address and puts another value there.
If you use new to allocate memory for an array, consecutive blocks of memory are reserved until there are enough blocks for your specified array length.
Let's say you do the following:
int main() {
    int* array = new int[10]; // array now points to the first block of the allocated memory
    delete array;             // mismatched delete for a new[] allocation: undefined behaviour
    delete[] array;           // would free all memory allocated by the previous new[]
    // Note that you should never free allocated memory twice, like in this
    // code sample. Using delete on already freed memory is undefined behaviour!
}
Always use delete for single objects and delete[] for arrays.
A demonstration of your problem:
int main() {
    int* c = new int[10]; // We allocate memory for an array of 10 ints
    c[0] = 1;             // We set the value of the first int in the array to 1
    delete[] c;
    /*
     * We free the previously allocated memory.
     * Note that this does not erase the CONTENT of the memory!
     * c still points at the first block of the array!
     */
    std::cout << c[0];
    /*
     * Firstly, this is undefined behaviour (accessing deallocated memory).
     * However, this will often output 1, unless something else has already
     * been allocated at that address and filled it with another value.
     */
    return 0;
}
If you want to overwrite the content of the memory before freeing it, you can use std::memset.
Example:
#include <cstring>
#include <iostream>

int main() {
    std::size_t length = 10;
    int* c = new int[length];
    c[0] = 1;
    std::cout << c[0] << std::endl;          // outputs 1
    std::memset(c, 0, length * sizeof(int)); // fill the whole array with 0 bytes
    delete[] c;                              // now we free the array's memory
    std::cout << c[0] << std::endl;          // undefined behaviour: may print 0, may not
}
As others pointed out, it's undefined behaviour and anything can happen.
Such errors can be caught easily with the help of tools like Valgrind.
Here is the code:
#include <iostream>
#include <string>
#include <vector>
using namespace std;
int main()
{
    int *p = new int[2];
    p[0] = 1;
    p[1] = 2;
    cout << *p++ << endl;
    delete p;
    return 0;
}
It compiles, but at runtime I get the error "free(): invalid pointer", followed by a memory map.
Operating system: Ubuntu 10.10
Compiler: g++ 4.4.3
You need to call the array-version of delete:
delete[] p;
EDIT: However, your real problem is that you're incrementing p.
The delete operators only work on the original pointers that were allocated.
Your call to delete must use the same pointer returned by your call to new.
In your code, you increment p with *p++, which due to operator precedence is interpreted as *(p++). Due to this, you are providing delete a pointer that the memory manager has never heard of.
Additionally, you should use delete[] to match a call to new[].
That being said, here is a guess of what you may have intended:
int *p = new int[2];
p[0] = 1;
p[1] = 2;
for (int i = 0; i < 2; i++)
    cout << p[i] << endl;
delete[] p;
return 0;
Look at the code:
int *p1 = new int;
int *p2 = new int[2];
int *p3 = p1;
p1 = p2;
delete p3;
delete[] p1;
Pointers p1 and p2 have the same representation and the same set of allowed operations. For the compiler they are both int*.
Although the objects they point at are significantly different: the object from new int[2] typically has, at a negative offset, a header that records the number of items in the array.
Working with the array object is otherwise the same as working with a scalar object. Only when the object is released is it necessary to tell the compiler what sort of object it is: a single object or an array. This is a well-known source of bugs and confusion. Nevertheless, the language was defined this way decades ago and we cannot change it.
It is also important to delete exactly the same pointer that was returned with new. This rule applies both to simple new and new[].
In a program I allocate a huge multidimensional array and do some number-crunching; then only the first part of the array is of further interest, and I'd like to free just the rest and continue working with the data in the first part. I tried using realloc, but I am not sure whether this is the correct way, given that I must preserve the data in the array and would preferably avoid copying that chunk of memory.
#include <cstring>
#include <cassert>
#include <cstdlib>
#include <iostream>
using namespace std;

void FillArrayThenTruncate(char* my_array, const int old_size, int* new_size);

int main() {
    const int initial_size = 1024*1024*1024;
    char* my_array = static_cast<char*>(malloc(initial_size));
    assert(my_array);
    int new_size;
    FillArrayThenTruncate(my_array, initial_size, &new_size);
    for(int i = 0; i < new_size; ++i)
        cout << static_cast<int>(my_array[i]) << endl;
}

void FillArrayThenTruncate(char* my_array, const int old_size, int* new_size) {
    // do something with my_array:
    memset(my_array, 0, old_size);
    for(int i = 0; i < 10; ++i) my_array[i] = i % 3;
    // cut the initial array
    *new_size = 10;
    void* new_array = realloc(my_array, *new_size);
    cout << "Old array pointer: " << static_cast<void*>(my_array) << endl;
    cout << "New array pointer: " << new_array << endl;
    my_array = static_cast<char*>(new_array);
    assert(my_array != NULL);
}
UPDATE:
* Please do not bother suggesting the STL. The question is about C arrays.
* Thanks to "R Samuel Klatchko" for pointing out the bug in the code above.
I assume you're using this for learning... otherwise I'd recommend you look into std::vector and the other STL containers.
The answer to the title question is No. You have to either compact the existing elements, or you need to allocate new space and copy the data you want. realloc will either extend/contract from the end or allocate new space and copy the existing data.
If you're working with such a large data set, you might as well just have a collection of chunks rather than a monolithic set. Maybe avoid loading the whole thing into ram to begin with if you only need certain parts.
For C++, use STL containers instead of handling your memory manually. For C, there is realloc().
Yes, if you allocate with malloc, you can resize with realloc.
That said, realloc is allowed to move your memory so you should be prepared for that:
// Only valid when shrinking memory
my_array = static_cast<char*>(realloc(my_array, *new_size));
Note that if you are growing memory, the above snippet is dangerous as realloc can fail and return NULL in which case you will have lost your original pointer to my_array. But for shrinking memory it should always work.