I can't really understand the difference between dynamic and static allocation. They say dynamic allocation happens while the program is executing, static allocation only at compile time, and that we can't pick a size during execution. But:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int main()
{
    int size, a = 0;
    cout << "Enter the size of Array: ";
    cin >> size;
    int A[size][size];
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
            cout << a++ << '\t';
        cout << endl;
    }
    system("pause");
    return 0;
}
This program sets the array's size during execution.
The real point of dynamic allocation is that you control the lifetime of the objects being allocated. Dynamically allocated objects exist until you deallocate them. It's not really anything to do with arrays, although that is often the context in which beginners are first taught about allocation.
Consider these two functions
int* bad()
{
    int x = 123;
    return &x;
}

int* good()
{
    int* x = new int(123);
    return x;
}
Both functions create an int and return a pointer to that int.
The bad function is incorrect because the x variable is destroyed when the function exits, so it returns a pointer to an object which has been destroyed.
The good function creates an int dynamically; that object is never destroyed unless the program deletes it, so this function is correct.
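For completeness, here is a minimal sketch of how a caller might use good() and release the allocation; the main function and the delete call are my addition, not part of the original answer.

#include <iostream>

// assumes the good() function shown above is visible in this translation unit
int main()
{
    int* p = good();          // the int outlives the call to good()
    std::cout << *p << '\n';  // prints 123
    delete p;                 // we decide when the object's lifetime ends
    return 0;
}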
Incidentally, int size; ... int A[size][size]; is not legal C++ (it is a variable-length array). Some compilers allow it as an extension, but others will not.
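As a rough sketch (my own illustration, not part of the original answer), two standard-conforming ways to get a run-time-sized 2-D array would be a single new[] block indexed by hand, or a std::vector:

#include <iostream>
#include <vector>

int main()
{
    int size = 0;
    std::cin >> size;

    // option 1: one heap block, indexed by hand as a 2-D array
    int* flat = new int[size * size];
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++)
            flat[i * size + j] = 0;   // element (i, j)
    delete[] flat;

    // option 2: let std::vector manage the memory
    std::vector<std::vector<int>> A(size, std::vector<int>(size, 0));
    return 0;
}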
So I have two programs here:
the first one uses dynamic allocation
and the second one uses a fixed-size array.
When using dynamic allocation, the program runs fine and the outputs are printed correctly, as expected.
However, when using the fixed-size array (the second program), the program runs without errors but the outputs are not what I wanted.
The programs are almost the same except for how the arrays are created, and both arrays hold the same values, so shouldn't the outputs be the same?
What is the reason? Please help me understand.
First Program Example:
input1 output1
1 1
2 2
3 3
4 4
5 5
Second Program Example:
input1 output1
1 1
2 5
3 2058618480
4 32766
5 5
// Using dynamic allocation
#include <iostream>

int* readNumbers(int n) {
    int* a;
    a = new int[n];
    for (int i = 0; i < n; i++) {
        std::cout << "enter for a[" << i << "]: ";
        std::cin >> a[i];
    }
    int* ptr;
    ptr = &a[0];
    return ptr;
}

void printNumbers(int* numbers, int length) {
    for (int i = 0; i < length; i++) {
        std::cout << *(numbers + i) << "\n";
    }
}

int main() {
    int n;
    std::cout << "enter for n: ";
    std::cin >> n;
    int* ptr;
    ptr = readNumbers(n);
    printNumbers(ptr, n);
    delete[] ptr;
    ptr = NULL;
    return 0;
}
And the other one is:
// Using a fixed-size array
#include <iostream>

int* readNumbers(int n) {
    int a[5] = {};
    for (int i = 0; i < 5; i++) {
        std::cout << "enter for a[" << i << "]: ";
        std::cin >> a[i];
    }
    int* ptr;
    ptr = &a[0];
    return ptr;
}

void printNumbers(int* numbers, int length) {
    for (int i = 0; i < length; i++) {
        std::cout << *(numbers + i) << "\n";
    }
}

int main() {
    int* ptr;
    ptr = readNumbers(5);
    printNumbers(ptr, 5);
    return 0;
}
In your second piece of code your array is allocated on the stack inside the readNumbers function. Then you return a pointer to that stack memory to the calling function. This memory is no longer valid when printNumbers is run. It has likely been overwritten by locals in printNumbers.
Allocate the array in main and then the second example should also work.
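As a hedged sketch of that suggestion, keeping the shape of the second program (the readInto name and the array-as-parameter design are my additions):

#include <iostream>

// fills an existing array instead of returning a pointer to a local one
void readInto(int* a, int length) {
    for (int i = 0; i < length; i++) {
        std::cout << "enter for a[" << i << "]: ";
        std::cin >> a[i];
    }
}

void printNumbers(int* numbers, int length) {
    for (int i = 0; i < length; i++) {
        std::cout << *(numbers + i) << "\n";
    }
}

int main() {
    int a[5] = {};       // lives for the whole of main, so the pointer stays valid
    readInto(a, 5);
    printNumbers(a, 5);
    return 0;
}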
In the first case, when you call the new operator to allocate memory for storing multiple int values, that memory comes from the heap. It stays available as you pass it between functions and remains valid for as long as the program runs, until someone calls the delete operator on it. So you can pass this pointer between readNumbers, main and printNumbers and it is still valid.
In the second case you created the array of int as a local variable inside the function, so it lives on the stack. A local variable's scope lasts only while the function is running. In your example readNumbers created the array, and once the function returns its stack frame is cleared; all the local variables created in the function are no longer valid.
Hence, when you use that memory location in other functions such as main and printNumbers you get undefined behaviour: sometimes the result is what you expect, sometimes it is garbage. So you need to be careful about what you pass or return from one function to another.
If you still want the expected result in the second case, declare the array as static, as sketched below.
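A minimal sketch of that static-array variant (my code, not the original poster's; note that a function-local static is shared across all calls, so it only suits simple cases):

#include <iostream>

int* readNumbers(int n) {
    static int a[5] = {};   // static storage duration: survives after the function returns
    for (int i = 0; i < 5; i++) {
        std::cout << "enter for a[" << i << "]: ";
        std::cin >> a[i];
    }
    return &a[0];           // returning this pointer is now safe
}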
Hope this helps.
I was looking at this link about returning a reference to a pointer. According to it, we have to return a reference to a static or global variable. My question is: if we create a memory block inside a function using new, why does the application crash, given that memory allocated with new persists until it is deleted? I wrote the code below to test this; it crashes, but if I make ptr static inside the function there is no issue.
#include <iostream>
using std::cout;

int*& returnPtrByRef(int numElements)
{
    // note: new int(numElements) allocates a single int whose value is numElements,
    // not an array of numElements ints
    int* ptr = new int(numElements);
    return ptr;   // returns a reference to the local pointer ptr
}

int main(void)
{
    int num = 5;
    int*& ptrRef = returnPtrByRef(num);
    for (int cnt = 0; cnt < num; cnt++)
        *(ptrRef + cnt) = cnt * 2;
    for (int cnt = 0; cnt < num; cnt++)
        cout << *(ptrRef + cnt) << '\t';
    return 0;
}
It's useful to think of references as syntactic sugar for pointers. So let's rewrite your function using a double pointer, instead of a reference to a pointer:
int** returnPtrByRef(int numElements) {
    int* ptr = new int(numElements);
    return &ptr;
}
Here we can see that we are really returning the address of a piece of stack-allocated memory (the local pointer) which in turn points to the memory we allocated on the heap. Once the function returns, that stack-allocated memory, typically 8 bytes holding the pointer itself, no longer exists.
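A minimal sketch of one possible fix is to return the pointer by value, so nothing refers to the function's stack frame; the function name returnPtr and the switch to new int[numElements] (an array rather than a single int) are my changes:

#include <iostream>

// returns the pointer by value: the heap allocation survives the return,
// and the caller gets its own copy of the pointer
int* returnPtr(int numElements)
{
    return new int[numElements];
}

int main()
{
    int num = 5;
    int* p = returnPtr(num);
    for (int cnt = 0; cnt < num; cnt++)
        *(p + cnt) = cnt * 2;
    for (int cnt = 0; cnt < num; cnt++)
        std::cout << *(p + cnt) << '\t';
    delete[] p;   // release the array when done with it
    return 0;
}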
I'm eradicating std::string in favor of C-strings, which I'm new to. How do I get the following to compile? g++ complains: cannot convert char(*)[16] to char**
#include <iostream>

void print(char** s, int n)
{
    for (int i = 0; i < n; ++i)
    {
        std::cout << s[i] << '\n';
    }
}

int main()
{
    constexpr int n = 3;
    char s[n][16]{ "Hello", "Bye", "Sky" };
    print(s, n);
}
You created a multidimensional array, not an array of pointers. Usually an array can be treated like a pointer to its first element, but in this case C++ needs to know the size of the second dimension of your array. The function should be as follows:
void print(char s[][16], int n) {
    for (int i = 0; i < n; ++i)
    {
        std::cout << s[i] << std::endl;
    }
}
Understandably, you may want to pass the array via pointers so as not to copy the entire 2-D array. I saw you mentioned you were okay with variable-length strings; that functionality is supported by the string library. You are dealing with C-strings, which are not string objects at all but fixed-size arrays of char. Defining these C-strings using dynamic memory happens to give you the desired behaviour, because you create, in the simplest terms, an array of pointers.
#include <cstring>
#include <iostream>

void print(char** s, int n)
{
    for (int i = 0; i < n; ++i)
    {
        std::cout << s[i] << std::endl;
    }
}

int main()
{
    int n = 3, i;
    char** s = new char*[n];
    for (i = 0; i < n; i++) {
        s[i] = new char[16];
    }
    // copy into the allocated buffers; assigning string literals to s[i]
    // would leak the allocations and make the delete[] calls invalid
    std::strcpy(s[0], "Hello");
    std::strcpy(s[1], "Bye");
    std::strcpy(s[2], "Sky");
    print(s, n);
    for (i = 0; i < n; i++) {
        delete[] s[i];
    }
    delete[] s;
    s = NULL;
    return 0;
}
Since you are using dynamic memory, you now need to free it, which is what the last loop does. As you can see, all this dynamic memory is quite taxing, and it would be easier to use the string library, which has been optimized to do a much better job than you can. If you're still not convinced, you should at least write your own string class that manages the dynamic memory and holds a char* as its private member. Either way you avoid this mess: just make an array of such class objects and don't deal with the multidimensional nonsense at all. No one likes seg faults and memory leaks.
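If you do go the wrapper-class route mentioned above, a very rough sketch of the idea might look like this; the class name MyString and its members are mine and purely illustrative:

#include <cstring>
#include <iostream>

// purely illustrative; std::string already does all of this, and better
class MyString {
    char* data;                               // the private char* mentioned above
public:
    explicit MyString(const char* s) {
        data = new char[std::strlen(s) + 1];
        std::strcpy(data, s);
    }
    ~MyString() { delete[] data; }            // the class, not the caller, frees the memory
    const char* c_str() const { return data; }
    // a real class would also need copy/move operations (rule of three/five)
};

int main() {
    MyString greeting("Hello");
    std::cout << greeting.c_str() << '\n';
    return 0;
}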
Given any type T, T arr[N]; declares a variable arr of type T[N], which is an array and not a pointer. When you use arr in almost all contexts, array to pointer conversions happen, giving the incorrect illusion that arr is a pointer of type T*.
char s[n][16] = { "Hello", "Bye", "Sky" };
declares s as an array of n elements of type char[16]. Now, when array to pointer conversion happens, s decays into a pointer of type char (*)[16]. Hence, your function needs to have the signature
void print(char (*s)[16], int n);
Which is equivalent to
void print(char s[][16], int n);
since in a parameter list the [] is adjusted to a pointer by the compiler.
To make these complex types more readable, a type alias may be used.
using T = char[16];
void print(T s[], int n);
Addressing some concerns
As pointed out in the comments, std::string should almost always be preferred over a char array. If you have performance concerns, benchmark before doing this; I really doubt much performance gain can be observed in most cases.
Declaring an array whose length n is a run-time int is not standard C++ (it is a variable-length array). It is an extension provided by your compiler; it is not portable and in most cases not necessary.
int n = 3;
char vla[n]; // this is a variable length array
char arr[3]; // this is just an array
char* darr = new char[3]; // this is a pointer pointing to dynamically allocated memory
std::string str; // but instead, this is just better
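For comparison, a minimal sketch (my illustration of the advice above) of what the program might look like with std::string:

#include <iostream>
#include <string>

void print(const std::string* s, int n)
{
    for (int i = 0; i < n; ++i)
        std::cout << s[i] << '\n';
}

int main()
{
    constexpr int n = 3;
    std::string s[n]{ "Hello", "Bye", "Sky" };
    print(s, n);
}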
The compiler cannot extract from char** the information that each element is a char[16]. You need to define a type alias for char[16] and pass a pointer to that type to your print function.
#include <iostream>

typedef char str_t[16];

void print(str_t* s, int n)
{
    for (int i = 0; i < n; ++i)
    {
        std::cout << s[i] << std::endl;
    }
}

int main()
{
    constexpr int n = 3;   // a compile-time constant, so the array is not a VLA
    char s[n][16]{ "Hello", "Bye", "Sky" };
    print(s, n);
}
So I started learning C++ last week and naturally, I want to become familiar with the whole pointer and object-oriented business and so on and so forth.
To do that, I'm writing a very simple program for some basic matrix calculations:
#include <iostream>
using std::cout;
using std::cin;

class Matrix {
    int columns; // x
    int rows;    // y
    double* matrix;
public:
    Matrix(int*);
    void printMatrix();
    void free() {
        delete[] matrix;
        return;
    };
};

Matrix::Matrix(int* dim) {
    rows = *dim;
    columns = *(dim + 1);
    matrix = new double[columns * rows];
}

void Matrix::printMatrix() {
    int i, j;
    for (i = 0; i < columns; i++) {
        for (j = 0; j < rows; j++) {
            cout << matrix[columns * i + j] << " ";
        }
        cout << "\n";
    }
    return;
}

int* getMatrix();

int main() {
    Matrix matrix(getMatrix());
    matrix.printMatrix();
    matrix.free();
    return 0;
}

int* getMatrix() {
    int* dim = new int[2];
    cout << "(m x n)-Matrix, m? ";
    cin >> dim[0];
    cout << "n? ";
    cin >> dim[1];
    return dim;
}
The problem (as I see it) occurs when I choose a (4,2) matrix. As I understand from various tutorials,
matrix = new double [columns*rows];
should allocate this much memory: columns*rows times sizeof(double). Also, every 'cell' should be initialized with a 0.
But, choosing a (4,2) matrix, I get the following output, of the function printMatrix():
0 0
0 0
0 6.6727e-319
0 0
Why is the (3,2) entry not initialized with 0?
Thanks!
Also, every 'cell' should be initialized with a 0.
Nope. The language does not do that for you, when you write new double[N].
Why is the (3,2) entry not initialized with 0?
It will be if you write new double[N]() instead!
[C++11: 5.3.4/15]: A new-expression that creates an object of type T initializes that object as follows:
If the new-initializer is omitted, the object is default-initialized (8.5); if no initialization is performed, the object has indeterminate value.
Otherwise, the new-initializer is interpreted according to the initialization rules of 8.5 for direct-initialization.
Granted, this is slightly ambiguous in that it would seem to be talking about the non-array versions of new, but in fact it means both; T is double[4].
In fact, we can see that the same section of wording talks about "object" in both the array and non-array cases, setting the perfect precedent:
[C++11: 5.3.4/1]: [..] If the entity is a non-array object, the new-expression returns a pointer to the object created. If it is an array, the new-expression
returns a pointer to the initial element of the array.
Now, it's essentially impossible to prove this rule, because you can strike unlucky and get all-zeroes even when those values are in fact indeterminate, but the following code entirely unconvincingly makes a good start:
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 4;
    {
        std::vector<double> hack;
        hack.push_back(5);
        hack.push_back(6);
        hack.push_back(7);
        hack.push_back(8);
        hack.push_back(9);
        hack.push_back(10);
        hack.push_back(11);
        hack.push_back(12);
    }
    double* a = new double[n];
    double* b = new double[n]();
    for (std::size_t i = 0; i < n; i++)
        std::cout << a[i] << '/' << b[i] << ' ';
    std::cout << '\n';
    delete[] a;
    delete[] b;
}
I managed to get 0/0 6/0 7/0 8/0 from it, thanks to some heap hackery, but it's still only just pure chance and doesn't really demonstrate anything (live demo).
Unfortunately, new double[4](316) isn't valid (providing a value inside the () is explicitly banned for arrays during direct-initialization, per [C++11: 8.5/16] ) so we can't suggest that new double[4](0) would be reliable and use the example with 316 to convince you of it.
Only variables with static storage duration are zero-initialized by default in C++.
Automatic and dynamically allocated variables must be initialized by you.
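A small sketch of that distinction (my example, not part of the original answer):

#include <iostream>

double global;                         // static storage duration: zero-initialized

int main() {
    double local;                      // automatic: indeterminate value, do not read it
    double* a = new double[3];         // dynamic, default-initialized: indeterminate values
    double* b = new double[3]();       // dynamic, value-initialized: all 0.0
    std::cout << global << ' ' << b[0] << '\n';   // prints "0 0"
    delete[] a;
    delete[] b;
    (void)local;                       // silence unused-variable warnings
    return 0;
}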
Let's say I have a dynamic array:
int* p;
ifstream inFile("pop.txt");
int x;
while (inFile >> x)
{
// ????
}
How do I resize p so that I can keep fitting each x into it, like an array? I don't want to use a vector or static array, as I am trying to learn the language, and I need to use pointers because I don't know the initial size. Any attempt is appreciated.
The simplest answer is that you should use higher level components than raw arrays and raw memory for the reading. That way the library will handle this for you. A simple way of reading a set of numbers into an application (without error handling) could be done with this simple code:
std::vector<int> data;
std::copy(std::istream_iterator<int>(inFile), std::istream_iterator<int>(),
std::back_inserter(data));
The code creates a couple of input iterators out of the stream to read int values, and uses a back_inserter iterator that will push_back onto the vector. The vector itself will manage growing the memory buffer as needed.
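Put together with the headers it needs, a self-contained version of this approach might look like the sketch below; the file name pop.txt is taken from the question, the rest is my assembly:

#include <algorithm>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::ifstream inFile("pop.txt");   // file name taken from the question
    std::vector<int> data;
    std::copy(std::istream_iterator<int>(inFile), std::istream_iterator<int>(),
              std::back_inserter(data));

    for (int value : data)             // the vector grew as needed while reading
        std::cout << value << '\n';
    return 0;
}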
If you want to do this manually you can: allocate a larger chunk of memory, copy the first N elements from the old buffer, release the old buffer, and continue reading until the larger buffer fills up, at which point you follow the same procedure again: allocate, copy, deallocate the old buffer, continue inserting.
You can't resize it. All you can do is allocate a new bigger array, copy everything over from the old array to the new array, then free the old array.
For instance (untested code)
int array_size = 10;
int* array = new int[array_size];
int array_in_use = 0;
int x;
while (in >> x)
{
    if (array_in_use == array_size)
    {
        int* new_array = new int[2 * array_size];
        for (int i = 0; i < array_size; ++i)
            new_array[i] = array[i];
        delete[] array;
        array = new_array;
        array_size *= 2;
    }
    array[array_in_use++] = x;
}
It's tedious, and I'm not convinced it's a good thing for a beginner to be doing. You'd learn more useful stuff if you learned how to use vectors properly.
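For comparison, a vector-based version of the same loop is only a few lines (my sketch, reusing the stream variable in from the snippet above):

std::vector<int> data;     // requires <vector>
int x;
while (in >> x)
    data.push_back(x);     // the vector reallocates and copies behind the scenes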
You could always use realloc(). It's a part of the C Standard Library, and the C Standard Library is a part of the C++ Standard Library. No need for tedious news and deletes.
#include <cstdlib>
#include <iostream>
#include <fstream>

int main(void)
{
    int* array = nullptr;
    unsigned int array_size = 0;
    std::ifstream input("pop.txt");
    for (int x; input >> x;)
    {
        ++array_size;
        int* array_failsafe = array;
        array = static_cast<int*>(realloc(array, sizeof(x) * array_size));
        if (array == nullptr)
        {
            std::cerr << "realloc() failed!" << std::endl;
            free(array_failsafe);
            return EXIT_FAILURE;
        }
        array[array_size - 1] = x;
    }
    for (unsigned int i = 0; i < array_size; ++i)
    {
        std::cout << "array[" << i << "] = " << array[i] << std::endl;
    }
    free(array); // Don't forget!
    return EXIT_SUCCESS;
}