I have a problem with passing a 2D dynamic array to a member function of my class.
void s::LoadData(long int &Num_Of_InputDataId,
                 long int **PresentData,
                 long int **InputDataId,
                 long int **InputData)
{
    long int b;
    for (long int i = 0; i < Num_Of_InputDataId; i++)
    {
        b = InputDataId[i][0];
        for (long int j = 0; j < Num_Of_InputDataId; j++)
        {
            InputData[i][j] = PresentData[b][j]; // error occurs here
        } // end of inner for
    } // end of outer for
}
In main:
long int Num_Of_InputDataId = 10;

long int **PresentData = new long int *[Num_Of_InputDataId];
for (long int ii = 0; ii < Num_Of_InputDataId; ++ii)
    PresentData[ii] = new long int[Num_Of_InputDataId];

long int **InputDataId = new long int *[Num_Of_InputDataId];
for (long int ii = 0; ii < Num_Of_InputDataId; ++ii)
    InputDataId[ii] = new long int[2];

long int **InputData = new long int *[Num_Of_InputDataId];
for (long int ii = 0; ii < Num_Of_InputDataId; ++ii)
    InputData[ii] = new long int[Num_Of_InputDataId];

Load.LoadData(Num_Of_InputDataId, PresentData, InputDataId, InputData);
Each of Num_Of_InputDataId, PresentData, and InputDataId comes from a different function.
For some i, InputDataId[i][0] is greater than or equal to Num_Of_InputDataId.
Therefore, when you execute:
b = InputDataId[i][0];
... = PresentData[b][...];
the index into PresentData is out of bounds, causing a memory error and a crash.
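As a sketch (function and variable names assumed from the question), a defensive version of the copy loop would validate each ID before using it as a row index:

```cpp
#include <iostream>

// Sketch of a guarded version of the question's loop: skip (and report)
// any ID that is not a valid row index into PresentData.
void LoadData(long n, long** PresentData, long** InputDataId, long** InputData)
{
    for (long i = 0; i < n; ++i)
    {
        long b = InputDataId[i][0];
        if (b < 0 || b >= n)  // out-of-range ID: PresentData[b] would crash
        {
            std::cerr << "bad id " << b << " at row " << i << '\n';
            continue;
        }
        for (long j = 0; j < n; ++j)
            InputData[i][j] = PresentData[b][j];
    }
}
```

This doesn't fix the root cause (the IDs are not row indices), but it turns a crash into a diagnostic you can act on.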
As you have not given the precise error, I will make a common-sense guess at the problem. The value of b is an element of your ID array. Since it is a long int and your code implies it is some sort of ID code (rather than an index into another array), it very likely exceeds the valid index range of your other array, PresentData, and the out-of-bounds access is what causes the memory error. You are treating b as an index when it probably is not one.
The simplest way to check would be to step through the code with a debugger, which should always be your first step in identifying what the problem actually is, rather than guessing from the program's error report. In fact, using a debugger would probably have taken less time than pasting your code here and asking the question, so if you are not using one already, start now; it will help you no end.
And as the comment from Superman suggests, since you are using C++, take advantage of all the nice things the STL can do for you and use vectors.
Just as @Superman has commented on your question, here is a simple snippet to make life easier.
#define V(x) std::vector<x>
typedef V(int) VI;
typedef V(VI) VII;

void functionDoSomething(VII &arr) { }
(Passing the vector by reference.) Using such typedefs improves the readability of your code and lets you define nested structures easily.
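For instance, a nested-vector table replaces the triple loop of new[] calls from the question entirely. A sketch (sizes taken from the question, typedef names my own):

```cpp
#include <vector>

typedef std::vector<long> VL;
typedef std::vector<VL> VLL;

// Build a rows x cols table as nested vectors: no new/delete needed,
// and the memory is released automatically when the table goes out of scope.
VLL makeTable(long rows, long cols)
{
    return VLL(rows, VL(cols));  // all elements zero-initialized
}
```

With this, PresentData, InputDataId, and InputData each become a single makeTable call.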
Related
I just started learning C++. I learned the easy way of declaring arrays and now I'm confused about the usage of
int* foo = new int[n];
and how it is different from
int foo [n];
I tried testing with code but couldn't find any difference. I read from sources that using "new" requires me to manually de-allocate the memory after I don't need it anymore. In that case, there is no advantage in using "new" or dynamic memory allocation at all. Am I missing something here?
I tried running this:
#include <iostream>

int main() {
    int n;
    std::cout << "array size";
    std::cin >> n;
    std::cout << n;
    int foo[n];                 // line A
    // int* foo = new int[n];   // line B
    foo[6] = 30;
    std::cout << foo[6] << std::endl;
}
Commenting out line B to run line A, or vice versa, gave the exact same result.
There are several ways these are different. First, let's talk about this one:
int n = 10;
int array[n];
This is a variable-length array (VLA). It is not part of standard C++ (some compilers, such as GCC, support it as an extension), so you shouldn't count on it. Imagine this code:
int n = 10;
int array[n];
n = 20;
How big is array?
Now, this is the way you can do it (but it's still problematic):
int n = 10;
int * array = new int[n];
Now, that is legal. But you have to remember later to:
delete [] array;
array = nullptr;
Now, there are two other differences. The first one allocates space on the stack. The second allocates space on the heap, and it persists until you delete it (give it back). So the second array can be returned from a function, but the first can't, as it disappears when the function exits.
HOWEVER... You are strongly, strongly discouraged from doing either. You should instead use a container class.
#include <array>
constexpr std::size_t n = 10;  // the size must be a compile-time constant
std::array<int, n> array;
The advantages of this:
It's standard and you can count on it
You don't have to remember to free it
It gives you range checking
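When the size is only known at run time, as with the cin >> n in the question, std::array won't do (its size must be a compile-time constant), but std::vector gives the same benefits. A sketch:

```cpp
#include <vector>

// Runtime-sized replacement for `new int[n]`: the vector frees its own
// memory, so no delete[] is needed, and it can be returned by value.
std::vector<int> makeFoo(int n)
{
    std::vector<int> foo(n, 0);  // n elements, all zero
    if (n > 6)
        foo[6] = 30;             // mirrors the question's foo[6] = 30
    return foo;
}
```

Note the bounds check before foo[6]: with n read from input, writing foo[6] unconditionally (as the question's code does) is out of bounds whenever n <= 6.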
So, I was doing some HackerRank exercises and I found this one.
In short, it is an interval scheduling problem, but my doubt is about pointers and the data structure.
The code below is a simplified version of my doubt.
My doubt is in the initialize function. When the program finishes it, the ptr pointer variable has only one entry in arrayOfA, the first one, and I wanted it to have size N.
So what did I get wrong about this data structure and its pointers?
I don't want to use another lib such as vector and the like, because I think there is no need for it.
#include <iostream>

struct A
{
    unsigned int startTime;
    unsigned int duration;
    unsigned int endTime;
};

struct B
{
    int size;
    A* arrayOfA = new A[size];
};

B* initialize(int start_time[], int duration[], int n)
{
    B* pointer = new B();
    pointer->size = n;
    for (int i = 0; i < n; i++)
    {
        pointer->arrayOfA[i].startTime = start_time[i];
        pointer->arrayOfA[i].duration = duration[i];
        pointer->arrayOfA[i].endTime = start_time[i] + duration[i];
    }
    return pointer;
}

int main()
{
    // initialization
    int n = 6;
    int arrayOfStart[] = { 1, 3, 0, 5, 5, 8 };
    int arrayOfDuration[] = { 1, 1, 6, 2, 4, 1 };

    B* ptr = initialize(arrayOfStart, arrayOfDuration, n);
    for (int i = 0; i < n; i++)
    {
        std::cout << ptr->arrayOfA[i].startTime << std::endl;
    }
}
I don't want to use another lib as vector and stuff because i think there is no need of it
The fact that you got this simple example wrong - and that this simple example is actually not that simple to implement correctly - is solid evidence that you are wrong.
The whole point of having standard libraries is that there is rarely a good reason to implement these things by hand, and it's easy to get them wrong. And std::vector is not "another lib", it is provided by the standard library for a reason.
the ptr pointer variable has only one instance of arrayOfA
It's only supposed to have one array. You mean, presumably, that the array is the wrong size? How did you tell? What happened when you tried to read all 6 elements?
Anyway, the immediate problem is
A* arrayOfA = new A[size];
... this should go in a constructor. You have to defer evaluation until after you know the value of size.
Then you should also write a destructor, and then you should write the copy constructor, copy assignment operator, and move equivalents.
But if you learned, and used, std::vector instead - you would have finished the problem in the time it took to debug your array handling code.
The whole point of providing libraries of common tools is that you can learn them once and re-use your knowledge. If you write your own bare array code in every hackerrank problem, you can easily encounter different bugs in each one, and you're not accumulating any knowledge you can re-use in the next.
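To make that concrete, here is a sketch of the question's initialize() using std::vector (struct and parameter names taken from the question): the vector knows its own size, so struct B, the manual new[], and the whole rule-of-five dance disappear.

```cpp
#include <vector>

struct A
{
    unsigned int startTime;
    unsigned int duration;
    unsigned int endTime;
};

// The vector replaces B entirely: it carries its size and frees itself.
std::vector<A> initialize(const int start_time[], const int duration[], int n)
{
    std::vector<A> v;
    v.reserve(n);  // one allocation up front
    for (int i = 0; i < n; ++i)
        v.push_back(A{ (unsigned int)start_time[i],
                       (unsigned int)duration[i],
                       (unsigned int)(start_time[i] + duration[i]) });
    return v;  // safe to return by value
}
```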
Given this struct:
struct B
{
int size;
A* arrayOfA = new A[size];
};
a default B will have a pointer pointing to an array of size elements. Since size is not initialized, this invokes undefined behavior.
Instead, you can do:
struct B
{
int size;
A* arrayOfA;
};
and after you set the size member, you can allocate the appropriate memory:
pointer->size = n;
pointer->arrayOfA = new A[pointer->size];
Also, don't forget to delete this memory when it's no longer needed by the program.
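If you do stick with the raw array, a sketch of B that defers the allocation until size is known and pairs it with a delete[] looks roughly like this (copying is deleted to avoid a double delete, per the rule of three):

```cpp
struct A
{
    unsigned int startTime;
    unsigned int duration;
    unsigned int endTime;
};

// Allocation happens in the constructor, after size is known;
// the matching delete[] lives in the destructor.
struct B
{
    int size;
    A* arrayOfA;

    explicit B(int n) : size(n), arrayOfA(new A[n]()) {}
    ~B() { delete[] arrayOfA; }

    // Copying would double-delete arrayOfA, so forbid it (rule of three).
    B(const B&) = delete;
    B& operator=(const B&) = delete;
};
```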
#include <iostream>

char** make2D(const int dim1, const int dim2)
{
    char* toAlloc;
    const int size = (dim1 * dim2) + dim2;
    toAlloc = new char[size];
    for (int i = 0; i < dim2; i++)
    {
        toAlloc[i] = reinterpret_cast<char>(&toAlloc[(dim2 + (dim1 * i))]);
    }
    return reinterpret_cast<char**>(toAlloc);
}

int main(void)
{
    int dim1 = 8;
    int dim2 = 10;
    char** array2D = make2D(dim1, dim2);
    for (int i = 0; i < dim2; ++i)
    {
        array2D[i][i % dim1] = i + 100; // << Crash
    }
    return 0;
}
I was trying to allocate a two-dimensional array with a single allocation.
My plan was for the first 10 items (dim2 in this code) to hold pointers to the first item of each row.
When I tried this with pointers to int,
int** make2D(const int dim1, const int dim2)
{
    int* toAlloc;
    const int size = (dim1 * dim2) + dim2;
    toAlloc = new int[size];
    for (int i = 0; i < dim2; i++)
    {
        toAlloc[i] = reinterpret_cast<int>(&toAlloc[(dim2 + (dim1 * i))]);
    }
    return reinterpret_cast<int**>(toAlloc);
}

int main(void)
{
    int dim1 = 8;
    int dim2 = 10;
    int** array2D = make2D(dim1, dim2);
    for (int i = 0; i < dim2; ++i)
    {
        array2D[i][i % dim1] = i + 100;
    }
    return 0;
}
it works fine, but when I do it with char, it crashes at the commented line in the code above.
My theory was that the reinterpret_cast goes wrong because of the size gap between a pointer (8 bytes) and a char (1 byte).
It sounds ridiculous, but shrinking a pointer (8 bytes) to an int (4 bytes) was fine, while the more drastic cast (8 bytes to 1 byte) causes problems.
I have no idea why char doesn't work but int does.
Could you give some advice to make the char case work?
To answer the question: yes, there is a difference, a huge one. On many platforms a pointer might fit into an int; on very few platforms will it fit into a char. On modern 64-bit PCs, neither is a safe way to store a pointer.
Use containers such as vector or array if the size is static.
Try something like:
array<array<T, dim2>, dim1> variable{};
if you actually want a 2-dimensional array of type T; since you seem to need an array of pointers try something like:
array<array<T *, dim2>, dim1> variable{};
This takes care of making an array of the appropriate type to store pointers on your platform, no matter how big pointers actually are. Obviously you should replace T with the actual type of the data you want to point to; this ensures pointer arithmetic is done properly for you.
Array types have their size calculated at compile time. If you need dynamic sizes, use vector: after construction, call resize on the vector and on all sub-vectors so all the memory is allocated in as few passes as possible.
Please also don't use reinterpret_cast, or c-style casts, it's a recipe for disaster unless you know very well what you're doing.
Don't know what book you're reading or who is teaching you C++ but please change your knowledge source.
Using raw owning pointers is discouraged and the way you're using them is wrong in so many ways.
Never store a pointer in anything but a pointer type. Even in plain C you should cast to at least void * if you need to cast at all.
Please read about unique_ptr or shared_ptr if you really want to store/pass pointers around directly.
If you insist on using raw pointers for containers please try building your code with sanitizers such as address sanitizer, memory sanitizer (these are supported at least by clang and gcc, possibly more compilers these days)
The issue is the "incompatibility" of object sizes:
sizeof(char) is 1.
sizeof(int) is generally 4 or 8 (but at least 2).
sizeof(T*) is generally 4 or 8; std::uintptr_t can hold a void* value, which is not necessarily the case for int (and even less so for char).
You cannot safely store a void* in a char or an int. It happens to work for you with int, but it is not portable.
reinterpret_cast is generally the wrong tool.
Simpler would be to create a class Matrix with a std::vector<T> and an accessor that handles the indexing. (You might even add a proxy to allow m[2][3] syntax.)
With owning raw pointers, you would need placement new and a correct deleter...
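A minimal sketch of such a Matrix class: one contiguous allocation, proper types, and no reinterpret_cast anywhere. Row pointers are never stored; the (row, col) position is computed on each access.

```cpp
#include <cstddef>
#include <vector>

// Single-allocation 2D array: elements live in one std::vector and the
// flat index r * cols + c replaces the question's embedded row pointers.
template <typename T>
class Matrix
{
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    T& operator()(std::size_t r, std::size_t c) { return data_[r * cols_ + c]; }
    const T& operator()(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<T> data_;
};
```

Usage is m(i, j) instead of m[i][j]; this works identically for char, int, or any other element type, because no pointers are ever squeezed into the element type.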
This question already has answers here:
Can a local variable's memory be accessed outside its scope?
(20 answers)
Closed 7 years ago.
#include <iostream>
using namespace std;

int *Arr(int y, int size) {
    int arg[size];
    for (int i = size - 1; i >= 0; i--) {
        arg[i] = y % 10;
        y = y / 10;
    }
    return arg;
}

int main() {
    int *p = Arr(2587, 4);
    for (int j = 0; j < 4; j++) {
        cout << p[j] << " ";
    }
    return 0;
}
I don't know why this isn't working... I'm trying to return an array, but the problem starts at the second digit. Can somebody help? ;) Thanks.
The problem is that you are putting your result into a local array that is destroyed when the function ends. You need to dynamically allocate the array so that its lifespan is not limited to the function it was created in:
#include<iostream>
using namespace std;
int *Arr(int y, int size)
{
// This local array will be destroyed when the function ends
// int arg[size];
// Do this instead: allocate non-local memory
int* arg = new int[size];
for(int i = size - 1; i >= 0; i--)
{
arg[i] = y % 10;
y = y / 10;
}
return arg;
}
int main()
{
int *p = Arr(2587, 4);
for(int j = 0; j < 4; j++)
{
cout << p[j] << " ";
}
// You need to manually free the non-local memory
delete[] p; // free memory
return 0;
}
NOTE:
Allocating dynamic memory using new is to be avoided if possible. You may want to study up on smart pointers for managing it.
Also, in real C++ code, you would use a container like std::vector<int> rather than a built-in array.
Of course it is not working.
At best, the behaviour is undefined, since Arr() returns the address of a local variable (arg) that no longer exists once control returns to main(). main() then uses a returned address that doesn't point to anything that still exists as far as your program is concerned.
There is also the incidental problem that int arg[size], where size is not fixed at compile time, is not valid C++. Depending on how exacting your compiler is (some C++ compilers reject constructs that are not valid C++, while others accept extensions like this), your code may not even compile.
To fix the problem, have your function return a std::vector<int> (std::vector is a templated container defined in the standard header <vector>). Then all your function needs to do is add the values to a local vector, which CAN be returned safely by value to the caller.
If you do it right, you won't even need to use a pointer anywhere in your code.
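A sketch of the digit-extraction function rewritten this way (logic taken from the question's Arr, function name my own):

```cpp
#include <vector>

// Returns the `size` low-order decimal digits of y, most significant first.
// The vector is returned by value, so there is no dangling pointer.
std::vector<int> digits(int y, int size)
{
    std::vector<int> d(size);
    for (int i = size - 1; i >= 0; --i)
    {
        d[i] = y % 10;
        y /= 10;
    }
    return d;
}
```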
#include <iostream>
#include <intrin.h>
using namespace std;

unsigned __int64 TimeValue = 0;

unsigned __int64 rdtsc(void)
{
    return __rdtsc();
}

void time_start() { TimeValue = rdtsc(); }

long long time_stop()
{
    return (rdtsc() - TimeValue);
}

int main()
{
    long x[262144], i, k, r;
    int j;
    x[0] = 0;
    for (i = 1; i < 262144; i++)
    {
        long r = rand() % i;
        x[i] = x[r];
        x[r] = i;
    }
    time_start();
    for (j = 0; j < 1000; j++)
        for (k = 0, i = 0; i < 262144; i++)
            k = x[k];
    cout << time_stop() / 1000 / 262144;
}
In the program I need to create an array one megabyte in size. When debugging the program, at the line long x[262144] an error occurs: Unhandled exception at 0x00ff1997 in dgdxgdrfy.exe: 0xC00000FD: Stack overflow. Why is this, and how do I fix it?
Try declaring it as a global variable, outside of main(). Otherwise it will be allocated on the stack, which is far smaller than the heap. Another solution is to use dynamic allocation with new, but that is more error-prone.
Local variables are allocated on the stack, and the stack is limited. You can probably increase the limit with a compiler or linker switch.
The problem is the very large array you've declared. One simple fix will change it from being on the stack to being dynamically allocated:
std::vector<long> x(262144);
This happens because a local array is allocated on the stack. You can avoid this by using a dynamic array (one created with new), a vector, or by declaring the array at global scope.
You can use static long x[262144]; which moves the allocation off the stack without modifying the rest of your code at all.
Basically, you should dynamically allocate the x array, as shown in this article. The example below is pulled from that text; adapt it to your case and it should work fine (and remember to free the buffer when you are done with it):
double *num;
num = (double *) malloc(BUFSZ * sizeof(double));
/* ... use num ... */
free(num);