Overflow or memory error in C++

This could be considered a homework question.
The problem is well known: "you have a triangle of numbers and you have to find the greatest sum."
I wrote a solution in Python some time ago and it works flawlessly.
But now in C++, the expected answer is 75256 and my answer is 9729.
So the problem is that the type short overflows.
To fix this, I assumed changing my array to type int would solve everything, but then declaring an int array a[1001][1001] freezes the program (I guess a memory error).
Does anyone know what to do?
I also tried keeping a separate int that gets incremented whenever the value in the array exceeds 32767, but my result is then still off by about 300. (The code itself works; I tested it on many smaller inputs.)
#include <iostream>
#include <fstream>

int main() {
    std::ofstream fout("numtri.out");
    std::ifstream fin("numtri.in");
    short trifield[1001][1001] = {0};
    int Rows, tmp = 0;
    fin >> Rows;
    for (int x = 0; x < Rows; x++)
        for (int nr = 0; nr <= x; nr++) {
            fin >> tmp;
            trifield[x][nr] = tmp;
        }
    for (int y = (Rows - 2); y > -1; y--)
        for (int x = 0; x <= y + 1; x++) {
            int a = trifield[y + 1][x];
            int b = trifield[y + 1][x + 1];
            if (a > b) trifield[y][x] += a;
            else       trifield[y][x] += b;
        }
    fout << trifield[0][0] << std::endl;
    return 0;
}
note: I'm not looking for the solution, just a good way to deal with overflows, examples appreciated!

If you have issues with memory, try allocating your array dynamically. Note that new short[1001][1001] does not yield a short**; it yields a pointer to an array of 1001 shorts:
short (*trifield)[1001] = new short[1001][1001]();  // the trailing () zero-fills, like = {0}
Remember to delete[] trifield; when you are done with it.
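If the pointer-to-array syntax feels awkward, a minimal sketch of the same idea with std::vector (my suggestion, not part of the original answer) also keeps the data on the heap and frees it automatically:
#include <vector>

// 1001x1001 shorts on the heap, zero-initialised; freed automatically when
// the vector goes out of scope.
std::vector<std::vector<short>> trifield(1001, std::vector<short>(1001, 0));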

You have an array of 1001x1001 shorts... that's 1002001*2 bytes, and all of it goes on your local stack. Depending on your system, that could be too big. Try allocating the space for your trifield with a malloc instead and see what that does for you.
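A minimal sketch of that suggestion (the answer only names malloc; using calloc and a flat indexing scheme is my choice here):
#include <cstdlib>

int main() {
    // calloc zero-fills the block, matching the original "= {0}" initialisation.
    short *trifield = static_cast<short *>(std::calloc(1001 * 1001, sizeof(short)));
    trifield[5 * 1001 + 3] = 7;   // element [5][3] of the flattened field
    std::free(trifield);
    return 0;
}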

You get a stack overflow instead of a numeric overflow!
Move the array to static memory outside of main, so it doesn't use the stack.
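For instance (a small sketch of that suggestion, keeping the name from the question):
#include <iostream>

// Static storage duration: zero-initialised and not placed on the stack.
static short trifield[1001][1001];

int main() {
    trifield[0][0] = 42;   // used exactly like the automatic array before
    std::cout << trifield[0][0] << std::endl;
    return 0;
}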

The way I check for overflow is to check for an obviously bogus result. For instance,
if (myInt + 1 < myInt) {
    // Overflow condition
    cerr << "Overflow" << endl;
}
else {
    myInt++;
}

Overflowing a signed int is undefined behaviour (UB). Overflowing an unsigned int is defined by the standard: it wraps around.
So the only safe way is to check the values manually before doing the operation and make sure it cannot overflow.
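A minimal sketch of such a pre-check for signed addition (my example, not from the answer), using std::numeric_limits:
#include <iostream>
#include <limits>

// Writes a + b into out and returns true only if the sum cannot overflow int.
bool safe_add(int a, int b, int &out) {
    if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
        (b < 0 && a < std::numeric_limits<int>::min() - b)) {
        return false;   // would overflow, so the addition is never performed
    }
    out = a + b;
    return true;
}

int main() {
    int sum = 0;
    if (!safe_add(std::numeric_limits<int>::max(), 1, sum)) {
        std::cerr << "Overflow avoided\n";
    }
    return 0;
}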

Related

How to deal with large data sizes, such as arrays or large numbers, that cause a stack overflow in C++?

It's my first time dealing with large numbers and arrays, and I can't avoid a stack overflow. I tried using long long to avoid it, but the error points at the int main line:
CODE:
#include <iostream>
using namespace std;

int main()
{
    long long n = 0, city[100000], min[100000] = { 10 ^ 9 }, max[100000] = { 0 };
    cin >> n;
    for (int i = 0; i < n; i++) {
        cin >> city[i];
    }
    for (int i = 0; i < n; i++)
    { // min
        for (int s = 0; s < n; s++)
        {
            if (city[i] != city[s])
            {
                if (min[i] >= abs(city[i] - city[s]))
                {
                    min[i] = abs(city[i] - city[s]);
                }
            }
        }
    }
    for (int i = 0; i < n; i++)
    { // max
        for (int s = 0; s < n; s++)
        {
            if (city[i] != city[s])
            {
                if (max[i] <= abs(city[i] - city[s]))
                {
                    max[i] = abs(city[i] - city[s]);
                }
            }
        }
    }
    for (int i = 0; i < n; i++) {
        cout << min[i] << " " << max[i] << endl;
    }
}
ERROR:
Severity Code Description Project File Line Suppression State
Warning C6262 Function uses '2400032' bytes of stack: exceeds /analyze:stacksize '16384'. Consider moving some data to heap.
then it opens chkstk.asm and shows error in :
test dword ptr [eax],eax ; probe page.
A small optimistic remark first: 100,000 is not a large number for your computer! (You're also not dealing with that many arrays, but with arrays of that size.)
The error message describes what goes wrong pretty well: you're creating the arrays on your current function's "scratchpad" (the stack), and that has a very limited size!
This is C++, so you really should do things the (modern-ish) C++ way and avoid manually handling large data objects when you can.
So, replace
long long n=0, city[100000], min[100000] = {10^9}, max[100000] = { 0 };
with the following. (I don't see any case where you'd want long long specifically; presumably you just want a 64-bit variable? Also note that 10^9 is "10 XOR 9", not "10 to the power of 9".)
constexpr size_t size = 100000;
constexpr int64_t default_min = 1'000'000'000;
uint64_t n = 0;
std::vector<int64_t> city(size);
std::vector<int64_t> min_value(size, default_min);
std::vector<int64_t> max_value(size, 0);
Additional remarks:
Notice how I took your 100000 and your 10⁹ and made them constexpr constants? Do that! Whenever a non-zero "magic constant" appears in your code, it's a good time to ask yourself "will I ever need that value somewhere else, too?" and "would it make sense to give this number a name explaining what it is?". If you answer either with "yes", make a new constexpr constant, even just directly above where you use it. The compiler treats it exactly as if you had written the literal number at the point of use, so it costs no extra memory or CPU cycles.
In fact, even that fixed size is not ideal: pre-allocating arrays that are not really large but still unnecessarily large is a bad idea. Instead, read n first, then use that n to create std::vectors of exactly that size.
Don't use using namespace std;, for multiple reasons, chief among them that your min and max variables would now shadow std::min and std::max, so whenever you call something you never know whether you're calling what you mean to or the function of the same name from the std:: namespace. Instead, using std::cout; using std::cin; would do for you here.
This might be beyond your current learning level (that's fine!), but
for (int i = 0; i < n; i++) {
    cin >> city[i];
}
is inelegant; with the std::vector approach, if you make your std::vector really have length n, it can be written nicely as:
for (auto &value : city) {
    cin >> value;
}
This will also make sure you're not accidentally reading more values than you mean when changing the length of that city storage one day.
It looks as if you're trying to find the minimum and maximum absolute distance between city values, but you do it in an incredibly inefficient way, needing two nested loops over 10⁵ · 10⁵ = 10¹⁰ iterations.
Start with the maximum distance: assume your city vector, array (whatever!) were sorted. Which two elements have the greatest absolute distance?
If you had a sorted array/vector, how would you find the two elements with the smallest distance?
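A minimal sketch of those two observations (my illustration, not part of the original answer; it computes global extremes over all pairs, whereas the question wants per-city values, and unlike the original code it does not skip duplicate values):
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Greatest absolute distance: first vs. last element once the values are sorted.
// Assumes at least two values.
int64_t max_distance(std::vector<int64_t> v) {
    std::sort(v.begin(), v.end());
    return v.back() - v.front();
}

// Smallest absolute distance: in sorted order it can only occur between neighbours.
int64_t min_distance(std::vector<int64_t> v) {
    std::sort(v.begin(), v.end());
    int64_t best = std::numeric_limits<int64_t>::max();
    for (size_t i = 1; i < v.size(); ++i) {
        best = std::min(best, v[i] - v[i - 1]);
    }
    return best;
}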

C++ runtime error when assigning another array's value to a 2D array

#include <iostream>
#include <string>
#include <cstring>
using namespace std;

bool b[200][200];
int a[46];
int test_cases;
int n;
int m;
int first;
int second;

int main()
{
    cin >> test_cases;
    while (test_cases--) {
        cin >> n;
        cin >> m;
        for (int i = 0; i < 2 * m; i++) {
            cin >> a[i];
        }
        for (int j = 0; j < m; j++) {
            first = a[2 * j];
            second = a[2 * j + 1];
            b[first][second] = true;
        }
    }
    return 0;
}
Hello. The runtime error seems to occur on the last line of code, 'b[first][second]=true;'.
I tried a couple of changes and found that if I turn 'b[first][second]=true;' into 'b[second][first]=true;' the error doesn't occur, which simply swaps the order of the indices.
There should be no possibility of an "out of range" error, because the size of b is [200][200] and the values in a[*] range from 0 to 10.
I can't figure out where the problem is coming from and I need help. Thank you.
There should be no possibility of an "out of range" error, because the size of b is [200][200] and the values in a[*] range from 0 to 10.
The world is littered with buggy code because people made assumptions like this. The first thing you should do is prove this correct. That's as simple as placing something like:
if (first < 0 || first > 199 || second < 0 || second > 199) {
    cerr << "Violation, first = " << first << ", second = " << second << "\n";
    exit(1);
}
immediately before your line that sets the b[][] element to true.
As an aside, it would also be prudent to check other array accesses as well. Since we don't have your test data, we have no idea what value will be input for n or m but, since those values can result in undefined behaviour (by accessing beyond array bounds), they should also be scrutinised.
If you wanted to be sure that those didn't cause problems, you could dynamically allocate to the correct size as necessary. For example, once you've gotten m from the user:
int *a = new int[m*2];
// Use it as you wish, elements <0..m*2-1> inclusive.
delete [] a;
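A minimal sketch of the same idea with std::vector (my variant of the answer's suggestion), which sizes the buffer from the user's m instead of relying on the fixed 46-element array:
#include <iostream>
#include <vector>

int main() {
    int m = 0;
    std::cin >> m;
    std::vector<int> a(2 * m);        // exactly as many elements as will be read
    for (int &value : a) {
        std::cin >> value;
    }
    // no delete needed: the vector releases its storage automatically
    return 0;
}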

C++ - Allocating Larger Array on the Heap

I downloaded Visual Studio and started C++ yesterday. I have now run into a problem though. I have a super simple program that fills a large array with booleans and then counts the number of true elements. I now want to run my program for extremely large arrays (lengths of 2^33 or 2^34 preferably). I have understood that this will make the stack overflow and that I should allocate the array on the heap instead, but I do not understand how to do this. I have also heard that it is customary to use vectors instead of arrays, but I figured that these might be slower, so I stuck to arrays. What do I do to make my program run as fast as possible for large array lengths?
#include <cmath>
#include <cstdio>
#include <iostream>
using namespace std;

void makeB(bool *a, long double length)
{
    for (long x = 0; x < sqrtl(length / 2) + 1; ++x)
    {
        for (long y = x; y < sqrtl(length) + 1; ++y)
        {
            long int z = x * x + y * y;
            if (z < length)
            {
                a[z] = true;
            }
        }
    }
}

int main()
{
    const long length = 268435457;
    static bool a[length] = {};
    long b = 0;
    makeB(a, length);
    for (long i = 0; i < length; ++i)
    {
        if (a[i])
        {
            b += 1;
        }
    }
    printf("%ld: ", length - 1);
    printf("%ld.\n", b);
    char input;
    cin >> input;
    return 0;
}
To clarify, I want to be able to increase the length variable to larger values without getting a stack overflow error. Fast code is also preferable. I'm also fine with using vectors if that truly is better. If it makes the cause of my confusion clearer: I come from Java.
Thanks!
EDIT: First I need to point out that indeed 2^40 is way too much to expect from my system, as pointed out by almost everyone; sorry about that. I feel that I could expect maybe 2^33. And secondly, thanks for all the answers, but what is the final consensus? Should I have a look at std::make_unique? Thanks again!
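Since the edit asks about std::make_unique: a minimal sketch of two heap-based options (my illustration, not an accepted answer; the length is just the 2^33 mentioned in the edit):
#include <memory>
#include <vector>

int main() {
    const long long length = 1LL << 33;   // 2^33 flags

    // Option 1: std::vector<bool> is bit-packed, so 2^33 flags take roughly 1 GiB
    // of heap and are value-initialised to false.
    std::vector<bool> flags(length);
    flags[42] = true;

    // Option 2: std::make_unique<bool[]> gives a plain, zero-initialised bool array,
    // but at one byte per element 2^33 flags would need about 8 GiB of heap:
    // auto raw = std::make_unique<bool[]>(length);

    return 0;
}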

finding even numbers in the array issue (C++)

My code is supposed to extract the odd numbers and the even numbers from a 1D array.
#include <iostream>
using namespace std;

int main() {
    int a[6] = {1, 6, 3, 8, 5, 10};
    int odd[] = {};
    int even[] = {};
    for (int i = 0; i < 6; i++) {
        cin >> a[i];
    }
    for (int i = 0; i < 6; i++) {
        if (a[i] % 2 == 1) {
            odd[i] = a[i];
            cout << odd[i] << endl;
        }
    }
    cout << " " << endl;
    for (int i = 0; i < 6; i++) {
        if (a[i] % 2 == 0) {
            even[i] = a[i];
            cout << even[i] << endl;
        }
    }
    return 0;
}
the output is:
1
3
5
2
1
6
It shows that the odd numbers are extracted successfully, but the same method applied to the even numbers runs into an issue when the even number is 4.
Could anyone help me find the cause here? Thanks.
You've got undefined behavior, so the result may be anything: even random output, even a formatted hard drive.
int odd[] = {} is the same as int odd[/*count of elements inside {}*/] = {/*nothing*/}, so it's int odd[0];
The result is not defined when you access elements beyond the end of an array.
You probably have to think about the correct odd/even array sizes, or use another automatically sized data structure.
First, although not causing a problem, you initialize an array with data and then overwrite it. The code
int a[6] = {1,6,3,8,5,10};
can be replaced with
int a[6];
Also, as stated in the comments,
int odd[]={};
isn't valid. You should either allocate a buffer as big as the main buffer (6 ints) or use a vector (although I personally prefer C-style arrays for small sizes, because they avoid heap allocations and extra complexity). With the full-size buffer technique, you need a sentinel value such as -1 (assuming you only intend to input positive numbers) stored after the list of values, or you need to store the sizes somewhere, so that your output code knows where to stop reading. This prevents reading values that haven't been set.
I don't understand your problem when 4 is in the input. Your code looks fine except for your arrays.
You can use std::vector<int> odd; and then just call odd.push_back(elem) when elem is odd.
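A minimal sketch of that approach (my example, keeping the question's fixed six-element input):
#include <iostream>
#include <vector>

int main() {
    int a[6];
    std::vector<int> odd, even;
    for (int i = 0; i < 6; i++) {
        std::cin >> a[i];
    }
    for (int i = 0; i < 6; i++) {
        if (a[i] % 2 == 1) {
            odd.push_back(a[i]);   // grows as needed, no fixed size to guess
        } else {
            even.push_back(a[i]);
        }
    }
    for (int x : odd) std::cout << x << '\n';
    std::cout << '\n';
    for (int x : even) std::cout << x << '\n';
    return 0;
}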

segmentation fault when calling a function

I get a segmentation fault when calling Update_Multiplier, and the gdb debugger shows this:
Program received signal SIGSEGV, Segmentation fault.
0x080b74e8 in Update_Multiplier() ()
double upperbound = 116325;
double objective = 1.1707e+07;

int main()
{
    Update_Multiplier();
}

void Update_Multiplier()
{
    cout << "function 0" << endl;
    // Determine subgradient vectors
    double gra[1000][1000];
    double grb[1000][1000];
    double dumX = 0;
    double stepsize[1000][1000];
    double tuning = 2;
    double LRADum[1000][1000];
    double LRBDum[1000][1000];
    cout << "function 1" << endl;
    // update subgradient vectors
    for (int i = 1; i <= noOfNodes; i++)
    {
        for (int j = 1; j <= noOfNodes; j++)
        {
            if (C[i][j] != 0)
            {
                dumX = 0;
                for (int p = 1; p <= noOfCommodity; p++)
                {
                    dumX += X[i][j][p];
                }
                gra[i][j] = dumX - U[i][j] * Y[i][j] - Q[i][j];
                grb[i][j] = Q[i][j] - B[i][j] * Y[i][j];
            }
        }
    }
    // update stepsize
    cout << "function 2" << endl;
    for (int i = 1; i <= noOfNodes; i++)
    {
        for (int j = 1; j <= noOfNodes; j++)
        {
            if (C[i][j] != 0)
            {
                stepsize[i][j] = (tuning * (UpperBound - Objective)) / sqrt((gra[i][j] * gra[i][j]) * (grb[i][j] * grb[i][j]));
                LRADum[i][j] = LRA[i][j] + stepsize[i][j] * gra[i][j];
                LRA[i][j] = LRADum[i][j];
                LRBDum[i][j] = LRB[i][j] + stepsize[i][j] * grb[i][j];
                LRB[i][j] = LRBDum[i][j];
            }
        }
    }
}
I see two suspicious things in your code.
First, you are taking too much stack space (about 40 MB).
Second, you are starting the index of the array at 1, where it should be 0:
for (int i=1; i<=noOfNodes; i++)
Change it to:
for (int i=0; i<noOfNodes; i++)
At a guess, you have a stack overflow! You cannot reliably create gigantic arrays on the stack. You need to create them dynamically or statically.
Where did you define noOfNodes? What is the initial value of this? Or, do you read this in from the console? If this is uninitialized, it probably has junk data -- which may or may not explain the crash.
You need a stack of at least 40 megabytes to run this function because you're allocating five arrays of one million eight-byte doubles each.
Change the function to allocate the double arrays from the heap using new.
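A minimal sketch of heap allocation with new for one of those arrays (my illustration; the other four follow the same pattern, and std::vector would work equally well):
int main()
{
    // 1000x1000 doubles (about 8 MB) allocated on the heap; the trailing ()
    // value-initialises every element to 0.0.
    double (*gra)[1000] = new double[1000][1000]();

    gra[3][7] = 1.5;   // indexed exactly like the automatic array

    delete[] gra;
    return 0;
}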
You should really give us the whole code, e.g. noOfNodes is not defined anywhere.
Just a stab in the dark: are you possibly overflowing C since your indices (i and j) go from 1 to noOfNodes?
First, what Neil said is true.
Second, C and C++ arrays start from index zero. If you declare
int a[100]; // 100 elements, from zeroth to ninety-ninth.
Then its elements are a[0], a[1] ... a[99].
I cannot see anything wrong with this code as given, BUT: you might have an off-by-one error if noOfNodes is 1000.
Remember that arrays are 0-indexed, so you have to access indexes 0 - 999 instead of 1 - 1000 as you are doing.
I had this problem too. My function used to return a std::string; now I pass the result by reference and give the function a void return type, like this:
void readOnDicoFile(std::ifstream &file) /* if you have a "using namespace std;" directive you can write ifstream instead of std::ifstream */
whereas before it was:
std::string readOnDicoFile(std::string fileName)