C++ - Allocating Larger Array on the Heap

I downloaded Visual Studio and started C++ yesterday. I have now run into a problem, though. I have a super simple program that fills a large array with booleans and then counts the number of true elements. I now want to run my program for extremely large arrays (lengths of 2^33 or 2^34, preferably). I have understood that this will make the stack overflow and that I should allocate the array on the heap instead, but I do not understand how to do this. I have also heard that it is customary to use vectors instead of arrays, but I figured that these might be slower, so I stuck to arrays. What do I do to make my program run as fast as possible for large array lengths?
#include <cmath>
#include <cstdio>
#include <iostream>
using namespace std;

void makeB(bool *a, long double length)
{
    for (long x = 0; x < sqrtl(length / 2) + 1; ++x)
    {
        for (long y = x; y < sqrtl(length) + 1; ++y)
        {
            long int z = x * x + y * y;
            if (z < length)
            {
                a[z] = true;
            }
        }
    }
}
int main()
{
    const long length = 268435457;
    static bool a[length] = {};
    long b = 0;
    makeB(a, length);
    for (long i = 0; i < length; ++i)
    {
        if (a[i])
        {
            b += 1;
        }
    }
    printf("%ld: ", length - 1);
    printf("%ld.\n", b);
    char input;
    cin >> input;
    return 0;
}
To clarify, I want to be able to increase the length variable to larger values without getting a stack overflow error. Fast code is also preferable. I'm also fine with using vectors if that truly is better. If it makes the cause of my confusion clearer: I come from Java.
Thanks!
EDIT: First I need to point out that, as almost everyone noted, 2^40 is indeed way too much to expect from my system; sorry about that. I feel that I could expect maybe 2^33. And secondly, thanks for all the answers, but what is the final consensus? Should I have a look at std::make_unique? Thanks again!
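To make that concrete: a minimal sketch of the heap-allocated version using std::make_unique, assuming a machine with enough RAM (2^33 bools is 8 GiB):
#include <cstdio>
#include <memory>

int main()
{
    const long long length = 1LL << 33; // 2^33 elements; as bool this is 8 GiB

    // Heap allocation; the array form of make_unique value-initializes everything to false.
    auto a = std::make_unique<bool[]>(length);

    a[42] = true;

    long long b = 0;
    for (long long i = 0; i < length; ++i)
        if (a[i]) ++b;

    std::printf("%lld\n", b);
    return 0;
}
std::vector<bool> would also work and is bit-packed (1 GiB instead of 8 GiB) at some cost per element access; a plain std::vector<char> is a common compromise between the two.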

Related

C++ Functions return or global

I'm struggling with some "best practice" ideas.
I'm only posting a small piece of the code; the original is very long and complicated.
See below a little test function:
TEST1 runs in 5ms
TEST2 runs in 1405ms
To me TEST2 feels like the best practice, but the performance difference is so big!
In my full code the functions are in the header file and the main in the source file.
Only the function will ever write to TEST123; the main will only read it right after the function is called. The code is not run 100000 times in the full code but around 24 times, but the faster the better (inverse kinematics of a 6-axis robot).
What is the best way to do this?
Or are there even better ways?
Any advice is appreciated.
#include <chrono>
#include <iostream>
#include <vector>
using namespace std;

// Assumed helpers, not shown in the original post:
using int64 = long long;
int64 timeSinceEpochMillisec() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}

double TEST123[12];

void TESTFUNTC1A(int ID) {
    for (int x = 0; x < 12; x++) {
        TEST123[x] = 1.123;
    }
}

void TESTFUNTC1A() {
    int64 starttimetest2 = timeSinceEpochMillisec();
    vector<double> TEST125(12);
    double test123456;
    for (int y = 0; y < 100000; ++y) {
        TESTFUNTC1A(0);
        for (int x = 0; x < 12; x++) {
            test123456 = TEST123[x];
        }
    }
    std::cout << "TEST1 " << timeSinceEpochMillisec() - starttimetest2 << endl;
}

vector<double> TESTFUNTC2A(int ID) {
    vector<double> TEST124(12);
    for (int x = 0; x < 12; x++) {
        TEST124[x] = 1.123;
    }
    return TEST124;
}

void TESTFUNTC2A() {
    int64 starttimetest2 = timeSinceEpochMillisec();
    vector<double> TEST125(12);
    double test123456;
    for (int y = 0; y < 100000; ++y) {
        TEST125 = TESTFUNTC2A(0);
        for (int x = 0; x < 12; x++) {
            test123456 = TEST125[x];
        }
    }
    std::cout << "TEST2 " << timeSinceEpochMillisec() - starttimetest2 << endl;
}

int main()
{
    TESTFUNTC1A();
    TESTFUNTC2A();
}
Your code has some code smell:
vector<double> TESTFUNTC2A(int ID) {
    vector<double> TEST124(12);
    for (int x = 0; x < 12; x++) {
        TEST124[x] = 1.123;
    }
    return TEST124;
}
You create TEST124 with size 12 and initialize every element to 0.0, then overwrite every element with 1.123. Why not just write vector<double> TEST124(12, 1.123);? If you need to initialize the elements to different values, create an empty vector, reserve the right size, and then emplace_back().
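A sketch of that reserve-then-emplace_back pattern (the values here are made up for illustration):
#include <vector>

std::vector<double> makeValues() {
    std::vector<double> v;
    v.reserve(12);                // one allocation, no elements constructed yet
    for (int x = 0; x < 12; ++x) {
        v.emplace_back(x * 0.1);  // construct each element in place
    }
    return v;                     // NRVO/move: the buffer is not copied
}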
void TESTFUNTC2A() {
    int64 starttimetest2 = timeSinceEpochMillisec();
    vector<double> TEST125(12);
    double test123456;
    for (int y = 0; y < 100000; ++y) {
        TEST125 = TESTFUNTC2A(0);
        for (int x = 0; x < 12; x++) {
            test123456 = TEST125[x];
        }
    }
    std::cout << "TEST2 " << timeSinceEpochMillisec() - starttimetest2 << endl;
}
You create TEST125 as a vector with 12 elements initialized to 0.0, but then you immediately assign a new vector to it. The initial vector should be empty so you don't needlessly allocate heap memory. You could also declare the vector inside the loop, so it is constructed in place instead of move-assigned.
The size of the vector is known at compile time. Why not use std::array?
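For instance, a sketch (the function name is illustrative); std::array has value semantics and involves no heap allocation at all:
#include <array>

std::array<double, 12> makeTwelve() {
    std::array<double, 12> result;
    result.fill(1.123); // no heap traffic; RVO constructs this in the caller's storage
    return result;
}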
Note that the difference in time comes down to how well the compiler optimizes the code. In TEST1 it's easier for the compiler to see that the code has no side effects, so you probably get an empty loop there, but not in the TEST2 case. Did you try clang? It's a bit better at eliminating unused std::vector.
When you write tests like this, you should always run your loops for several different iteration counts and check that the time spent scales with the number of iterations as expected. Aside from reading the compiler output directly, that's the simplest way to see whether your test case has been optimized away.
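One cheap way to keep a benchmark loop from being deleted is to write its result to a volatile object, which the compiler must assume is observable; a minimal sketch (the function name is illustrative):
#include <vector>

volatile double sink; // writes to a volatile object cannot be optimized away

void consume(const std::vector<double>& v) {
    for (double d : v) {
        sink = d; // forces every element read to actually happen
    }
}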
First of all, "best approach" does not make much sense; it normally calls for personal opinion.
Secondly, there are no eternal rules; each project must be considered individually and handled based on its constraints. If you need to work on a limited MCU, with an outdated toolset and an ineffective toolchain, things can be different.
But:
Having all functions defined in the header means either they are templates, or inline, or the toolchain cannot handle multiple translation units per project, or you are just doing it wrong.
Global objects (more generally, objects with static storage duration) should be avoided; they create lots of maintenance issues through the implicit couplings they generate. I cannot name and count them all, but implicit shared state is one of them; initialization order is another.
Next:
If you need an array whose size is known at compile time, std::array can be a good choice. If the size is supposed to be immutable but relatively large, const std::unique_ptr<T[]> can be considered. std::vector is good when the size is supposed to change during runtime; such size changes can be costly at runtime. Raw arrays are not value types; they can't be passed by value across function boundaries: you'd need to wrap one in a struct to give it value semantics, and that is part of what std::array does.
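A sketch of those three options side by side (the sizes are made up for illustration):
#include <array>
#include <memory>
#include <vector>

void storageOptions() {
    // Compile-time size, value semantics, no heap:
    std::array<double, 12> a{};

    // Fixed (immutable) but large size, one heap allocation; elements stay mutable:
    const std::unique_ptr<double[]> b = std::make_unique<double[]>(1'000'000);

    // Size may change at runtime; growth can reallocate:
    std::vector<double> c;
    c.resize(100);

    a[0] = b[0] = c[0] = 1.123;
}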

How to deal with large data, such as arrays, that causes a stack overflow in C++?

It's my first time dealing with large numbers and arrays, and I can't avoid overflowing the stack. I tried using long long to avoid it, but the error points at the int main line:
CODE:
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    long long n=0, city[100000], min[100000] = {10^9}, max[100000] = { 0 };
    cin >> n;
    for (int i = 0; i < n; i++) {
        cin >> city[i];
    }
    for (int i = 0; i < n; i++)
    { // min
        for (int s = 0; s < n; s++)
        {
            if (city[i] != city[s])
            {
                if (min[i] >= abs(city[i] - city[s]))
                {
                    min[i] = abs(city[i] - city[s]);
                }
            }
        }
    }
    for (int i = 0; i < n; i++)
    { // max
        for (int s = 0; s < n; s++)
        {
            if (city[i] != city[s])
            {
                if (max[i] <= abs(city[i] - city[s]))
                {
                    max[i] = abs(city[i] - city[s]);
                }
            }
        }
    }
    for (int i = 0; i < n; i++) {
        cout << min[i] << " " << max[i] << endl;
    }
}
ERROR:
Severity Code Description Project File Line Suppression State
Warning C6262 Function uses '2400032' bytes of stack: exceeds /analyze:stacksize '16384'. Consider moving some data to heap.
then it opens chkstk.asm and shows the error at:
test dword ptr [eax],eax ; probe page.
Small optimistic remark:
100,000 is not a large number for your computer! (you're also not dealing with that many arrays, but arrays of that size)
Error message describes what goes wrong pretty well:
You're creating arrays on your current function's "scratchpad" (the stack). That has very limited size!
This is C++, so you really should do things the (modern-ish) C++ way and avoid manually handling large data objects when you can.
So, replace
long long n=0, city[100000], min[100000] = {10^9}, max[100000] = { 0 };
with (I don't see any case where you'd want to use long long; presumably, you want a 64-bit variable?)
(10^9 is "10 XOR 9", not "10 to the power of 9")
constexpr size_t size = 100000;
constexpr int64_t default_min = 1'000'000'000;
uint64_t n = 0;
std::vector<int64_t> city(size);
std::vector<int64_t> min_value(size, default_min);
std::vector<int64_t> max_value(size, 0);
Additional remarks:
Notice how I took your 100000 and your 10⁹ and made them constexpr constants? Do that! Whenever some non-zero "magic constant" appears in your code, it's a good time to ask yourself "will I ever need that value somewhere else, too?" and "would it make sense to give this number a name explaining what it is?". If you answer either with "yes", make a new constexpr constant, even just directly above where you use it! The compiler treats it as if you had written the literal number at the point of use, so it costs no extra memory or CPU cycles.
As a matter of fact, even that fixed size is bad! Pre-allocating not-really-large-but-still-unnecessarily-large arrays is just a bad idea. Instead, read n first, then use that n to make std::vectors of that size.
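A sketch of that, reusing the names from the snippet above:
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    constexpr int64_t default_min = 1'000'000'000;

    std::size_t n = 0;
    std::cin >> n; // read the element count first...

    // ...then size the vectors to exactly n elements, on the heap:
    std::vector<int64_t> city(n);
    std::vector<int64_t> min_value(n, default_min);
    std::vector<int64_t> max_value(n, 0);

    for (auto& value : city) {
        std::cin >> value;
    }
}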
Don't use using namespace std;, for multiple reasons, chief among them that your min and max variables would now shadow std::min and std::max, so when you call something, you never know whether you're actually calling what you mean to or just the function of the same name from the std:: namespace. Instead, using std::cout; using std::cin; would do for you here.
This might be beyond your current learning level (that's fine!), but
for (int i = 0; i < n; i++) {
    cin >> city[i];
}
is inelegant, and with the std::vector approach, if you make your std::vector really have length n, it can be written nicely as:
for (auto &value : city) {
    cin >> value;
}
This will also make sure you're not accidentally reading more values than you mean to when you change the length of that city storage one day.
It looks as if you're trying to find the minimum and maximum absolute distance between city values. But you do it in an incredibly inefficient way, needing multiple loops over 10⁵·10⁵=10¹⁰ iterations.
Start with the maximum distance: assume your city vector, array (whatever!) were sorted. What are the two elements with the greatest absolute distance?
If you had a sorted array/vector: how would you find the two elements with the smallest distance?
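To make those hints concrete, here is a sketch of where they lead for the global distances (the per-city version is left as the exercise); it assumes at least two cities:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumes city has at least two elements.
void distances(std::vector<int64_t> city) {
    std::sort(city.begin(), city.end());

    // Greatest absolute distance: always between the smallest and largest value.
    int64_t max_dist = city.back() - city.front();

    // Smallest absolute distance: always between some adjacent pair after sorting.
    int64_t min_dist = city[1] - city[0];
    for (std::size_t i = 2; i < city.size(); ++i) {
        min_dist = std::min(min_dist, city[i] - city[i - 1]);
    }

    (void)max_dist; // one O(n log n) sort replaces the O(n^2) double loops
    (void)min_dist;
}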

Which is a better approach for Fibonacci series generation

The two general approaches for Fibonacci series generation are:
The traditional approach, i.e., running through a for loop inside a function.
Recursion
I came across another solution
#include <iostream>
using namespace std;

void fibo() {
    static int y = 0;
    static int x = 1;
    cout << y << endl;
    y = x + y;
    x = y - x;
}

int main() {
    for (int i = 1; i <= 1; i++) {
        fibo();
    }
    return 0;
}
This solution looks to be working fine in the initial runs, but compared to the traditional and recursive approaches, does it hold any significant disadvantages?
I am sure the static variables add to the space complexity, but at least we are not building up a call stack of recursive calls, correct?
Disadvantages I can immediately see:
By essentially making the state global, it's not thread-safe
You can only ever run through the sequence once, as there's no way to reset
I would favour an approach which keeps the state within an object which you can ask for the next value of - an iterator, basically. (I've never been certain how easily the Fibonacci sequence maps to C++ iterators; it works fine with C# and Java IEnumerable<T> and Iterable<T> though.)
The solution you found is decent when you need to store the state (for example, when you calculate a Fibonacci number, do something based on it, and then calculate another), but using it from two places in your code will likely give funny results. This is because the static variables will always be the same, no matter where you call the function from. I would instead suggest:
class FiboNumbers {
public:
    FiboNumbers() :
        x_(1), y_() {}

    int getNext() {
        x_ += y_;
        y_ = x_ - y_;
        return x_;
    }

private:
    int x_, y_;
};
This offers the same keeping-of-state, but allows you to create multiple instances of the class, therefore allowing you to have different parts of the code that calculate their own Fibonacci series.
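For example, two instances stay independent (this sketch assumes the FiboNumbers class above is in scope):
#include <iostream>

int main() {
    FiboNumbers a, b;                 // two independent sequences
    std::cout << a.getNext() << "\n"; // 1
    std::cout << a.getNext() << "\n"; // 2
    std::cout << b.getNext() << "\n"; // 1: b is unaffected by a
}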
Minor note: the code I posted will not produce quite the same series as the example you posted: the example prints the real Fibonacci sequence, which starts with 0 1 1 2..., while this class starts at 1 and skips the repeated 1 (it yields 1 2 3 5...).
I am a C++ student (1.5 months in).
Please give feedback on this different way I have thought of for generating the Fibonacci series.
#include <iostream>
using namespace std;

void fibseries(long int n)
{
    double x = 0; double y = 1;
    for (long int i = 1; i <= n; i++)
    {
        if (i % 2 == 1)
        {
            cout << x << " ";
            x = x + y;
        }
        else
        {
            cout << y << " ";
            y = x + y;
        }
    }
}

int main()
{
    long int n = 0;
    cout << "The number of terms ";
    cin >> n;
    fibseries(n);
    return 0;
}
I'm not sure what this function is really supposed to do. It only works in the exact loop you present, and as others have pointed out, it only works once. (And there's probably a typo in your loop, since your complete program outputs "0", and nothing else.) What advantage does it offer over:
int y = 0;
int x = 1;
for (int i = 0; i < count; ++i) {
    std::cout << y << std::endl;
    y = x + y;
    x = y - x;
}
? It's more complex, far less robust, and far less useful.
As was said before, the advantage of the static variables is, in principle, that it's cheaper to calculate the n-th element of a sequence when the (n-1)-th has already been evaluated.
The big drawback, apart from the problems inherent in static variables, is that you don't have any way to get back to an earlier point in the sequence, nor do you really have good control over where in the sequence you are at a given time.
Using a class, as recommended by Sevis, is certainly the better way of implementing such a static-like approach: it makes everything safer, gives you an easy way to get back to the start of the sequence (by simply reinitializing the object), and also makes it possible to implement further functionality, like going back k steps, looking up the current position, etc.
I think this pointer approach would be more useful for you.
#include <cstdio>

int fib(int p);

int main()
{
    int i, num;
    int *no = &num; // point at a real int before handing it to scanf
    printf("\n Enter The Number:");
    scanf("%d", no);
    printf("\n The Fibonacci series: \n");
    for (i = 0; i < *no; i++)
        printf("%d\n", fib(i));
    return 0;
}

int fib(int p)
{
    if (p == 0)
        return 0;
    if (p >= 1 && p <= 2)
        return 1;
    else
        return fib(p - 1) + fib(p - 2);
}

Overflow or Memory error c++

This could be considered a homework question.
This problem is well known: you have a triangle of numbers and you have to find the greatest sum.
Well, no problem: I made a solution in Python some time ago and it works flawlessly.
But now in C++, the solution is 75256 and my answer is 9729.
So the problem is that the type short overflows.
To fix this, I assumed changing my array to type int would solve everything... but then, when declaring an array a[1001][1001], it freezes (I guess a memory error).
Does anyone know what to do?
I tried something with another int that I incremented whenever the value in a got bigger than 32767, but my solution is then still off by 300? (The code works; tested on many smaller inputs.)
#include <iostream>
#include <fstream>

int main() {
    std::ofstream fout("numtri.out");
    std::ifstream fin("numtri.in");
    short trifield[1001][1001] = {0};
    int Rows, tmp = 0;
    fin >> Rows;
    for (int x = 0; x < Rows; x++)
        for (int nr = 0; nr <= x; nr++) {
            fin >> tmp;
            trifield[x][nr] = tmp;
        }
    for (int y = (Rows - 2); y > -1; y--)
        for (int x = 0; x <= y + 1; x++) {
            int a = trifield[y + 1][x];
            int b = trifield[y + 1][x + 1];
            if (a > b) trifield[y][x] += a;
            else       trifield[y][x] += b;
        }
    fout << trifield[0][0] << std::endl;
    return 0;
}
note: I'm not looking for the solution, just a good way to deal with overflows, examples appreciated!
If you have issues with memory, try allocating your array dynamically:
auto trifield = new short[1001][1001]; // note: the type is short (*)[1001], not short**
You have an array of 1001x1001 shorts... that's 1002001*2 bytes, about 2 MB, and it's all going on your local stack. Depending on your system, that could be too big. Try allocating the space for your trifield with malloc instead and see what that does for you.
You get a stack overflow instead of a numeric overflow!
Move the array to static memory outside of main, so it doesn't use the stack.
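A sketch of both fixes suggested in these answers, also switching to int so the numeric overflow goes away at the same time:
#include <vector>

// Option 1: static storage duration -- not on any function's stack.
static int trifield_static[1001][1001]; // zero-initialized, roughly 4 MB

int main() {
    // Option 2: heap storage via std::vector.
    std::vector<std::vector<int>> trifield(1001, std::vector<int>(1001, 0));

    trifield[0][0] = 1;
    trifield_static[0][0] = 1;
    return 0;
}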
The way I check for overflow is to check for an obviously bogus result. For instance,
if (myInt + 1 < myInt) {
    // Overflow condition
    cerr << "Overflow" << endl;
}
else {
    myInt++;
}
Overflowing a signed int is UB. Overflowing an unsigned int value is defined in the standard.
So the only safe way is to check the values manually before doing the operation and make sure it won't overflow.
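A sketch of such a pre-check for signed addition, using the limits from <limits>:
#include <limits>

// Writes a + b into out and returns true only if the sum cannot overflow.
bool safeAdd(int a, int b, int& out) {
    if (b > 0 && a > std::numeric_limits<int>::max() - b) return false; // would overflow
    if (b < 0 && a < std::numeric_limits<int>::min() - b) return false; // would underflow
    out = a + b;
    return true;
}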

Sum of all primes under 2 million

I made a program that returns the sum of all primes under 2 million. I really have no idea what's going on with this one: I get 142891895587 as my answer when the correct answer is 142913828922. It seems like it's missing a few primes. I'm pretty sure the getPrimes function works as it is supposed to; I used it a couple of times before and it worked correctly then. The code is as follows:
#include <iostream>
#include <vector>
using namespace std;

vector<int> getPrimes(int number);

int main()
{
    unsigned long int sum = 0;
    vector<int> primes = getPrimes(2000000);
    for (int i = 0; i < primes.size(); i++)
    {
        sum += primes[i];
    }
    cout << sum;
    return 0;
}

vector<int> getPrimes(int number)
{
    vector<bool> sieve(number + 1, false);
    vector<int> primes;
    sieve[0] = true;
    sieve[1] = true;
    for (int i = 2; i <= number; i++)
    {
        if (sieve[i] == false)
        {
            primes.push_back(i);
            unsigned long int temp = i*i;
            while (temp <= number)
            {
                sieve[temp] = true;
                temp = temp + i;
            }
        }
    }
    return primes;
}
The expression i*i overflows because i is an int, and the result is truncated before being assigned to temp. To avoid the overflow, cast it: static_cast<unsigned long>(i) * i.
Even better, terminate the iteration before that condition can occur: for (int i = 2; i*i <= number; i++). (Note that with this exact code you would then also need to collect the primes above √number separately, since push_back happens in the outer loop.)
Tested fixed.
Incidentally, you're kinda (un)lucky that this doesn't produce extra primes as well as missing some: the int value is signed, and could be negative upon overflow, and by my reading of §4.7/2, that would cause the inner loop to be skipped.
You may be running into datatype limits: http://en.wikipedia.org/wiki/Long_integer.
This line is the problem:
unsigned long int temp = i*i;
I'll give you a hint. Take a closer look at the initial value you give to temp. What's the first value you exclude from sieve? Are there any other smaller multiples of i that should also be excluded? What different initial value could you use to make sure all the right numbers get skipped?
There are some techniques you can use to help figure this out yourself. One is to try to get your program working using a lower limit. Instead of 2 million, try, say, 30. It's small enough that you can calculate the correct answer quickly by hand, and even walk through your program on paper one line at a time. That will let you discover where things start to go wrong.
Another option is to use a debugger to walk through your program line-by-line. Debuggers are powerful tools, although they're not always easy to learn.
Instead of using a debugger to trace your program, you could print out messages as your program progressed. Say, have it print out each number in the result of getPrimes instead of just printing the sum. (That's another reason you'd want to try a lower limit first — to avoid being overwhelmed by the volume of output.)
Your platform must have 64-bit longs. This line:
unsigned long int temp = i * i;
does not compute correctly because i is declared int and the multiplication result is also int (32-bit). Force the multiplication to be long:
unsigned long int temp = (unsigned long int) i * i;
On my system, long is 32-bit, so I had to change both temp and sum to be unsigned long long.
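Putting these fixes together, a minimal corrected sketch (64-bit accumulation and an overflow-free inner-loop start):
#include <iostream>
#include <vector>

int main() {
    const int number = 2000000;
    std::vector<bool> sieve(number + 1, false);
    unsigned long long sum = 0; // 64 bits on all common platforms

    for (int i = 2; i <= number; ++i) {
        if (!sieve[i]) {
            sum += i;
            // 1ULL * i promotes to 64 bits before the multiply, so i*i cannot overflow.
            for (unsigned long long temp = 1ULL * i * i;
                 temp <= static_cast<unsigned long long>(number); temp += i) {
                sieve[temp] = true;
            }
        }
    }
    std::cout << sum << "\n"; // expected: 142913828922
    return 0;
}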