std::bad_array_new_length for large numbers - c++

I am trying to run the following code:
#include <stdio.h>
#include <stdlib.h>
int find_next(int act, unsigned long long survivors[], int n)
{
    int i = act;
    while (survivors[i] == 0)
    {
        i++;
        i = i % n;
    }
    i = (i + 1) % n; // found the first one, but need to skip it
    while (survivors[i] == 0)
    {
        i++;
        i = i % n;
    } // that's the guy
    return i;
}
int main()
{
    long long int lines;
    long long int* results;
    scanf_s("%llu", &lines);
    results = new long long int[lines];
    for (long long int i = 0; i < lines; i++) {
        unsigned long long n, k;
        scanf_s("%llu", &n);
        scanf_s("%llu", &k);
        unsigned long long* survivors;
        survivors = new unsigned long long[n];
        for (int m = 0; m < n; m++) {
            survivors[m] = 1;
        }
        int* order;
        order = new int[n];
        int p = 0;
        int act = 0;
        while (p < n - 1)
        {
            act = find_next(act, survivors, n);
            order[p] = act;
            survivors[act] = 0; // dies
            p++;
        }
        order[p] = find_next(act, survivors, n);
        if (k > 0)
        {
            results[i] = order[k - 1] + 1;
        }
        else
        {
            results[i] = order[n + k] + 1;
        }
        delete[] survivors;
        delete[] order;
    }
    for (long long int i = 0; i < lines; i++) {
        printf("%llu\n", results[i]);
    }
    delete[] results;
    return 0;
}
My inputs are:
1
1111111111
-1
I am getting an exception:
std::bad_array_new_length for large numbers
at this line:
survivors = new unsigned long long[n];
How can I fix it so that it won't occur for such large numbers?
So far I have tried all the numeric types for n (long long int, unsigned long, and so on), but every time I failed. Or maybe there is no way around it?

How should I fix it that it wont show for such large numbers?
Run the program on a 64 bit CPU.
Use a 64 bit operating system.
Compile the program in 64 bit mode.
Install a sufficient amount of memory. That array alone uses over 8 gigabytes.
Configure the operating system to allow that much memory to be allocated for the process.
P.S. Avoid owning bare pointers. In this case, I recommend using a RAII container such as std::vector.

Related

Heap corruption on dynamic programming problem

I'm getting a heap corruption error and I can't figure out where it is.
The problem is a coin change problem using dynamic programming. C is the array with the coin values, n is the size of the array, T is the target change, and usedCoins is an array where the number of used coins should be mapped (i.e. if C[1] = 2 and 3 2-coins are used, usedCoins[2] = 2 with all other indexes 0).
Here's the code:
bool changeMakingUnlimitedDP(unsigned int C[], unsigned int n, unsigned int T, unsigned int usedCoins[]) {
    static auto minCoins = new unsigned int[T+1]{UINT_MAX};
    minCoins[0] = 0;
    static auto lastCoin = new unsigned int[T+1]{0};
    for (int i = 0; i < n; i++)
        usedCoins[i] = 0;
    for (int i = 0; i < n; i++){
        for (int j = 1; j <= T; j++){
            if (j >= C[i]){
                minCoins[j-C[i]] == 0 ? minCoins[j] = 1 : minCoins[j] = std::min(1 + minCoins[j - C[i]], minCoins[j-C[i]]);
                lastCoin[j] = i;
            }
        }
    }
    while (T > 0){
        unsigned int last = lastCoin[T];
        if (last == UINT_MAX || last < 0) return false;
        usedCoins[last]++;
        T -= C[last];
    }
    free(minCoins);
    free(lastCoin);
    return true;
}
When running in debug mode it goes through.

Why this code is not showing output for n around 10^5

I have the following code, used to calculate primes of the form x^2+ny^2 which do not exceed N. This code runs fine when N is around 80000, but when N is around 10^5 the code breaks down. Why does this happen, and how can I fix it?
#include <iostream>
#include <vector>
const int N = 100000; //Change N in this line
using namespace std;
typedef long long int ll;
bool isprime[N] = {};
bool zprime[N] = {};
vector<int> primes;
vector<int> zprimes;
void calcprime(){
    for (int i = 2; i < N; i += 1){ isprime[i] = true; }
    for (int i = 2; i < N; i += 1){
        if (isprime[i]){
            primes.push_back(i);
            for (int j = 2; i*j < N; j += 1){
                isprime[i*j] = false;
            }
        }
    }
}
void zcalc(){
    int sqrt = 0;
    for (int i = 0; i < N; i += 1){ if (i*i >= N){ break; } sqrt = i; }
    for (int i = 0; i <= sqrt; i += 1){
        for (int j = 0; j <= sqrt; j += 1){
            ll q = (i*i)+(j*j);
            if (isprime[q] && !zprime[q] && (q < N)){
                zprimes.push_back(q);
                zprime[q] = true;
            }
        }
    }
}
int main(){
    calcprime();
    zcalc();
    cout << zprimes.size();
    return 0;
}
Why the code breaks
Out of bounds access. This code breaks because you're doing out of bounds memory accesses on this line here:
if (isprime[q] && !zprime[q] && (q < N)) {
If q is bigger than N, you're accessing memory that technically doesn't belong to you. This invokes undefined behavior, which causes the code to break if N is big enough.
If we change the order so that it checks that q < N before doing the other checks, we don't have this problem:
// Does check first
if((q < N) && isprime[q] && !zprime[q]) {
(Potentially) very large global arrays. You define isprime and zprime as c-arrays:
bool isprime[N] = {};
bool zprime[N] = {};
It's not recommended to have very large c-arrays as global variables; it can cause problems and increases executable size. This could cause problems down the line for very big values of N, because c-arrays allocate memory statically.
If you change isprime and zprime to be vectors, the program compiles and runs even for values of N greater than ten million. This is because using vector makes the allocation dynamic, and the heap is a better place to store large amounts of data.
std::vector<bool> isprime(N);
std::vector<bool> zprime(N);
Updated code
Here's the fully updated code! I also made i and j long long values, so you don't have to worry about integer overflow, and I used the standard library sqrt function to compute the square root of N.
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;
typedef long long int ll;
constexpr long long N = 10000000; //Change N in this line
std::vector<bool> isprime(N);
std::vector<bool> zprime(N);
vector<int> primes;
vector<int> zprimes;
void calcprime() {
    isprime[0] = false;
    isprime[1] = false;
    for (ll i = 2; i < N; i += 1) {
        isprime[i] = true;
    }
    for (ll i = 2; i < N; i += 1) {
        if (isprime[i]) {
            primes.push_back(i);
            for (ll j = 2; i*j < N; j += 1){
                isprime[i*j] = false;
            }
        }
    }
}
void zcalc(){
    ll sqrtN = sqrt(N);
    for (ll i = 0; i <= sqrtN; i++) {
        for (ll j = 0; j <= sqrtN; j++) {
            ll q = (i*i)+(j*j);
            if ((q < N) && isprime[q] && !zprime[q]) {
                zprimes.push_back(q);
                zprime[q] = true;
            }
        }
    }
}
int main(){
    calcprime();
    zcalc();
    cout << zprimes.size();
    return 0;
}
The value of q can exceed N in your code, which can cause a segmentation fault when zprime[q] or isprime[q] is accessed. You're iterating i and j up to sqrt(N) but have allocated zprime and isprime with only N booleans, while the value of q can vary from 0 to almost 2N.
ll q = (i*i)+(j*j);
You can replace bool zprime[N] = {}; and bool isprime[N] = {}; with
bool zprime[N * 2 + 1] = {};
and
bool isprime[N * 2 + 1] = {};
respectively.
The program will no longer segfault. Or, you could check for q < N before accessing isprime[q] and zprime[q].
Also, as has already been pointed out in the comments, (i*i)+(j*j) is evaluated in int arithmetic; merely assigning the result to a long long does not prevent the overflow. If you intend to prevent it, replace the expression with ((ll)i*i)+(j*j).
Moreover, for large arrays, you should prefer to allocate them on the heap.

Why does the loop run only 1 or 2 times when I use the rand() function in C++

I want to generate random test cases for my program, but it crashes after running 1 or 2 times.
I have used the rand() function to generate random numbers for the random test cases,
but it does not run more than once or sometimes twice, and does not generate any random number. The program simply exits.
#include <bits/stdc++.h>
#include <ctime>
#include <time.h>
using namespace std;
long long int naive(long long int arr[], long long int n){
    long long int max = -1;
    for (long long int i = 0; i < n; i++)
    {
        for (long long int j = 0; j < n; j++){
            if (arr[i] % arr[j] > max){
                max = arr[i] % arr[j];
            }
        }
    }
    return max;
}
long long int efficent(long long int arr[], long long int n){
    long long int max1 = 0, max2 = 0;
    for (long long int i = 0; i < n; i++){
        if (arr[i] > max1)
        {
            max2 = max1;
            max1 = arr[i];
        }
        else if (arr[i] > max2 && arr[i] != max1)
            max2 = arr[i];
    }
    return max2 % max1;
}
int main(){
    srand(time(0));
    long long int count = 0;
    int t = 10;
    while (t--){
        long long int n;
        n = rand() % 10;
        long long int arr[n];
        for (long long int i = 0; i < n; i++){
            arr[i] = rand() % 10;
        }
        long long int a, b;
        a = naive(arr, n);
        b = efficent(arr, n);
        if (a == b)
            cout << "Naive : " << a << "\tEfficent : " << b << "\n";
        else{
            cout << "\nNot Equal." << "\nCount : " << ++count << endl;
            cout << "Naive : " << a << "\tEfficent : " << b << "\n";
        }
    }
    return 0;
}
In addition to the memory leaks and the incorrectly declared variable-sized array mentioned in the other answer, the issue is that you are performing the mod operation on values that could be 0. That is undefined behavior and will typically crash your program. To fix this, change
arr[i] = rand()%10;
to something like
arr[i] = rand()%10+1;
to prevent division by 0.
EDIT: as mentioned by @Michael Dorgan, you should probably do the same for n. Change
n = rand()%10;
to
n = rand()%10+1;
to prevent 0 length arrays from being allocated.
This code is problematic:
while (t--){
    long long int n;
    n = rand() % 10;
    long long int arr[n];
    for (long long int i = 0; i < n; i++){
        arr[i] = rand() % 10;
    }
If you need a variable size array, you should use long long int * arr = new long long int[n]; and delete[] arr; before the last closing brace of the while block.

Saving unknown amount of integers without taking too much time/memory

#include <iostream>
using namespace std;
unsigned long long divsum(unsigned long long x);
int main()
{
    unsigned long long x;
    cin >> x;
    unsigned long long y[200000];
    for (unsigned int i = 0; i < x; i++)
        cin >> y[i];
    for (unsigned int i = 0; i < x; i++){
        cout << divsum(y[i]) << endl;
    }
    return 0;
}
unsigned long long divsum(unsigned long long x){
    int sum = 0;
    for (unsigned int i = 1; i <= x/2; i++){
        if (x % i == 0)
            sum += i;
    }
    return sum;
}
I'm doing an online exercise that says there can be up to 2000000 cases in the first line, so I made an array of that size. However, when I submit the solution it exceeds the time limit, so I was wondering: what is an alternative, faster way to do this? The program works fine right now, except that it exceeds the time limit of the website.
You can allocate the array dynamically, so it will work better in cases where x < 200000
int main()
{
    unsigned long long x;
    cin >> x;
    unsigned long long *y = new unsigned long long[x];
    // you can check if new didn't throw an exception here
    for (unsigned int i = 0; i < x; i++)
        cin >> y[i];
    for (unsigned int i = 0; i < x; i++){
        cout << divsum(y[i]) << endl;
    }
    delete[] y;
    return 0;
}
Since you know the size of the array, try vector and reserve.
int main()
{
    unsigned long long x;
    cin >> x;
    vector<unsigned long long> y;
    y.reserve(x); // avoids reallocations, but creates no elements
    for (unsigned int i = 0; i < x; i++){
        unsigned long long var;
        cin >> var;
        y.push_back(var); // append; indexing into an empty vector is undefined
    }
    for (unsigned int i = 0; i < x; i++){
        cout << divsum(y[i]) << endl;
    }
    return 0;
}
And take the argument by const reference (returning by value, since returning a reference to a local variable would dangle):
unsigned long long divsum(const unsigned long long & x){
    unsigned long long sum = 0;
    unsigned long long x2 = x / 2;
    for (unsigned long long i = 1; i <= x2; i++){
        if (x % i == 0)
            sum += i;
    }
    return sum;
}
I think your assignment is more complex than you think. Your divsum(x) function should return the sum of all divisors of x, right? In that case it's better to factorize x and calculate the sum from the prime numbers (with their powers) whose product equals x. Take a look at:
http://en.wikipedia.org/wiki/Divisor_function
There are methods for recursive factorization - for example, if you have factorized all numbers up to n, you can quickly find the factorization of (n + 1). You also need to generate primes; a sieve of Eratosthenes for the first 2000000 numbers would be fine.

Pthread_create segmentation faults when more than 12 threads?

I have a C++ program that sums the numbers from 0 to n using t threads; n and t are passed as command line args. I use one for loop that creates the pthreads and a second for loop that joins main back to them. The program executes fine when I use fewer than 11 or 12 threads. For example, on input 100 10 it returns 5050. When I use more than 11-12 threads, it causes a segmentation fault and crashes. I can't seem to figure out why. There are some lines in my code I was using for debugging, such as printing to the prompt, etc. Any tips are appreciated!
int n = 0;
int t = 0;
unsigned long gsum = 0;
pthread_mutex_t mutexsum;
void *sum(void *Index)
{
    int index = (int)(int *) Index;
    int threadSum = 0;
    int k;
    int lowerBound, upperBound; //used to find range of numbers to sum
    //printf("I am here: %d \n", index);
    if (index == t - 1) {
        lowerBound = (n/t)*(t-1);
        upperBound = n;
    } else {
        lowerBound = (n/t)*index;
        upperBound = (n/t)*(index+1)-1;
    }
    for (k = lowerBound; k < upperBound + 1; k++) {
        threadSum = threadSum + k;
    }
    // Critical Section
    pthread_mutex_lock(&mutexsum);
    gsum = gsum + threadSum;
    pthread_mutex_unlock(&mutexsum);
    pthread_exit((void*) 0);
}
int main(int argc, char* argv[]){
    int i, k, j;
    pthread_t sumThreads [t];
    for (i = 1; i < argc; i++) {
        if (i == 1)
            n = atoi(argv[i]);
        if (i == 2)
            t = atoi(argv[i]);
    }
    if (n < 0 || t <= 0 || argc != 3) {
        printf("Invalid or missing parameters! \n");
        exit(0);
    }
    for (k = 0; k < t; k++) {
        int nt = -1;
        nt = pthread_create(&sumThreads[k], NULL, sum, (void*)k);
        printf("%d \n", nt);
    }
    for (j = 0; j < t; j++) {
        int rj = -1;
        rj = pthread_join(sumThreads[j], NULL);
        printf("%d \n", rj);
    }
    printf("Total Sum: %lu \n", gsum);
    return 0;
}
You have initialized t to zero at the top of your program, so this line:
pthread_t sumThreads [t];
is not allocating an array large enough to hold the thread identifiers. Thus, you have a buffer overrun when storing the identifiers, and you are reading past the buffer in your pthread_join loop.
You are using a feature called variable length arrays (VLAs), which became part of the C language in the 1999 revision of the standard. C++ has not adopted VLAs, so you are relying on a compiler extension. If you want your code to be compliant C++, you should use a vector instead.
std::vector<pthread_t> sumThreads;
// ...after t gets initialized
sumThreads.resize(t);
In standard C++ this type of code is not valid:
int i = 10;
int arr[i];
Avoid doing this. If this type of code were universally valid, we would no longer need malloc.
This is exactly what you are trying to achieve.