I wrote this program using the Sieve of Eratosthenes. It is supposed to output the prime numbers up to 2'500'000, but it crashes when trying to create an array bigger than ~2'100'000. Any ideas what might be broken?
Compiling with gcc in Code::Blocks (Windows 8.1, shame on me).
PS It works flawlessly for N <= 2'000'000
#include <stdio.h>

int main() {
    // Input
    long n;
    scanf("%ld", &n);
    // Initialize vars (the array must cover index n as well)
    bool number[n + 1];
    for (long i = 0; i <= n; i++)
        number[i] = false;
    // Main loop
    for (long i = 2; i * i <= n; i++) {
        if (number[i]) // If number is already removed
            continue;  // Do next number
        // Remove x * i
        for (long j = i * 2; j <= n; j += i)
            number[j] = true;
    }
    // Print
    for (long i = 2; i <= n; i++)
        if (!number[i]) printf("%ld ", i);
}
This is not valid C++ if n is not a constant integral expression (and yours isn't):
bool number[n+1];
It is a g++ extension that puts the array on the call stack, which has limited size. You're overflowing it, causing an immediate program crash (no exception to recover from), so this is a bad idea even in g++.
Try
std::vector<bool> number(n+1);
(Note you'll need #include <vector> to make that work)
Also note that std::vector<bool> is a weird beast (a packed, bit-level specialization). It should work just fine for your usage, but to get something closer to a plain bool[], you can also try
std::vector<char> number(n+1);
This looks wrong:
bool number[n+1];
Try either std::vector<bool> number(n+1) or bool* number = new bool[n+1]
You are trying to allocate an array of n bools on the stack, which might simply be too small. Try allocating on the heap with std::vector or the new operator.
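For completeness, here is a minimal sketch of the fixed program using the heap allocation suggested above (std::vector<char> here; std::vector<bool> works the same way):

#include <cstdio>
#include <vector>

int main() {
    long n;
    if (scanf("%ld", &n) != 1 || n < 2) return 1;
    // Flags live on the heap now, so a large n no longer overflows the stack.
    std::vector<char> composite(n + 1, 0);
    for (long i = 2; i * i <= n; i++) {
        if (composite[i]) continue;           // already crossed out
        for (long j = i * 2; j <= n; j += i)  // cross out every multiple of i
            composite[j] = 1;
    }
    for (long i = 2; i <= n; i++)
        if (!composite[i]) printf("%ld ", i);
}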
I have just started coding in C++ and I am using Code::Blocks. My build log gives 0 errors and 0 warnings, but I do not know why, when I run it, I get no result in the terminal.
Terminal Window Result:
Process returned -1073741571 (0xC00000FD) execution time : 1.252 s
Press any key to continue.
My code:
#include <iostream>
#include <math.h>
#include <climits>   // for INT_MAX
#include <algorithm> // for min
using namespace std;

int main() {
    int n;
    cin >> n;
    int a[n];
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    const int N = pow(10, 6);
    int idx[N];
    for (int i = 0; i < N; i++) {
        idx[i] = -1;
    }
    int minidx = INT_MAX;
    for (int i = 0; i < n; i++) {
        if (idx[a[i]] != -1) {
            minidx = min(minidx, idx[a[i]]);
        }
        else {
            idx[a[i]] = i;
        }
    }
    if (minidx == INT_MAX) {
        cout << "-1" << endl;
    }
    else {
        cout << minidx + 1 << endl;
    }
    return 0;
}
Please help me find the mistake in my code.
This:
int n;
std::cin >> n;
int a[n];
for (int i = 0; i < n; i++) {
    std::cin >> a[i];
}
is bad practice. Don't use VLAs; an array's size should be known at compile time. If I guess correctly that this is a competitive programming problem, you'll probably know the maximum size, as stated in the problem. So do it this way instead:
int n;
std::cin >> n;
constexpr int max_size = 1000000;
int a[max_size];
for (int i = 0; i < n; i++) {
    std::cin >> a[i];
}
However, even doing it this way will crash your program anyway. This is simply a stack overflow: an array of that size declared inside a function does not fit in the stack frame. Slightly smaller sizes, however, would be okay. Just don't use VLAs the way you're using them.
One solution is to use a standard container like std::vector as the allocation takes place on the heap. Note that using std::array will crash too as the allocation is not on the heap.
Another solution is to make your array a global. This way you can increase to sizes well over 1e6. Not really recommended though.
In your code above, irrespective of what the size n for array a is (even if it's a fairly small size that fits on the stack), your code will definitely crash when you declare the array idx[1000000]. The reason is the same: stack overflow. One heap-based rewrite is sketched below.
Also, please post indented code and use good indentation practices.
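For reference, here is a sketch of the same program with both arrays moved to the heap via std::vector, as suggested above (the logic is unchanged; the 10^6 bound is from the original code):

#include <iostream>
#include <vector>
#include <climits>
#include <algorithm>

int main() {
    int n;
    std::cin >> n;
    std::vector<int> a(n);        // heap-allocated, no VLA
    for (int i = 0; i < n; i++) std::cin >> a[i];

    const int N = 1000000;
    std::vector<int> idx(N, -1);  // heap-allocated, no stack overflow

    int minidx = INT_MAX;
    for (int i = 0; i < n; i++) {
        if (idx[a[i]] != -1) minidx = std::min(minidx, idx[a[i]]);
        else                 idx[a[i]] = i;   // assumes a[i] < 10^6, as the original does
    }
    if (minidx == INT_MAX) std::cout << -1 << '\n';
    else                   std::cout << minidx + 1 << '\n';
}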
Task
Given n gold bars, find the maximum weight of gold that fits into a bag of capacity W.
Input
The first line contains the capacity W of the knapsack and the number n of bars of gold. The next line contains n integers.
Output
The max weight of gold that fits into a knapsack of capacity W.
Constraints
1 <= W <= 10000; 1 <= n <= 300; 0 <= w0, w1, w2, ..., w(n-1) <= 100000
Code
#include <iostream>
#include <vector>
using std::vector;
int optimal_weight(int W, vector<int> w) {
int n = w.size() + 1;
int wt = W + 1;
int array [n][wt];
int val = 0;
for(int i = 0; i < wt; i++) array [0][i] = 0;
for(int i = 0; i < n; i++) array [i][0] = 0;
for(int i = 1; i< n; i++) {
for(int j = 1; j < wt; j++ ){
array[i][j] = array [i-1][j];
if (w[i-1] <= j) {
val = array[i-1][j - w[i-1]] + w[i-1];
if(array[i][j] < val) array[i][j] = val;
}
}
}
//printing the grid
// for(int i=0; i < n; i++) {
// for(int j=0; j < wt; j++) {
// cout<<array[i][j]<<" ";
// }
// cout<<endl;
// }
// cout<<endl;
return array [n-1][wt-1];
}
int main() {
int n, W;
std::cin >> W >> n;
vector<int> w(n);
for (int i = 0; i < n; i++) {
std::cin >> w[i];
}
std::cout << optimal_weight(W, w) << '\n';
}
The above code works fine for smaller inputs, but gives an unknown signal 11 error on the platform I wish to submit to. My best guess is a segmentation fault, but I have been unable to debug it for quite some time now. Any help is much appreciated!
First, note that your code doesn't work: it doesn't compile when you adhere strictly to the C++ language standard, as C++ does not support variable-length arrays (as noted by @Evg in a comment; some compilers offer this as an extension).
The main reason for excluding those from C++ is probably why you're experiencing issues for larger problem sizes: the danger of stack overflows, the namesake of this website (as noted by @huseyinturgulbuyukisik in a comment). Variable-length arrays are allocated on the stack, whose size is limited. When you exceed it, you might attempt to write to a segment of memory that is not allocated to your process, triggering Linux signal 11, also known as SIGSEGV - the segmentation violation signal.
Instead of stack-based allocation, you should allocate your memory on the heap. A straightforward way to do so would be using the std::vector container (whose default allocator does indeed allocate on the heap). Thus, you would write:
std::vector<int> vec(n * wt);
and instead of array[i][j] you'd use vec[i * wt + j].
Now, this is not as convenient as using array[x][y]; for extra convenience you can, for example, write a helper lambda to access individual elements, e.g.
auto array_element = [&vec, wt](int x, int y) -> int& { return vec[x * wt + y]; };
With this lambda available (note it returns a reference, so you can assign through it), you can now write statements such as array_element(i,j) = array_element(i-1,j);
or use a multi-dimensional container (std::vector<std::vector<int>> would work but it's ugly and wasteful IMHO; unfortunately, the standard library doesn't have a single-allocation multi-dimensional equivalent of that).
Other suggestions, not regarding a solution to your signal 11 issue:
Use more descriptive variable names, e.g. weight instead of wt and capacity instead of W. I'd also consider sub_solutions_table or solutions_table instead of array, and might also rename i and j according to the semantics of the dynamic-programming solution table.
You never actually need more than 2 rows of the solutions table; why not just allocate one row for the current iteration and one row for the previous iteration, and have appropriate pointers switch between them? (See the sketch below.)
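A sketch of that two-row idea, using the same recurrence as the code above (the names prev and cur are mine):

#include <vector>
#include <algorithm>

// Two-row variant: keep only the previous and current rows of the table.
int optimal_weight(int capacity, const std::vector<int>& w) {
    std::vector<int> prev(capacity + 1, 0);
    std::vector<int> cur(capacity + 1, 0);
    for (int i = 0; i < (int)w.size(); ++i) {
        for (int j = 0; j <= capacity; ++j) {
            cur[j] = prev[j];                                     // don't take bar i
            if (w[i] <= j)
                cur[j] = std::max(cur[j], prev[j - w[i]] + w[i]); // take bar i
        }
        std::swap(prev, cur);  // the current row becomes the previous row
    }
    return prev[capacity];
}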
Replace
vector<vector<int>> k(n + 1, vector<int>(W + 1));
with
int array[n][w];
I'm on a Linux server, and when I try to execute the program it returns a segmentation fault. When I use gdb to try and find out why, it returns:
Starting program: /home/cups/k
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401128 in search(int) ()
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.192.el6.x86_64 libgcc-4.4.7-17.el6.x86_64 libstdc++-4.4.7-17.el6.x86_64
I couldn't quite interpret this. My program does have a function called search(), but I don't see anything in it that would cause a segfault. Here's the function definition:
int search(int bit_type) { // SEARCH FOR A CONSEC NUMBER (of type BIT_TYPE) TO SEE IF ALREADY ENCOUNTERED
    for (int i = 1; i <= MAX[bit_type]; i++) { // GO THRU ALL ENCOUNTERED CONSEC NUMBERS SO FAR (for type BIT_TYPE)
        if (consec == r[bit_type][i]) // IF: FOUND
            return i;                 // -----> RETURN INDEX OF RECORDED CONSEC_NUM
    }
    // IF: NOT FOUND
    r[bit_type][++MAX[bit_type]] = consec; // -----> INCREMENT MAX[bit_type] & RECORD NEW CONSEC_NUM -------> ARRAY[MAX]
    n[bit_type][MAX[bit_type]] = 1;
    return (MAX[bit_prev]); // -----> RETURN THE NEWLY FILLED INDEX
}
Global variables:
int MAX[2];
int r[2][200];
int n[2][200];
The comments are pretty useless to you guys since you don't have the rest of the program, so you can just ignore them.
But do you guys see anything I missed?
From the link to your code, here is just one error:
int *tmp = new int[MAX[0]];
for (int y = 0; y <= MAX[0]; y++) {
    tmp[y] = 1;
}
You are going out-of-bounds on the last iteration. You allocated an array with MAX[0] items, and on the last iteration you're accessing tmp[MAX[0]].
That loop should be:
int *tmp = new int[MAX[0]];
for (int y = 0; y < MAX[0]; y++) {
    tmp[y] = 1;
}
or better yet:
#include <algorithm>
//...
std::fill(tmp, tmp + MAX[0], 1); // no loop needed
or skip the dynamic allocation using new[] and use std::vector:
#include <vector>
//...
std::vector<int> tmp(MAX[0], 1);
In general, you have multiple loops that do this:
for (int i = 1; i <= number_of_items_in_array; ++i )
and then you access your arrays with array[i]. It is the <= in that for loop condition that is suspicious since it will try to access the array with an out-of-bounds index on the last iteration.
Another example is this:
long sum(int arr_r[], int arr_n[], int limit)
{
    long tot = 0;
    for (int i = 1; i <= limit; i++)
    {
        tot += (arr_r[i]) * (arr_n[i]);
    }
    return tot;
}
Here, limit is the number of elements in the array, and you access arr_r[i] on the last iteration, causing undefined behavior.
Arrays are indexed starting from 0 and up to n - 1, where n is the total number of elements. Trying to fake 1-based arrays as you're attempting to do almost always results in these types of errors somewhere inside of the code base.
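For instance, here is the sum helper rewritten with conventional 0-based indexing (a sketch; it assumes the arrays are filled starting at index 0):

long sum(const int arr_r[], const int arr_n[], int limit)
{
    long tot = 0;
    for (int i = 0; i < limit; i++)        // indices 0 .. limit-1, all in bounds
        tot += (long)arr_r[i] * arr_n[i];  // widen before multiplying to avoid int overflow
    return tot;
}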
#include <cstdio>
#include <algorithm>
#include <cmath>
#include <cstdlib> // for system()
using namespace std;

int main() {
    int t, m, n;
    scanf("%d", &t);
    while (t--)
    {
        scanf("%d %d", &m, &n);
        int rootn = sqrt(double(n));
        bool p[10000]; // finding prime numbers from 1 to square_root(n)
        for (int j = 0; j <= rootn; j++)
            p[j] = true;
        p[0] = false;
        p[1] = false;
        int i = rootn;
        while (i--)
        {
            if (p[i] == true)
            {
                int c = i;
                do
                {
                    c = c + i;
                    p[c] = false;
                } while (c + p[i] <= rootn);
            }
        }
        i = 0;
        bool rangep[10000]; // used for finding primes between m and n by eliminating multiples of the primes between 1 and squareroot(n)
        for (int j = 0; j <= n - m + 1; j++)
            rangep[j] = true;
        i = rootn;
        do
        {
            if (p[i] == true)
            {
                for (int j = m; j <= n; j++)
                {
                    if (j % i == 0 && j != i)
                        rangep[j - m] = false;
                }
            }
        } while (i--);
        i = n - m;
        do
        {
            if (rangep[i] == true)
                printf("%d\n", i + m);
        } while (i--);
        printf("\n");
    }
    return 0;
    system("PAUSE");
}
Hello, I'm trying to use the sieve of Eratosthenes to find the prime numbers in a range between m and n, where m >= 1 and n <= 100000000. When I give an input of 1 to 10000, the result is correct, but for a wider range the stack overflows even if I increase the array sizes.
A simple and more readable implementation:

#include <cmath>
#include <iostream>
#include <vector>

void Sieve(int n) {
    int sqrtn = (int)std::sqrt((double)n);
    std::vector<bool> sieve(n + 1, false); // false = not yet crossed out
    for (int m = 2; m <= sqrtn; ++m) {
        if (!sieve[m]) {
            std::cout << m << " ";
            for (int k = m * m; k <= n; k += m)
                sieve[k] = true;
        }
    }
    for (int m = sqrtn + 1; m <= n; ++m) // start past sqrtn so it isn't printed twice
        if (!sieve[m])
            std::cout << m << " ";
}
Reason for the error
You are declaring an enormous array as a local variable. When the stack frame of main is pushed, it needs so much memory that a stack overflow exception is generated. Visual Studio is clever enough to analyze the code for projected run-time stack usage and generate the exception when needed.
Use the compact implementation below instead. Note that bs should stay at global scope: a bitset of 10^8 bits is about 12.5 MB, which would itself overflow the stack if declared inside a function. Don't make implementations complex.
Implementation
typedef long long ll;
typedef vector<int> vi;

ll _sieve_size;       // needs a declaration
vi primes;
bitset<100000000> bs; // requires #include <bitset>

void sieve(ll upperbound) {
    _sieve_size = upperbound + 1;
    bs.set();          // mark everything as potentially prime
    bs[0] = bs[1] = 0;
    for (ll i = 2; i < _sieve_size; i++)
        if (bs[i]) {                                    // if not marked
            for (ll j = i * i; j < _sieve_size; j += i) // check all the multiples
                bs[j] = 0;                              // they are surely not prime :-)
            primes.push_back((int)i);                   // this is prime
        }
}
Call it from main() as sieve(10000);. You then have the list of primes in the vector primes.
Note: as mentioned in a comment, a stack overflow is quite an unexpected error here. You are implementing a sieve, but it will be more memory-efficient if you use a bitset instead of bool.
A few more things: if n = 10^8 then sqrt(n) = 10^4, and your bool array is p[10000], so there is a chance of accessing the array out of bounds.
I agree with the other answers saying that you should basically just start over. But do you even care why your code doesn't work? (You didn't actually ask.) I'm not sure that the problem in your code has been identified accurately yet.
First of all, I’ll add this comment to help set the context:
// For any int aardvark;
// p[aardvark] = false means that aardvark is composite (i.e., not prime).
// p[aardvark] = true means that aardvark might be prime, or maybe we just don’t know yet.
Now let me draw your attention to this code:
int i=rootn;
while(i--)
{
if(p[i]==true)
{
int c=i;
do
{
c=c+i;
p[c]=false;
}while(c+p[i]<=rootn);
}
};
You say that n ≤ 100000000 (although your code doesn't check that), so, presumably, rootn ≤ 10000, which is the dimensionality (size) of p[]. The above code is saying that, for every integer i (no matter whether it's prime or composite), 2×i, 3×i, 4×i, etc., are, by definition, composite. So, for c equal to 2×i, 3×i, 4×i, …, we set p[c]=false because we know that c is composite.
But look closely at the code. It sets c=c+i and says p[c]=false before checking whether c is still in range to be a valid index into p[]. Now, if n ≤ 25000000, then rootn ≤ 5000. If i ≤ rootn, then i ≤ 5000, and, as long as c ≤ 5000, then c+i ≤ 10000. But, if n > 25000000, then rootn > 5000,† and the sequence i=rootn;, c=i;, c=c+i; can set c to a value greater than 10000. And then you use that value to index into p[]. That's probably where the stack overflow occurs.
Oh, BTW, you don't need to say if(p[i]==true): if(p[i]) is good enough.
To add insult to injury, there's a second error in the same block: while(c+p[i]<=rootn). c and i are ints, and p is an array of bools, so p[i] is a bool — and yet you are adding c + p[i]. We know from the if that p[i] is true, which is numerically equal to 1, so your loop termination condition is while (c+1<=rootn); i.e., while c ≤ rootn-1. I think you meant to say while(c+i<=rootn).
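That is, the loop should check the bound before writing, something like this sketch (which still assumes rootn itself is a valid index into p[]):

int c = i;
while (c + i <= rootn) {  // test *before* writing
    c = c + i;
    p[c] = false;         // c <= rootn here, so the index is in range
}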
Oh, also, why do you have executable code immediately after an unconditional return statement? The system("PAUSE"); statement cannot possibly be reached. (I'm not saying that those are the only errors; they are just what jumped out at me.)
______________
† OK, splitting hairs: n has to be ≥ 25010001 (i.e., 5001²) before rootn > 5000.
I am getting a SIGABRT error when I run the following code (problem PRIME1 on SPOJ: http://www.spoj.com/problems/PRIME1/). It runs well in Code::Blocks, but SPOJ returns a SIGABRT error. Can someone explain the reason?
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    long long k, x, j = 0, size, l = 0, p = 0, q = 0, r = 0, s;
    cin >> size;
    int a[(2 * size) - 1];
    cout << endl;
    for (int i = 0; i < (2 * size); i++)
    {
        cin >> a[i];
    }
    if (size == 1)
    {
        p = a[1];
    }
    else
    {
        do
        {
            if (a[l + 3] > a[l + 1])
            {
                p = a[l + 3];
            }
            else
            {
                p = a[l + 1];
            }
            l = l + 2;
        } while (l < 2 * (size - 1));
    }
    cout << p;
    long *b = new long[p - 1];
    for (long long i = 0; i < p - 1; i++)
    {
        b[i] = 1;
    }
    b[0] = b[1] = 0;
    s = sqrt(p);
    for (long long i = 2; i <= s; i++)
    {
        if (b[i] == 1)
        {
            for (long long j = i * i; j <= p; j = j + i)
            {
                b[j] = 0;
            }
        }
    }
    while (r < (2 * size))
    {
        for (long long i = a[r]; i < a[r + 1]; i++)
        {
            if (b[i] == 1)
            {
                cout << i << "\n";
            }
        }
        cout << endl;
        r = r + 2;
    }
    delete[] b;
}
You are accessing an array element outside its bounds.
The array size is 2*size-1, so the valid elements run from 0 to 2*size-2.
But in your for loop you go up to 2*size, thus accessing index 2*size-1, which is out of bounds:
int a[(2*size)-1];
This is not legal C++ code (it's using a GCC extension), but it obviously compiled, so we'll let that slide. You are accessing your array out of bounds in the read loop and all over the place later on, which is undefined behavior - you need an array of size 2 * size to read in all the supplied parameters. Given that the problem guarantees size <= 10, you might as well just declare it as int a[20];
But that probably didn't cause the crash. What caused the crash is probably this line:
long * b = new long [p-1];
What's p? Well, let's just consider the easy case of size = 1, where you set p to a[1], the second number you read in. What are the bounds on that number?
The question says that the bound is n <= 1000000000, or 10^9. Your new can therefore request as much as 8 GB of memory, depending on the value of sizeof(long) on the system you are using. The allocation is almost certainly going to fail, throwing a std::bad_alloc exception that causes std::abort() to be called, since you don't have any exception handling code.
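To make that failure mode visible (an illustration of the crash, not a fix for the program), one could catch the exception explicitly; a sketch:

#include <iostream>
#include <new>

int main() {
    try {
        long *b = new long[1000000000];  // ~8 GB when sizeof(long) == 8
        delete[] b;
    } catch (const std::bad_alloc &e) {
        // Without a handler like this, the exception propagates out of main
        // and std::terminate()/std::abort() raises SIGABRT.
        std::cerr << "allocation failed: " << e.what() << '\n';
    }
}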
You initialize a to 2 * size - 1 elements...
int a[(2*size)-1];
Yet you write 2 * size elements.
for(int i=0; i< (2*size); i++)
// ...
Your loop should be:
for(int i=0; i< (2*size-1); i++)
Next...
if(size == 1)
{
p=a[1];
}
If size == 1 then you allocated an array of 2 * 1 - 1 = 1 element, so a[1] is an invalid access (you only have a[0] as arrays are 0-indexed).
You then have stuff like this:
if(a[l+3]>a[l+1])
This loops while l < 2*(size-1), so on the last iteration l is 2*size-4 and a[l+3] reads a[2*size-1], which is one past the end of the array.
Basically you just have a lot of places where you're reading or writing past the end of an array or not ensuring proper initialization and invoking undefined behavior.
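One way to catch such bugs during development is to replace the raw arrays with std::vector and index with .at(), which throws std::out_of_range instead of silently corrupting memory. A small sketch:

#include <iostream>
#include <vector>

int main() {
    long long size = 1;
    std::vector<int> a(2 * size - 1);
    try {
        a.at(1) = 42;  // invalid when size == 1: only a.at(0) exists
    } catch (const std::out_of_range &e) {
        std::cerr << "out of range: " << e.what() << '\n';
    }
}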