Task
Given n gold bars, find the maximum weight of gold that fits into a bag of capacity W.
Input
The first line contains the capacity W of the knapsack and the number n of bars of gold. The next line contains n integers: the weights of the bars.
Output
The maximum weight of gold that fits into a knapsack of capacity W.
Constraints
1 <= W <= 10000; 1 <= n <= 300; 0 <= w0, w1, w2, ..., w(n-1) <= 100000
Code
#include <iostream>
#include <vector>

using std::vector;

int optimal_weight(int W, vector<int> w) {
    int n = w.size() + 1;
    int wt = W + 1;
    int array[n][wt];
    int val = 0;
    for (int i = 0; i < wt; i++) array[0][i] = 0;
    for (int i = 0; i < n; i++) array[i][0] = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < wt; j++) {
            array[i][j] = array[i - 1][j];
            if (w[i - 1] <= j) {
                val = array[i - 1][j - w[i - 1]] + w[i - 1];
                if (array[i][j] < val) array[i][j] = val;
            }
        }
    }
    // printing the grid
    // for (int i = 0; i < n; i++) {
    //     for (int j = 0; j < wt; j++) {
    //         std::cout << array[i][j] << " ";
    //     }
    //     std::cout << std::endl;
    // }
    return array[n - 1][wt - 1];
}

int main() {
    int n, W;
    std::cin >> W >> n;
    vector<int> w(n);
    for (int i = 0; i < n; i++) {
        std::cin >> w[i];
    }
    std::cout << optimal_weight(W, w) << '\n';
}
The above code works fine for smaller inputs, but the platform I wish to submit to reports an unknown signal 11 error. My best guess is a possible segmentation fault, but I have been unable to debug it for quite some time now. Any help is much appreciated!
First, note that your code doesn't really work: it doesn't compile if you adhere strictly to the C++ language standard, as C++ does not support variable-length arrays (as noted by @Evg in a comment; some compilers offer them as an extension).
The main reason for excluding those from C++ is probably why you're experiencing issues for larger problem sizes: the danger of stack overflows, the namesake of this website (as noted by @huseyinturgulbuyukisik in a comment). Variable-length arrays are allocated on the stack, whose size is limited. When you exceed it, you may write to a segment of memory that is not allocated to your process, triggering Linux signal 11, also known as SIGSEGV, the segmentation violation signal.
Instead of stack-based allocation, you should allocate your memory on the heap. A straightforward way to do so would be using the std::vector container (whose default allocator does indeed allocate on the heap). Thus, you would write:
std::vector<int> vec(n * wt);
and instead of array[i][j] you'd use vec[i * wt + j].
Now, this is not as convenient as using array[x][y]; for the extra convenience you can, for example, write a helper lambda that returns a reference, to access individual elements:
auto array_element = [&vec, wt](int x, int y) -> int& { return vec[x * wt + y]; };
With this lambda available, you can write statements such as array_element(i, j) = array_element(i - 1, j);
Or use a multi-dimensional container (std::vector<std::vector<int>> would work, but it's ugly and wasteful IMHO; unfortunately, the standard library doesn't have a single-allocation multi-dimensional equivalent).
Other suggestions, not regarding a solution to your signal 11 issue:
Use more descriptive variable names, e.g. weight instead of wt and capacity instead of W. I'd also consider sub_solutions_table or solutions_table instead of array, and might also rename i and j according to the semantics of the dynamic solution table.
You never actually need more than 2 rows of the solutions table; why not just allocate one row for the current iteration and one row for the previous iteration, and have appropriate pointers switch between them?
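A minimal sketch of that two-row idea, assuming the same weight-equals-value knapsack as in the question (the std::swap arrangement is just one way to rotate the rows):
#include <algorithm>
#include <utility>
#include <vector>

// 0/1 knapsack where value == weight, keeping only two rows of the DP table.
int optimal_weight_two_rows(int W, const std::vector<int>& w) {
    std::vector<int> prev(W + 1, 0);  // best weights using the first i bars
    std::vector<int> curr(W + 1, 0);  // best weights using the first i + 1 bars
    for (int i = 0; i < static_cast<int>(w.size()); ++i) {
        for (int j = 0; j <= W; ++j) {
            curr[j] = prev[j];                                      // skip bar i
            if (w[i] <= j)
                curr[j] = std::max(curr[j], prev[j - w[i]] + w[i]); // or take it
        }
        std::swap(prev, curr);  // row i becomes the new "previous" row
    }
    return prev[W];
}
This drops the memory footprint from O(n * W) to O(W), which also sidesteps the stack-size question entirely.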
Replace
int array[n][wt];
with
vector<vector<int>> array(n, vector<int>(wt));
so the table is allocated on the heap instead of the stack.
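For reference, a heap-allocated version of the question's function could then look like this (a sketch; same recurrence, just std::vector instead of the VLA):
#include <algorithm>
#include <vector>

int optimal_weight(int W, const std::vector<int>& w) {
    int n = static_cast<int>(w.size());
    // (n + 1) x (W + 1) table on the heap; row 0 and column 0 start at zero.
    std::vector<std::vector<int>> table(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= W; ++j) {
            table[i][j] = table[i - 1][j];
            if (w[i - 1] <= j)
                table[i][j] = std::max(table[i][j],
                                       table[i - 1][j - w[i - 1]] + w[i - 1]);
        }
    }
    return table[n][W];
}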
Related
I have the following piece of C++ code. The scale of the problem is given by N and M. Running the code takes about two minutes on my machine (after compiling with g++ -O3). Is there any way to further accelerate it, on the same machine? Any kind of option (choosing a better data structure, a library, GPU, parallelism, etc.) is on the table.
void demo() {
    int N = 1000000;
    int M = 3000;
    vector<vector<int>> res(M);
    for (int i = 0; i < N; i++) {
        for (int j = 1; j < M; j++) {
            res[j].push_back(i);
        }
    }
}

int main() {
    demo();
    return 0;
}
Additional info: the second loop above, for (int j = 1; j < M; j++), is a simplified version of the real problem. In fact, j could range differently for each i of the outer loop, but the number of iterations is about 3000.
With the exact code as shown when writing this answer, you could create the inner vector once, with the specific size, and call iota to initialize it. Then just pass this vector along to the outer vector constructor to use it for each element.
Then you don't need any explicit loops at all, and instead use the (highly optimized, hopefully) standard library to do all the work for you.
Perhaps something like this:
#include <numeric>
#include <vector>

void demo()
{
    static int const N = 1000000;
    static int const M = 3000;
    std::vector<int> data(N);
    std::iota(begin(data), end(data), 0);
    std::vector<std::vector<int>> res(M, data);
}
Alternatively, you could try to initialize just one vector with those elements, and then create the other vectors just by copying that part of memory using std::memcpy or std::copy.
Another optimization would be to allocate the memory in advance, e.g. by calling reserve on each inner vector before pushing into it.
Also, if you're sure that all the members of res are identical vectors, you could use a hack: create a single vector with the N elements once, and store a reference (or pointer) to that one vector M times instead of M independent copies.
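A sketch of that sharing hack (my own illustration, with a hypothetical demo_shared function; it only works if nobody mutates individual rows afterwards):
#include <memory>
#include <numeric>
#include <vector>

void demo_shared() {
    const int N = 1000000;
    const int M = 3000;
    // Fill the row data exactly once.
    auto row = std::make_shared<std::vector<int>>(N);
    std::iota(row->begin(), row->end(), 0);
    // M handles to the same underlying vector; no per-row copies are made.
    std::vector<std::shared_ptr<std::vector<int>>> res(M, row);
}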
On my machine, which has enough memory to avoid swapping, your original code took 86 seconds.
Adding reserve:
for (auto& v : res)
{
    v.reserve(N);
}
made basically no difference (85 seconds but I only ran each version once).
Swapping the loop order:
for (int j = 1; j < M; j++) {
    for (int i = 0; i < N; i++) {
        res[j].push_back(i);
    }
}
reduced the time to 10 seconds. This is likely due to a combination of allowing the compiler to use SIMD optimisations and improving cache locality by accessing memory in sequential order.
Creating one vector and copying it into the others:
for (int i = 0; i < N; i++) {
    res[1].push_back(i);
}
for (int j = 2; j < M; j++) {
    res[j] = res[1];
}
reduced the time to 4 seconds.
Using a single vector:
void demo() {
    size_t N = 1000000;
    size_t M = 3000;
    vector<int> res(M * N);
    size_t offset = N;
    for (size_t i = 0; i < N; i++) {
        res[offset++] = i;
    }
    for (size_t j = 2; j < M; j++) {
        std::copy(res.begin() + N, res.begin() + N * 2, res.begin() + offset);
        offset += N;
    }
}
also took 4 seconds. There probably isn't much improvement because you have 3,000 vectors of 4 MB each; there would likely be more of a difference if N were smaller or M larger.
I wrote this knapsack problem solution in C++; however, when I run it, it gives me a segmentation fault.
I have tried everything, and my compiler always gives me the segmentation fault error.
#include <iostream>
#include <algorithm>

int knapsack(int v[], int w[], int n, int W)
{
    int V[n][W];
    for (int i = 0; i <= W; i++)
    {
        V[0][i] = 0;
    }
    for (int i = 0; i <= n; i++) {
        for (int j = 1; j <= W; j++)
        {
            if (w[i] <= W)
            {
                V[i][j] = std::max(V[i-1][j], v[i] + V[i-1][j-w[i]]);
            }
            else
            {
                V[i][j] = V[i-1][j];
            }
        }
    }
    return V[n][W];
}

int main()
{
    int v[4] = {10, 40, 30, 50};
    int w[4] = {5, 4, 6, 3};
    int n = 3;
    int W = 10;
    std::cout << "item value:" << knapsack(v, w, n, W);
}
Don't use VLAs. The size of an array must be known at compile time, or it's not standard C++; VLAs are a compiler extension that is not portable and introduces some hidden costs.
Array indices go from 0 to length - 1. In your loop
for (int i = 0; i <= W; i++)
i can reach W, and then V[0][W] is out of bounds, which causes the segfault. You have to use < instead of <=:
for (int i = 0; i < W; i++)
n should probably be 4 if it's meant to represent the size of the array. A std::vector would make your life easier here, because a vector knows its size.
In general, don't use C-style arrays or raw pointers at all in this day and age; use std::vector instead.
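A corrected version could look like this (my sketch, not the original code: loop bounds fixed, n taken from the vector, and the capacity check done against j rather than W). For the sample input it prints 90:
#include <algorithm>
#include <iostream>
#include <vector>

int knapsack(const std::vector<int>& v, const std::vector<int>& w, int W) {
    int n = static_cast<int>(v.size());
    // (n + 1) x (W + 1) table; row 0 (no items) stays all zeros.
    std::vector<std::vector<int>> V(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int j = 0; j <= W; ++j) {
            V[i][j] = V[i - 1][j];
            if (w[i - 1] <= j)
                V[i][j] = std::max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]]);
        }
    }
    return V[n][W];
}

int main() {
    std::vector<int> v = {10, 40, 30, 50};
    std::vector<int> w = {5, 4, 6, 3};
    std::cout << "item value:" << knapsack(v, w, 10) << '\n';
}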
int V[n][W];
for (int i = 0; i <= W; i++)
{
    V[0][i] = 0;
}
Note that V's indices go from V[0][0] to V[0][W-1]; your for loop will try to access V[0][W].
The same error is repeated in other places. The end condition in your for loops should be < (strictly less than) instead of <= (less than or equal to).
I have created a 3D array of size a[100][100][100]. In the beginning I was getting an error while compiling the code because of this huge array, so after a little digging I declared the array as global and static in my C++ program. Now my requirement is to calculate the sum of all the elements of that array. While doing so, I loop through the array, and after some time I get a segmentation fault at runtime.
Can anyone advise how I should go about this? Is there a better approach for calculating the sum?
Thanks
My code is given here:
for (int m = 0; m < dimension; m++) {
    for (int j = 0; j < dimension; j++) {
        for (int k = 0; k < dimension; k++) {
            a[m][j][k] = 0;
        }
    }
}
And this is how I am calculating the sum:
int sum = 0;
for (int i = x1; i <= x2; i++) {
    for (int j = y1; j <= y2; j++) {
        for (int k = z1; k <= z2; k++) {
            sum = sum + a[i][j][k];
        }
    }
}
where x1, x2, y1, y2, z1, z2 are taken as user input.
I don't get any error in the first part; the segmentation fault is thrown in the second part of the code.
In your second code fragment, you use user input as array indices without bounds checking. Always, always, always bounds-check your array accesses.
A fast-and-dirty way of doing so in this case is:
assert(x1 >= 0); // Unless x1 is unsigned.
assert(y1 >= 0); // Unless y1 is unsigned.
assert(z1 >= 0); // Unless z1 is unsigned.
assert(x2 < dimension);
assert(y2 < dimension);
assert(z2 < dimension);
Also check the initialization of a. If it’s dynamically-allocated, check the return code.
Or replace the asserts with code that recovers from the error; the runtime cost is negligible, as the checks happen outside your loops. Also, if dimension is declared as constexpr size_t dimension = 100; and a is declared as static int a[dimension][dimension][dimension]; (or as a reference to an array with known bounds), you can replace the first loop with
memset( &a, 0, sizeof(a) );
Otherwise, for a dynamically allocated contiguous block pointed to by a, you can use:
memset( a, 0, sizeof(int) * dimension * dimension * dimension );
However, a statically-allocated global array will be initialized to zeroes already, when the program starts.
If you use std::array and access elements with .at(), it will do the bounds-checking for you with no extra memory overhead.
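For illustration, a small sketch (my example, not the asker's code); the second .at() call throws instead of silently writing out of bounds:
#include <array>
#include <cstddef>
#include <iostream>
#include <stdexcept>

int main() {
    constexpr std::size_t dim = 100;
    // static storage keeps the 100^3 ints off the stack; {} zero-initializes
    static std::array<std::array<std::array<int, dim>, dim>, dim> a{};
    try {
        a.at(5).at(5).at(5) = 42;   // in bounds: fine
        a.at(5).at(5).at(dim) = 1;  // out of bounds: throws std::out_of_range
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}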
It's generally not a great idea to use 3D arrays.
You could, though, try to sum with only one for loop to avoid the nesting, as follows:
int D = dimension;
int sum = 0;
for (int i = 0; i < D * D * D; ++i)
    sum += a[i / (D * D)][(i / D) % D][i % D];
Although this is probably not the answer you were hoping for, I would switch to boost::multi_array. With a plain array of size 1000, or even 100, per dimension I can reproduce your problem; with the boost multi_array I do not have that problem. See the code below:
#include "boost/multi_array.hpp"
#include <ctime>
#include <iostream>

int main() {
    std::cout << "Start Time: " << std::time(0) << std::endl;
    // Example taken and adjusted from:
    // http://www.boost.org/doc/libs/1_56_0/libs/multi_array/doc/user.html
    // Create a 3D array that is size x size x size
    typedef boost::multi_array<int, 3> array_type;
    typedef array_type::index index;
    int const size = 1000;
    array_type A(boost::extents[size][size][size]);
    // Assign values to the elements
    for (index i = 0; i != size; ++i)
        for (index j = 0; j != size; ++j)
            for (index k = 0; k != size; ++k)
                A[i][j][k] = 1;
    // Verify values
    int sum = 0;
    for (index i = 0; i != size; ++i)
        for (index j = 0; j != size; ++j)
            for (index k = 0; k != size; ++k)
                sum += A[i][j][k];
    std::cout << "Sum: " << sum << std::endl;
    std::cout << "End Time: " << std::time(0) << std::endl;
    return 0;
}
Hope this solves your problem.
As mentioned earlier in my comment, I think your problem is that you are requesting more memory than your operating system allows without special configuration, but I can't readily confirm this.
Just in case you don't know Boost: the Boost libraries are a great toolbox with lots of functionality, and they have contributed a lot to the C++ standard over the years, so I would recommend using them.
I'm writing a simple ANN (neural network) for function approximation. I get a crash with the message "Heap corrupted". I found a few pieces of advice on how to resolve it, but nothing helped.
I get the error at the first line of this function:
void LU(double** A, double** &L, double** &U, int s) {
    U = new double*[s];
    L = new double*[s];
    for (int i = 0; i < s; i++) {
        U[i] = new double[s];
        L[i] = new double[s];
        for (int j = 0; j < s; j++)
            U[i][j] = A[i][j];
    }
    for (int i = 0, j = 0; i < s; i = ++j) {
        L[i][j] = 1;
        for (int k = i + 1; k < s - 1; k++) {
            L[k][j] = U[k][j] / U[i][j];
            double* vec_t = mul(U[i], L[k][j], s);
            for (int z = 0; z < s; z++)
                U[k][z] = U[k][z] - vec_t[z];
            delete[] vec_t;
        }
    }
}
As I understood from the debugger's information, the two arrays (U and L) were passed to the function with some addresses already set. That's quite strange, because I didn't initialize them. I call this function two times; the first time it works nicely (ok, at least it works), but on the second call it crashes. I have no idea how to resolve it.
There is a link to the whole project: CLICK
I'm working in MS Visual Studio 2013 under Windows 7 x64.
UPDATE
According to some commentaries below, I should provide some additional information.
First of all, sorry for the quality of the code; I wrote it only for myself, in 2 days.
Second, by "at second call" I mean that I first call LU when I need the determinant of S (I use LU decomposition for this), and it works without any crashes. The second call is when I try to get the inverse of the same matrix S; when I call detLU at the [0, 0] point of the matrix (to get a cofactor), I get this crash.
Third, if I read the debugger correctly, the arrays L and U are passed into the function on the second call with already-defined memory addresses. I can't understand why, because before the LU call I have just written "double** L; double** U;" without any initialization.
I can provide additional debug information or run some tests if somebody explains what exactly I have to do.
The point where you get a heap corruption error/crash is typically just the symptom of an actual heap overflow/underflow or other memory error at some earlier point. This is why heap corruption can be difficult to track down.
You have a lot of code, and all the double-pointers are difficult to track, but I did notice one potential issue:
double** initInWeights(double f, int h, int w) {
    double** W = new double*[h];
    for (int i = 0; i < 10; i++) {
        W[i] = new double[w];
The loop will overflow W[] if h is less than 10. Chances are that somewhere in your code you have a buffer overflow/underflow or are using memory after it is freed. The complexity and design of your code make it difficult to pinpoint at a glance.
Is there a reason you are using raw double-pointers instead of simply std::vector<std::vector<double>>? This would remove all your manual memory management code, making your code shorter, simpler, and, more importantly, removing the heap corruption issue.
Barring that, you should double-check that all manually allocated memory has the correct size and that access loops can never go out of bounds.
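To make the std::vector suggestion concrete, here is a sketch of just the allocation part of LU (my illustration, not the asker's project code); the elimination steps would stay the same:
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// The vectors own their memory, so there is no new[]/delete[] to get wrong.
void LU(const Matrix& A, Matrix& L, Matrix& U) {
    const std::size_t s = A.size();
    L.assign(s, std::vector<double>(s, 0.0)); // s x s zero matrix
    U = A;                                    // deep copy of A into U
    // ... elimination steps as before, writing into L and U ...
}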
Update -- I think your problem may lie with a buffer overflow in the extract() function in matrix.cpp:
double** extract(double** mat, int s, int col, int row)
{
    double** ext = new double*[s - 1];
    for (int i = 0; i < s - 1; i++)
    {
        ext[i] = new double[s - 1];
    }
    int ext_c = 0, ext_r = 0;
    for (int i = 0; i < s; i++)
    {
        if (i != row)
        {
            for (int j = 0; j < s; j++)
            { // Overflow on ext_c here
                if (j != col) ext[ext_r][ext_c++] = mat[i][j];
            }
            ext_r++;
        }
    }
    return ext;
}
You never reset ext_c, so it simply keeps increasing up to (s-1)*(s-1), which obviously overflows the ext[] array. To fix this you simply need to change the inner loop definition to:
for (int j = 0, ext_c = 0; j < s; j++)
At least that one change lets me run your project without any heap corruption errors.
I have an array A[9] = {1,2,3,4,5,6,7,8,9} and I need to delete the numbers that are not divisible by 2. The code I tried:
int main()
{
    int n;
    ifstream fd(Cdf);
    fd >> n; // read how many numbers are in the file
    int A[n];
    for (int i = 0; i < n; i++)
    {
        fd >> A[i]; // read the numbers from the file
    }
    for (int i = 0; i < n; i++) // moving the numbers
    {
        if (A[i] % 2 != 0)
        {
            for (int j = i; j < n; j++)
            {
                A[i] = A[i+1];
            }
        }
    }
    fd.close();
    return 0;
}
But I get numbers like 224466888. What do I need to do to get 2, 4, 6, 8?
I need to delete the numbers within the same array.
First, you should use std::vector for dynamically sized arrays.
Second, to remove the odd numbers from a vector, you can do:
std::vector<int> inf = {12, 0, 5, 6, 8};
auto func = [](int i) { return i % 2 != 0; };
inf.erase(std::remove_if(inf.begin(), inf.end(), func), inf.end());
EDIT:
Ok, so you can still do this without std::vector, but it will be uglier:
#include <algorithm>

int res[] = {2, 5, 9, 8, 6, 7};
int size = 6;
auto func = [](int i) { return i % 2 != 0; };
int new_size = std::remove_if(res, res + size, func) - res;
All the data you want is in the [0, new_size) range; the other part of your array is now garbage.
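For completeness, a usage sketch (my example) that prints only the kept range; for this input it prints 2 8 6:
#include <algorithm>
#include <iostream>

int main() {
    int res[] = {2, 5, 9, 8, 6, 7};
    int size = 6;
    auto func = [](int i) { return i % 2 != 0; };
    int new_size = static_cast<int>(std::remove_if(res, res + size, func) - res);
    for (int i = 0; i < new_size; ++i)
        std::cout << res[i] << ' ';  // elements past new_size are garbage
    std::cout << '\n';
}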
Your removal loop is indexing with the wrong variable:
for (int j = i; j < n; j++)
{
    A[i] = A[i+1];
}
You're using i, which doesn't change in the loop.
Change it to j. You also need to subtract one from the upper limit, as you'd otherwise step outside the array when accessing A[j + 1]:
for (int j = i; j < n - 1; j++)
{
    A[j] = A[j + 1];
}
An array can't be used for your purpose. It is allocated on the stack and its size can't be changed dynamically (you can't change the size of an array in general, not only when it is allocated on the stack).
You could allocate a second array and keep reallocating it with realloc every time you add a new element, but that's not a good way to do it. You are working with C++, so just use a std::vector<int> and your problems will be solved:
std::vector<int> evenArray;
evenArray.reserve(sizeof(A) / sizeof(A[0]) / 2);
for (int number : A) {
    if (number % 2 == 0) {
        evenArray.push_back(number);
    }
}
Mind that vector stores elements contiguously so this is legal:
int *evenA = &evenArray[0];
For your inner for loop you should be referencing j, not i.
for (int j = i; j < n - 1; j++)
{
    A[j] = A[j + 1];
}
Otherwise, what's the point of creating j?
Of course, this also means that if you read the whole array back, you will display the elements that were shifted (the tail will just repeat the last number). So you should keep track of the new length of the array and iterate to that instead of to the end of the array.
EDIT:
In the inner for loop you need to loop to n - 1; otherwise, accessing A[j + 1] will go off the end of the array, which may or may not give you a runtime error.
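To illustrate the "keep track of the new length" suggestion above, here is a compact write-index version (my sketch, using the 1..9 array from the question instead of the file input); it prints 2 4 6 8:
#include <iostream>

int main() {
    int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    int n = 9;
    int kept = 0;                    // next free slot for an even number
    for (int i = 0; i < n; ++i)
        if (A[i] % 2 == 0)
            A[kept++] = A[i];        // compact evens to the front, keeping order
    for (int i = 0; i < kept; ++i)
        std::cout << A[i] << ' ';
    std::cout << '\n';
}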