C++ dynamic array allocation and strange use of memset

I recently ran into this code in C++:
int m=5;
int n=4;
int *realfoo = new int[m+n+3];
int *foo;
foo = realfoo + n + 1;
memset(realfoo, -1, (m+n+2)*sizeof(int));
Only the variable "foo" is used in the rest of the code, "realfoo" is never used (just freed at the very end).
I can't understand what that means.
What kind of operation is foo = realfoo + n + 1;? How is it possible to assign an array plus an int?
The memset sets every value of "realfoo" to -1. How does this affect "foo"?
EDIT
Since many have asked for the entire code, here it is:
int Wu_Alg(char *A, char *B, int m, int n)
{
    int *realfp = new int[m+n+3];
    int *fp, p, delta;
    fp = realfp + n + 1;
    memset(realfp, -1, (m+n+2)*sizeof(int));
    delta = n - m;
    p = -1;
    while(fp[delta] != n){
        p = p + 1;
        for(int k = -p; k <= delta-1; k++){
            fp[k] = snake(A, B, m, n, k, Max(fp[k-1]+1, fp[k+1]));
        }
        for(int k = delta+p; k >= delta+1; k--){
            fp[k] = snake(A, B, m, n, k, Max(fp[k-1]+1, fp[k+1]));
        }
        fp[delta] = snake(A, B, m, n, delta, Max(fp[delta-1]+1, fp[delta+1]));
    }
    delete [] realfp;
    return delta + 2*p;
}

int snake(char *A, char *B, int m, int n, int k, int j)
{
    int i = j - k;
    while(i < m && j < n && A[i+1] == B[j+1]){
        i++;
        j++;
    }
    return j;
}
Source: http://par.cse.nsysu.edu.tw/~lcs/Wu%20Algorithm.php
The algorithm is: https://publications.mpi-cbg.de/Wu_1990_6334.pdf

This:
foo = realfoo + n + 1;
Assigns foo to point to element n + 1 of realfoo. Using array indexing / pointer arithmetic equivalency, it's the same as:
foo = &realfoo[n + 1];
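A small standalone sketch (mine, not from the question) of why such an offset pointer is useful: it permits negative subscripts, which this algorithm needs for fp[k] with k running down to -p:
#include <iostream>

int main() {
    int buf[10] = {};            // valid indices: buf[0] .. buf[9]
    int *mid = buf + 5;          // mid[-5] .. mid[4] alias buf[0] .. buf[9]
    mid[-2] = 42;                // writes buf[3]
    std::cout << buf[3] << '\n'; // prints 42
}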

memset is not setting each int to -1; it sets every byte to -1. (On two's complement hardware a byte pattern of all 0xFF happens to read back as the int -1, but that is a property of the representation, not of memset.) To assign each element correctly and portably, use a loop:
for (size_t i = 0; i < m + n + 3; i++) {
    realfoo[i] = -1;
}
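Equivalently, std::fill_n from <algorithm> states the intent in one line (a sketch using the question's names):
#include <algorithm>

std::fill_n(realfoo, m + n + 3, -1); // every int becomes -1, no byte-level tricks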

What kind of operation is foo = realfoo + n + 1;?
This is an assignment operation. The left hand operand, the variable foo, is assigned a new value. The right hand operand realfoo + n + 1 provides that value.
How is it possible to assign an array plus an int?
Because the array decays to a pointer.
The memset sets every value of "realfoo" to -1.
Not quite. All except the last value are set; the last one is left uninitialised.
Note that technically each byte is set to -1. If the system uses one's complement representation of signed integers, then the value of the resulting integer will not be -1 (it would be -16'843'009 assuming a 32 bit integer and 8 bit byte).
How does this affect "foo"?
foo itself is not affected. But foo points to an object that is affected.
Bonus advice: the example is fragile about memory; realfp is only freed if control reaches the delete [] at the end, so any early return or exception leaks. I recommend avoiding owning bare pointers.
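A minimal sketch of the same setup with std::vector (names mirror the question's code; an illustration, not the original):
#include <vector>

std::vector<int> realfp(m + n + 3, -1); // allocated and initialized to -1 in one go
int *fp = realfp.data() + n + 1;        // same offset trick; memory is freed automatically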

Related

Array won't be passed to function C++

The program is stuck in a loop. I checked it with the debugger and found that my array isn't being passed from my main function to the "binarysearch" function, and from there it isn't passed on to the "check" function.
smallestwrh = binarysearch(h, n, contentmax);
won't pass to
long long binarysearch(long long *h1, int n, long long contentmax)
The Visual Studio Debugger tells me that in the "binarysearch" function h1 is: +h1 0x0171c2bc {-3689348814741910324} __int64 *
I also get this stack related error message, but I'm not sure if this would be the cause of my problem: "Function uses '800048' bytes of stack: exceeds /analyze:stacksize '16384'. Consider moving some data to heap."
Full code:
#include <iostream>
#include <fstream>
using namespace std;
ifstream be("buldo.in");
ofstream ki("buldo.out");

bool check(long long *h2, int n, long long H) {
    // returns true if the earth can be flattened down to height H
    int i; long long content = 0;
    for (i = 1; i <= n; i++) {
        if (h2[i] > H)
            content += h2[i] - H;
        else {
            if (content >= H - h2[i])
                content -= H - h2[i];
            else
                return false;
        }
    }
    return true;
}

long long binarysearch(long long *h1, int n, long long contentmax) {
    long long left, right, middle;
    left = 1; right = contentmax; middle = (left + right) / 2;
    while (left < right) {
        if (check(h1, n, middle) == true) {
            left = middle + 1;
        }
        else {
            right = middle;
        }
    }
    return right;
}

int main() {
    int n, i; long long h[100001], H, sum = 0, contentmax, smallestwrh;
    be >> n;
    for (i = 1; i <= n; i++) {
        be >> h[i];
    }
    for (i = 1; i <= n; i++)
        sum += h[i];
    contentmax = (sum / n) + 1; // the largest earth content the bulldozer can hold: the arithmetic mean of the heights
    smallestwrh = binarysearch(h, n, contentmax); // finds the smallest height for which it no longer works
    ki << smallestwrh - 1;
    return 0;
}
Your code is correctly passing the complete array to binarysearch (or at least its start address and length); you only have a debugger display issue.
In main you have h, an array of 100001 long long, so the debugger displays h as an array. When calling binarysearch you trigger array-to-pointer decay: you pass only the address of the array and lose the information that it is an array and how long it is. So in binarysearch the debugger cannot know that the function was called with an array; it must assume that only a pointer to one element was passed. Therefore the debugger shows only the address in h1 and the value stored at *h1 (which is the first element of h).
If you want to see the complete array, you can go to the "Watch" window and add "h1,[n]". This tells the debugger to display h1 as an array containing n elements. Another, less comfortable option is to open the "Memory" window and set the address to h1.
For your case another option might be to avoid the array-to-pointer decay altogether. You could create a std::vector<long long> h; and pass it by reference to long long binarysearch(std::vector<long long>& h1, long long contentmax). Two advantages: a vector can grow dynamically, so you don't have a buffer overflow when the user enters a number n higher than 100001; and a vector always carries data + length together, so you pass it around as one parameter and the length is always correct (vector_variable.size() then replaces n in binarysearch).
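A rough sketch of that signature change (assuming check is adapted to take the vector by reference as well; note that middle must be recomputed on every iteration):
#include <vector>

long long binarysearch(const std::vector<long long> &h1, long long contentmax) {
    long long left = 1, right = contentmax;
    while (left < right) {
        long long middle = (left + right) / 2; // recompute each iteration
        if (check(h1, middle))                 // check takes the vector by reference too
            left = middle + 1;
        else
            right = middle;
    }
    return right;
}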

Big integer numbers & C

I am writing a program which generates big integer numbers, saves them in an array, and does some basic operations such as multiply or add.
I'm really worried about the performance of the actual code and would like tips or improvements to make it faster. Any suggestion is welcome, even if it changes my whole program or data types.
I will add some code below, so that you can see the structures I am using and how I'm trying to deal with these big integer numbers:
unsigned int seed;

void initCharArray(char *L, unsigned N)
{
    for (int i = 0; i < N; i++)
    {
        L[i] = i % 50;
    }
}

char Addition(char *Vin1, char *Vin2, char *Vout, unsigned N)
{
    char CARRY = 0;
    for (int i = 0; i < N; i++)
    {
        char R = Vin1[i] + Vin2[i] + CARRY;
        if (R <= 9)
        {
            Vout[i] = R; CARRY = 0;
        }
        else
        {
            Vout[i] = R - 10; CARRY = 1;
        }
    }
    return CARRY;
}

int main(int argc, char **argv)
{
    int N = 10000;
    char *V1 = new char[N];
    char *V2 = new char[N];
    char *V3 = new char[N];
    initCharArray(V1, N); initCharArray(V2, N);
    Addition(V1, V2, V3, N);
}
Since modern processors are highly efficient when dealing with fixed-bit-length numbers, why don't you use an array of them?
Suppose you use unsigned long long. They should be 64 bits wide, so the largest possible unsigned long long is 2^64 - 1. Let's represent any number as a collection of numbers:
-big_num = ( n_s, n_0, n_1, ...)
-n_s takes only the values 0 and 1, to represent the + and - sign
-n_0 represents a number between 0 and 10^a - 1 (exponent a to be determined)
-n_1 represents a number between 10^a and 10^(a+1) - 1
and so on, and so on ...
DETERMINING a:
All n_ MUST be bounded by 10^a - 1. Thus, when adding two big_num, we need to add the n_ as follows:
// A + B = ( (to be determined later),
// bound(n_A_1 + n_B_1) and carry to next,
// bound(n_A_2 + n_B_2 + carry) and carry to next,
// ...)
The bounding can be done as:
bound(n_A_i + n_B_i + carry) = (n_A_i + n_B_i + carry)%(10^a)
Therefore the carry to i+1 is determined as:
// carry (to be used in i+1) = (n_A_i + n_B_i + carry)/(10^a)
// (division of unsigned in c++ will floor the result by construction)
This tells us that the worst case is carry = 10^a - 1, and thus the worst-case addition (n_A_i + n_B_i + carry) is:
(worst case) (10^a-1) + (10^a-1) + (10^a-1) = 3*(10^a-1)
Since type is unsigned long long if we don't want to have overflow on this addition we must bound our exponent a such that:
// 3*(10^a-1) <= 2^64 - 1, and a an positive integer
// => a <= floor( Log10((2^64 - 1)/3 + 1) )
// => a <= 18
This fixes our maximum possible exponent at a=18, and thus the biggest possible n_ representable with unsigned long long is 10^18 - 1 = 999,999,999,999,999,999. With this basic setup, let's now get to some actual code. For now I will use std::vector to hold the big_num we discussed, but this can change:
// Example code with unsigned long long
#include <cstdlib>
#include <vector>
#include <algorithm> // std::max, used by the wrappers below
//
// FOR NOW BigNum WILL BE REPRESENTED
// BY std::vector. YOU CAN CHANGE THIS LATER
// DEPENDING ON WHAT OPTIMIZATIONS YOU WANT
//
using BigNum = std::vector<unsigned long long>;
// the ULL suffix guarantees the literal is interpreted as unsigned long long
#define MAX_BASE_10 999999999999999999ULL
// randomly generate a big number
void randomize_BigNum(BigNum &a){
    // assuming MyRandom() returns a random number
    // of type unsigned long long
    for(size_t i=1; i<a.size(); i++)
        a[i] = MyRandom()%(MAX_BASE_10+1); // cap the numbers
}
// wrapper functions
void add(const BigNum &a, const BigNum &b, BigNum &c); // c = a + b
void add(const BigNum &a, BigNum &b);                  // b = a + b
// actual work done here
void add_equal_size(const BigNum &a, const BigNum &b, BigNum &c, const size_t &N);
void add_equal_size(const BigNum &a, BigNum &b, const size_t &N);
void blindly_add_one(BigNum &c);
// Missing cases
// void add_unequal_size(const BigNum &a, const BigNum &b, BigNum &c, const size_t &Na, const size_t &Nb);
// void add_unequal_size(const BigNum &a, BigNum &b, const size_t &Na, const size_t &Nb);
int main(){
    size_t n=10;
    BigNum a(n), b(n), c(n);
    randomize_BigNum(a);
    randomize_BigNum(b);
    add(a,b,c);
    return 0;
}
The wrapper functions should look as follows. They safeguard against calls with mismatched array sizes:
// To do: add support for when the sizes of a,b,c are not equal
// c = a + b
void add(const BigNum &a, const BigNum &b, BigNum &c){
    c.resize(std::max(a.size(),b.size()));
    if(a.size()==b.size())
        add_equal_size(a,b,c,a.size());
    // else
    //     To do: add_unequal_size(a,b,c,a.size(),b.size());
    return;
};
// b = a + b
void add(const BigNum &a, BigNum &b){
    if(a.size()==b.size())
        add_equal_size(a,b,a.size());
    else{
        b.resize(a.size());
        // To do: add_unequal_size(a,b,a.size());
    }
    return;
};
The main grunt of the work will be done here (which you can call directly and skip a function call, if you know what you are doing):
// If a,b,c are same-size arrays
// c = a + b
void add_equal_size(const BigNum &a, const BigNum &b, BigNum &c, const size_t &N){
    // start with sign of c is sign of a
    // Specific details follow on whether I need to flip the
    // sign or not
    c[0] = a[0];
    unsigned long long carry=0;
    // DISTINGUISH TWO GRAND CASES:
    //
    //   a and b have the same sign
    //   a and b have opposite sign
    //
    // no need to check which has which sign (details follow)
    //
    if(a[0]==b[0]){ // a and b have the same sign
        //
        // This means either +a+b or -a-b=-(a+b).
        // In both cases I just need to add the two numbers a and b,
        // and I already got the sign of the result c correct from the
        // start
        //
        for(size_t i=1; i<N; i++){
            // compute the full sum first, then split it into digit and carry
            unsigned long long sum = a[i] + b[i] + carry;
            c[i] = sum%(MAX_BASE_10+1);
            carry = sum/(MAX_BASE_10+1);
        }
        if(carry){ // if carry>0 I need to extend the array to fit the final carry
            c.resize(N+1);
            c[N]=carry;
        }
    }
    else{ // a and b have opposite sign
        //
        // If they have opposite sign then I am subtracting the
        // numbers. The following is inspired by how
        // you can subtract two numbers with bitwise operations.
        for(size_t i=1; i<N; i++){
            unsigned long long sum = a[i] + (MAX_BASE_10 - b[i]) + carry;
            c[i] = sum%(MAX_BASE_10+1);
            carry = sum/(MAX_BASE_10+1);
        }
        if(carry){ // I carried? then I got the sign right from the start:
            // just add 1 and I am done
            blindly_add_one(c);
        }
        else{ // I didn't carry? then I got the sign wrong from the start:
            // flip the sign
            c[0] ^= 1ULL;
            // and take the complement
            for(size_t i=1; i<N; i++)
                c[i] = MAX_BASE_10 - c[i];
        }
    }
    return;
};
A few details about the // a and b have opposite sign case follow:
Let's work in base 10 and say we are subtracting a - b. We can convert this to an addition. Name the base-10 digits of a number d1, d2, d3, ...; then any number is n = d1 + 10*d2 + 10*10*d3 + ... The complement of a digit is defined as:
complement(d1) = 9 - d1
Then the complement of a number n is:
complement(n) = complement(d1)
              + 10*complement(d2)
              + 10*10*complement(d3)
              ...
Consider two cases, a>b and a<b:
EXAMPLE OF a>b: let's say a=830 and b=126. Do the following: 830 - 126 -> 830 + complement(126) = 830 + 873 = 1703. OK: so if a>b, I drop the leading 1 and add 1, and the result is 704!
EXAMPLE OF a<b: let's say a=126 and b=830. Do the following: 126 - 830 -> 126 + complement(830) = 126 + 169 = 295 ...? Well, what if I complement it? complement(295) = 704 !!! So if a<b I already have the result... with opposite sign.
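A quick sketch of both cases in code, using three-digit numbers and a 9's-complement helper (my example, matching the arithmetic above):
#include <iostream>

int complement3(int n) { return 999 - n; } // 9's complement of a 3-digit number

int main() {
    // a > b: 830 - 126. We carried into a 4th digit, so drop it and add 1.
    int r1 = 830 + complement3(126);      // 1703
    std::cout << (r1 - 1000) + 1 << '\n'; // 704
    // a < b: 126 - 830. No carry, so complement the result; the sign flips.
    int r2 = 126 + complement3(830);      // 295
    std::cout << complement3(r2) << '\n'; // 704, i.e. the answer is -704
}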
Going to our case: since each number in the array is bounded by MAX_BASE_10, the complement of our numbers is
complement(n) = MAX_BASE_10 - n
So, using this complement to convert subtraction to addition,
I only need to pay attention to whether I carried an extra 1 at
the end of the addition (the a>b case). The algorithm now is:
FOR EACH ARRAY subtraction (ith iteration):
    na_i - nb_i + carry(i-1)
    convert ->          na_i + complement(nb_i) + carry(i-1)
    bound the result -> (na_i + complement(nb_i) + carry(i-1))%(MAX_BASE_10+1)
    find the carry ->   (na_i + complement(nb_i) + carry(i-1))/(MAX_BASE_10+1)
    keep on adding the array numbers...
At the end of the array, if I carried, forget the carry
and add 1. Else take the complement of the result.
This "and add one" is done by yet another function:
// Just add 1, no matter the sign of c
void blindly_add_one(BigNum &c){
    unsigned long long carry=1;
    const size_t N = c.size();
    for(size_t i=1; i<N; i++){
        // add the carry into the digit, then split into digit and carry
        unsigned long long sum = c[i] + carry;
        c[i] = sum%(MAX_BASE_10+1);
        carry = sum/(MAX_BASE_10+1);
    }
    if(carry){ // if carry>0 I need to extend the array to fit the number
        c.resize(N+1);
        c[N]=carry;
    }
};
Good up to here. Specifically, in this code don't forget that at the start of the function we set the sign of c to the sign of a. So if I carry at the end, that means I had |a|>|b| and computed either +a-b>0 or -a+b=-(a-b)<0; in either case setting c's sign to a's sign was correct. If I don't carry, I had |a|<|b| with either +a-b<0 or -a+b=-(a-b)>0; in either case setting c's sign to a's sign was INCORRECT, so I need to flip the sign when I don't carry.
The following function operates the same way as the one above, only rather than c = a + b it does b = a + b:
// same logic as above, only b = a + b
void add_equal_size(const BigNum &a, BigNum &b, const size_t &N){
    unsigned long long carry=0;
    if(a[0]==b[0]){ // a and b have the same sign
        for(size_t i=1; i<N; i++){
            unsigned long long sum = a[i] + b[i] + carry;
            b[i] = sum%(MAX_BASE_10+1);
            carry = sum/(MAX_BASE_10+1);
        }
        if(carry){ // if carry>0 I need to extend the array to fit the number
            b.resize(N+1);
            b[N]=carry;
        }
    }
    else{ // a and b have opposite sign
        b[0] = a[0];
        for(size_t i=1; i<N; i++){
            unsigned long long sum = a[i] + (MAX_BASE_10 - b[i]) + carry;
            b[i] = sum%(MAX_BASE_10+1);
            carry = sum/(MAX_BASE_10+1);
        }
        if(carry){
            blindly_add_one(b);
        }
        else{
            b[0] ^= 1ULL;
            for(size_t i=1; i<N; i++)
                b[i] = MAX_BASE_10 - b[i];
        }
    }
    return;
};
And that is a basic setup for using arrays of unsigned numbers to represent very large integers.
WHERE TO GO FROM HERE
There are many things you can do from here on out to optimise the code; I will mention a few:
-Try to replace the array additions with BLAS calls where possible.
-Make sure you are taking advantage of vectorization. Depending on how you write your loops, you may or may not be generating vectorized code. If your arrays become big you may benefit from this.
-In the same spirit, make sure your arrays are properly aligned in memory to actually take advantage of vectorization. From my understanding, std::vector does not guarantee alignment. Neither does a blind malloc. I think the Boost libraries have a vector version where you can declare a fixed alignment, in which case you can ask for a 64-byte-aligned array for your unsigned long long data. Another option is to have your own class that manages a raw pointer and does aligned allocations with a custom allocator. Borrowing aligned_malloc and aligned_free from https://embeddedartistry.com/blog/2017/02/22/generating-aligned-memory/ you could have a class like this to replace std::vector:
// aligned_malloc and aligned_free from:
// https://embeddedartistry.com/blog/2017/02/22/generating-aligned-memory/
// wrapped in an absolutely minimal class to handle
// memory allocation and freeing
#include <cstddef> // std::size_t
#include <new>     // std::bad_alloc
class BigNum{
private:
    unsigned long long *ptr;
    size_t len; // named len so it doesn't collide with the size() accessor
public:
    BigNum() : ptr(nullptr)
             , len(0)
    {}
    BigNum(const size_t &N) : ptr(nullptr)
                            , len(0)
    {
        resize(N);
    }
    // Defining a destructor deletes the implicit copy and move
    // constructors and assignments. Make your own if you need them.
    ~BigNum(){
        if(ptr) aligned_free(ptr);
    }
    // Access an object in aligned storage
    unsigned long long& operator[](std::size_t pos){
        return ptr[pos];
    }
    const unsigned long long& operator[](std::size_t pos) const{
        return ptr[pos];
    }
    // return my size
    size_t size() const{
        return len;
    }
    // resize the memory allocation, preserving existing contents
    void resize(const size_t &N){
        if(N){
            // 64-byte alignment; N+1 slots: entry 0 always holds the sign of the BigNum
            void* temp = aligned_malloc(64, (N+1)*sizeof(unsigned long long));
            if(temp==nullptr)
                throw std::bad_alloc();
            unsigned long long *fresh = static_cast<unsigned long long*>(temp);
            const size_t keep = (len < N ? len : N);
            for(size_t i=0; i<=keep; i++)
                fresh[i] = ptr ? ptr[i] : 0;
            if(ptr) aligned_free(ptr);
            ptr = fresh;
        }
        else{
            if(ptr) aligned_free(ptr);
            ptr = nullptr;
        }
        len = N;
    }
};

Why do these two functions not behave the same?

For an empty vector, Fun1 returns 0. Function Fun2, which should be equivalent to Fun1 (only one small change, see below), crashes with the error vector subscript out of range. Any ideas why that is?
Code run in Visual Studio 2017
int Fun1(vector<int> service_times) {
    sort(service_times.begin(), service_times.end());
    int sum = 0;
    int sumi = 0;
    int st = service_times.size() - 1; // condition stored in a variable
    for (int i = 0; i < st; i++)
    {
        sumi += service_times[i];
        sum = sum + sumi;
    }
    return sum;
}

int Fun2(vector<int> service_times) {
    sort(service_times.begin(), service_times.end());
    int sum = 0;
    int sumi = 0;
    for (int i = 0; i < (service_times.size() - 1); i++) // condition written directly
    {
        sumi += service_times[i];
        sum = sum + sumi;
    }
    return sum;
}
Since service_times is an empty vector, service_times.size() ought to return 0, no?
No. It returns size_t(0), and size_t is an unsigned type. Therefore service_times.size() - 1 is an unsigned - signed operation, in which the signed value (1) is converted to the unsigned type. So 0 - 1 is actually numeric_limits<size_t>::max().
In the first function you were saved by storing the result in an int variable: it becomes -1 again. Therefore i < st is i < -1, which is immediately false, so the loop body never runs. In the second function, however, the condition is i < <some enormous unsigned value>, so the loop runs and indexes far past the end of the empty vector.
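Two common fixes, as sketches: keep the arithmetic unsigned but avoid the underflow, or do the subtraction in a signed type on purpose:
// Option 1: move the +1 to the other side, so nothing can underflow
for (size_t i = 0; i + 1 < service_times.size(); i++) { /* ... */ }

// Option 2: convert the size to a signed type first, as Fun1 does by accident
for (int i = 0; i < static_cast<int>(service_times.size()) - 1; i++) { /* ... */ }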

Floating point error in C++ code

I am trying to solve a question in which I need to find the number of possible ways to form teams of two members. (Note: a team can have at most two people.)
The code below works properly, but on some test cases it reports a floating point error, and I can't figure out what it is exactly.
Input: 1st line: number of test cases
2nd line: total number of people
Thank you
#include<iostream>
using namespace std;

long C(long n, long r)
{
    long f[n + 1];
    f[0] = 1;
    for (long i = 1; i <= n; i++)
    {
        f[i] = i * f[i - 1];
    }
    return f[n] / f[r] / f[n - r];
}

int main()
{
    long n, r, m, t;
    cin >> t;
    while (t--)
    {
        cin >> n;
        r = 1;
        cout << C(n, min(r, n - r)) + 1 << endl;
    }
    return 0;
}
You aren't doing any floating-point math. The "floating point error" (SIGFPE) is how the system reports an integer division by zero.
When you invoke C(100, 1), the values in the f array grow factorially. Eventually two values are multiplied such that i * f[i-1] wraps to zero due to overflow. That leads to all the subsequent f[i] values being zero, and then the division that follows the loop is a division by zero.
Although purists on these forums will say this is undefined, here's what's really happening on most 2's complement architectures. Or at least on my computer....
At i==21:
f[20] is already equal to 2432902008176640000
21 * 2432902008176640000 overflows 64-bit signed and will typically become -4249290049419214848. So at this point your program is bugged and is now in undefined behavior.
At i==66
f[65] is equal to 0x8000000000000000. So 66 * f[65] gets calculated as zero for reasons that make sense to me, but should be understood as undefined behavior.
With f[66] assigned 0, all subsequent assignments of f[i] become zero as well. After the main loop inside C is over, f[n-r] is zero. Hence, the divide-by-zero error.
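You can reproduce both values with unsigned arithmetic, where wraparound is well defined (a standalone check, not part of the question's code). 65! contains exactly 63 factors of 2, so its low 64 bits are 0x8000000000000000; 66! contains 64 factors of 2, so its low 64 bits are all zero:
#include <cstdio>

int main() {
    unsigned long long f = 1;
    for (int i = 1; i <= 66; i++) {
        f *= i; // wraps modulo 2^64
        if (i >= 65)
            std::printf("%d! mod 2^64 = %#llx\n", i, f);
    }
    // prints 0x8000000000000000 for 65! and 0 for 66!
}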
Update
I went back and reverse engineered your problem. It seems like your C function is just trying to compute this expression:
N!
-------------
R! * (N-R)!
Which is the "number of unique sorted combinations"
In which case, instead of computing the full factorial N!, we can reduce that expression to this:
    n
    ∏ i
 i=n-r+1
--------------------
    R!
This won't eliminate overflow, but it will let your C function take on larger values of N and R before anything overflows.
But we can also take advantage of simple reduction before multiplying anything out.
For example, let's say we were trying to compute C(15,5). Mathematically that is:
15!
--------
10! 5!
Or as we expressed above:
1*2*3*4*5*6*7*8*9*10*11*12*13*14*15
-----------------------------------
1*2*3*4*5*6*7*8*9*10 * 1*2*3*4*5
The first 10 factors of the numerator and denominator cancel each other out:
11*12*13*14*15
-----------------------------------
1*2*3*4*5
But intuitively, you can see that "12" in the numerator is already evenly divisible by denominators 2 and 3. And that 15 in the numerator is evenly divisible by 5 in the denominator. So simple reduction can be applied:
11*2*13*14*3
-----------------------------------
1 * 4
There's even more room for greatest common divisor reduction, but this is a great start.
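If you want to push past exact divisibility, C++17's std::gcd strips every remaining common factor (a sketch, separate from the answer's code below):
#include <numeric> // std::gcd (C++17)
#include <vector>

// Divide all common factors out of the numerator and denominator lists.
void reduce(std::vector<int> &nums, std::vector<int> &dens)
{
    for (int &d : dens)
        for (int &n : nums)
        {
            int g = std::gcd(n, d); // both lists hold positive values here
            n /= g;
            d /= g;
        }
}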
Let's start with a helper function that computes the product of all the values in a list.
#include <iostream>
#include <vector>

long long multiply_vector(std::vector<int>& values)
{
    long long result = 1;
    for (long i : values)
    {
        result = result * i;
        if (result < 0) // crude overflow check: relies on the product going negative
        {
            std::cout << "ERROR - multiply_vector hit overflow" << std::endl;
            return 0;
        }
    }
    return result;
}
Now let's implement C using the above helper, after doing the reduction operation:
long long C(int n, int r)
{
    if ((r >= n) || (n < 0) || (r < 0))
    {
        std::cout << "invalid parameters passed to C" << std::endl;
        return 0;
    }
    // compute
    //      n!
    // -------------
    //  r! * (n-r)!
    //
    // assume (r < n)
    // which maps to
    //      n
    //      ∏ i
    //   i=n-r+1
    // --------------------
    //      r!
    int end = n;
    int start = n - r + 1;
    std::vector<int> numerators;
    std::vector<int> denominators;
    long long numerator = 1;
    long long denominator = 1;
    for (int i = start; i <= end; i++)
    {
        numerators.push_back(i);
    }
    for (int i = 2; i <= r; i++)
    {
        denominators.push_back(i);
    }
    size_t n_length = numerators.size();
    size_t d_length = denominators.size();
    for (size_t ni = 0; ni < n_length; ni++)
    {
        int nval = numerators[ni];
        for (size_t di = 0; di < d_length; di++)
        {
            int dval = denominators[di];
            if ((nval % dval) == 0)
            {
                denominators[di] = 1;
                nval = nval / dval; // keep nval current so later divisors see the reduced value
                numerators[ni] = nval;
            }
        }
    }
    numerator = multiply_vector(numerators);
    denominator = multiply_vector(denominators);
    if ((numerator == 0) || (denominator == 0))
    {
        std::cout << "Giving up. Can't resolve overflow" << std::endl;
        return 0;
    }
    long long result = numerator / denominator;
    return result;
}
You are not using floating point anywhere. And you seem to be using variable-length arrays, which are a C feature and possibly a C++ extension, but not standard C++.
Anyway, you will get overflow and therefore undefined behaviour even for rather small values of n.
In practice the overflow will lead to array elements becoming zero for not much larger values of n.
Your code will then divide by zero and crash.
They also might have a test case like (1000000000, 999999999), which is trivial to solve by hand (C(n, n-1) = n), but not for your code, which I bet will crash.
You don't specify what you mean by "floating point error" - I reckon you are referring to the fact that you are doing an integer division rather than a floating-point one, so you will always get integers rather than floats.
int a, b;
a = 7;
b = 2;
std::cout << a / b << std::endl;
This will result in 3, not 3.5! If you want a floating-point result, you should use floats instead, like this:
float a, b;
a = 7;
b = 2;
std::cout << a / b << std::endl;
So the solution to your problem would simply be to use float instead of long long int.
Note also that you are using variable-length arrays, which won't work in standard C++ - why not use std::vector instead?
Array syntax is:
type name[size]
Note: size must be a constant, not a variable.
Example #1:
int name[10];
Example #2:
const int asize = 10;
int name[asize];
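For the question's code, std::vector gives the same run-time sizing in standard C++ (a sketch; the overflow problem discussed above remains):
#include <vector>

long C(long n, long r)
{
    std::vector<long> f(n + 1); // run-time size, standard C++
    f[0] = 1;
    for (long i = 1; i <= n; i++)
        f[i] = i * f[i - 1];    // still overflows for quite small n
    return f[n] / f[r] / f[n - r];
}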

Undoing a recursion tree

SHORT: How should I reduce (optimize) the number of operations needed in my code?
LONGER: For research, I programmed a set of equations in C++ to output a sequence if it fits the model. At the very core of the code is this function, called MANY times during run-time:
int Weight(int i, int q, int d){
    int j, sum = 0;
    if (i <= 0)
        return 0;
    else if (i == 1)
        return 1;
    for (j = 1; j <= d; j++){
        sum += Weight((i - j), q, d);
    }
    sum = 1 + ((q - 1) * sum);
    return sum;
}
So based on the size of variable d, the size of the index i, and how many times this function is called in the rest of the code, many redundant calculations are done. How should I go about reducing the number of calculations?
Ideally, for example, after Weight(5, 3, 1) is calculated, how would I tell the computer to substitute in its stored value rather than recalculate it when I call Weight(6, 3, 1), given that the function is defined recursively?
Would multidimensional vectors work in this case to store the values? Should I just print the values to a file to be read off? I have yet to encounter an overflow with the input sizes I'm giving it, but would a tail-recursion help optimize it?
Note: I am still learning how to program, and I'm amazed I was even able to get the model right in the first place.
You may use memoization
#include <map>
#include <tuple>

int WeightImpl(int i, int q, int d); // forward declaration

// Use memoization; WeightImpl does the real computation
int Weight(int i, int q, int d){
    static std::map<std::tuple<int, int, int>, int> memo;
    auto it = memo.find(std::make_tuple(i, q, d));
    if (it != memo.end()) {
        return it->second; // already computed: return the cached result
    }
    const int res = WeightImpl(i, q, d);
    memo[std::make_tuple(i, q, d)] = res;
    return res;
}

// Do the real computation
int WeightImpl(int i, int q, int d){
    int j, sum = 0;
    if (i <= 0)
        return 0;
    else if (i == 1)
        return 1;
    for (j = 1; j <= d; j++){
        sum += Weight((i - j), q, d); // call the memoized version to reuse intermediate results
    }
    sum = 1 + ((q - 1) * sum);
    return sum;
}
Note: as the function is recursive, you have to be careful about which version you call, so that every intermediate result really is memoized. The recursive function must call the memoized version of itself, not itself directly. For a non-recursive function, memoization can be added without modifying the real function.
You could use an array to store the intermediate values. For example, for given d and q, have an array whose entry at index i holds the value of Weight(i, q, d).
If you initialize the array entries to -1, inside your function you can then do, for example:
if (sum_array[i - j] != -1) { // value already computed: reuse it
    sum += sum_array[i - j];
}
else {
    sum_array[i - j] = Weight((i - j), q, d); // compute and cache
    sum += sum_array[i - j];
}
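If d and q are fixed for a whole run, you can also fill such a table bottom-up once and avoid recursion entirely. A sketch implementing the same recurrence as the question's Weight (the function name is mine):
#include <vector>

// Weight(i) = 0 for i <= 0, Weight(1) = 1, and otherwise
// Weight(i) = 1 + (q-1) * (Weight(i-1) + ... + Weight(i-d)).
std::vector<int> weight_table(int max_i, int q, int d)
{
    std::vector<int> w(max_i + 1, 0);
    if (max_i >= 1)
        w[1] = 1;
    for (int i = 2; i <= max_i; i++)
    {
        int sum = 0;
        for (int j = 1; j <= d; j++)
            if (i - j >= 1)
                sum += w[i - j]; // terms with i-j <= 0 contribute 0
        w[i] = 1 + (q - 1) * sum;
    }
    return w; // w[i] == Weight(i, q, d)
}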