I have this formula:
(n - 1)! ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
2<=n<=100000
I would like to reduce the result of this formula modulo an arbitrary modulus, but for the moment let's assume it is a constant, MOD = 999999997. Unfortunately I can't just calculate the result and then take the remainder, because I don't have variables larger than 2^64 at my disposal, so the main question is: which factors should I reduce by MOD to get the result % MOD?
Now let's assume that n = 19. The part in brackets is equal to 247.5, and
18! = 6402373705728000.
(6402373705728000 * 247.5) mod 999999997 = 921442488.
Unfortunately, if I reduce 18! first, the result is wrong, because (18!) mod 999999997 = 724935119 and (724935119 * 247.5) mod 999999997 = 421442490.
How to solve this problem?
I think the sum could be broken down. The only tricky part here is that (n - 1)(n - 2)/4 may end in .5, while n(n-1)/2 is always an integer.
S = (n - 1)! * ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
= [(n-1)! * (n*(n-1)/2)] + [(n-1)! * (n-1)(n-2)/4]
= A + B
A is easy to do. With B, if (n - 1)(n - 2) % 4 == 0 then there's nothing else to do either; otherwise you can simplify it to X/2, as (n - 1)(n - 2) is always divisible by 2.
If n = 2, it's trivial; if n > 2 there's always a 2 in the expansion of (n-1)! = 1x2x3x ... x(n-1). In that case, simply calculate (n-1)!/2 = 1x3x4x5x ... x(n-1).
A worked example:
N = 19
MOD = 999999997
--> 18! % MOD = 724935119 (1)
(18!/2) % MOD = 862467558 (2)
n(n-1)/2 = 171 (3)
(n-1)(n-2)/2 = 153 (4)
--> S = (1)*(3) + (2)*(4) = 255921441723
S % MOD = 921442488
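A minimal C++ sketch of this split (the names are mine; it assumes the modulus fits in 32 bits so every intermediate product fits in 64 bits):

#include <cstdint>
#include <iostream>

// (n-1)!/2 modulo mod, computed as 1*3*4*...*(n-1); valid for n > 2
uint64_t half_factorial_mod(uint64_t n, uint64_t mod) {
    uint64_t r = 1;
    for (uint64_t i = 3; i <= n - 1; ++i) r = r * i % mod;
    return r;
}

uint64_t formula_mod(uint64_t n, uint64_t mod) {
    if (n == 2) return 1 % mod;
    uint64_t fact = 1;                                          // (n-1)! % mod
    for (uint64_t i = 2; i <= n - 1; ++i) fact = fact * i % mod;
    uint64_t a = fact * (n * (n - 1) / 2 % mod) % mod;                              // A
    uint64_t b = half_factorial_mod(n, mod) * ((n - 1) * (n - 2) / 2 % mod) % mod;  // B
    return (a + b) % mod;
}

int main() {
    std::cout << formula_mod(19, 999999997) << "\n";            // prints 921442488
}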
On another note, if mod is some prime number, like 1e9+7, you can just apply Fermat's little theorem to calculate the multiplicative inverse, as such:
(a/b) % P = [(a%P) * ((b^(P-2)) % P)] % P (with P prime and b coprime to P)
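For example, a small sketch of that inverse via binary exponentiation (helper names are mine, not from the answer; again assuming the modulus fits in 32 bits):

#include <cstdint>

// (base^exp) % mod by binary exponentiation
uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

// modular inverse of b modulo a prime P, i.e. b^(P-2) % P
uint64_t inverse_mod(uint64_t b, uint64_t P) {
    return pow_mod(b, P - 2, P);
}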
You will have to use 2 mathematical formulas here:
(a + b) mod c == (a mod c + b mod c) mod c
and
(a * b) mod c == (a mod c * b mod c) mod c
But those are only valid for integers. The nice part here is that the formula is always an integer for n >= 2, provided you compute it as:
(((n - 1)! * n * (n - 1))/2) + (((n - 1)! * (n - 1) * (n - 2))/4)
1st part is integer | 2nd part is too
for n == 2, first part boils down to 1 and second is 0
for n > 2 either n or n-1 is even, so the first part is an integer; and again either n-1 or n-2 is even and (n-1)! is also even, so the second part is an integer. As your formula can be rewritten to only use additions and multiplications, it can be computed.
Here is a possible C++ code (beware, unsigned long long is required):
#include <iostream>

template<class T>
class Modop {
    T mod;
public:
    Modop(T mod) : mod(mod) {}
    T add(T a, T b) {
        return ((a % mod) + (b % mod)) % mod;
    }
    T mul(T a, T b) {
        return ((a % mod) * (b % mod)) % mod;
    }
    // (n!/2) % mod, computed as 1 * 3 * 4 * ... * n
    T fact_2(T n) {
        T cr = 1;
        for (T i = 3; i <= n; ++i) {
            cr = mul(cr, i);
        }
        return cr;
    }
};

template<class T>
T formula(T n, T mod) {
    Modop<T> op = mod;
    if (n == 2) {
        return 1;
    }
    T second, first = op.mul(op.fact_2(n - 1), op.mul(n, n - 1));
    if (n % 2 == 0) {
        second = op.mul(op.fact_2(n - 1), op.mul((n - 2) / 2, n - 1));
    }
    else {
        second = op.mul(op.fact_2(n - 1), op.mul(n - 2, (n - 1) / 2));
    }
    return op.add(first, second);
}

int main() {
    std::cout << formula(19ull, 999999997ull) << std::endl;
    return 0;
}
First of all, for n = 2 we can say that the result is 1.
Then, the expression is equal to: (n*(n-1)(n-1)!)/2 + (((n-1)(n-2)/2)^2)*(n-3)! .
Lemma: of every two consecutive integers, one is even.
By the lemma we can see that n*(n-1) is even and also that (n-1)*(n-2) is even. So we know that the answer is an integer.
First we calculate (n*(n-1)*(n-1)!)/2 modulo MOD. We can calculate (n*(n-1))/2, which fits in a long long variable, call it x, and reduce it modulo MOD:
x = (n*(n-1))/2;
x %= MOD;
After that, for i going from n-1 down to 1, we do:
x = (x*i)%MOD;
We know that both x and i are less than MOD, so the result of the multiplication fits in a long long variable.
And likewise we do the same for (((n-1)(n-2)/2)^2)*(n-3)!.
We calculate ((n-1)*(n-2))/2, which fits in a long long variable, call it y, and reduce it modulo MOD:
y = ((n-1)*(n-2))/2;
y %= MOD;
After that we replace y with (y*y)%MOD, because we know that y is less than MOD and y*y fits in a long long variable:
y = (y*y)%MOD;
Then, as before, for i going from n-3 down to 1, we do:
y = (y*i)%MOD;
And finally the answer is (x+y)%MOD
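Put together as a compact C++ sketch (my own rendering of the steps above, with n and MOD hard-coded to the earlier example):

#include <iostream>

int main() {
    const long long MOD = 999999997;
    long long n = 19;

    long long x = n * (n - 1) / 2 % MOD;          // (n(n-1)/2) % MOD
    for (long long i = n - 1; i >= 1; --i)        // multiply by (n-1)!
        x = x * i % MOD;

    long long y = (n - 1) * (n - 2) / 2 % MOD;    // ((n-1)(n-2)/2) % MOD
    y = y * y % MOD;                              // square it
    for (long long i = n - 3; i >= 1; --i)        // multiply by (n-3)!
        y = y * i % MOD;

    std::cout << (x + y) % MOD << "\n";           // prints 921442488
}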
I am trying to create a program that will find the smallest integer greater than one that I can multiply a float by to obtain an integer. It should output both the multiplier and the resulting product. For instance, if the user enters 2.8 it should return 5 and 14. Here is my code, but it executes for a long time and then outputs 2 and 5.6. What did I do wrong?
#include <iostream>
#include <string>
int main()
{
    float a;
    int m = 2;
    float c = 0.00;
    std::cin >> a;
    for (int i = 0; i < 999999999; i++) {
        c = a * m;
        if (c == (int)c) {
            break;
        }
        else {
            m + 1;
        }
    }
    std::cout << "The lowest number to multiply by is " + std::to_string(m) + " and it equals " + std::to_string(c);
}
The brute force approach you're taking will take a long time, even after you've corrected the bugs. And because of floating point inaccuracies it might not deliver correct results anyway. Here's a better algorithm.
Read the number as a string instead of as a float.
Start a denominator at 1. Count the number of digits to the right of the decimal point, and for each digit multiply the denominator by 10.
Remove the decimal point from the string and convert it to an integer; this is your numerator. Your example of 2.8 is equal to 28/10.
Take the GCD of the numerator and denominator and divide both by that number. For your example, the GCD of 28 and 10 is 2, so your fraction is now 14/5.
The simplified denominator is your answer, and the numerator is the result when you multiply.
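A rough C++ sketch of those steps (my own; the parsing is simplified and assumes a plain decimal input like 2.8):

#include <iostream>
#include <numeric>   // std::gcd
#include <string>

int main() {
    std::string s;
    std::cin >> s;                          // e.g. "2.8"

    long long den = 1;
    std::string digits;
    bool afterPoint = false;
    for (char ch : s) {
        if (ch == '.') { afterPoint = true; continue; }
        digits += ch;
        if (afterPoint) den *= 10;          // one power of 10 per fractional digit
    }
    long long num = std::stoll(digits);     // "28" -> 28, so 2.8 == 28/10

    long long g = std::gcd(num, den);       // reduce the fraction
    num /= g;
    den /= g;

    std::cout << "Multiply by " << den << " to get " << num << "\n";
}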
I had the same question as you and I searched on the Internet. Most answers are complicated, if not inefficient, like counting from denominator = 1 to infinity.
The algorithm below is written in Java, yet it can easily be ported to C++.
The algorithm is as follows:
public static String toFrac(double value){
    long num = 1, den = 1, tem;
    ArrayList<Long> gamma = new ArrayList<Long>();
    double temp = value;
    while(Math.abs(1.*num/den - value) > 1e-13){
        /*1e-13 is the error allowance,
          as double format has a precision to around 1e-15 for integral values,
          and this value should be changed where appropriate,
          especially that float values only has a precision to 1e-6 to 1e-7
          for integral values.*/
        gamma.add((long)temp);
        temp = 1/(temp - (long) temp);
        den = 1;
        num = gamma.get(gamma.size()-1);
        for(int i = gamma.size()-2; i >= 0; i--){
            tem = num;
            num = gamma.get(i)*tem+den;
            den = tem;
        }
    }
    return num+"/"+den;
}
Basically, within the loop, we compute the continued fraction of the float.
This can be done by:
*In the above code, double means double precision, a format more accurate than float; the algorithm can be applied to float easily. Similarly, long is the equivalent of int but with a larger range.
Algorithm
1. temp = your_float
2. Find floor(temp) and store it into a list gamma[]. (In Java, ArrayList<Long> is used for its convenience in adding new elements; you may use int[] instead.)
3. Subtract off this integral part, then store the reciprocal 1/(decimalPart(temp)) back into temp.
4. Repeat steps 2 - 3 to generate the list of gamma.
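For reference, here is a rough C++ port of the same routine (my sketch, keeping the variable names and the 1e-13 tolerance from above):

#include <cmath>
#include <string>
#include <vector>

// Continued-fraction based conversion of a floating point value to a fraction.
std::string toFrac(double value) {
    long long num = 1, den = 1;
    std::vector<long long> gamma;
    double temp = value;
    while (std::fabs(1.0 * num / den - value) > 1e-13) {
        gamma.push_back((long long)temp);          // next continued-fraction term
        temp = 1 / (temp - (long long)temp);       // recurse into the fractional part
        // rebuild the convergent num/den from the terms collected so far
        den = 1;
        num = gamma.back();
        for (int i = (int)gamma.size() - 2; i >= 0; --i) {
            long long tem = num;
            num = gamma[i] * tem + den;
            den = tem;
        }
    }
    return std::to_string(num) + "/" + std::to_string(den);
}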
Sample Output
The intermediate results are as follows (debugging statements not shown in the code):
input: 891/3178
= 0.2803650094398993
input: toFrac 0.2803650094398993
=
0,
val: 0.0
exp: 0/1
0,3,
val: 0.3333333333333333
exp: 1/3
0,3,1,
val: 0.25
exp: 1/4
0,3,1,1,
val: 0.2857142857142857
exp: 2/7
0,3,1,1,3,
val: 0.28
exp: 7/25
0,3,1,1,3,4,
val: 0.2803738317757009
exp: 30/107
0,3,1,1,3,4,9,
val: 0.28036437246963564
exp: 277/988
0,3,1,1,3,4,9,1,
val: 0.28036529680365296
exp: 307/1095
0,3,1,1,3,4,9,1,1,
val: 0.2803648583773404
exp: 584/2083
0,3,1,1,3,4,9,1,1,1,
val: 0.2803650094398993
exp: 891/3178
output "891/3178"
Indeed, this algorithm doesn't take many iterations. This should be an efficient yet clean algorithm.
Working Principle
EXAMPLE: your_float = 0.2803650094398993
Iteration 1: = 0 + 1/ 3.566778900112234
Iteration 2: = 0 + 1/ (3 + 1/ 1.7643564356435628)
Iteration 3: = 0 + 1/ (3 + 1/ (1 + 1/ 1.3082901554404172))
Iteration 4: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ 3.2436974789915682)))
Iteration 5: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ 4.103448275862547))))
Iteration 6: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ 9.666666666621998)))))
Iteration 7: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 1/ 1.5000000001005045))))))
Iteration 8: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 1/ (1 + 1/ 1.999999999597982)))))))
Iteration 9: = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 1/ (1 + 1/ (1 + 1/ 1.000000000402018))))))))
Answer = 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 1/ (1 + 1/ (1 + 1))))))))
= 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 1/ (1 + 1/ 2)))))))
= 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 1/ (9 + 2/ 3))))))
= 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 1/ (4 + 3/ 29)))))
= 0 + 1/ (3 + 1/ (1 + 1/ (1 + 1/ (3 + 29/ 119))))
= 0 + 1/ (3 + 1/ (1 + 1/ (1 + 119/ 386)))
= 0 + 1/ (3 + 1/ (1 + 386/ 505))
= 0 + 1/ (3 + 505/ 891)
= 0 + 891/ 3178
Although this algorithm might not be the most efficient, it is at least way faster than brute force.
Here is what you are looking for:
Your Example:
input: toFrac 2.8
output: = 14/5
Back to the root problem: you said you are trying to create a program that will find the smallest integer greater than one that you can multiply a float by and obtain an integer, so the algorithm above outputs 14/5, which gives exactly the two values you are seeking: 2.8*5 = 14.
Generally, when the algorithm above outputs num = NUM_OUT and den = DEN_OUT, then your_float*DEN_OUT = NUM_OUT, with a maximum error allowance chosen by you.
Hope this is what you're looking for. :)
I am trying to analyze the Time Complexity of a recursive algorithm that solves the Generate all sequences of bits within Hamming distance t problem. The algorithm is this:
// str is the bitstring, i the current length, and changesLeft the
// desired Hamming distance (see linked question for more)
void magic(char* str, int i, int changesLeft) {
    if (changesLeft == 0) {
        // assume that this is constant
        printf("%s\n", str);
        return;
    }
    if (i < 0) return;
    // flip current bit
    str[i] = str[i] == '0' ? '1' : '0';
    magic(str, i-1, changesLeft-1);
    // or don't flip it (flip it again to undo)
    str[i] = str[i] == '0' ? '1' : '0';
    magic(str, i-1, changesLeft);
}
What is the time complexity of this algorithm?
I found myself pretty rusty when it comes to this, and here is my attempt, which I feel is nowhere near the truth:
t(0) = 1
t(n) = 2t(n - 1) + c
t(n) = t(n - 1) + c
= t(n - 2) + c + c
= ...
= (n - 1) * c + 1
~= O(n)
where n is the length of the bit string.
Related questions: 1, 2.
It's exponential:
t(0) = 1
t(n) = 2 t(n - 1) + c
t(n) = 2 (2 t(n - 2) + c) + c = 4 t (n - 2) + 3 c
= 2 (2 (2 t(n - 3) + c) + c) + c = 8 t (n - 3) + 7 c
= ...
= 2^i t(n-i) + (2^i - 1) c [at any step i]
= ...
= 2^n t(0) + (2^n - 1) c = 2^n + (2^n - 1) c
~= O(2^n)
Or, using WolframAlpha: https://www.wolframalpha.com/input/?i=t(0)%3D1,+t(n)%3D2+t(n-1)+%2B+c
The reason it's exponential is that your recursive calls are reducing the problem size by 1, but you're making two recursive calls. Your recursive calls are forming a binary tree.
There are 3 numbers: T, N, M. 1 ≤ T, M ≤ 10^9, 1 ≤ N ≤ 10^18 .
What is asked in the problem is to compute [Σ(T^i)] mod M, where i varies from 0 to N. Obviously, O(N) or O(M) solutions wouldn't work because of the 1 second time limit. How should I proceed?
As pointed out in previous answers, you may use the formula for the geometric progression sum. However there is a small problem - if m is not prime, computing (T^n - 1) / (T - 1) cannot be done directly, since the division is not a well-defined operation modulo m. In fact there is a solution that can handle even non-prime moduli, with complexity O(log(n) * log(n)). The approach is similar to binary exponentiation. Here is my code written in C++ (note that my solution uses binary exponentiation internally):
typedef long long ll;

ll binary_exponent(ll x, ll y, ll mod) {
    ll res = 1;
    ll p = x;
    while (y) {
        if (y % 2) {
            res = (res * p) % mod;
        }
        p = (p * p) % mod;
        y /= 2;
    }
    return res;
}

ll gp_sum(ll a, ll n, ll mod) {     // n as long long so values up to 10^18 fit
    ll A = 1;
    int num = 0;
    ll res = 0;
    ll degree = 1;
    while (n) {
        if (n & (1LL << num)) {
            n &= ~(1LL << num);
            res = (res + (A * binary_exponent(a, n, mod)) % mod) % mod;
        }
        A = (A + (A * binary_exponent(a, degree, mod)) % mod) % mod;
        degree *= 2;
        num++;
    }
    return res;
}
In this solution A stores consecutively the values 1, 1 + a, 1 + a + a^2 + a^3, ..., 1 + a + a^2 + ... + a^(2^k - 1) - doubling in length at every step.
Also, just like in binary exponentiation, when I want to compute the sum of n powers of a, I split n into a sum of powers of two (essentially using the binary representation of n). Then, having the above sequence of values for A, I choose the appropriate lengths (the ones that correspond to the 1 bits of the binary representation of n) and multiply each by an appropriate power of a, accumulating the result in res. Computing the values of A takes O(log(n)) time, and for each value I may have to compute a power of a, which costs another O(log(n)) - thus overall we have O(log(n) * log(n)).
Let's take an example - we want to compute 1 + a + a^2 .... + a ^ 10. In this case, we call gp_sum(a, 11, mod).
On the first iteration n & (1 << 0) is not zero as the first bit of 11(1011(2)) is 1. Thus I turn off this bit setting n to 10 and I accumulate in res: 0 + 1 * (a ^ (10)) = a^10. A is now a + 1.
The second bit is also set in 10 (1010(2)), so now n becomes 8 and res is a^10 + (a + 1)*(a^8) = a^10 + a^9 + a^8. A is now 1 + a + a^2 + a^3.
Next bit is 0, thus res stays the same, but A will become 1 + a + a^2 + ... a^7.
On the last iteration the bit is 1 so we have:
res = a^10 + a^9 + a^8 + a^0 *(1 + a + a^2 + ... +a^7) = 1 + a .... + a ^10.
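For the original problem (the sum of T^i for i from 0 to N, i.e. N + 1 terms), the call would then simply be (my usage note, following the convention above):

ll answer = gp_sum(T % M, N + 1, M);   // (T^0 + T^1 + ... + T^N) % M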
One can use an algorithm which is similar to binary exponentiation:
#include <utility>
using std::pair;

// Returns a pair <t^n mod m, sum of t^0..t^n mod m>.
// I assume long long is big enough to hold all intermediate values without overflowing.
pair<long long, long long> calc(long long t, long long n, long long m) {
    if (n == 0)                         // Base case. t^0 is always 1.
        return {1 % m, 1 % m};
    if (n % 2 == 1) {
        // We just compute the result for n - 1 and then add t^n.
        auto [prevPow, prevSum] = calc(t, n - 1, m);
        long long curPow = prevPow * t % m;
        long long curSum = (prevSum + curPow) % m;
        return {curPow, curSum};
    }
    // If n is even, we compute the sum for the first half.
    auto [halfPow, halfSum] = calc(t, n / 2, m);
    long long curPow = halfPow * halfPow % m;               // t^n = (t^(n/2))^2
    // t^0..t^n = (t^0..t^(n/2)) + t^(n/2) * (t^1..t^(n/2)),
    // so subtract 1 from halfSum to avoid counting t^(n/2) twice.
    long long curSum = (halfSum + halfPow * ((halfSum - 1 + m) % m)) % m;
    return {curPow, curSum};
}
The time complexity is O(log n)(the analysis is the same as for the binary exponentiation algorithm). Why is it better than a closed form formula for geometric progression? The latter involves division by (t - 1). But it is not guaranteed that there is an inverse of t - 1 mod m.
you can use this:
a^1 + a^2 + ... + a^n = a(1-a^n) / (1-a)
so, you just need to calc:
a * (1 - a^n) / (1 - a) mod M
and you can find an O(log N) way to calculate a^n mod M
It's a geometric series whose sum is equal to (T^(n+1) - 1) / (T - 1).
I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos taylor series expansion. This is the series
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! ...
and the function signature looks as follows
float cos(int n , float x)
where n represents how many terms of the series the user would like to calculate, and x represents the value of x in the cos function
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help me get started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write :
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos returns the following sum (the dots stand for the terms in between):
cos(n, x) = 1 - x^2/((2n-1)2n) + x^4/((2n-1)2n(2n+1)(2n+2)) - ... = Σ_{k=0}^{MAX-n+1} (-1)^k * (2n-2)!/(2(n+k)-2)! * x^(2k)
You can see that this is true for n>MAX, n=MAX, and so on. The sign alternating and powers of x are easy to see.
Finally, at n=1 you get 0! = 1, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By developing (easier to see when it has few terms), you can see the first formula is equivalent to the following recurrence:
cos(n, x) = 1 - x^2/((2n-1)2n) * cos(n+1, x), with cos(n, x) = 1 for n > MAX
For n > 0, going from cos(n, x) to cos(n-1, x) divides the previous result by (2n-3)(2n-2) and multiplies it by x^2. You can see that when n = MAX+1 this formula is 1, with n = MAX it is 1 - x^2/((2MAX-1)2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so :
float cos(int n, float x) { return cos_helper(1, x, n); }
Edit : To reverse the meaning of n from degree of the evaluated term (as in this answer so far) to number of terms (as in the question, and below), but still not recompute the total factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and try to achieve generally cos(n,x) the sum of the n first terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) from cos(n-1,x):
cos(n,x) = cos(n-1,x) ± x^(2n-2) / (2n-2)!
now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add):
cos(n,x) = cos(n-1,x) ± x^2 / ((2n-3)(2n-2)) * ( x^(2n-4) / (2n-4)! )
By combining this formula with the previous one (adapted to n-1 instead of n), and noting that the new term always has the opposite sign of the last term of cos(n-1,x):
cos(n,x) = cos(n-1,x) - x^2 / ((2n-3)(2n-2)) * ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without helper function, without recomputing the factorial, and with n the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly :
performance wise, unless some optimization allows to not re-evaluate a cos(n-1,x) that was evaluated at the previous step as cos( (n-1) - 1, x)
precision wise, because of cancellation effects: the precision with which we get x^(2n-2) / (2n-2)! is very bad
Now this disclaimer is in place, here comes the code :
float cos(int n, float x)
{
    if (n < 2)
        return n;
    float c = x * x / ((2 * n - 3) * (2 * n - 2));
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
       = 1 - x^2/2 * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
       = 1 - x^2/2 * {1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8))}
       = 1 - x^2/2 * [1 - x^2/(3*4) * {1 - x^2/(5*6) * (1 - x^2/(7*8))}]
// n is the largest denominator factor used, e.g. n = 8 reproduces the bracketed expansion above
double cos_series_recursion(double x, int n, double r=1){
    if(n>0){
        r=1-((x*x*r)/(n*(n-1)));
        return cos_series_recursion(x,n-2,r);
    }else return r;
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    static double p = 1, f = 1;
    double r;
    if(n == 0) {
        p = 1; f = 1;   // reset the statics so the function can be called again
        return 1;
    }
    r = cos(x, n-1);
    p = (p*x)*x;
    f = f*(2*n-1)*2*n;
    if(n%2==0) {
        return r+p/f;
    } else {
        return r-p/f;
    }
}
Notice that I'm multiplying 2*n in the operation to get the next factorial.
Having n align to the factorial we need makes this easy to do in 2 operations: f = f * (n - 1) then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion you could use something like the code below
#include <iostream>
#include <cmath>    // for pow
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n*fact(n-1);   // note: int overflows past 12!, so keep 2*n <= 12
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n-1, x) + (n%2 ? -1 : 1) * pow(x, 2*n) / fact(2*n);
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum:
cos(x) ≈ Σ (-1)^l * x^(2l) / (2l)!   (l running from 0 up to some maximum)
The parameter n in float cos(int n, float x) is the l here, and now just do it...
Some pseudocode:
float cos(int n , float x)
{
    //the sum-part
    float sum = pow(-1, n) * (pow(x, 2*n))/faculty(2*n);
    if(n <= /*Some predefined maximum*/)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So, that's one good candidate for extra argument for the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
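As an illustration of that idea, here is a small sketch (my own, not the exam's expected answer): the auxiliary function carries the running index and the current term, so each term is obtained from the previous one without recomputing factorials.

// k is the current term index; term holds (-1)^k * x^(2k) / (2k)!
float cos_aux(int k, int n, float x, float term) {
    if (k >= n) return 0;                                   // n terms have been summed
    float next = -term * x * x / ((2 * k + 1) * (2 * k + 2));
    return term + cos_aux(k + 1, n, x, next);
}

// matches the signature from the question: sums the first n terms of the series
float cos(int n, float x) {
    return cos_aux(0, n, x, 1.0f);
}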