sympy how to simplify expressions with a variable as exponent - sympy

I have an expression like this one:
4^k/(12^k*A+4^k*B)
Python code is:
import sympy as sy
k,A,B=sy.symbols('k A B',real=True)
C=sy.Rational(4)**k/(sy.Rational(12)**k*A+sy.Rational(4)**k*B)
Is there any SymPy function able to simplify the expression? Something like radcan in MAXIMA.
I tried sy.simplify(C), sy.factor(C), sy.powsimp(C), sy.radsimp(C), sy.expand(C) without success.

We need to force the factorisation of the integers, which we can do with factorint:
In [46]: C
Out[46]: 4**k/(12**k*A + 4**k*B)

In [47]: C.replace(lambda x: x.is_Integer, lambda x: Mul(*factorint(x, multiple=True), evaluate=False))
Out[47]: (2*2)**k/(A*(2*2*3)**k + B*(2*2)**k)

In [48]: expand(_)
Out[48]: 2**(2*k)/(2**(2*k)*3**k*A + 2**(2*k)*B)

In [49]: cancel(_)
Out[49]: 1/(3**k*A + B)
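For reference, the same steps as a single runnable script (a minimal sketch assembled from the session above; the names factored and result are mine):

import sympy as sy
from sympy import Mul, factorint, expand, cancel

k, A, B = sy.symbols('k A B', real=True)
C = sy.Rational(4)**k / (sy.Rational(12)**k*A + sy.Rational(4)**k*B)

# Rewrite every integer as an unevaluated product of its prime factors
factored = C.replace(lambda x: x.is_Integer,
                     lambda x: Mul(*factorint(x, multiple=True), evaluate=False))

# Distribute the exponents over the factors, then cancel the common power of 2
result = cancel(expand(factored))
print(result)  # 1/(3**k*A + B)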

Related

How can I create such a list in Haskell using a list comprehension

So I need to create such a list:
[2,4,5,8,9,10,11,16,17,18,19,20,21,22,23,32 ..]
The pattern goes as follows:
2^1, 2^2, 2^2+1, 2^3, 2^3+1, 2^3+2, 2^3+3, ... So the length of each run (2^n, 2^n+1, 2^n+2, ...) doubles each time. I hope you get the point.
I can create such a list using functions in Haskell, but I was interested in whether or not it is possible to do it using solely a list comprehension.
EDIT: Some people asked me to demonstrate a functional approach to this problem. Here it is
rep _ 0 = []
rep a b = a : rep (a+1) (b-1)
createlist a = rep (2^(a+1)) (2^a) ++ createlist (a+1)
So if we say `take 50 (createlist 0)` the results would be
[2,4,5,8,9,10,11,16,17,18,19,20,21,22,23,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82]
So you always need to call the function with initial parameter 0. It is really a nasty solution; I would like to make it easier.
Based on your example, the list looks like:
2
4 5
8 9 10 11
16 17 18 19 20 21 22 23
32 33 34 35 36 37 38 39 40 41 ...
So for every i from 1 to infinity, we yield the elements in the range [2^i, 2^i + 2^(i-1)). We can write this directly as a list comprehension:
[ j | i <- [1..], j <- [2^i .. 2^i + 2^(i-1) - 1] ]
We can also let i take powers of two, and yield elements between i, and div (3*i) 2 (exclusive), so:
[ j | i <- iterate (2*) 2, j <- [i .. div (3*i) 2 - 1] ]
We can turn that also into a list monad, like:
iterate (*2) 2 >>= \i -> [i..div (3*i) 2 - 1]
or more point-free (and point-less):
import Control.Monad(ap)
iterate (*2) 2 >>= ap enumFromTo (pred . flip div 2 . (3 *))
One could try to write the ith term of the list using a function f(i), where i >= 0
The overall infinite list can be represented as
L_0 ++ L_1 ++ L_2 ++ ...
where each L_n is a finite list of the form
L_n = [ 2^(n+1), 2^(n+1) + 1, ..., 2^(n+1) + (2^n - 1) ]
The size of L_n is 2^n and we know that for any k, 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1 (it's a geometric progression) so if we're asked to find which finite list the ith term of the infinite list is in, we can find the highest integer m for which i >= 2^m - 1. Once that's done, we can safely say the ith term is in L_m. We can also say that the ith term of the infinite list is the (i - 2^m + 1)th element of L_m.
This allows us to define the final sequence (let's call it thatList) as
thatList :: [Int]
thatList = [ f i | i <- [0..] ]
and
f :: Int -> Int
f i = (2 ^ (m + 1)) + (i - (2 ^ m) + 1)
  where
    m = floor (logBase 2 (fromIntegral i + 1))
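As a quick sanity check of the closed form (sketched in Python only because it is easy to run interactively; f mirrors the Haskell definition above):

from math import floor, log2

def f(i):
    # mirrors the Haskell version: m = floor (logBase 2 (fromIntegral i + 1))
    m = floor(log2(i + 1))
    return 2 ** (m + 1) + i - 2 ** m + 1

print([f(i) for i in range(16)])
# [2, 4, 5, 8, 9, 10, 11, 16, 17, 18, 19, 20, 21, 22, 23, 32]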

How should I go about solving this recursion without trial and error

int sum_down(int x)
{
    if (x >= 0)
    {
        x = x - 1;
        int y = x + sum_down(x);
        return y + sum_down(x);
    }
    else
    {
        return 1;
    }
}
What is the smallest integer value of the parameter x so that the returned value is greater than 1,000,000?
Right now I am just doing it by trial and error, and since this question is asked on paper, I don't think I will have enough time for trial and error. The question is: how do you visualise this quickly so that it can be solved easily? I am new to programming, so thanks in advance!
The recursion logic:
x = x - 1;
int y = x + sum_down(x);
return y + sum_down(x);
can be simplified to:
x = x - 1;
int y = x + sum_down(x) + sum_down(x);
return y;
which can be simplified to:
int y = (x-1) + sum_down(x-1) + sum_down(x-1);
return y;
which can be simplified to:
return (x-1) + 2*sum_down(x-1);
Put in mathematical form,
f(N) = (N-1) + 2*f(N-1)
with the recursion terminating when N is -1. f(-1) = 1.
Hence,
f(0) = -1 + 2*1 = 1
f(1) = 0 + 2*1 = 2
f(2) = 1 + 2*2 = 5
...
f(18) = 17 + 2*f(17) = 524269
f(19) = 18 + 2*524269 = 1048556
Your program can be written this way (sorry about c#):
public static void Main()
{
    int i = 0;
    int j = 0;
    do
    {
        i++;
        j = sum_down(i);
        Console.Out.WriteLine("j:" + j);
    } while (j < 1000000);
    Console.Out.WriteLine("i:" + i);
}

static int sum_down(int x)
{
    if (x >= 0)
    {
        return x - 1 + 2 * sum_down(x - 1);
    }
    else
    {
        return 1;
    }
}
So at the first iteration you'll get 2, then 5, then 12... So you can neglect the x-1 part, since it stays small compared to the multiplication.
So we have:
i = 1 => sum_down ~= 4 (real is 2)
i = 2 => sum_down ~= 8 (real is 5)
i = 3 => sum_down ~= 16 (real is 12)
i = 4 => sum_down ~= 32 (real is 27)
i = 5 => sum_down ~= 64 (real is 58)
So we can say that sum_down(x) ~= 2^(x+1). Then it's just basic math: the smallest x with 2^(x+1) > 1,000,000 is x = 19.
A bit late, but it's not that hard to get an exact non-recursive formula.
Write it up mathematically, as explained in other answers already:
f(-1) = 1
f(x) = 2*f(x-1) + x-1
This is the same as
f(-1) = 1
f(x+1) = 2*f(x) + x
(just switched from x and x-1 to x+1 and x, difference 1 in both cases)
The first few x and f(x) are:
x:     -1   0   1   2   3   4
f(x):   1   1   2   5  12  27
And while there are many arbitrarily complicated ways to transform this into a non-recursive formula, with easy ones it often helps to write down the difference between each two consecutive elements:
x:     -1   0   1   2   3   4
f(x):   1   1   2   5  12  27
diff:        0   1   3   7  15
So, for some x
f(x+1) - f(x) = 2^(x+1) - 1
f(x+2) - f(x) = (f(x+2) - f(x+1)) + (f(x+1) - f(x)) = 2^(x+2) + 2^(x+1) - 2
f(x+n) - f(x) = sum[0<=i<n](2^(x+1+i)) - n
With, e.g., x=0 inserted, to turn f(x+n) into f(n):
f(x+n) - f(x) = sum[0<=i<n](2^(x+1+i)) - n
f(0+n) - f(0) = sum[0<=i<n](2^(0+1+i)) - n
f(n) - 1 = sum[0<=i<n](2^(i+1)) - n
f(n) = sum[0<=i<n](2^(i+1)) - n + 1
f(n) = sum[0<i<=n](2^i) - n + 1
f(n) = (2^(n+1) - 2) - n + 1
f(n) = 2^(n+1) - n - 1
No recursion anymore.
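A quick sketch (mine, in Python) checking the closed form against the original recursion and reading off the answer to the question:

def f_rec(x):
    # the recursion from the question, rewritten as in the answers above
    return 1 if x < 0 else (x - 1) + 2 * f_rec(x - 1)

def f_closed(n):
    return 2 ** (n + 1) - n - 1

assert all(f_rec(n) == f_closed(n) for n in range(-1, 20))
print(min(n for n in range(30) if f_closed(n) > 1000000))  # 19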
How about this:
int x = 0;
while (sum_down(x) <= 1000000)
{
    x++;
}
The loop increments x until the result of sum_down(x) is greater than 1,000,000.
Edit: The result would be 19.
While trying to understand and simplify the recursion logic behind the sum_down() function is enlightening and informative, this snippet tends to be pragmatic in that it does not try to solve the problem in terms of context, but in terms of results.
Two lines of Python code to answer your question:
>>> from itertools import *  # needed for dropwhile() and count()
Define the recursive function (See R Sahu's answer)
>>> f = lambda x: 1 if x<0 else (x-1) + 2*f(x-1)
Then use the dropwhile() function to remove elements from the list [0, 1, 2, 3, ....] for which f(x)<=1000000, resulting in a list of integers for which f(x) > 1000000. Note: count() returns an infinite "list" of [0, 1, 2, ....]
The dropwhile() function returns a Python generator so we use next() to get the first value of the list:
>>> next(dropwhile(lambda x: f(x)<=1000000, count()))
19

sas generate 5 digit id code that first 3 must be letters and last 2 numbers

How can I generate in SAS an ID code with 5 characters (letters & numbers), where the first 3 must be letters and the last 2 must be numbers?
You can create a unique mapping of the integers from 0 to 26^3 * 10^2 - 1 to a string of the format AAA00. This wikipedia page introduces the concept of different numerical bases quite well.
Your map would look something like this
value = 100 * (X * 26^2 + Y * 26^1 + Z * 26^0) + a * 10^1 + b * 10^0
where X, Y & Z are integers between 0 and 25 (which can be represented as the letters of the alphabet), and a & b are integers between 0 and 9.
As an example:
47416 = 100 * (0 * 26^2 + 18 * 26^1 + 6 * 26^0) + 1 * 10^1 + 6 * 10^0
Using:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
You get:
47416 -> [0]  [18]  [6]  (1)  (6)
          A    S     G    1    6
So 47416 can be represented as ASG16.
To do this programmatically you will need to step through your number, splitting it into quotient and remainder through division by your bases (10 and 26), storing the remainder as part of your output and using the quotient for the next iteration.
You will probably want to use these functions:
mod() - modulo function to get the remainder from division
floor() - flooring function which returns the rounded-down integer part of a real number
A couple of similar (but slightly simpler) examples to get you started can be found here.
Have a go, and if you get stuck post a new question. You will probably get the best response from SO if you provide a detailed question, code showing your progress, a description of where and why you are stuck, any errors or warnings you are getting and some sample data.
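To illustrate the quotient/remainder logic, here is a rough sketch in Python rather than SAS (the helper names encode and decode are mine, not SAS functions):

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(value):
    # peel off the two decimal digits, then the three base-26 "letter" digits
    value, b = divmod(value, 10)
    value, a = divmod(value, 10)
    value, z = divmod(value, 26)
    x, y = divmod(value, 26)
    return ALPHA[x] + ALPHA[y] + ALPHA[z] + str(a) + str(b)

def decode(code):
    x, y, z = (ALPHA.index(c) for c in code[:3])
    return 100 * (x * 26**2 + y * 26 + z) + int(code[3:])

print(encode(47416))    # ASG16
print(decode("ASG16"))  # 47416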

Histogram of the distribution of dice rolls

I saw a question on careercup, but I did not get the answer I wanted there. I wrote an answer myself and would like your comments on my time-complexity analysis and on the algorithm and code. Or you could provide a better algorithm in terms of time. Thanks.
You are given d > 0 fair dice, each with n > 0 "sides"; write a function that returns a histogram of the frequency of the result of the dice rolls.
For example, for 2 dice, each with 3 sides, the results are:
(1, 1) -> 2
(1, 2) -> 3
(1, 3) -> 4
(2, 1) -> 3
(2, 2) -> 4
(2, 3) -> 5
(3, 1) -> 4
(3, 2) -> 5
(3, 3) -> 6
And the function should return:
2: 1
3: 2
4: 3
5: 2
6: 1
(My solution.) The time complexity if you use a brute-force depth-first search is O(n^d). However, you can use the DP idea to solve this problem. For example, take d=3 and n=3. You can use the result of d==1 when computing d==2:
d==1
  num  #
   1   1
   2   1
   3   1

d==2
  first roll        second roll is 1
  num  #            num  #
   1   1      ->     2   1
   2   1             3   1
   3   1             4   1

  first roll        second roll is 2
  num  #            num  #
   1   1      ->     3   1
   2   1             4   1
   3   1             5   1

  first roll        second roll is 3
  num  #            num  #
   1   1      ->     4   1
   2   1             5   1
   3   1             6   1

Therefore, after the second roll:
  num  #
   2   1
   3   2
   4   3
   5   2
   6   1
The time complexity of this DP algorithm is

SUM_{i=1..d} n * [ n(i-1) - (i-1) + 1 ] ~ O(n^2 * d^2)

where n(i-1) - (i-1) + 1 is the number of possible sums after i-1 rolls (e.g. for d=2, n=3 the previous sums range over 1..3 and the new sums over 2..6).
The code is written in C++ as follows
vector<pair<int,long long>> diceHisto(int numSide, int numDice) {
    int n = numSide*numDice;
    vector<long long> cur(n+1,0), nxt(n+1,0);
    for(int i=1; i<=numSide; i++) cur[i]=1;
    for(int i=2; i<=numDice; i++) {
        int start = i-1, end = (i-1)*numSide; // range of previous sum of rolls
        //cout<<"start="<<start<<" end="<<end<<endl;
        for(int j=1; j<=numSide; j++) {
            for(int k=start; k<=end; k++)
                nxt[k+j] += cur[k];
        }
        swap(cur,nxt);
        for(int j=start; j<=end; j++) nxt[j]=0;
    }
    vector<pair<int,long long>> result;
    for(int i=numDice; i<=numSide*numDice; i++)
        result.push_back({i,cur[i]});
    return result;
}
You can do it in O(n*d^2). First, note that the generating function for an n-sided die is p(n) = x + x^2 + x^3 + ... + x^n, and that the distribution for d throws has generating function p(n)^d. Representing the polynomials as arrays, you need O(nd) coefficients, and multiplying by p(n) can be done in a single pass in O(nd) time by keeping a rolling sum.
Here's some python code that implements this. It has one non-obvious optimisation: it throws out a factor x from each p(n) (or equivalently, it treats the dice as having faces 0,1,2,...,n-1 rather than 1,2,3,...,n) which is why d is added back in when showing the distribution.
def dice(n, d):
    r = [1] + [0] * (n-1) * d
    nr = [0] * len(r)
    for k in xrange(d):
        t = 0
        for i in xrange(len(r)):
            t += r[i]
            if i >= n:
                t -= r[i-n]
            nr[i] = t
        r, nr = nr, r
    return r

def show_dist(n, d):
    for i, k in enumerate(dice(n, d)):
        if k: print i + d, k

show_dist(6, 3)
The time and space complexity are easy to see: there are nested loops with d and (n-1)*d iterations, so the time complexity is O(n*d^2), and there are two arrays of size O(nd) and no other allocation, so the space complexity is O(nd).
Just in case, here is a simple example in Python using the OpenTurns platform.
import openturns as ot
d = 2 # number of dice
n = 6 # number of sides per die
# possible values
dice_distribution = ot.UserDefined([[i] for i in range(1, n + 1)])
# create the distribution of the sum of the d dice
sum_distribution = sum([dice_distribution] * d)
That's it!
print(sum_distribution)
will show you all the possible values and their corresponding probabilities:
>>> UserDefined(
{x = [2], p = 0.0277778},
{x = [3], p = 0.0555556},
{x = [4], p = 0.0833333},
{x = [5], p = 0.111111},
{x = [6], p = 0.138889},
{x = [7], p = 0.166667},
{x = [8], p = 0.138889},
{x = [9], p = 0.111111},
{x = [10], p = 0.0833333},
{x = [11], p = 0.0555556},
{x = [12], p = 0.0277778}
)
You can also draw the probability distribution function:
sum_distribution.drawPDF()

Ranking and unranking of permutations with duplicates

I'm reading about permutations and I'm interested in ranking/unranking methods.
From the abstract of a paper:
A ranking function for the permutations on n symbols assigns a unique
integer in the range [0, n! - 1] to each of the n! permutations. The corresponding
unranking function is the inverse: given an integer between 0 and n! - 1, the
value of the function is the permutation having this rank.
I made a ranking and an unranking function in C++ using next_permutation. But this isn't practical for n>8. I'm looking for a faster method and factoradics seem to be quite popular.
But I'm not sure if this also works with duplicates. So what would be a good way to rank/unrank permutations with duplicates?
I will cover one half of your question in this answer - 'unranking'. The goal is to find the lexicographically 'K'th permutation of an ordered string [abcd...] efficiently.
We need to understand Factorial Number System (factoradics) for this. A factorial number system uses factorial values instead of powers of numbers (binary system uses powers of 2, decimal uses powers of 10) to denote place-values (or base).
The place values (bases) are:
5! = 120, 4! = 24, 3! = 6, 2! = 2, 1! = 1, 0! = 1, etc.
The digit in the zeroth place is always 0. The digit in the first place (with base 1!) can be 0 or 1. The digit in the second place (with base 2!) can be 0, 1 or 2, and so on. Generally speaking, the digit in the nth place can take any value between 0 and n.
First few numbers represented as factoradics-
0 -> 0 = 0*0!
1 -> 10 = 1*1! + 0*0!
2 -> 100 = 1*2! + 0*1! + 0*0!
3 -> 110 = 1*2! + 1*1! + 0*0!
4 -> 200 = 2*2! + 0*1! + 0*0!
5 -> 210 = 2*2! + 1*1! + 0*0!
6 -> 1000 = 1*3! + 0*2! + 0*1! + 0*0!
7 -> 1010 = 1*3! + 0*2! + 1*1! + 0*0!
8 -> 1100 = 1*3! + 1*2! + 0*1! + 0*0!
9 -> 1110
10-> 1200
There is a direct relationship between n-th lexicographical permutation of a string and its factoradic representation.
For example, here are the permutations of the string “abcd”.
0 abcd 6 bacd 12 cabd 18 dabc
1 abdc 7 badc 13 cadb 19 dacb
2 acbd 8 bcad 14 cbad 20 dbac
3 acdb 9 bcda 15 cbda 21 dbca
4 adbc 10 bdac 16 cdab 22 dcab
5 adcb 11 bdca 17 cdba 23 dcba
We can see a pattern here, if observed carefully. The first letter changes after every 6th (3!) permutation. The second letter changes after every 2nd (2!) permutation. The third letter changes after every (1!) permutation and the fourth letter changes after every (0!) permutation. We can use this relation to directly find the n-th permutation.
Once we represent n in factoradic representation, we consider each digit in it and add a character from the given string to the output. Suppose we need to find the 14th permutation of 'abcd': 14 in factoradics -> 2100.
Start with the first digit -> 2. The string is 'abcd'. Assuming the index starts at 0, take the element at position 2 from the string and add it to the Output.

Output    String
c         abd
2         012

The next digit -> 1. The string is now 'abd'. Again, pluck the character at position 1 and add it to the Output.

Output    String
cb        ad
21        01

Next digit -> 0. The string is 'ad'. Add the character at position 0 to the Output.

Output    String
cba       d
210       0

Next digit -> 0. The string is 'd'. Add the character at position 0 to the Output.

Output    String
cbad      ''
2100
To convert a given number to the factorial number system, successively divide the number by 1, 2, 3, 4, 5 and so on until the quotient becomes zero. The remainders at each step form the factoradic representation.
For example, to convert 349 to factoradic:

Division   Quotient   Remainder   Factoradic so far
349/1      349        0           0
349/2      174        1           10
174/3      58         0           010
58/4       14         2           2010
14/5       2          4           42010
2/6        0          2           242010

The factoradic representation of 349 is 242010.
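A small Python sketch of the two procedures just described, for distinct symbols and 0-based ranks (the function names to_factoradic and unrank are mine):

def to_factoradic(n, width):
    # successively divide by 1, 2, 3, ... and collect the remainders
    digits, divisor = [], 1
    while n > 0:
        n, r = divmod(n, divisor)
        digits.append(r)
        divisor += 1
    digits += [0] * (width - len(digits))
    return digits[::-1]  # most significant digit first

def unrank(symbols, rank):
    # pluck the d-th remaining symbol for each factoradic digit d
    pool, out = list(symbols), []
    for d in to_factoradic(rank, len(symbols)):
        out.append(pool.pop(d))
    return ''.join(out)

print(to_factoradic(349, 6))  # [2, 4, 2, 0, 1, 0], i.e. 242010
print(unrank('abcd', 14))     # cbad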
One way is to rank and unrank the choice of indices by a particular group of equal numbers, e.g.,
def choose(n, k):
    c = 1
    for f in xrange(1, k + 1):
        c = (c * (n - f + 1)) // f
    return c

def rank_choice(S):
    k = len(S)
    r = 0
    j = k - 1
    for n in S:
        for i in xrange(j, n):
            r += choose(i, j)
        j -= 1
    return r

def unrank_choice(k, r):
    S = []
    for j in xrange(k - 1, -1, -1):
        n = j
        while r >= choose(n, j):
            r -= choose(n, j)
            n += 1
        S.append(n)
    return S

def rank_perm(P):
    P = list(P)
    r = 0
    for n in xrange(max(P), -1, -1):
        S = []
        for i, p in enumerate(P):
            if p == n:
                S.append(i)
        S.reverse()
        for i in S:
            del P[i]
        r *= choose(len(P) + len(S), len(S))
        r += rank_choice(S)
    return r

def unrank_perm(M, r):
    P = []
    for n, m in enumerate(M):
        S = unrank_choice(m, r % choose(len(P) + m, m))
        r //= choose(len(P) + m, m)
        S.reverse()
        for i in S:
            P.insert(i, n)
    return tuple(P)

if __name__ == '__main__':
    for i in xrange(60):
        print rank_perm(unrank_perm([2, 3, 1], i))
For large n you need an arbitrary-precision library like GMP.
This is my previous post for an unranking function written in Python. I think it's readable, almost like pseudocode, and there is also some explanation in the comments: Given a list of elements in lexicographical order (i.e. ['a', 'b', 'c', 'd']), find the nth permutation - Average time to solve?
Based on this you should be able to figure out the ranking function; it's basically the same logic ;)
Java, from https://github.com/timtiemens/permute/blob/master/src/main/java/permute/PermuteUtil.java (my public domain code, minus the error checking):
public class PermuteUtil {
    public <T> List<T> nthPermutation(List<T> original, final BigInteger permutationNumber) {
        final int size = original.size();
        // the return list:
        List<T> ret = new ArrayList<>();
        // local mutable copy of the original list:
        List<T> numbers = new ArrayList<>(original);
        // Our input permutationNumber is [1,N!], but array indexes are [0,N!-1], so subtract one:
        BigInteger permNum = permutationNumber.subtract(BigInteger.ONE);
        for (int i = 1; i <= size; i++) {
            BigInteger factorialNminusI = factorial(size - i);
            // casting to integer is ok here, because even though permNum _could_ be big,
            // the factorialNminusI is _always_ big
            int j = permNum.divide(factorialNminusI).intValue();
            permNum = permNum.mod(factorialNminusI);
            // remove item at index j, and put it in the return list at the end
            T item = numbers.remove(j);
            ret.add(item);
        }
        return ret;
    }
}