Changed order for parse_expr() - sympy

I use parse_expr("-5 + 2*x + 3 - 7*x + 5 - 3*x", evaluate=False).
According to the documentation for evaluate=False, I expected the order of the expression to be kept:
"When False, the order of the arguments will remain as they were in the string ..."
But the result is sorted:
-7*x - 3*x + 2*x - 5 + 3 + 5
sympy=1.4

It is as advertised:
>>> u = parse_expr("-5 + 2*x + 3 - 7*x + 5 - 3*x", evaluate=False); u.args
(-5, 2*x, 3, -7*x, 5, -3*x)
The printer, however, prints them in sorted order. It seems like there should be an easier way to do the following, but it works:
>>> from sympy.printing.str import StrPrinter
>>> s = StrPrinter(dict(order='none'))
>>> s._print_Add(u)
-5 + 2*x + 3 - 7*x + 5 - 3*x
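For what it's worth, sstr() forwards printer settings to StrPrinter, so this may be the shorter route (assuming a SymPy version that exposes sstr at the top level):

>>> from sympy import sstr
>>> sstr(u, order='none')
'-5 + 2*x + 3 - 7*x + 5 - 3*x'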

SymPy unable to simplify solution

I am trying to solve the following system of equations using SymPy.
from sympy import *
n = 4
K = 2
a = symbols(f"a_:{int(n)}", real=True)
b = symbols(f"b_:{int(n)}", real=True)
X = symbols(f"X_:{int(K)}", real=True)
Y = symbols(f"Y_:{int(K)}", real=True)
lambda_ = symbols("lambda", real=True)
mu = symbols(f"mu_:{int(K)}", real=True)
list_eq = [
    # (1)
    Eq(a[0] + a[1] + a[2] + a[3], 0),
    Eq(a[0] + a[1], X[0]),
    Eq(a[2] + a[3], X[1]),
    # (2)
    Eq(b[0] + b[1] + b[2] + b[3], 0),
    Eq(b[0] + b[1], Y[0]),
    Eq(b[2] + b[3], Y[1]),
    # (3)
    Eq(b[0], a[0] - lambda_ - mu[0]),
    Eq(b[1], a[1] - lambda_ - mu[0]),
    Eq(b[2], a[2] - lambda_ - mu[1]),
    Eq(b[3], a[3] - lambda_ - mu[1]),
]
solve(list_eq, dict=True)
[{X_0: -b_2 - b_3 + mu_0 - mu_1,
  X_1: b_2 + b_3 - mu_0 + mu_1,
  Y_0: -b_2 - b_3,
  Y_1: b_2 + b_3,
  a_0: -b_1 - b_2 - b_3 + mu_0/2 - mu_1/2,
  a_1: b_1 + mu_0/2 - mu_1/2,
  a_2: b_2 - mu_0/2 + mu_1/2,
  a_3: b_3 - mu_0/2 + mu_1/2,
  b_0: -b_1 - b_2 - b_3,
  lambda: -mu_0/2 - mu_1/2}]
The analytical solution for b is
b_0 = a_0 + (1/2)*(Y_0 - X_0)
b_1 = a_1 + (1/2)*(Y_0 - X_0)
b_2 = a_2 + (1/2)*(Y_1 - X_1)
b_3 = a_3 + (1/2)*(Y_1 - X_1)
However, SymPy does not manage to simplify the result and still uses mu_0 and mu_1 in the solution.
Is it possible to eliminate those variables from the solution?
For more details, the system I'm trying to solve comes from an optimization problem under constraints:
min_b || a - b ||^2 such that b_0 + b_1 + b_2 + b_3 = 0 and b_0 + b_1 = Y_0 and b_2 + b_3 = Y_1.
We assume that a_0 + a_1 + a_2 + a_3 = 0 and a_0 + a_1 = X_0 and a_2 + a_3 = X_1.
Therefore, the equations (1) are the assumptions on a and the equations (2) and (3) are the KKT equations.
You can eliminate variables from a system of linear or polynomial equations using a Groebner basis:
In [61]: G = groebner(list_eq, [*mu, lambda_, *b, *a, *X, *Y])
In [62]: for eq in G: pprint(eq)
X₁ - Y₁ + 2⋅λ + 2⋅μ₀
-X₁ + Y₁ + 2⋅λ + 2⋅μ₁
X₁ + Y₁ + 2⋅a₁ + 2⋅b₀
-X₁ + Y₁ - 2⋅a₁ + 2⋅b₁
-X₁ - Y₁ + 2⋅a₃ + 2⋅b₂
X₁ - Y₁ - 2⋅a₃ + 2⋅b₃
X₁ + a₀ + a₁
-X₁ + a₂ + a₃
X₀ + X₁
Y₀ + Y₁
Here the first two equations have mu and lambda but the others have these symbols eliminated. You can use G[2:] to get the equations that do not involve mu and lambda. The order of the symbols in a lex Groebner basis determines which symbols are eliminated first from the equations. You can solve specifically for b in terms of a, X and Y by picking out the equations involving b:
In [63]: solve(G[2:6], b)
Out[63]: {b₀: -X₁/2 - Y₁/2 - a₁, b₁: X₁/2 - Y₁/2 + a₁, b₂: X₁/2 + Y₁/2 - a₃, b₃: -X₁/2 + Y₁/2 + a₃}
This is not exactly the form you suggested but the form of solution for the problem is not unique because of the constraints among the variables it is expressed in. There are many equivalent ways to express b in terms of a, X and Y even after eliminating mu and lambda because a, X and Y are not independent (they are 8 symbols connected by 4 constraints).
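To see that the two forms agree, you can substitute the constraints into the analytical solution and compare with the Groebner result (a quick check using the symbols defined in the question; the substitutions a_0 = -X_1 - a_1 and a_2 = X_1 - a_3 follow from the basis above):

analytic = {b[0]: a[0] + (Y[0] - X[0])/2, b[1]: a[1] + (Y[0] - X[0])/2,
            b[2]: a[2] + (Y[1] - X[1])/2, b[3]: a[3] + (Y[1] - X[1])/2}
groeb = {b[0]: -X[1]/2 - Y[1]/2 - a[1], b[1]: X[1]/2 - Y[1]/2 + a[1],
         b[2]: X[1]/2 + Y[1]/2 - a[3], b[3]: -X[1]/2 + Y[1]/2 + a[3]}
cons = {X[0]: -X[1], Y[0]: -Y[1], a[0]: -X[1] - a[1], a[2]: X[1] - a[3]}
print(all(simplify(analytic[k].subs(cons) - groeb[k]) == 0 for k in b))  # True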
Sometimes adding auxiliary equations that encode the pattern you want, and excluding the symbols you do not want as solution variables, can get you closer to the desired form:
In [38]: eqs = list_eq + [Y[0] - X[0] - var('z0'), Y[1] - X[1] - var('z1')]

In [39]: sol = Dict(solve(eqs, exclude=a, dict=True)[0]); sol
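The original answer does not show the output, but the intent is that the b entries then come out in terms of z0 = Y_0 - X_0 and z1 = Y_1 - X_1. A sketch of the final step, substituting the auxiliary symbols back out (the exact solved form depends on what solve returns):

In [40]: sol.subs({var('z0'): Y[0] - X[0], var('z1'): Y[1] - X[1]})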

Using operator overloading to add two polynomial classes

Overload op +
Constructor
Expected output:
b= + 8 * x^3 + 6 * x^2 + 4 * x + 2;
c= + 3 * x^2 + 1;
d= + 8 * x^3 + 9 * x^2 + 4 * x + 3
I tried to use a for loop inside the overloaded operator+ function to add the two Polynomial objects, but the overloaded function does not work.
d is supposed to be the polynomial that b and c add up to.
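The question's Polynomial class and constructor are not shown, so here is a minimal sketch of the idea in Python rather than C++: coefficients stored by power, with + overloaded to add them term by term in a loop. All names here are illustrative, not the original code.

class Polynomial:
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)   # coeffs[i] is the coefficient of x^i

    def __add__(self, other):
        # The loop the question describes: add coefficients term by term.
        n = max(len(self.coeffs), len(other.coeffs))
        result = [0] * n
        for i in range(n):
            if i < len(self.coeffs):
                result[i] += self.coeffs[i]
            if i < len(other.coeffs):
                result[i] += other.coeffs[i]
        return Polynomial(result)

    def __str__(self):
        parts = []
        for i in range(len(self.coeffs) - 1, -1, -1):
            c = self.coeffs[i]
            if c == 0:
                continue
            if i == 0:
                parts.append(f"+ {c}")
            elif i == 1:
                parts.append(f"+ {c} * x")
            else:
                parts.append(f"+ {c} * x^{i}")
        return " ".join(parts)

b = Polynomial([2, 4, 6, 8])   # 8*x^3 + 6*x^2 + 4*x + 2
c = Polynomial([1, 0, 3])      # 3*x^2 + 1
d = b + c
print(d)                       # + 8 * x^3 + 9 * x^2 + 4 * x + 3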

Calculating time complexity of a recursive function having a loop inside it

I was working on a simple problem and came up with a recursive function in C++; below is my function.
void test(int arr[], int n, int x = 0) {
    cout << arr[x];
    for (int i = x + 1; i < n; i++) {
        test(arr, n, i);
    }
}
I wonder what the time complexity of the above function is. If anyone can calculate it, that would be a great help in improving my function.
You can write its recurrence relation as follows:
T(n) = T(n-1) + T(n-2) + ... + T(1) + 1
Indeed, a call that starts at index x costs T(n - x), and T(1) = 1 (the trailing +1 in the relation accounts for the cout). We can see:
T(2) = T(1) + 1 = 2
T(3) = T(2) + T(1) + 1 = 2 + 1 + 1 = 4
T(4) = 4 + 2 + 1 + 1 = 2^2 + 2^1 + 2^0 + 1 = 8
T(5) = 8 + 4 + 2 + 1 + 1 = 2^3 + 2^2 + 2^1 + 2^0 + 1 = 16
...
T(n) = 2^{n-2} + 2^{n-3} + ... + 2^0 + 1 = 2^{n-1}
Hence, T(n) = \Theta(2^n).
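A quick empirical check of the closed form (a Python translation that counts the cout calls instead of printing; names here are illustrative):

def calls(n, x=0):
    # one "print" for this call, plus the cost of the recursive calls
    return 1 + sum(calls(n, i) for i in range(x + 1, n))

for n in range(1, 12):
    assert calls(n) == 2 ** (n - 1)   # matches T(n) = 2^(n-1)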

How should I go about solving this recursion without trial and error

int sum_down(int x)
{
    if (x >= 0)
    {
        x = x - 1;
        int y = x + sum_down(x);
        return y + sum_down(x);
    }
    else
    {
        return 1;
    }
}
What is the smallest integer value of the parameter x so that the returned value is greater than 1,000,000?
Right now I am just doing it by trial and error, and since this question is asked in a paper format, I don't think I will have enough time for trial and error. How do you visualise this quickly so that it can be solved easily? I am new to programming, so thanks in advance!
The recursion logic:
x = x - 1;
int y = x + sum_down(x);
return y + sum_down(x);
can be simplified to:
x = x - 1;
int y = x + sum_down(x) + sum_down(x);
return y;
which can be simplified to:
int y = (x-1) + sum_down(x-1) + sum_down(x-1);
return y;
which can be simplified to:
return (x-1) + 2*sum_down(x-1);
Put in mathematical form,
f(N) = (N-1) + 2*f(N-1)
with the recursion terminating when N is -1. f(-1) = 1.
Hence,
f(0) = -1 + 2*1 = 1
f(1) = 0 + 2*1 = 2
f(2) = 1 + 2*2 = 5
...
f(18) = 17 + 2*f(17) = 524269
f(19) = 18 + 2*524269 = 1048556
Your program can be written this way (sorry about C#):
public static void Main()
{
    int i = 0;
    int j = 0;
    do
    {
        i++;
        j = sum_down(i);
        Console.Out.WriteLine("j:" + j);
    } while (j < 1000000);
    Console.Out.WriteLine("i:" + i);
}

static int sum_down(int x)
{
    if (x >= 0)
    {
        return x - 1 + 2 * sum_down(x - 1);
    }
    else
    {
        return 1;
    }
}
At the first iteration you'll get 2, then 5, then 12... So you can neglect the x-1 part, since it stays small compared to the doubling.
So we have:
i = 1 => sum_down ~= 4 (real is 2)
i = 2 => sum_down ~= 8 (real is 5)
i = 3 => sum_down ~= 16 (real is 12)
i = 4 => sum_down ~= 32 (real is 27)
i = 5 => sum_down ~= 64 (real is 58)
So we can say that sum_down(x) ≈ 2^(x+1). Then it's just basic math: find the smallest x with 2^(x+1) > 1,000,000, which gives 19.
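A quick check of that estimate (a hypothetical Python one-liner, not part of the original answer):

>>> next(x for x in range(100) if 2 ** (x + 1) > 1_000_000)
19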
A bit late, but it's not that hard to get an exact non-recursive formula.
Write it up mathematically, as explained in other answers already:
f(-1) = 1
f(x) = 2*f(x-1) + x-1
This is the same as
f(-1) = 1
f(x+1) = 2*f(x) + x
(just switched from x and x-1 to x+1 and x, difference 1 in both cases)
The first few x and f(x) are:
x: -1 0 1 2 3 4
f(x): 1 1 2 5 12 27
And while there are many arbitrarily complicated ways to transform this into a non-recursive formula, for easy ones it often helps to write down the difference between every two consecutive elements:
x: -1 0 1 2 3 4
f(x): 1 1 2 5 12 27
0 1 3 7 15
So, for some x
f(x+1) - f(x) = 2^(x+1) - 1
f(x+2) - f(x) = (f(x+2) - f(x+1)) + (f(x+1) - f(x)) = 2^(x+2) + 2^(x+1) - 2
f(x+n) - f(x) = sum[0<=i<n](2^(x+1+i)) - n
With e.g. x=0 inserted, to turn f(x+n) into f(n):
f(x+n) - f(x) = sum[0<=i<n](2^(x+1+i)) - n
f(0+n) - f(0) = sum[0<=i<n](2^(0+1+i)) - n
f(n) - 1 = sum[0<=i<n](2^(i+1)) - n
f(n) = sum[0<=i<n](2^(i+1)) - n + 1
f(n) = sum[0<i<=n](2^i) - n + 1
f(n) = (2^(n+1) - 2) - n + 1
f(n) = 2^(n+1) - n - 1
No recursion anymore.
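As a sanity check (not part of the original answer), the closed form can be compared against the recursion directly, e.g. in Python:

def sum_down(x):
    return x - 1 + 2 * sum_down(x - 1) if x >= 0 else 1

# closed form f(n) = 2^(n+1) - n - 1 agrees with the recursion
assert all(sum_down(n) == 2 ** (n + 1) - n - 1 for n in range(-1, 25))
print(next(n for n in range(100) if 2 ** (n + 1) - n - 1 > 1_000_000))  # 19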
How about this:
int x = 0;
while (sum_down(x) <= 1000000)
{
    x++;
}
The loop increments x until the result of sum_down(x) exceeds 1,000,000.
Edit: the result is 19.
While trying to understand and simplify the recursion logic behind sum_down() is enlightening and informative, this snippet is pragmatic: it does not try to solve the problem analytically, but in terms of results.
Two lines of Python code to answer your question:
>>> from itertools import *  # needed for dropwhile() and count()
Define the recursive function (See R Sahu's answer)
>>> f = lambda x: 1 if x<0 else (x-1) + 2*f(x-1)
Then use dropwhile() to drop the elements of [0, 1, 2, 3, ...] for which f(x) <= 1000000, leaving the integers for which f(x) > 1000000. Note: count() returns an infinite "list" of [0, 1, 2, ...].
dropwhile() returns an iterator, so we use next() to get the first value:
>>> next(dropwhile(lambda x: f(x)<=1000000, count()))
19

How is make_heap in C++ implemented to have complexity of 3N?

I wonder what algorithm make_heap in C++ uses such that the complexity is 3*N? The only way I can think of to make a heap, inserting the elements one by one, has complexity O(N log N). Thanks a lot!
You represent the heap as an array. The two elements below the i'th element are at positions 2i+1 and 2i+2. If the array has n elements then, starting from the end, take each element, and let it "fall" to the right place in the heap. This is O(n) to run.
Why? Well, for n/2 of the elements there are no children. For n/4 there is a subtree of height 1. For n/8 there is a subtree of height 2. For n/16 a subtree of height 3. And so on. So we get the series n/2^2 + 2n/2^3 + 3n/2^4 + ..., which sums to n. Formatted to make the geometric series that are being summed visible:
n/2^2 + 2n/2^3 + 3n/2^4 + ...
= (n/2^2 + n/2^3 + n/2^4 + ...)
+ (n/2^3 + n/2^4 + ...)
+ (n/2^4 + ...)
+ ...
= n/2^2 (1 + 1/2 + 1/2^2 + ...)
+ n/2^3 (1 + 1/2 + 1/2^2 + ...)
+ n/2^4 (1 + 1/2 + 1/2^2 + ...)
+ ...
= n/2^2 * 2
+ n/2^3 * 2
+ n/2^4 * 2
+ ...
= n/2 + n/2^2 + n/2^3 + ...
= n(1/2 + 1/4 + 1/8 + ...)
= n
And the trick we used repeatedly is that we can sum the geometric series with
1 + 1/2 + 1/4 + 1/8 + ...
= (1 + 1/2 + 1/4 + 1/8 + ...) (1 - 1/2)/(1 - 1/2)
= (1 * (1 - 1/2)
+ 1/2 * (1 - 1/2)
+ 1/4 * (1 - 1/2)
+ 1/8 * (1 - 1/2)
+ ...) / (1 - 1/2)
= (1 - 1/2
+ 1/2 - 1/4
+ 1/4 - 1/8
+ 1/8 - 1/16
+ ...) / (1 - 1/2)
= 1 / (1 - 1/2)
= 1 / (1/2)
= 2
So the total number of "do I need to fall one more level, and if so, which way?" decisions comes to n. But you get round-off from the discretization, so you always come out to fewer than n sift-down steps to resolve. Each step requires at most 3 comparisons (compare the node against each child to see whether it needs to fall, and, if both children are larger, compare the children against each other to decide which way it falls), hence the 3N bound.
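For concreteness, here is a minimal sketch of that bottom-up build in Python (not the actual std::make_heap implementation), with a comparison counter to illustrate the ≤ 3N bound:

import random

def make_heap(a):
    """Bottom-up heapify into a max-heap; returns the number of comparisons."""
    comparisons = 0
    n = len(a)

    def sift_down(i):
        nonlocal comparisons
        while 2 * i + 1 < n:                 # while node i has a child
            child = 2 * i + 1                # left child
            if child + 1 < n:
                comparisons += 1             # children against each other
                if a[child + 1] > a[child]:
                    child += 1               # right child is larger
            comparisons += 1                 # node against its larger child
            if a[i] >= a[child]:
                break                        # already in the right place
            a[i], a[child] = a[child], a[i]  # "fall" one level
            i = child

    # Start from the last node that has a child and let each element fall.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i)
    return comparisons

data = [random.randrange(10**6) for _ in range(10**5)]
count = make_heap(data)
assert all(data[(i - 1) // 2] >= data[i] for i in range(1, len(data)))
print(count / len(data))                     # comfortably below 3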