Maple: Ignore higher-order terms in a polynomial with fractional exponents

I am trying to ignore higher-order exponents in a polynomial in Maple. This is an example of what I have:
I would want my mapping to return 1 + x^(1/2) + x^3 + x^(7/2). It seems like the map completely ignores any fractional exponents...

Your attempt using degree doesn't work because of the following,
degree(x^(7/2),x);
FAIL
With the terms all being powers of x you could handle that example with,
remove(t->type(t,identical(x)^rational) and op(2,t)>4, m);
1+x^(1/2)+x^3+x^(7/2)
If you have other kinds of example then you could share them; adjustments are possible.
[edit] If you relax that strict inequality so as also to disallow x^4, then you could also get by with the following (which is also convenient if you have coefficients):
m := 1 + x^(1/2) + x^3 + x^(7/2) + x^6 + x^4 + x^(199/2):
convert(series(m,x,4),polynom);
1+x^(1/2)+x^3+x^(7/2)
Compare with,
remove(t->type(t,identical(x)^rational) and op(2,t)>=4, m);
1+x^(1/2)+x^3+x^(7/2)

Related

How can I find out the contents of the exp() built-in function of the C numerics library &lt;cmath&gt;?

I recently decided to build a simple calculator programme, but when it came to exponents I was lost. OK, you can use a library function, but I'd rather know how it solves the problem, other than with an impossible amount of if statements, e.g.
if (y == 2) {
    x = x*x;
}
else if (y == 3) {
    x = x*x*x;
}
And so on... So, how does <cmath>'s exp() do it, and how can I find out?
From An algorithm for calculating exp(x) or e^x:
This algorithm makes it possible for exp(x) or e^x to be calculated
using only the operations of addition, subtraction, multiplication and
division. The basic idea is to use a polynomial approximation in
step 3 to calculate e^x. But because this approximation is only
accurate for small arguments x we must take steps 1 and 2 to reduce x
to a smaller value.
1. Split up x: Write x = n + r, where n is the nearest integer to x and r is a real number between −½ and +½. Then e^x = e^n · e^r.
2. Evaluate e^n: Multiply the number e by itself n times. To 14 digits, e = 2.7182818284590. The multiplication can be done quite efficiently; for example, e^8 can be evaluated with just 3 multiplications if it is written as ((e^2)^2)^2. To further increase efficiency, various integer powers of e can be calculated once and stored in a lookup table.
3. Evaluate e^r using the polynomial: EXP(r) = e^r = 1 + r + (r^2)/2 + (r^3)/6 + (r^4)/24 + (r^5)/120. For r between −½ and +½ this polynomial is accurate to within ±0.00003.
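To make the three steps concrete, here is a minimal C++ sketch of the same recipe (only an illustration of the range-reduction idea, not how <cmath> actually implements exp; the name my_exp is made up):

#include <cmath>
#include <cstdio>

double my_exp(double x) {
    const double e = 2.7182818284590;            // e to 14 digits, as quoted above

    // Step 1: split x = n + r, with n the nearest integer and r in [-1/2, +1/2].
    long long n = std::llround(x);
    double r = x - static_cast<double>(n);

    // Step 2: e^n by repeated squaring (use 1/e when n is negative).
    double en = 1.0;
    double base = (n >= 0) ? e : 1.0 / e;
    unsigned long long k = (n >= 0) ? n : -n;
    while (k > 0) {
        if (k & 1) en *= base;
        base *= base;
        k >>= 1;
    }

    // Step 3: e^r from the degree-5 polynomial, written in Horner form.
    double er = 1.0 + r * (1.0 + r * (1.0/2 + r * (1.0/6 + r * (1.0/24 + r * (1.0/120)))));

    return en * er;
}

int main() {
    std::printf("my_exp(1.5) = %.6f   std::exp(1.5) = %.6f\n", my_exp(1.5), std::exp(1.5));
}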
EDIT:
If you are interested in the original implementation in the GNU libc library then you can download the sources from here.

Fast integer solution of x(x-1)/2 = c

Given a non-negative integer c, I need an efficient algorithm to find the largest integer x such that
x*(x-1)/2 <= c
Equivalently, I need an efficient and reliably accurate algorithm to compute:
x = floor((1 + sqrt(1 + 8*c))/2) (1)
For the sake of definiteness I tagged this question C++, so the answer should be a function written in that language. You can assume that c is an unsigned 32-bit int.
Also, if you can prove that (1) (or an equivalent expression involving floating-point arithmetic) always gives the right result, that's a valid answer too, since floating-point on modern processors can be faster than integer algorithms.
If you're willing to assume IEEE doubles with correct rounding for all operations including square root, then the expression that you wrote (plus a cast to double) gives the right answer on all inputs.
Here's an informal proof. Since c is a 32-bit unsigned integer being converted to a floating-point type with a 53-bit significand, 1 + 8*(double)c is exact, and sqrt(1 + 8*(double)c) is correctly rounded. 1 + sqrt(1 + 8*(double)c) is accurate to within one ulp, since the last term being less than 2**((32 + 3)/2) = 2**17.5 implies that the unit in the last place of the latter term is less than 1, and thus (1 + sqrt(1 + 8*(double)c))/2 is accurate to within one ulp, since division by 2 is exact.
The last piece of business is the floor. The problem cases here are when (1 + sqrt(1 + 8*(double)c))/2 is rounded up to an integer. This happens if and only if sqrt(...) rounds up to an odd integer. Since the argument of sqrt is an integer, the worst cases look like sqrt(z**2 - 1) for positive odd integers z, and we bound
z - sqrt(z**2 - 1) = z * (1 - sqrt(1 - 1/z**2)) >= 1/(2*z)
by Taylor expansion. Since z is less than 2**17.5, the gap to the nearest integer is at least 1/2**18.5 on a result of magnitude less than 2**17.5, which means that this error cannot result from a correctly rounded sqrt.
Adopting Yakk's simplification, we can write
(uint32_t)(0.5 + sqrt(0.25 + 2.0*c))
without further checking.
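As a concrete sketch of that conclusion (assuming, as above, IEEE doubles with a correctly rounded sqrt; the function names are mine):

#include <cmath>
#include <cstdint>

// Formula (1) with c cast to double first; converting the positive result back
// to uint32_t truncates toward zero, which is exactly the floor.
uint32_t largest_x(uint32_t c) {
    return static_cast<uint32_t>((1.0 + std::sqrt(1.0 + 8.0 * static_cast<double>(c))) / 2.0);
}

// Yakk's simplified form of the same computation.
uint32_t largest_x_simplified(uint32_t c) {
    return static_cast<uint32_t>(0.5 + std::sqrt(0.25 + 2.0 * static_cast<double>(c)));
}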
If we start with the quadratic formula, we quickly reach x = 1/2 + sqrt(1/4 + 2c); flooring that amounts to rounding sqrt(1/4 + 2c) to the nearest integer, rounding up at a fractional part of 1/2 or higher.
Now, if you do that calculation in floating point, there can be inaccuracies.
There are two approaches to deal with these inaccuracies. The first would be to carefully determine how big they are, determine if the calculated value is close enough to a half for them to be important. If they aren't important, simply return the value. If they are, we can still bound the answer to being one of two values. Test those two values in integer math, and return.
However, we can do away with that careful bit, and note that sqrt(1/4 + 2c) is going to have an error less than 0.5 if the values are 32 bits, and we use doubles. (We cannot make this guarantee with floats, as by 2^31 the float cannot handle +0.5 without rounding).
In essence, we use the quadratic formula to reduce it to two possibilities, and then test those two.
#include <cassert>
#include <cmath>
#include <cstdint>

uint64_t eval(uint64_t x) {
    return x * (x - 1) / 2;
}

unsigned solve(unsigned c) {
    double test = std::sqrt(0.25 + 2. * c);   // truncated candidate root
    if (eval(test + 1.) <= c)                 // try the larger of the two candidates first
        return test + 1.;
    assert(eval(test) <= c);
    return test;
}
Note that converting a positive double to an integral type rounds towards 0. You can insert floors if you want.
This may be a bit tangential to your question, but what caught my attention is the specific formula: you are trying to find the triangular root of T(n-1) (where T(n) is the nth triangular number).
I.e.:
T(n) = n * (n + 1) / 2
and
T(n) - n = T(n-1) = n * (n - 1) / 2
From the nifty trick described here, for T(n) we have:
n = int(sqrt(2 * c))
Looking for n such that T(n-1) ≤ c in this case doesn't change the definition of n, for the same reason as in the original question.
Computationally, this saves a few operations, so it's theoretically faster than the exact solution (1). In reality, it's probably about the same.
Neither this solution nor the one presented by David is as "exact" as your (1), though.
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(2 * c)) (red) vs exact (white line)]
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(0.25 + 2 * c) + 0.5) (red) vs exact (white line)]
My real point is that triangular numbers are a fun set of numbers that are connected to squares, Pascal's triangle, Fibonacci numbers, and so on.
As such there are loads of identities around them which might be used to rearrange the problem in a way that didn't require a square root.
Of particular interest may be that T(n) + T(n-1) = n^2.
I'm assuming you know that you're working with a triangular number, but if you didn't realize that, searching for triangular roots yields a few questions such as this one which are along the same topic.
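If you want to try the triangular-root shortcut in code, here is a small sketch (my own; the two integer-arithmetic loops at the end correct any off-by-one that the floating-point estimate might introduce):

#include <cmath>
#include <cstdint>

uint32_t triangular_root(uint32_t c) {
    // Initial estimate n = int(sqrt(2*c)), then nudge it in exact integer math.
    uint64_t n = static_cast<uint64_t>(std::sqrt(2.0 * c));
    while (n > 0 && n * (n - 1) / 2 > c) --n;      // estimate too big: step down
    while ((n + 1) * n / 2 <= c) ++n;              // estimate too small: step up
    return static_cast<uint32_t>(n);               // largest n with n*(n-1)/2 <= c
}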

How do I make a program with complex numbers?

My lack of knowledge of complex numbers doesn't allow me to write the program. The language is C++.
Task: Given real numbers u1, u2, v1, v2, w1, w2, compute 2u + (3uw)/(2+w-v) - 7, where u, v, w are the complex numbers u1+i*u2, v1+i*v2, w1+i*w2. (Write procedures implementing the arithmetic operations on complex numbers.)
You can implement this program with classes, with structures, or in any other way. Let us first say something about complex numbers: a complex number has the form a+ib, in which i^2 = -1. Note: a is the real part, whereas b is the imaginary part. For example, in 4 + 5i, 4 is the real part and 5 is the imaginary part.
The rule for addition is simple: you add the real part to the real part and the imaginary part to the imaginary part.
Addition
(a+ib) + (c+id) = (a + c) + i( b + d )
The same rule applies for subtraction.
Multiplication
(a + ib ) * (c + id) = ( ac - bd ) + i(ad + bc )
OK, now if you are using classes you can overload the operators for addition, subtraction, multiplication, and division. From there it is up to you how you build your program.
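For example, a minimal sketch of that operator-overloading approach, using the addition and multiplication rules above plus the standard conjugate trick for division, and then evaluating the formula from the task (names are my own):

#include <iostream>

struct Complex {
    double re, im;                                   // a + ib
};

Complex operator+(Complex x, Complex y) { return {x.re + y.re, x.im + y.im}; }
Complex operator-(Complex x, Complex y) { return {x.re - y.re, x.im - y.im}; }
Complex operator*(Complex x, Complex y) {
    // (a+ib)(c+id) = (ac - bd) + i(ad + bc)
    return {x.re * y.re - x.im * y.im, x.re * y.im + x.im * y.re};
}
Complex operator/(Complex x, Complex y) {
    // divide by multiplying with the conjugate of the denominator
    double d = y.re * y.re + y.im * y.im;
    return {(x.re * y.re + x.im * y.im) / d, (x.im * y.re - x.re * y.im) / d};
}

int main() {
    double u1, u2, v1, v2, w1, w2;
    std::cin >> u1 >> u2 >> v1 >> v2 >> w1 >> w2;

    Complex u{u1, u2}, v{v1, v2}, w{w1, w2};
    Complex two{2.0, 0.0}, three{3.0, 0.0}, seven{7.0, 0.0};

    // 2u + (3uw)/(2 + w - v) - 7
    Complex result = two * u + (three * u * w) / (two + w - v) - seven;
    std::cout << result.re << " + " << result.im << "i\n";
}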
You can also look at std::complex; it is worth reading about as well.
Thank You. If you have any questions please feel free to ask.

Flipping a two's complement number's sign using addition, subtraction, and left shifting

On a homework assignment, one of the questions asked us to multiply any arbitrary integer by a constant using only the +, -, and << operators and a maximum of three operations. For example, the first constant was 17, which I solved as
(x << 4) + x
However, some of the constants given were negative (such as -7). Multiplying by 7 is a relatively trivial thing to do (I have it as (x << 3) - x), but I cannot figure out how to flip the sign using only the three allowed operators.
I have attempted to flip that bit by adding 2147483648 to, or subtracting it from, every result (with the idea that this would force the most significant bit to be used, thus flipping the sign), but in my test implementation in C# this has proven unsuccessful.
Is there some positive number by which I can multiply a given int that will be functionally analogous to -7? Would adding 2147483648 work in a language other than C#? Am I overlooking something?
The original question from the book is below:
Suppose we are given the task of generating code to multiply integer variable x by various different constant factors K. To be efficient, we want to use only the operations +, -, and <<. For the following values of K, write C expressions to perform the multiplication using at most three operations per expression.
A. K = 17
B. K = -7
C. K = 60
D. K = -112
You don't need to change the sign. You wrote 7 * x as (equivalent to) 8*x - x. Now, what do you need to do with that to obtain -7 * x?
Is x - (x << 3) not valid?
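A quick way to convince yourself of that hint (a sketch; it assumes two's-complement wraparound, just as the exercise does, and note that left-shifting a negative signed value is only formally blessed by the standard as of C++20):

#include <cassert>
#include <cstdint>

// x - 8*x == -7*x, using only one shift and one subtraction (two operations).
int32_t times_minus_7(int32_t x) {
    return x - (x << 3);
}

int main() {
    for (int32_t x = -1000; x <= 1000; ++x)
        assert(times_minus_7(x) == -7 * x);
}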

How to determine the base of a number?

Given an integer number and its representation in some arbitrary number system, the purpose is to find the base of that number system. For example, if the number is 10 and its representation is 000010, then the base should be 10. Another example: the number is 21 and its representation is 0010101, so the base is 2. One more example: the number is 6 and its representation is 10100, so the base is sqrt(2). Does anyone have any idea how to solve such a problem?
number = Σ_i ( digit[i] * base^i )
You know number, you know all digit[i], you just have to find out base.
Whether solving this equation is simple or complex is left as an exercise.
I do not think that an answer can be given for every case. And I actually have a reason to think so! =)
Given a number x, with representation a_6 a_5 a_4 a_3 a_2 a_1 in base b, finding the base means solving
a_6 b^5 + a_5 b^4 + a_4 b^3 + a_3 b^2 + a_2 b^1 + a_1 = x.
This cannot be done in general, as shown by Abel and Ruffini: there is no solution in radicals once the degree exceeds four. You might be luckier with shorter numbers, but even where formulas exist (cubic, quartic) they are increasingly ugly.
There are quite a lot of good approximation algorithms, though. See here.
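For instance, a plain numerical root-finder handles the cases where radicals give up. A small Newton's-method sketch (my own, not taken from the linked page):

#include <vector>

// Evaluate the digit polynomial p(b) and its derivative p'(b) together,
// with coefficients given most-significant first (Horner's scheme).
static void eval_poly(const std::vector<double>& a, double b, double& p, double& dp) {
    p = 0.0;
    dp = 0.0;
    for (double coeff : a) {
        dp = dp * b + p;
        p  = p  * b + coeff;
    }
}

// Newton's method on p(b) - x = 0, starting from an initial guess for the base.
double solve_base(const std::vector<double>& digits, double x, double guess = 2.0) {
    double b = guess;
    for (int i = 0; i < 50; ++i) {                  // a fixed number of iterations
        double p, dp;
        eval_poly(digits, b, p, dp);
        if (dp == 0.0) break;
        b -= (p - x) / dp;                          // Newton update
    }
    return b;
}

With digits {1, 0, 1, 0, 0} and x = 6 (the 10100 example from the question), this converges to about 1.41421, i.e. sqrt(2).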
For integers only, it's not that difficult (we can enumerate).
Let's look at 21 and its representation 10101.
1 * base^4 <= 21 < (1+1) * base^4
Let's generate the numbers for some bases:
base low high
2 16 32
3 81 162
More generally, we have N represented as ∑ a[i] * base^i. Taking I to be the maximum power for which a[I] is non-zero, we have:
a[I] * base^I <= N < (a[I] + 1) * base^I # does not matter if not representable
# Isolate base term
N / (a[I] + 1) < base^I <= N / a[I]
# Ith root
Ithroot( N / (a[I] + 1) ) < base <= Ithroot( N / a[I] )
# Or as a range
base in ] Ithroot(N / (a[I] + 1)), Ithroot( N / a[I] ) ]
In the case of an integer base, or if you have a list of known possible bases, I doubt there will be many possibilities, so we can just try them out.
Note that it may be faster to actually take the Ith root of N / (a[I] + 1) and iterate from there instead of computing the second root (which should be close enough)... but I'd need a math review of that gut feeling.
If you really don't have any idea (you are trying to find a floating-point base)... well, it's a bit more difficult, I guess, but you can always refine the inequality (including one or two more terms) following the same property.
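As a concrete illustration of that bracket, using the 21 / 10101 example from above (a sketch, with pow standing in for the Ith root):

#include <cmath>
#include <cstdio>

int main() {
    // Example from above: N = 21, representation 10101, leading digit a[I] = 1 at power I = 4.
    double N = 21.0, aI = 1.0;
    double I = 4.0;

    double lo = std::pow(N / (aI + 1.0), 1.0 / I);   // exclusive lower bound
    double hi = std::pow(N / aI, 1.0 / I);           // inclusive upper bound

    std::printf("base lies in (%.4f, %.4f]\n", lo, hi);   // roughly (1.80, 2.14], so base 2 if integer
}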
An algorithm like this should find the base if it is an integer, and should at least narrow down the choices for a non-integer base:
1. Let N be your integer and R be its representation in the mystery base.
2. Find the largest digit in R and call it r. You know that your base is at least r + 1.
3. For base = r+1, r+2, ..., let I be R interpreted in base base:
   - If I equals N, then base is your mystery base.
   - If I is less than N, try the next base.
   - If I is greater than N, then your base is somewhere between base - 1 and base.
It's a brute-force method, but it should work. You may also be able to speed it up a bit by incrementing base by more than one if I is significantly smaller than N.
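A minimal sketch of that brute-force loop for integer bases (the name find_base and the -1 failure convention are mine; overflow is ignored for brevity):

#include <cstdint>
#include <vector>

// digits are given most significant first; returns the integer base in which
// they spell out N, or -1 if no integer base works.
int64_t find_base(const std::vector<int>& digits, uint64_t N) {
    uint64_t largest = 0;
    for (int d : digits)
        if (static_cast<uint64_t>(d) > largest) largest = d;

    // The base is at least largest + 1, and it cannot exceed N + 1.
    for (uint64_t base = largest + 1; base <= N + 1; ++base) {
        uint64_t value = 0;
        for (int d : digits)
            value = value * base + d;                // interpret the digits in this base
        if (value == N) return static_cast<int64_t>(base);   // found the mystery base
        if (value > N) break;                        // overshot: bigger bases only get bigger
    }
    return -1;                                       // no integer base works
}

For example, find_base({1, 0, 1, 0, 1}, 21) returns 2, matching the example in the question.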
Something else that might help speed things up, particularly in the case of a non-integer base: Remember that as several people have mentioned, a number in an arbitrary base can be expanded as a polynomial like
x = a[n]*base^n + a[n-1]*base^(n-1) + ... + a[2]*base^2 + a[1]*base + a[0]
When evaluating potential bases, you don't need to convert the entire number. Start by converting only the largest term, a[n]*base^n. If this is larger than x, then you already know your base is too big. Otherwise, add one term at a time (moving from most-significant to least-significant). That way, you don't waste time computing terms after you know your base is wrong.
Also, there is another quick way to eliminate a potential base. Notice that you can re-arrange the above polynomial expression and get
(x - a[0]) = a[n]*base^n + a[n-1]*base^(n-1) + ... + a[2]*base^2 + a[1]*base
or
(x - a[0]) = (a[n]*base^(n-1) + a[n-1]*base^(n-2) + ... + a[2]*base + a[1])*base
You know the values of x and a[0] (the "ones" digit; you can interpret it regardless of base). This gives you the extra condition that (x - a[0]) must be evenly divisible by base (since all your a[] values are integers). If you calculate (x - a[0]) % base and get a non-zero result, then base cannot be the correct base.
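Here is a sketch that combines both shortcuts when screening a candidate integer base (the digit-at-a-time evaluation is written in Horner form, which gives the same early exit; names are mine and base is assumed to be at least 2):

#include <cstdint>
#include <vector>

// digits are most significant first; returns true iff they spell out x in the
// given base, rejecting a wrong base as early as possible.
bool is_correct_base(const std::vector<uint64_t>& digits, uint64_t x, uint64_t base) {
    uint64_t a0 = digits.back();                     // the "ones" digit a[0]
    if (a0 > x || (x - a0) % base != 0)              // (x - a[0]) must be divisible by base
        return false;

    uint64_t value = 0;
    for (uint64_t d : digits) {
        value = value * base + d;                    // add one digit at a time (Horner)
        if (value > x)                               // partial value already exceeds x:
            return false;                            // this base is too big, stop here
    }
    return value == x;
}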
I'm not sure if this is efficiently solvable. I would just pick a base and see whether, given that base, the result is smaller than, larger than, or equal to the number. If it's smaller, pick a larger base; if it's larger, pick a smaller base; otherwise you have the correct base.
This should give you a starting point:
Create an equation from the number and its representation; number 42 with representation "0010203" becomes:
1 * base ^ 4 + 2 * base ^ 2 + 3 = 42
Now you solve the equation to get the value of base.
I'm thinking you will need to try and check different bases. To be efficient, your starting base could be max(digit) + 1, as you know the base won't be less than that. If that's too small, keep doubling until you exceed the number, and then use binary search to narrow it down. This way your algorithm should run in O(log n) in normal situations.
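A sketch of that doubling-plus-binary-search idea for an integer base (names are mine; overflow is ignored for brevity):

#include <cstdint>
#include <vector>

// Interpret `digits` (most significant first) in `base`.
static uint64_t eval_in_base(const std::vector<uint64_t>& digits, uint64_t base) {
    uint64_t v = 0;
    for (uint64_t d : digits) v = v * base + d;
    return v;
}

int64_t find_base_bsearch(const std::vector<uint64_t>& digits, uint64_t N) {
    uint64_t lo = 2;
    for (uint64_t d : digits)
        if (d + 1 > lo) lo = d + 1;                  // the base is greater than every digit

    uint64_t hi = lo;
    while (hi < N && eval_in_base(digits, hi) < N)   // double until we bracket N
        hi *= 2;

    while (lo < hi) {                                // binary search inside the bracket
        uint64_t mid = lo + (hi - lo) / 2;
        if (eval_in_base(digits, mid) < N) lo = mid + 1;
        else hi = mid;
    }
    return eval_in_base(digits, lo) == N ? static_cast<int64_t>(lo) : -1;
}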
Several of the other posts suggest that the solution might be found by finding the roots of the polynomial the number represents. These will, of course, generally work, though they will have a tendency to produce negative and complex bases as well as positive integers.
Another approach would be to cast this as an integer programming problem and solve using branch-and-bound.
But I suspect that the suggestion of guessing-and-testing will be quicker than any of the cleverer proposals.