sympy division documentation terminology confusion - sympy

On the official sympy docs here: https://docs.sympy.org/latest/modules/polys/basics.html#division
it says:
The function div() provides division of polynomials with remainder.
That is, for polynomials f and g, it computes q and r, such that
๐‘“=๐‘”โ‹…๐‘ž+๐‘Ÿ and deg(๐‘Ÿ)<๐‘ž. For polynomials in one variables with coefficients
in a field, say, the rational numbers, q and r are uniquely defined
this way:
Notice that it says deg(r)<q. Do they mean to say deg(r)<deg(q)?

I have accidentally put a bounty on this question. I am answering my own question here in an attempt to close the question.
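For reference, here is what div() actually returns, as a minimal check against SymPy (the polynomials f and g below are arbitrary examples of mine):

# Minimal SymPy check of div(); f and g are arbitrary example polynomials.
from sympy import symbols, div

x = symbols('x')
f = x**3 + 2*x + 1
g = x - 1

q, r = div(f, g, x)
print(q)   # x**2 + x + 3
print(r)   # 4
# Here deg(r) = 0 < deg(g) = 1: the degree bound is relative to g, not to q itself.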

Related

Bitwise calculation of an LFSR sequence using CRC-style notation

My question stems from the observation that we can use a Linear Feedback Shift Register to perform a CRC check. Algebraically this is normally of the form:
S(x) = M(x) * x^k % G(x)   (gives the remainder, for a G(x) of order k)
The implementation of this is shown in this question (with the registers all initialised to zero), and the mathematical bitwise calculation of the XOR division is shown in this other question.
I understand both of these - however, I also know that another common way of using an LFSR is to have no input, but instead preload the registers with non-zero values, and run (with zero as an input) to form a sequence of pseudo random numbers. This is shown in the image below
My question is, just as the CRC can be represented as a modulo-2 division both bitwise and algebraically, can we do the same for an LFSR sequence generator, given the generator polynomial and initial values? And if so, an example would be great!
Thanks very much, feel free to correct me if I've misrepresented or misunderstood a concept!
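As a concrete sketch of the polynomial view being asked about (the generator polynomial, initial state and output convention below are my own illustrative choices, not taken from the linked questions): for a Galois-style LFSR, one register step is exactly "multiply the state polynomial by x and reduce modulo G(x)" over GF(2), so the state after n steps is x^n * S0(x) % G(x).

# Galois-style LFSR: stepping == multiplying the state polynomial by x mod G(x).
# G(x), the initial state and the output convention are illustrative assumptions.
G = 0b10011              # G(x) = x^4 + x + 1, degree k = 4
K = 4

def step(state):
    """Multiply the state polynomial by x, then reduce mod G(x) over GF(2)."""
    state <<= 1
    if state & (1 << K):     # degree reached k, so subtract (XOR) G(x)
        state ^= G
    return state

state = 0b0001               # non-zero initial register contents, S0(x) = 1
bits = []
for _ in range(20):
    bits.append(state & 1)   # one common output convention
    state = step(state)
print(bits)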

Need pow(-1,1.2) to be 1

I am using math.h with GCC and GSL. I was wondering how to get this to evaluate?
I was hoping that the pow function would recognize pow(-1,1.2) as ((-1)^6)^(1/5). But it doesn't.
Does anybody know of a c++ library that will recognize these? Perhaps somebody has a decomposition routine they could share.
Mathematically, pow(-1, 1.2) is simply not defined. Negative numbers cannot be raised to fractional powers within the real numbers, and I hope there is no library that will simply return some arbitrary value for such an expression. Would you also expect things like
pow(-1, 0.5) = ((-1)^2)^(1/4) = 1
which obviously isn't desirable.
Moreover, the floating point number 1.2 isn't even exactly equal to 6/5. The closest double precision number to 1.2 is
1.1999999999999999555910790149937383830547332763671875
Given this, what result would you expect now for pow(-1, 1.2)?
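A quick way to see this for yourself (Python shown because the exact value of the double is easiest to print there):

# The double closest to 1.2, printed exactly; it is not 6/5.
from decimal import Decimal
from fractions import Fraction

print(Decimal(1.2))   # 1.1999999999999999555910790149937383830547332763671875
print(Fraction(1.2))  # 5404319552844595/4503599627370496 on IEEE-754 doubles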
If you want to raise negative numbers to powers -- especially fractional powers -- use the cpow() function. You'll need to include <complex.h> (or use std::pow on std::complex from <complex> in C++) to use it.
It seems like you're looking for pow(abs(x), y).
Explanation: you seem to be thinking in terms of
x^y = (x^N)^(y/N)
If we choose N = 2, then you have
(x^2)^(y/2) = ((x^2)^(1/2))^y
But
(x^2)^(1/2) = |x|
Substituting gives
|x|^y
This is a stretch, because the above manipulations only work for non-negative x, but you're the one who chose to use that assumption.
Sounds like you want to perform a complex power (cpow()) and then take the magnitude (abs()) of that after.
>>> import cmath
>>> abs(cmath.exp(1.2*cmath.log(-1)))
1.0
>>> abs(cmath.exp(1.2*cmath.log(-293.2834)))
913.57662451612202
pow(a,b) is often thought of, defined as, and implemented as exp(log(a)*b) where log(a) is natural logarithm of a. log(a) is not defined for a<=0 in real numbers. So you need to either write a function with special case for negative a and integer b and/or b=1/(some_integer). It's easy to special-case for integer b, but for b=1/(some_integer) it's prone to round-off problems, like Sven Marnach pointed out.
Maybe for your domain pow(-a,b) should always be -pow(a,b)? But then you'd just implement such a function, so I assume the question warrants more explanation.
Like duskwuff suggested, a much more robust and "mathematical" solution is to use the complex functions log and exp, but it's much more "complex" (excuse my pun) than it seems on the surface (even though there's a cpow function). And it'll be much slower if you have to compute a lot of pow()s.
Now there's an important catch with complex numbers that may or may not be relevant to your problem domain: when done right, the result of pow(a,b) is not one, but often a few complex numbers, but in the cases you care about, one of them will be complex number with nearly-zero imaginary part (it'll be non-zero due to roundoff errors) which you can simply ignore and/or not compute in your code.
To demonstrate it, consider what pow(-1,.5) is. It's a number X such that X^2==-1. Guess what? There are 2 such numbers: i and -i. Generally, pow(-1, 1/N) has exactly N solutions, although you're interested in only one of them.
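A short sketch of those N solutions (Python's cmath here; the choice N = 3 is just an example):

# The N complex N-th roots of -1; a library cpow() hands back only one of them.
import cmath

N = 3
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / N) for k in range(N)]
for r in roots:
    print(r, r**N)   # each r**N is -1 up to rounding error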
If the imaginary part of all results of pow(a,b) is significant, it means you are passing wrong values. For single-precision floating point values in the range you describe, 1e-6*max(abs(a),abs(b)) would be a good starting point for defining the "significant enough" threshold. The extreme "wrong values" would be pow(-1,0.5) which would return 0 + 1i (0 in real part, 1 in imaginary part). Here the imaginary part is huge relative to the input and real part, so you know you screwed up your input values.
In any reasonable single-return-result implementation of cpow() , cpow(-1,0.3333) will probably return something like -1+0.000001i and ignore two other values with significant imaginary parts. So you can just take that real value and that's your answer.
Use std::complex. Without that, the roots of unity don't make much sense. With it they make a whole lot of sense.

multiplication of string [containing integer], output also stored in string, how? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicates:
Inputting large numbers in c++?
Arbitrary-precision arithmetic Explanation
I need to multiply two huge huge integers, like:
a=1212121212121212121212121212121212121212121212121212;
b=1212121212121212121212121212121212121212121212121212;
I think there are no data types in C and C++ that can hold an integer this huge, so I thought to keep them in string format, like:
char *number1="1212121212121212121212121212121212121212121212121212";
char *number2="1212121212121212121212121212121212121212121212121212";
During multiplication I convert them to integers with the help of the atoi() function, like:
atoi(number1)*atoi(number2);
As usual, the output of this multiplication will obviously be huge, so I need to put the output back into string format.
I know there is an itoa() function which converts an integer to a string, but it is not compatible with all compilers. Can anybody tell me what I should do in this scenario?
I am using Ubuntu-10.04 and the g++ compiler.
Since C and C++ do not offer a native type that supports big numbers, it makes no sense to call atoi() to parse such numbers. atoi() returns a native int which is capped at 2,147,483,647 on 32-bit platforms.
You can use one of the numerous bignum libraries, like GMP for instance.
I think the best option, besides using a math library, is to split those numbers into int arrays with some fixed per-element limit, and then perform the multiplication using the basic long-multiplication method. And do not forget about overflows.
Multiplying large numbers is very difficult; however, we can do it by applying the formula for the logarithm of a product of two numbers, so let us derive that formula.
Let a, m and n be positive real numbers with a not equal to 1, i.e. a belongs to R+ − {1}. Let the logarithms of m and n to base a be x and y respectively, so that a^x = m and a^y = n.
Then m·n = a^x · a^y = a^(x+y), so
log_a(m·n) = x + y
As we already know, x = log_a m and y = log_a n, so
log_a(m·n) = log_a m + log_a n
The logarithm of a product of two values equals the sum of those values' logarithms. This logarithmic fundamental can now help us in multiplying two large numbers by adding the logarithms of those values. If you don't have a calculator, a table of logarithms will let you do this.
Using atoi() is also not helpful, since the numbers themselves won't fit in an integer data type.
You have to simulate the method you did in elementary school.
121
*23
----
363
242*
----
2783
The implementation is left as an exercise. You would also need to know how to add big numbers.
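Not from the original answers, but a minimal sketch of that elementary-school method on decimal-digit strings (Python, so the result can be cross-checked against native big integers):

# Schoolbook multiplication of two decimal-digit strings.
def multiply_strings(a, b):
    res = [0] * (len(a) + len(b))        # an m-digit * n-digit product has <= m+n digits
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            cur = res[i + j] + int(da) * int(db) + carry
            res[i + j] = cur % 10
            carry = cur // 10
        res[i + len(b)] += carry
    return ''.join(map(str, reversed(res))).lstrip('0') or '0'

number1 = "1212121212121212121212121212121212121212121212121212"
number2 = "1212121212121212121212121212121212121212121212121212"
product = multiply_strings(number1, number2)
print(product)
assert int(product) == int(number1) * int(number2)   # cross-check with Python big ints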

φ(n) = (p-1)(q-1), p and q are two big numbers, find e such that gcd(e,φ(n)) = 1 [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
φ(n) = (p-1)(q-1)
p and q are two big numbers
find e such that gcd(e,φ(n)) = 1
Consider p and q to be very large prime numbers (bigints). I want to find an efficient solution for this.
[Edit] I can solve this using a brute force method. But as the numbers are too big, I need a more efficient solution.
Also, 1 < e < (p-1)(q-1).
Usually you choose e to be a prime number. A common choice is 65537. You then select p and q so that gcd(p-1, e)=1 and gcd(q-1, e)=1, which just requires you to check that p-1 and q-1 are not multiples of e (when you (rarely) find that one of them is, you generate a new prime instead).
65537 has the advantage of allowing you to optimize the public-key operation by observing that x^65537 = x^(2^16 + 1) = (x^(2^16)) * x (mod m), so you just need 16 modular squarings and one modular multiplication.
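A small sketch of both points, with toy primes standing in for the real p and q (values are illustrative only):

# Check gcd(p-1, e) = 1 and gcd(q-1, e) = 1, and compute x^65537 with 16 squarings.
from math import gcd

e = 65537
p, q = 1000003, 1000033                  # toy stand-ins for the real primes
assert gcd(p - 1, e) == 1 and gcd(q - 1, e) == 1   # otherwise regenerate p or q

def pow_65537(x, m):
    y = x % m
    for _ in range(16):                  # 16 modular squarings: y = x^(2^16) mod m
        y = (y * y) % m
    return (y * x) % m                   # one modular multiplication

m = p * q
assert pow_65537(12345, m) == pow(12345, 65537, m)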
You have to decide how big you want e to be. This is a system decision. Commonly, e used to be fixed at 3; more usual nowadays is e=65537. In these cases, e is prime, so (as others have already pointed out) you just have to check that (p-1)(q-1) is not a multiple of e.
But some system requirements specify a 32-bit random e. This is because some cryptographers feel that flaws are more likely to be discovered in fixed-exponent RSA systems than in random-exponent systems. (As far as I know, no concrete exploitation has been discovered for fixed-exponent systems; but cryptographers are paid to be over-cautious.)
So let's say you're stuck with having to generate a random 32-bit e that is co-prime to (p-1)(q-1). The simplest solution is this: Generate a random, odd 32-bit number e. Then calculate its inverse mod (p-1)(q-1). If this inverse calculation fails, because e is not co-prime to (p-1)(q-1), then try again.
This is a reasonable, practical solution. You will need to calculate the inverse anyway, and computing an inverse doesn't take much longer than computing a gcd.
If you really need to make it as fast as you can, you can look for small prime factors of (p-1)(q-1) and trial-divide e by these factors: if you find small prime factors, then you can speed up your search for e; if you don't, then the search will probably terminate quickly.
Another reasonable solution is to generate a random 32-bit prime e and check (p-1)(q-1) for divisibility by e. Whether this is allowed would depend on your system requirements. Are you setting these system requirements yourself?
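A hedged sketch of the generate-and-retry approach described above (Python 3.8+, where pow(e, -1, phi) raises ValueError exactly when e is not co-prime to phi; the toy phi below is illustrative only):

# Pick a random odd 32-bit e, try to invert it mod phi, retry on failure.
import secrets

def choose_e(phi):
    while True:
        e = secrets.randbits(32) | 1     # random odd 32-bit candidate
        if e < 3:
            continue
        try:
            return e, pow(e, -1, phi)    # fails iff gcd(e, phi) != 1
        except ValueError:
            continue

phi = (1000003 - 1) * (1000033 - 1)      # toy phi(n), not real RSA sizes
e, d = choose_e(phi)
assert (e * d) % phi == 1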
Pick the first prime number >= 3 that satisfies this.
If you are looking for speed, you might use a small exponent.
There can be two problems with small exponents:
You should not use a small exponent to encrypt the same message under multiple keys (for instance, if the same message is encrypted to three public keys with exp = 3, an attacker can use Gauss's algorithm to recover the plaintext).
You should not send short messages, because an attacker might recover them simply by taking the cube root of the ciphertext.
With these weaknesses in mind, you can still use this scheme. And as far as I know, 3 is a common choice for e.
By the way, brute-forcing a few candidate numbers is negligible compared to checking for primality.
I think you may have misstated the problem; e=1 works nicely for the one you've written.
What you need to do then is compute d such that d*e = 1 mod φ(n). This is actually very quick: you simply need to use the extended Euclidean algorithm on e and φ(n). Doing so lets you compute d and k such that d*e + k*φ(n) = 1, which is to say you have computed the inverse of e modulo φ(n).
Edit: Rasmus Faber is correct, you do need to verify that gcd(e, φ(n)) = 1. The extended Euclidean algorithm will still do this for you: it computes both the gcd and the Bézout coefficients for e and φ(n). This tells you what d is, namely the inverse of e modulo φ(n), which in turn gives you t^(e*d) = t (mod n).
As for doing this in practice, well, I strongly suggest using a bignum library; rolling your own arbitrary-precision extended Euclidean algorithm isn't easy. Here is one such function that will do this efficiently for arbitrary-precision arithmetic.
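For illustration, a minimal extended-Euclid sketch (the p = 61, q = 53, e = 17 values are the usual textbook toy example, not anything from the question):

# Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

phi = (61 - 1) * (53 - 1)                # textbook toy example, phi = 3120
e = 17
g, s, _ = ext_gcd(e, phi)
assert g == 1                            # e is usable only when gcd(e, phi) == 1
d = s % phi                              # modular inverse of e
assert (e * d) % phi == 1
print(d)                                 # 2753 for this toy example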

How do I find the x from which the integral of a function (from that point to infinity) starts to be less than some eps?

So we have some function like pow(e, -a*x)/sqrt(x), where a and e are const floats, and we have some float eps = pow(10, -4). We need to find the x starting from which the integral of that function, from that x to infinity, is less than eps. We cannot use built-in integration functions, just standard math operators. The point is to achieve maximum evaluation speed.
If you perform the u-substitution u=sqrt(x), your integral will become 2 * integral e^(-au^2) du. With one more substitution you can reduce it to a standard normal. Once you have it in standard normal form, this reduces to calculating erf(x). The substitutions can be done abstractly for any a, and the results hardcoded for simplicity and speed.
To calculate this integral you need to calculate the error function. If you use gcc, you can find the erf(...) function in math.h, but it doesn't take parameters to control the precision. You can, however, evaluate the error function's value yourself using a Taylor series. With the given eps it is possible to calculate the exact number of terms of the series needed.
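Putting the two answers above together, a hedged sketch (Python; assumes a > 0, and the specific a and x values are arbitrary): with u = sqrt(t), the tail has the closed form integral_x^inf exp(-a*t)/sqrt(t) dt = sqrt(pi/a) * erfc(sqrt(a*x)), which math.erfc evaluates directly.

# Closed-form tail of exp(-a*t)/sqrt(t), plus a crude numerical cross-check.
import math

def tail(x, a):
    return math.sqrt(math.pi / a) * math.erfc(math.sqrt(a * x))

a, x = 0.7, 2.0
step, s, t = 1e-4, 0.0, x
while t < 60.0:                          # truncate where the integrand is negligible
    s += math.exp(-a * t) / math.sqrt(t) * step
    t += step
print(tail(x, a), s)                     # should agree to roughly 4 significant digits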
Hmm, no one seems to understand the question. The question is: given some function f, find the smallest x such that the integral of f from x to +inf is less than eps. That's the question. So basically we try x = 0, then x = 0.1, then x = 0.2 ... until the integral, for all intents and purposes, vanishes.
For example, given the bell curve for IQ of programmers on SO, at what IQ is the cumulative intelligence of programmers with higher IQ vanishingly small? If we pick x = 100 we know at least half the programmers will have a higher IQ than 100, if we pick 120, how many are left? What about 200? If we have 10,000 programmers here and eps = 1/10000 we're basically asking what IQ the top 0.01% of SO contributors have.
The question is: what is the most efficient way to find this number, given that nothing is known about f other than that it decreases fast enough that its integral from x to infinity approaches zero as x approaches infinity?
The general answer is: you must start with a guess of some kind. If the result is too big, double your guess, and keep going until you satisfy the requirement. Then go back to the last value you had (which didn't satisfy it) and do a binary chop to find the smallest x satisfying the requirement.
To make a good guess is hard. One way is to use a Chebyshev approximation of the function, integrate it analytically, solve the problem with the resulting polynomial, and use the solution as your starting guess. The assumption is that all functions look like polynomials of sufficiently high order in any given range.
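A sketch of that doubling-then-bisection search (reusing the erfc closed form from the earlier snippet as the example tail; eps, a and the tolerance are arbitrary choices):

# Find the smallest x with tail(x) < eps: double to bracket, then binary chop.
import math

def tail(x, a=0.7):
    return math.sqrt(math.pi / a) * math.erfc(math.sqrt(a * x))

def smallest_x(eps, lo=0.0, hi=1.0, tol=1e-9):
    while tail(hi) >= eps:               # doubling phase: find an upper bracket
        lo, hi = hi, hi * 2
    while hi - lo > tol:                 # binary chop inside [lo, hi]
        mid = (lo + hi) / 2
        if tail(mid) < eps:
            hi = mid
        else:
            lo = mid
    return hi

print(smallest_x(1e-4))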