What is the optimal way to convert a decimal number into its binary form, i.e. the one with the best time complexity?
Normally, to convert a decimal number into binary, we keep dividing the number by 2 and storing the remainders. But this takes a long time if the decimal number is very large. The time complexity in this case turns out to be O(log n).
So I want to know if there is any approach other than this that can do the job with a better time complexity.
The problem is essentially that of evaluating a polynomial using binary integer arithmetic, so the result is in binary. Suppose
p(x) = a₀xⁿ + a₁xⁿ⁻¹ + ⋯ + aₙ₋₁x + aₙ
Now if a₀,a₁,a₂,⋯,aₙ are the decimal digits of the number (each implicitly represented by binary numbers in the range 0 through 9) and we evaluate p at x=10 (implicitly in binary) then the result is the binary number that the decimal digit sequence represents.
The best way to evaluate a polynomial at a single point, given the coefficients as input, is Horner's Rule. This amounts to rewriting p(x) in a form that is easy to evaluate, as follows.
p(x) = ((⋯((a₀x + a₁)x + a₂)x + ⋯ )x + aₙ₋₁)x + aₙ
This gives the following algorithm. Here the array a[] contains the digits of the decimal number, left to right, each represented as a small integer in the range 0 through 9. Pseudocode for an array indexed from 0:
toNumber(a[])
    const x = 10
    total = a[0]
    for i = 1 to a.length - 1 do
        total *= x      // multiply the total by x = 10
        total += a[i]   // add on the next digit
    return total
Running this code on a machine where numbers are represented in binary gives a binary result. Since that's what we have on this planet, this gives you what you want.
If you want to get the actual bits, now you can use efficient binary operations to get them from the binary number you have constructed, for example, mask and shift.
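For concreteness, here is a minimal C++ sketch of both steps, assuming the result fits in an unsigned 64-bit integer (at most 19 decimal digits); the function names are invented for this example:

#include <cstddef>
#include <cstdint>
#include <vector>

// Horner's rule at x = 10; digits are most significant first, values 0..9.
uint64_t toNumber(const std::vector<int>& a)
{
    uint64_t total = a[0];
    for (std::size_t i = 1; i < a.size(); ++i)
        total = total * 10 + a[i];   // total *= x, then add the next digit
    return total;
}

// Extract the individual bits with mask and shift, least significant first.
std::vector<int> toBits(uint64_t n)
{
    std::vector<int> bits;
    while (n != 0) {
        bits.push_back(n & 1);   // mask off the low bit
        n >>= 1;                 // shift the remaining bits down
    }
    return bits;
}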
The complexity of this is linear in the number of digits, because arithmetic operations on machine integers are constant time, and it does two operations per digit (apart from the first). This is a tiny amount of work, so this is supremely fast.
If you need very large numbers, bigger than 64 bits, just use some kind of large integer. Implemented properly, this will keep the cost of arithmetic down.
To avoid as much large-integer arithmetic as possible (if your large-integer implementation makes it expensive), break the array of digits into slices of 19 digits, with the leftmost slice potentially having fewer; 19 is the maximum number of decimal digits guaranteed to fit in an unsigned 64-bit integer.
Convert each slice as above into binary without using large integers and make a new array of those 64-bit values in left-to-right order. These are now the coefficients of a polynomial to be evaluated at x = 10¹⁹. The same algorithm as above can then be used, only with large-integer arithmetic operations and with 10 replaced by 10¹⁹, which should be computed with large-integer arithmetic in advance of its use.
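As an illustration, a sketch of the per-slice step, which needs no large-integer arithmetic at all (the name sliceToU64 and the digit layout are assumptions made for this example):

#include <cstddef>
#include <cstdint>
#include <vector>

// Convert digits[begin, end) -- at most 19 decimal digits, most significant
// first -- into a 64-bit coefficient. The outer Horner loop over these
// coefficients would run in a large-integer type, at x = 10^19.
uint64_t sliceToU64(const std::vector<int>& digits, std::size_t begin, std::size_t end)
{
    uint64_t total = 0;
    for (std::size_t i = begin; i < end; ++i)
        total = total * 10 + digits[i];   // same Horner loop as before
    return total;
}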
I know that a number x lies between n and f (f > n > 0). So my idea is to bring that range to [0, 0.65535] by 0.65535 * (x - n) / (f - n).
Then I could just multiply by 100000, round, and store the integer in two bytes.
Is it going to be an effective use of storage in terms of precision?
I'm doing it for a WebGL 1.0 shader, so I'd like to have simple encoding/decoding math; I don't have access to bitwise operations.
Why multiply by 0.65535 and then by 100000.0? That introduces a second rounding with an unnecessary loss of precision; you could simply scale (x - n) / (f - n) by 65535 in one step.
The data will be represented well if it has equal likelihood over the entire range (n, f). But this is not always a reasonable assumption. What you're doing is similar to creating a fixed-point representation (fixed step size, just not starting at 0 or with steps that are negative powers of 2).
Floating-point numbers use bigger step sizes for bigger numbers. You could do the same by calculating log(x/f) / log(n/f) * 65535.
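For concreteness, a minimal C++ sketch of the one-step linear encoding (the function names are invented; in the actual WebGL 1.0 shader the same arithmetic would be written in GLSL, without bitwise operations):

#include <cmath>
#include <cstdint>

// Map x in [n, f] to a 16-bit integer with a single rounding step.
uint16_t encode(double x, double n, double f)
{
    return (uint16_t)std::lround((x - n) / (f - n) * 65535.0);
}

// Invert the mapping; the error is at most half a step of (f - n) / 65535.
double decode(uint16_t e, double n, double f)
{
    return n + e / 65535.0 * (f - n);
}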
If a user enters a number "n" (an integer not equal to 0), my program should check if the fraction 1/n has an infinite or finite number of digits after the decimal point. For example: for n=2 we have 1/2=0.5, therefore we have 1 digit after the decimal point. My first solution to this problem was this:
#include <iostream>
using namespace std;

int main()
{
    int n = 1;
    cin >> n;
    if ((1.0 / n) * n == 1)
    {
        cout << "fixed number of digits after decimal point";
    }
    else cout << "infinite number of digits after decimal point";
}
Since the computer can't store infinite numbers like 1/3, I expected that (1/3)*3 wouldn't be equal to 1. The first time I ran the program, the result was what I expected, but when I ran the program today, for n=3 I got the output (1/3)*3=1. I was surprised by this result and tried
double fraction = 1.0/n;
cout<< fraction*n;
which also returned 1. Why is the behaviour different, and can I make my algorithm work? If I can't make it work, I will have to check whether n's divisors are only 1, 2 and 5, which, I think, would be harder to program and compute.
My IDE is Visual Studio, therefore my C++ compiler is VC.
Your code tries to make use of the fact that 1.0/n is not done with perfect precision, which is true. Multiplying the result by n theoretically should get you something not equal to 1, true.
Sadly the multiplication with n in your code is ALSO not done with perfect precision.
The fact which trips your concept up is that the two imperfections can cancel each other out and you get a seemingly perfect 1 in the end.
So, yes. Go with the divisor check.
Binary vs. decimal
Your assignment asks you whether the fraction 1/n can be represented with a finite number of digits in decimal representation. Floating-point numbers in Python are represented using binary, which has some similarities and some differences with decimal:
if a rational number can be represented in binary with a finite number of bits, then it can also be represented in decimal with a finite number of digits;
some numbers can be represented in decimal with a finite number of digits, but require an infinite number of bits in binary.
This is because 10 = 2 * 5; for any integer p, p / 2**k == (p * 5**k) / 10**k. So 1/2==5/10 and 1/4 == 25/100 and 1/8 == 125/1000 can be represented with finitely many digits or bits. But 1/5 can be represented with finitely many digits in decimal, yet requires infinitely many bits in binary.
Floating-point arithmetic and test for equality
See also: Is floating-point math broken? and What Every Computer Scientist Should Know About Floating-Point Arithmetic (pdf paper).
The computation (1.0 / n) * n results in an approximation; there is generally no way to know in advance whether checking for equality with 1.0 will return true or false. In the C language, which uses the same floating-point arithmetic as Python, compilers will raise a warning if you try to test two floating-point numbers for equality (this warning can be enabled or disabled with the option -Wfloat-equal).
A different logic for your algorithm
You can't rely on floating-point arithmetic to decide your problem. But it's not needed. A number can be represented with finitely many digits if and only if it can be written in the form p / 10**k with p and k integers. So your program should examine n to find out whether there exist j and k such that 1 / n == 1 / (2**j * 5**k), without using floating-point arithmetic.
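A minimal sketch of that check, here in C++, using only integer arithmetic (the function name is invented):

// 1/n terminates in decimal exactly when n's only prime factors are 2 and 5.
bool terminatesInDecimal(int n)
{
    if (n < 0) n = -n;            // the sign does not matter
    while (n % 2 == 0) n /= 2;    // strip all factors of 2
    while (n % 5 == 0) n /= 5;    // strip all factors of 5
    return n == 1;                // anything left means a repeating expansion
}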
I have a number in the decimal system, of type double. I convert its fractional part in a loop, digit by digit; it looks like this:
double part;                             // the number being converted
part = part - int(part);                 // keep only the fractional part
for (auto i = 0; i < ACCURACY; i++)      // ACCURACY: how many digits to emit
{
    part *= typeEncode;                  // typeEncode is the target base, 2 for binary
    result += std::to_string(int(part)); // append the next digit
    if (part >= typeEncode / 2)
    {
        part -= int(part);               // remove the digit just emitted
    }
}
And I pass in a number like this:
double E1 = 0.15625;
As written, the number of digits I get is always equal to ACCURACY. How do I compute a value of ACCURACY individually for each number, so that there are neither extra zeros at the end nor a truncated binary expansion?
The internal representation of double is not decimal, it's already binary, and it's defined by an international standard, IEEE 754. So double is 64-bit and consists of 3 parts: the sign (1 bit), the exponent (11 bits) and the significand (52 bits).
Roughly speaking, this separation makes it possible to store both very small and very large numbers with the same relative precision: the exponent carries information about the magnitude, and the significand stores the actual value.
And that's why we immediately see the problem with your program: first you take only the fractional part; here some information is lost, and you don't know how much is lost. Then you try to do a kind of "divide and conquer" conversion, but the problem is the scale: if you divide the interval [0, 1] into two equal parts [0, 0.5) and [0.5, 1], there are many more representable numbers in the first of them.
A good point to start with is probably this article (it's in Russian), or the English Wikipedia article on double-precision numbers. After understanding the internal representation, you'll probably be able to extract the bits you want by simple logical operations (like & and >>). If you need production-quality code for decimal-to-binary conversion, I would recommend this library: https://github.com/floitsch/double-conversion.
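For example, a sketch of pulling the three fields out of a double with plain masks and shifts, assuming the usual 64-bit IEEE 754 layout:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double d = 0.15625;                               // the E1 from the question
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);              // reinterpret the bytes as an integer
    uint64_t sign        = bits >> 63;                // 1 bit
    uint64_t exponent    = (bits >> 52) & 0x7FF;      // 11 bits, biased by 1023
    uint64_t significand = bits & ((1ULL << 52) - 1); // 52 bits
    std::cout << sign << ' ' << exponent << ' ' << significand << '\n';
}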
The most common way is to get the power of 2 for each non-zero position of the binary number, and then sum them up. This is not workable when the binary number is huge, say,
10000...0001 //1000000 positions
It is impossible to let the computer compute the pow(2,1000000). So the traditional way is not workable.
Is there another way to do this?
Could someone describe an arithmetic method for computing this, rather than pointing to a library?
As happydave said, there are existing libraries (such as GMP) for this type of thing. If you need to roll your own for some reason, here's an outline of a reasonably efficient approach.
You'll need bigint subtraction, comparison and multiplication.
Cache values of 10^(2^n) in your binary format until the next value is bigger than your binary number. This will allow you to quickly generate a power of ten by doing the following:
Select the largest value in your cache smaller than your remaining number and store it in a working variable.
do {
    Multiply it by the next largest value in your cache and store the result in a temporary value.
    If the new value is still smaller, set your working value to this number (swapping references here rather than allocating new memory is a good idea).
    Keep a counter to see which digit you're at. If this changes by more than one between iterations of the outer loop, you need to pad with zeros.
} until you run out of cache
This is your next base-ten value in binary. Subtract it from your binary number for as long as the binary number is at least this large; the number of times you do this is the decimal digit -- you can cheat a little here by comparing the most significant bits and finding a lower bound before trying subtraction.
Repeat until your binary number is 0.
This is roughly O(n^4) with regards to number of binary digits, and O(nlog(n)) with regards to memory. You can get that n^4 closer to n^3 by using a more sophisticated multiplication algorithm.
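A sketch of the caching step, assuming a hypothetical BigInt type that supports comparison and multiplication (everything named here is invented for illustration):

#include <vector>

// Build the cache 10^(2^0), 10^(2^1), ..., stopping once the next square
// would exceed n. BigInt stands in for your big-integer class.
std::vector<BigInt> buildCache(const BigInt& n)
{
    std::vector<BigInt> cache;
    cache.push_back(BigInt(10));                       // 10^(2^0)
    while (cache.back() * cache.back() <= n)
        cache.push_back(cache.back() * cache.back());  // squaring doubles the exponent
    return cache;
}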
You could write your own class for handling arbitrarily large integers (which you can represent as an array of integers, or whatever makes the most sense), and implement the operations (*, pow, etc.) yourself. Or you could google "C++ big integer library", and find someone else who has already implemented it.
"It is impossible to let the computer compute the pow(2,1000000). So the traditional way is not workable."
It is not impossible. For example, Python can do the arithmetic calculation instantly, and the conversion to a decimal number in about two seconds (on my machine). Python has built in facilities for dealing with large integers that exceed the size of a machine word.
In C++ (and C), a good choice of big integer library is GMP. It is robust, well tested, and actively maintained. It includes a C++ wrapper that uses operator overloading to provide a nice interface (except that there is no C++ operator for the pow() operation).
Here is a C++ example that uses GMP:
#include <iostream>
#include <gmpxx.h>

int main(int, char *[])
{
    mpz_class a, b;
    a = 2;
    // b = a^1000000; there is no pow operator, so call the mpz function
    mpz_pow_ui(b.get_mpz_t(), a.get_mpz_t(), 1000000);
    std::string s = b.get_str();   // conversion to decimal happens here
    std::cout << "length is " << s.length() << std::endl;
    return 0;
}
The output of the above is
length is 301030
which executes on my machine in 0.18 seconds.
"This is roughly O(n^4) with regards to number of binary digits, and O(nlog(n)) with regards to memory". You can do O(n^(2 + epsilon)) operations (where n is the number of binary digits), and O(n) memory as follows: Let N be an enormous number of binary length n. Compute the residues mod 2 (easy; grab the low bit) and mod 5 (not easy but not terrible; break the binary string into successive strings of four bits; compute the residue mod 5 of each such 4-tuple, and add them up as with casting out 9's for decimal numbers.). By computing the residues mod 2 and 5 you can read off the low decimal digit. Subtract this; divide by 10 (the internet documents ways to do this), and repeat to get the next-lowest digit.
I calculated 2 ** 1000000 and converted it to decimal in 9.3 seconds in Smalltalk, so it's not impossible. Smalltalk has large integer libraries built in.
2 raisedToInteger: 1000000
As mentioned in another answer, you need a library that handles arbitrary precision integer numbers. Once you have that, you do MOD 10 and DIV 10 operations on it to compute the decimal digits in reverse order (least significant to most significant).
The rough idea is something like this:
LargeInteger *a;      // the number to convert, assumed already initialized
char *string;         // output buffer; digits arrive least significant first

while (a != 0) {
    int remainder = a % 10;        // the lowest decimal digit
    *string++ = remainder + '0';   // store its ASCII character
    a = a / 10;                    // drop that digit and continue
}
Many details are still missing here concerning type conversions, memory management and allocation of objects, but this is meant to demonstrate the general technique.
It's quite simple with the GNU Multiple Precision library (GMP). Unfortunately, I couldn't test this program because it seems I need to rebuild my library after a compiler upgrade. But there's not much room for error!
#include "gmpxx.h"
#include <iostream>
int main() {
mpz_class megabit( "1", 10 );
megabit <<= 1000000;
megabit += 1;
std::cout << megabit << '\n';
}
I am making a big integer library and I have an STL vector of int that represents a binary number, least significant digit first. For example, for the number 12 the binary vector would contain
{0,0,1,1}
and I need the decimal representation of this number to be stored in another vector, in the same digit order, like this:
{2,1}
What I want to know is: what would be the best way to do this?
Also, I can't use functions like pow because this needs to work with big numbers.
The most direct way to do this is to simply sum the decimal representation of each power-of-2 by mimicking decimal addition.
You could obtain the decimal representation of each power-of-2 iteratively, i.e. pow(2,n+1) = pow(2,n) + pow(2,n).
So basically, once you've got decimal addition nailed, everything should be pretty easy. This may not be the most efficient method (it's O(n^2) in the number of digits), but it should certainly work.
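A sketch of that double-and-add idea, assuming decimal numbers are stored as digit vectors, least significant digit first, as in the question (the function names are invented):

#include <cstddef>
#include <vector>

// Grade-school addition of two decimal digit vectors (least significant first).
std::vector<int> addDecimal(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int d = carry;
        if (i < a.size()) d += a[i];
        if (i < b.size()) d += b[i];
        sum.push_back(d % 10);   // one output digit
        carry = d / 10;          // propagate the carry
    }
    return sum;
}

// Convert bits (least significant first) to decimal digits by double-and-add.
std::vector<int> toDecimal(const std::vector<int>& bits)
{
    std::vector<int> result;              // zero is the empty vector here
    std::vector<int> power{1};            // decimal digits of 2^0
    for (int bit : bits) {
        if (bit) result = addDecimal(result, power);
        power = addDecimal(power, power); // 2^(n+1) = 2^n + 2^n
    }
    return result;
}

With the question's example, toDecimal({0,0,1,1}) yields {2,1}, i.e. 12.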
Knuth's "The Art of Computer Programming" Vol 2. is an excellent reference for arbitrary precision arithmetic.
A simple way to get a base-10 representation is to continuously divide your number by 10; each division extracts one digit (the remainder). This obviously requires being able to divide by 10 in whatever base you already have.
For example, if your number is 321 decimal or 101000001 binary, we can divide:
101000001 binary by 1010 binary
100000 with remainder 1 (so first digit is 1 decimal)
divide 100000 by 1010 binary
11 with remainder 10 (so next digit is 2 decimal)
divide 11 by 1010 binary
0 with remainder 11 (so last digit is 3 decimal)
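A sketch of one such division pass, assuming the binary number is stored as a vector of bits, most significant first (an invented layout); each call peels off one decimal digit:

#include <vector>

// Long division of the bit vector by ten, in place; returns the remainder,
// which is the next decimal digit (least significant first).
int divideBy10(std::vector<int>& bits)
{
    int remainder = 0;
    for (int& bit : bits) {
        remainder = remainder * 2 + bit;  // shift in the next bit
        bit = remainder / 10;             // emit the next quotient bit
        remainder %= 10;
    }
    return remainder;
}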
According to http://people.cis.ksu.edu/~rhowell/calculator/comparison.html, this is the radix-conversion algorithm used in Sun's Java BigInteger class, and it has O(n^2) complexity. The authors at this link have implemented an O(n log n) algorithm based on an algorithm described in Knuth p. 592 and credited to A. Schönhage.
Something like this:

#include <vector>
using std::vector;

int a = 21;
vector<int> vet;            // decimal digits, least significant first
while (a > 0)
{
    vet.push_back(a % 10);  // extract the lowest decimal digit
    a = a / 10;             // drop it
}