Given very long binary, convert it to decimal - c++

First, this is a part of my hw. Second, this is only a part of it, so I would really appreciate any hints here.
I've implemented a kind of BigInt class, which stores numbers as sequences of zeros and ones -- that is, it stores a decimal number in binary form.
My class can add numbers and multiply them.
Ok, but when I multiply two large numbers, I get a huge number.
My question is - given a really really long binary number, how do I convert it back to decimal?
I've found something about dividing by 10, but I'm not sure if that applies to my case... Or does it, and do I have to implement binary division?
Thanks...

Binary means base 2, so if you have, for example, 10100 and you need it in base 10, you can apply the following pattern, going from the last digit to the first (right to left): 2^0*0 + 2^1*0 + 2^2*1 + 2^3*0 + 2^4*1 = 20. You raise 2 to a power starting from 0 up to length_of_binary_num - 1 and multiply each power by the corresponding binary digit of your string (right to left), or you can find additional methods written up under "Convert Binary to Decimal".
I would also recommend using a string for the binary number, since long binary values tend to look more like strings than integers.
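For a binary number too long to fit in a built-in integer type, the dividing-by-10 idea from the question is the usual route: repeatedly divide the bit sequence by 10 and collect the remainders as decimal digits. A minimal sketch, assuming the bits are stored most-significant-first in a std::vector<int> (the actual BigInt layout in the homework may differ, and this version is not optimized):

#include <algorithm>
#include <string>
#include <vector>
#include <iostream>

// Divide the binary number by 10 in place (schoolbook long division, MSB first)
// and return the remainder (0..9); the remainder is the next decimal digit,
// least significant first.
int divideByTen(std::vector<int>& bits) {
    int remainder = 0;
    for (int& bit : bits) {
        remainder = remainder * 2 + bit;
        bit = remainder / 10;
        remainder %= 10;
    }
    // drop leading zero bits so the loop below can stop once the number is zero
    while (bits.size() > 1 && bits.front() == 0) bits.erase(bits.begin());
    return remainder;
}

std::string toDecimal(std::vector<int> bits) {
    std::string digits;
    while (!(bits.size() == 1 && bits[0] == 0)) {
        digits.push_back(char('0' + divideByTen(bits)));
    }
    if (digits.empty()) digits = "0";
    std::reverse(digits.begin(), digits.end());
    return digits;
}

int main() {
    std::cout << toDecimal({1, 0, 1, 0, 0}) << '\n';  // 10100 (binary) -> 20
}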

Related

How is a 128-bit integer formed in the Abseil library?

In the Abseil library, absl::uint128 big = absl::MakeUint128(1, 0);
This represents 2^64, but I don't understand what the '1' and '0' mean here.
Can someone explain to me how the number is actually formed?
absl::MakeUint128(x, y); constructs a number equal to 2^64 * x + y
And see https://abseil.io/docs/cpp/guides/numeric
How? In any way you like, but there is a very simple way to do it.
You already know how to do arithmetic with one-digit numbers in base ten, right? Then you also know how to use that arithmetic to do arithmetic with two-digit numbers in base 10, right? Be aware that this also gives you arithmetic on one-digit numbers in base 100 (just consider '34' or '66' as a single symbol).
Your computer knows how to do arithmetic on one-digit numbers in base 2^64, so it makes the same extension you used in base 10 to get arithmetic on two-digit numbers in base 2^64. This then leads to arithmetic in base 2^128, or arithmetic on 128-digit numbers in base 2.
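To make that concrete, here is a small sketch (not Abseil's actual implementation, just an illustration of the idea) that treats a 128-bit value as two base-2^64 digits, the same way absl::MakeUint128(hi, lo) builds 2^64 * hi + lo, and adds two such values with a schoolbook carry:

#include <cstdint>
#include <iostream>

// A 128-bit value as two base-2^64 "digits": value = 2^64 * hi + lo.
struct U128 {
    uint64_t hi;
    uint64_t lo;
};

// Schoolbook addition of two 2-digit numbers in base 2^64.
U128 add(U128 a, U128 b) {
    U128 r;
    r.lo = a.lo + b.lo;              // add the low digits
    uint64_t carry = (r.lo < a.lo);  // wrapped around => carry into the high digit
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main() {
    U128 big{1, 0};                      // 2^64, like absl::MakeUint128(1, 0)
    U128 small{0, 0xFFFFFFFFFFFFFFFFull}; // 2^64 - 1
    U128 sum = add(big, small);
    std::cout << sum.hi << " * 2^64 + " << sum.lo << '\n';  // 1 * 2^64 + 18446744073709551615
}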

VBA debugger precision

I have a Single (I believe the C++ equivalent is float) in VBA in an Excel workbook module. Anyway, the value I originally assigned (876.34497) is rounded off to 876.345 in the Immediate Window, in the Watch window, and in the hover tooltip when I set a breakpoint in the VBA. However, if I pass this Single to a C++ DLL, C++ reports it as the original value, 876.34497.
So, is it actually stored in memory as the original value? Is this some limitation of the debugger? I'm unsure what is going on here. It makes it difficult to test whether what I'm passing is what I'm getting on the C++ side.
I tried:
?CStr(test)
876.345
?CDbl(test)
876.344970703125
?CSng(test)
876.345
VBA isn't very straightforward, so at some level it must be stored as 876.34497 in memory. Otherwise, I don't think CDbl would be correct like it is.
VBA variables of type "single" are stored as "32-bit hardware implementation of IEEE 754[-]1985 [sic]." [see: https://msdn.microsoft.com/en-us/library/ee177324.aspx].
What this means in English is, "single" precision numbers are converted to binary then truncated to fit in a 4 byte (32-bit) sequence. The exact process is very well described in Wikipedia under http://en.wikipedia.org/wiki/Single-precision_floating-point_format . The upshot is that all single precision numbers are expressed as
(1) a 23-bit "fraction" (the bits after an implicit leading 1, so the significand is between 1 and 2), *times*
(2) an 8-bit exponent, which represents a multiplier between 2^(-127) and 2^128, *times*
(3) one more bit for the sign (positive or negative).
The process of converting numbers to binary and back causes two types of rounding errors:
(1) Significant Digits -- as you have noticed, there is a limit on significant digits. A 23-bit fraction can only take 8,388,608 distinct values. Stated another way, no number can be expressed with better than about +/- 0.000012% precision. Reaching back to high school science, you may recall that this is another way of saying you cannot count on more than six significant digits (well, decimal digits, at least ... of course you have 23 significant binary digits, 24 counting the implicit leading 1). So any representation of a number with more than six significant digits will get rounded off. However, it won't get rounded off to the nearest decimal digit ... it will get rounded off to the nearest binary digit. This often causes some unexpected results (like yours).
(2) Binary conversion -- The other type of error is even more pernicious. There are some numbers with far fewer than six (decimal) digits that will get rounded off. For example, 1/5 in decimal is 0.2000000. It never gets "rounded off." But the same number in binary is 0.00110011001100110011... repeating forever. (That sequence is 1/8 + 1/16 + (1/8 + 1/16)/16 + (1/8 + 1/16)/256 + ...) If you use any finite number of binary digits to represent 0.20 and then convert it back to decimal, you will NEVER get exactly 0.20. For example, if you used eight bits, you would have 0.00110011 in binary, which is:
0.12500000
0.06250000
0.00781250
+ 0.00390625
------------
0.19921875
No matter how many binary digits you use, you will never get exactly 0.20, because 0.20 cannot be expressed as the sum of powers of two.
That in a nutshell explains what's going on. When you assign 876.34497 to "test," it gets converted internally to:
0 10001000 10110110001011000010100
(sign = +) (exponent = 136) (fraction = 5,969,428)
Which is (+1) * 2^(136-127) * (1 + 5,969,428/2^23) = 876.344970703125
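Since a C++ DLL is already involved, you can check this decomposition yourself with a few lines of C++. This sketch just copies the float's bit pattern into an integer and splits out the sign, exponent, and fraction fields described above:

#include <cmath>
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    float test = 876.34497f;
    std::uint32_t bits;
    std::memcpy(&bits, &test, sizeof bits);      // grab the raw IEEE 754 bit pattern

    unsigned sign     = bits >> 31;              // 1 sign bit
    unsigned exponent = (bits >> 23) & 0xFFu;    // 8 exponent bits, biased by 127
    unsigned fraction = bits & 0x7FFFFFu;        // 23 fraction bits (implicit leading 1)

    // for 876.34497f this prints sign=0 exponent=136 fraction=5969428
    std::printf("sign=%u exponent=%u fraction=%u\n", sign, exponent, fraction);

    // rebuild the value from the fields: (+/-1) * 2^(exponent-127) * (1 + fraction/2^23)
    double rebuilt = std::ldexp(1.0 + fraction / 8388608.0, (int)exponent - 127);
    std::printf("value = %.12f\n", sign ? -rebuilt : rebuilt);   // 876.344970703125
}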
Excel is automatically truncating the display of your single-precision number to show only six significant digits, because it knows that the seventh digit might be wrong. Excel itself won't show you the exact stored value because it doesn't display enough significant digits -- but you get the point.
When you coerce the value into double precision, it keeps the entire binary fraction and pads it with zero bits to fill out the double's wider fraction field. It now lets you display twice as many significant figures because it is double precision, but as you can see, converting 8 decimal digits to 23 binary digits and then appending a long string of zeros has introduced some errors. Not really errors, if you understand what it's doing; just artifacts. After all, it's doing exactly what you told it to do ... you just didn't know what you were telling it to do!

Method to find number of digits after converting from a different base number

The text below gives a bit of background on my program in case it's needed to understand my issue; if you don't feel like reading it, the part at the end may be enough on its own.
I'm working on the common project of sorting in C++, and I am
currently doing radix sort. I have it as a function, taking in a
vector of strings, an integer holding the max number of digits, and an
integer with the radix/base of the numbers: (numbers, maxDigits, radix)
Since the program takes in numbers of different bases as strings, I'm using stoi to
convert them to base-10 integers to make the process easier to generalize.
Here's a quick summary of the algorithm:
create 10 queues to hold values 0 to 9
iterate through each digit (maxDigit times)
iterate through each number in the vector (here each number is converted to base 10)
put them into the queue based on the current digit it's looking at
pull the numbers out of the queues from beginning to end back into the vector
As for the problem I'm trying to wrap my head around: I want to convert the maxDigits value (given for whatever radix the user inputs) into the corresponding maxDigits value after the numbers are converted to base 10. In other words, say the user called
radixSort(myVector, 8, 2)
to sort a vector of numbers with at most 8 digits in radix 2. Since I convert the numbers to base 10, I'm trying to find an algorithm that also converts maxDigits, if that makes sense.
I've tried thinking about this so much, trying to figure out a simple way through trial and error. If I could get some tips or help in the right direction that would be a great help.
If something is in radix 2 and max digits 8, then its largest value is all ones. And 11111111 = 255, which is (2^8 - 1).
The maximum number of digits in base 10 will be whatever is needed to represent that largest value. Here we see that to be 3, which is the base-10 logarithm of 255 (2.40654...), rounded up.
So basically, just round up log10(radix^maxDigits - 1) to the nearest whole number.
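A small sketch of that rule in C++, assuming radix^maxDigits fits comfortably in a double (true for the example above). It uses the standard floor(log10(x)) + 1 digit-count form, which matches the rounded-up logarithm described above except in the rare case where the largest value is an exact power of 10:

#include <cmath>
#include <iostream>

// Base-10 digits needed for the largest value expressible in maxDigits
// digits of base radix, i.e. radix^maxDigits - 1.
int maxDigitsBase10(int maxDigits, int radix) {
    double largest = std::pow(radix, maxDigits) - 1.0;   // e.g. 2^8 - 1 = 255
    // floor(log10(x)) + 1 counts the base-10 digits of x (for x >= 1);
    // for 255 this is floor(2.40654) + 1 = 3, as in the answer above.
    return static_cast<int>(std::floor(std::log10(largest))) + 1;
}

int main() {
    std::cout << maxDigitsBase10(8, 2) << '\n';  // prints 3
}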

How to measure the length of a string from an integer before creating it

I have a list of numbers (ints and doubles) which I need to export to a buffer as strings. The buffer has to be reserved beforehand. For speed and size reasons I do not want to create each string, measure its size, and then create it again in the buffer. And no, the system in use does not allow creating the whole string and copying it afterwards.
For integers, you'll need floor(log10(number)) + 1 decimal digits (adjusted for 0 and sign as necessary).
For doubles, the situation is a bit more complicated - it really depends on how you want to represent them. Most importantly, do you mind trailing 0s after the decimal point? Is scientific notation an option?
One way to approach this would be: you need at most 17 significant decimal digits to represent an IEEE double in a string so that it can be reconstructed unambiguously. So always reserve 17 digits for the fractional part, plus the period, and use the integer formula above for the integral part.
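A sketch of those sizing rules, assuming a fixed-point layout of sign, integral part, period, and 17 fractional digits (the names decimalWidth/doubleWidth are just for illustration). Note that log10 may over-count by one character right at powers of ten due to floating-point rounding, which is harmless when reserving a buffer:

#include <cmath>
#include <cstddef>
#include <iostream>

// Characters needed to print n in base 10: floor(log10(|n|)) + 1, plus a sign.
std::size_t decimalWidth(long long n) {
    std::size_t sign = (n < 0) ? 1 : 0;
    unsigned long long mag = (n < 0) ? 0ULL - (unsigned long long)n : (unsigned long long)n;
    if (mag == 0) return 1;                                   // "0" is one character
    return sign + (std::size_t)std::floor(std::log10((double)mag)) + 1;
}

// Fixed-point double: integral digits + '.' + 17 fractional digits + optional sign.
std::size_t doubleWidth(double d) {
    double ipart = std::floor(std::fabs(d));
    std::size_t intDigits =
        (ipart < 1.0) ? 1 : (std::size_t)std::floor(std::log10(ipart)) + 1;
    return (d < 0 ? 1 : 0) + intDigits + 1 + 17;
}

int main() {
    std::cout << decimalWidth(-12345) << '\n';   // 6: sign plus five digits
    std::cout << doubleWidth(3.14) << '\n';      // 19: 1 digit + '.' + 17 digits
}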

convert very big int (written as string) to binary string in c/c++

I have a number in base 10 which has around 10k digits. I want to convert it into base 2 (1010101001...). All I can think of is a primitive algorithm:
take the last digit mod 2 -> write down a bit
divide the number by 2
It shouldn't be hard to implement primary-school division on a string, but I suspect it is very inefficient. If I'm right it will be O(l^2), where l is the length of the number in base 10. Can that be done faster?
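For reference, a minimal sketch of that primitive algorithm (repeated division of a decimal-digit string by 2, O(l^2) overall), assuming the number is a std::string of digits with no sign:

#include <algorithm>
#include <string>
#include <iostream>

// Divide a decimal-digit string by 2 in place (schoolbook long division)
// and return the remainder (0 or 1).
int divideByTwo(std::string& dec) {
    int carry = 0;
    for (char& c : dec) {
        int d = carry * 10 + (c - '0');
        c = char('0' + d / 2);
        carry = d % 2;
    }
    // strip a leading zero so the loop below terminates on "0"
    if (dec.size() > 1 && dec.front() == '0') dec.erase(dec.begin());
    return carry;
}

// O(l^2) conversion: repeatedly divide by 2, collecting bits from least
// significant to most significant.
std::string decimalToBinary(std::string dec) {
    std::string bits;
    while (dec != "0") {
        bits.push_back(char('0' + divideByTwo(dec)));
    }
    if (bits.empty()) bits = "0";
    std::reverse(bits.begin(), bits.end());
    return bits;
}

int main() {
    std::cout << decimalToBinary("123456789") << '\n';  // 111010110111100110100010101
}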
From what I understand you have your big number represented as a sequence of decimal digits. If that is so, you can compute a "binary" representation using multiplication and addition:
value = sum(i in 0...n-1) 10^i * digit_i
This computation can be split into parts in a divide-and-conquer way, although I'm not sure if you can arrive at an O(n log n) algorithm.
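As a rough illustration of that split (not a tuned implementation), using GMP's mpz_class as a stand-in for any arbitrary-precision integer with + and *: convert each half of the digit string recursively and combine them as high * 10^len(low) + low.

#include <cstddef>
#include <string>
#include <iostream>
#include <gmpxx.h>   // GMP's C++ interface; build with -lgmpxx -lgmp

// Divide-and-conquer evaluation of value = sum(10^i * digit_i).
// Assumes a non-empty string of decimal digits with no sign.
mpz_class fromDecimal(const std::string& dec) {
    if (dec.size() <= 9) {                        // base case: fits easily in a long
        return mpz_class(std::stol(dec));
    }
    std::size_t half = dec.size() / 2;            // split into high and low halves
    mpz_class high = fromDecimal(dec.substr(0, half));
    mpz_class low  = fromDecimal(dec.substr(half));
    mpz_class scale;
    mpz_ui_pow_ui(scale.get_mpz_t(), 10, dec.size() - half);  // 10^(length of low half)
    return high * scale + low;                    // value = high * 10^len(low) + low
}

int main() {
    std::cout << fromDecimal("123456789").get_str(2) << '\n';  // 111010110111100110100010101
}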
If you are working with big numbers, I really suggest you use a multi-precision library. Try GMP or MPFR or something similar. -Øystein
Division by 2 is the same as multiplication by 1/2. For the latter you can use some of the well-known fast multiplication algorithms (Toom–Cook, Schönhage–Strassen, etc.).