I am looking for a fast large numbers multiplication algorithm in C++.
I have tried something like this but I think I am creating too many string objects.
string sum(string v1, string v2)
{
string r;
int temp = 0, i, n, m;
int size1 = v1.size(), size2 = v2.size();
n = min(size1, size2);
m = max(size1, size2);
if ((v1 == "0" || v1 == "") && (v2 == "0" || v2 == "")) return "0";
r.resize(m, '0');
for (i = 0; i < n; i++)
{
temp += v1[size1 - 1 - i] + v2[size2 - 1 - i] - 96;
r[m - 1 - i] = temp % 10 + 48;
temp /= 10;
}
while (i < size1)
{
temp += v1[size1 - 1 - i] - 48;
r[m - 1 - i] = (char)(temp % 10 + 48);
temp /= 10;
++i;
}
while (i < size2)
{
temp += v2[size2 - 1 - i] - 48;
r[m - 1 - i] = (char)(temp % 10 + 48);
temp /= 10;
++i;
}
if (temp != 0)
r = (char)(temp + 48) + r;
return r;
}
string multSmall(string v1, int m)
{
string ret = "0";
while(m)
{
if (m & 1) ret = sum(ret, v1);
m >>= 1;
if (m) v1 = sum(v1, v1);
}
return ret;
}
string multAll(string v1, string v2)
{
string ret = "0", z = "", pom;
int i, size;
if (v1.size() < v2.size())
std::swap(v1, v2);
size = v2.size();
for (i = 0; i < size; i++)
{
pom = multSmall(v1, v2[size - 1 - i] - 48);
pom.append(z);
ret = sum(ret, pom);
z.resize(i + 1, '0');
}
return ret;
}
I DON'T want do use any external libraries. How should I do it? Maybe I should use char arrays instead of strings? But I am not sure if reallocating memory for an array would be faster than creating string object.
Fast large-number multiplication is a big project. A very big project, depending on just how large the numbers you want to multiply are.
Probably the simplest important thing, however, is that you want to get as much mileage out of your CPU's native instructions as possible. Addition of 64-bit numbers is 8 times more powerful than addition of 8-bit numbers, and over 19 times more powerful than addition of decimal digits. But your computer can probably add 64-bit numbers just as quickly as it can add 8-bit numbers, and a lot faster than any code you write to do addition of decimal digits.
Multiplication is much more dramatic; an instruction that multiplies two 64-bit numbers to produce a 128-bit result is doing around 64 times more work than an instruction that multiplies two 8-bit numbers to produce a 16-bit result -- but your CPU can probably do them at the same speed, or maybe the latter is twice as fast as the former.
So, you really want to orient your data structures and base case algorithms around the idea of using these more powerful instructions as much as you can.
If you need to, you can think of it as doing arithmetic in base 2^64. (or maybe base 2^32, if you can't or don't want to use 64-bit arithmetic)
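To make that concrete, here is a minimal, hedged sketch of what base-2^64 addition and a 64x64 -> 128-bit multiply can look like. It stores a number as a little-endian vector of 64-bit "limbs" and uses the unsigned __int128 extension available on GCC/Clang (on MSVC you would reach for _umul128 instead); all the names are just for illustration, not a finished library.
#include <algorithm>
#include <cstdint>
#include <vector>

// One number = a little-endian vector of base-2^64 "limbs".
using BigNum = std::vector<std::uint64_t>;

// Add two numbers limb by limb, propagating the carry -- schoolbook
// addition, but with 64-bit "digits".
BigNum add(const BigNum& a, const BigNum& b)
{
    BigNum result;
    result.reserve(std::max(a.size(), b.size()) + 1);
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size(); ++i) {
        std::uint64_t x = i < a.size() ? a[i] : 0;
        std::uint64_t y = i < b.size() ? b[i] : 0;
        std::uint64_t sum = x + y;                  // wraps mod 2^64
        std::uint64_t c1  = sum < x ? 1 : 0;        // carry out of x + y
        sum += carry;
        std::uint64_t c2  = sum < carry ? 1 : 0;    // carry out of adding the old carry
        carry = c1 + c2;
        result.push_back(sum);
    }
    if (carry) result.push_back(carry);
    return result;
}

// Multiply a big number by a single 64-bit limb; the 64x64 -> 128-bit
// product is typically a single hardware instruction.
BigNum mulLimb(const BigNum& a, std::uint64_t m)
{
    BigNum result;
    result.reserve(a.size() + 1);
    std::uint64_t carry = 0;
    for (std::uint64_t limb : a) {
        unsigned __int128 p = (unsigned __int128)limb * m + carry;
        result.push_back((std::uint64_t)p);         // low 64 bits
        carry = (std::uint64_t)(p >> 64);           // high 64 bits become the carry
    }
    if (carry) result.push_back(carry);
    return result;
}
Full multiplication is then the usual schoolbook loop over the limbs of one factor (or Karatsuba and friends once the numbers get long enough), but the point is that each inner step now handles 64 bits at a time instead of one decimal digit.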
Related
I tried to solve Multiply Strings in C++ with this approach, but I cannot avoid integer overflow by changing the type from int to long long int or double. Python won't overflow, so my Python code works, as shown below.
Given two non-negative integers num1 and num2 represented as strings, return the product of num1 and num2, also represented as a string.
Python works:
class Solution:
    def multiply(self, num1: str, num2: str) -> str:
        n = len(num1)  # assume n >= m
        m = len(num2)
        if n < m:
            num1, num2 = num2, num1
            n, m = m, n
        product = 0
        for i in range(1, m + 1):
            multiplier = int(num2[m - i])  # current character of num2
            sum_ = 0
            for j in range(0, n):  # multiply num1 by multiplier
                multiplicand = int(num1[n - j - 1])
                num = multiplicand * (10 ** j) * multiplier
                sum_ += num
            product += sum_ * (10 ** (i - 1))
        return str(product)
C++ failed:
string multiply(string num1, string num2) {
int n = num1.size();
int m = num2.size();
if (n < m) {
std::swap(num1, num2);
std::swap(n, m);
}
long long int product = 0;
for (int i = 1; i <= m; ++i) {
int multiChar = num2[m - i] - '0';
long long int sum = 0;
for (int j = 0; j < n ; ++j) {
int charCand = num1[n - j - 1] - '0';
long long int num = charCand * ((pow(10, j))) * multiChar;
sum += num;
}
product += sum * ((pow(10, i - 1)));
}
return std::to_string(product);
}
As far as I have tested, some cases are OK, but overflow seems unavoidable if the number is too big. Is there any way to fix my code?
Testcase:
"12323247989"
"98549324321"
runtime error: 1.05355e+20 is outside the range of representable values of type 'long long' (solution.cpp)
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior prog_joined.cpp:28:17
Expected:
"1214447762756072040469"
You are not on the right track. Imagine how you would do that by hand:
abc*def
-------
xxxx
xxxx0
xxxx00
-------
You add single digits there as well, don't you? Only those of the same significance – possibly with a carry.
You should reproduce the same thing in code. Producing an overflow that way is much less likely (I assume that summing up the single-digit products in one integer – an unsigned type is recommended – is acceptable; if not, you'd have to build up a std::string again). The sign you calculate independently, just as you would by hand.
There is one difference to multiplication by hand, though: by hand you would create rather large intermediate numbers by multiplying one number with each digit
of the other number. That would require storing these intermediate numbers as strings again, e.g. in a vector. It is more efficient to identify the digit pairs whose products end up at the same significance.
These will be 0|0 -> 0; 0|1, 1|0 -> 1; 0|2, 1|1, 2|0 -> 2, and so on. You produce these pairs by:
for(size_t i = 0, max = num1.length() + num2.length() - 1; i < max; ++i)
{
    for(size_t j = 0; j <= i; ++j)
    {
        if(j < num1.length() && i - j < num2.length())
        {
            // iterate backwards for easy carry handling
            size_t idx1 = num1.length() - 1 - j;
            size_t idx2 = num2.length() - 1 - (i - j);
            // multiply the characters at num1[idx1] and num2[idx2] and add the result to sum
        }
    }
    // add carry
    // calculate the last digit and append it to a result string
    // update carry
}
// append the remaining carry digit, if any
// append '-' sign, if result is negative
std::reverse(result.begin(), result.end());
Building up the string in reverse order is more efficient, as you do not need to move the subsequent characters all the time. (Untested code, if you find a bug, please fix it yourself).
The loops are in my preferred variant; if you feel more comfortable with another, feel free to change it; just be aware that with unsigned types you can produce endless loops if you try e.g. for(unsigned i = n; i >= 0; --i) /* i overflows to UINT_MAX instead of going negative */.
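One common way to count down safely with an unsigned index, for what it's worth:
for (std::size_t i = n; i-- > 0; )
{
    // i takes the values n-1, n-2, ..., 0 and the loop stops before i would wrap around
}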
Side note: You should accept the input strings by reference (std::string const& num1, std::string const& num2), which avoids the needless copies that arise from accepting by value.
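Putting the pieces together, a minimal sketch of the whole function could look like this (untested in the same spirit as above; multiplyDecimalStrings is just an illustrative name, and it assumes non-negative inputs without leading zeros, with the sign handled separately as described):
#include <algorithm>
#include <string>

std::string multiplyDecimalStrings(std::string const& num1, std::string const& num2)
{
    if (num1 == "0" || num2 == "0") return "0";
    std::string result;
    unsigned carry = 0;
    // significance i collects all digit pairs whose products are worth 10^i
    for (std::size_t i = 0, max = num1.length() + num2.length() - 1; i < max; ++i)
    {
        unsigned sum = carry;
        for (std::size_t j = 0; j <= i; ++j)
        {
            if (j < num1.length() && i - j < num2.length())
            {
                std::size_t idx1 = num1.length() - 1 - j;
                std::size_t idx2 = num2.length() - 1 - (i - j);
                sum += unsigned(num1[idx1] - '0') * unsigned(num2[idx2] - '0');
            }
        }
        result.push_back(char('0' + sum % 10));   // append in reversed order
        carry = sum / 10;
    }
    while (carry > 0)                              // flush whatever carry is left
    {
        result.push_back(char('0' + carry % 10));
        carry /= 10;
    }
    std::reverse(result.begin(), result.end());
    return result;
}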
I am trying to subtract 2 very large ints / bignums, but I have run into an issue. My code works for subtractions like 123 - 94 and 5 - 29, but I can't seem to get around edge cases. For example, 13 - 15 should result in -2. But if I do num1 - num2 - borrow + 10 on the first digit, I get 8 and borrow becomes 1. Moving on to the last digit, I end up with 1 - 1 - borrow(=1), which leaves me with -1; therefore my end result is -18 instead of -2.
Here is my code for the subtraction:
//Infint is the class for the very large number
Infint Infint::sub(Infint other)
{
string result;
Infint i1 = *this;
Infint i2 = other;
if (int(i1._numberstr.length() - i2._numberstr.length()) < 0)
{
Infint(result) = i2 - i1;
result._numberstr.insert(result._numberstr.begin(), '-');
return result;
}
else if (i1._numberstr.length() - i2._numberstr.length() > 0)
{
int diff = i1._numberstr.length() - i2._numberstr.length();
for (int i = diff; i > 0 ; --i)
{
i2._numberstr.insert(i2._numberstr.begin(), '0');
}
}
int borrow = 0;
int i = i2._numberstr.length() - 1;
for (; i >= 0 ; --i)
{
int sub = (i1._numberstr[i] - '0') - (i2._numberstr[i] - '0') - borrow;
if (sub < 0)
{
sub += 10;
borrow = 1;
}
else
borrow = 0;
result.insert(0, to_string(sub));
}
while (i > 0)
{
result.insert(result.begin(), i1._numberstr[i1._numberstr.length() - i]);
--i;
}
int j = 0;
while (result[j] == '0')
j++;
result.erase(0, j);
if (borrow == 1)
result.insert(result.begin(), '-');
return Infint(result);
}
Would you kindly help me understand the errors or mistakes in logic I have made ?
Since you got 8 at the 1s position and -1 at the 10s position, the sum of the two is -10 + 8 = -2, the correct answer (not -10 - 8 = -18, which is wrong).
EDIT: To systematically derive the correct answer: if you find the highest-digit difference to be negative, distribute the minus sign to all digits. Suppose the per-digit differences of two n-digit values are
a_(n-1), ..., a_0
with a_j being the difference at the 10^j digit, and you find that a_(n-1) < 0. Then the total difference of the two numbers can be calculated as
-1 * ((-a_(n-1)) * 10^(n-1) + ... + (-a_0))
It should be fairly straightforward to derive the correct (negative) answer by going through that sum from the 10^(n-1) digit down to the 1s digit.
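As a hedged illustration of that idea (not the original Infint class): compute the raw per-digit differences first, and if the highest non-zero difference is negative, negate every digit and remember the sign before running the usual borrow propagation. The sketch below assumes two equal-length, non-negative decimal strings; the name subtractSameLength is made up for the example.
#include <algorithm>
#include <string>
#include <vector>

// Sketch only: raw digit differences, optional sign flip, then borrow propagation.
std::string subtractSameLength(const std::string& a, const std::string& b)
{
    const int n = (int)a.size();            // assume a.size() == b.size()
    std::vector<int> d(n);
    for (int i = 0; i < n; ++i)
        d[i] = (a[i] - '0') - (b[i] - '0'); // raw digit differences, may be negative

    bool negative = false;
    int lead = 0;
    while (lead < n && d[lead] == 0) ++lead;    // highest non-zero difference
    if (lead < n && d[lead] < 0) {              // the overall result is negative:
        negative = true;
        for (int& x : d) x = -x;                // distribute the minus sign
    }

    std::string result;
    int borrow = 0;
    for (int i = n - 1; i >= 0; --i) {          // usual borrow propagation
        int cur = d[i] - borrow;
        if (cur < 0) { cur += 10; borrow = 1; } else { borrow = 0; }
        result.push_back(char('0' + cur));
    }
    std::reverse(result.begin(), result.end());
    result.erase(0, std::min(result.find_first_not_of('0'), result.size() - 1));
    if (negative) result.insert(result.begin(), '-');
    return result;
}
For 13 - 15 this gives the digit differences 0 and -2; negating yields 0 and 2, so the magnitude is 2 and the sign is minus, i.e. -2.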
I have a problem:
You are given a sequence, in the form of a string with characters ‘0’, ‘1’, and ‘?’ only. Suppose there are k ‘?’s. Then there are 2^k ways to replace each ‘?’ by a ‘0’ or a ‘1’, giving 2^k different 0-1 sequences (0-1 sequences are sequences with only zeroes and ones).
For each 0-1 sequence, define its number of inversions as the minimum number of adjacent swaps required to sort the sequence in non-decreasing order. In this problem, the sequence is sorted in non-decreasing order precisely when all the zeroes occur before all the ones. For example, the sequence 11010 has 5 inversions. We can sort it by the following moves: 11010 → 11001 → 10101 → 01101 → 01011 → 00111.
Find the sum of the number of inversions of the 2^k sequences, modulo 1000000007 (10^9+7).
For example:
Input: ??01
-> Output: 5
Input: ?0?
-> Output: 3
Here's my code:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <string.h>
#include <math.h>
using namespace std;
void ProcessSequences(char *input)
{
int c = 0;
/* Count the number of '?' in input sequence
* 1??0 -> 2
*/
for(int i=0;i<strlen(input);i++)
{
if(*(input+i) == '?')
{
c++;
}
}
/* Get all possible combination of '?'
* 1??0
* -> ??
* -> 00, 01, 10, 11
*/
int seqLength = pow(2,c);
// Initialize 2D array of integer
int **sequencelist, **allSequences;
sequencelist = new int*[seqLength];
allSequences = new int*[seqLength];
for(int i=0; i<seqLength; i++){
sequencelist[i] = new int[c];
allSequences[i] = new int[500000];
}
//end initialize
for(int count = 0; count < seqLength; count++)
{
int n = 0;
for(int offset = c-1; offset >= 0; offset--)
{
sequencelist[count][n] = ((count & (1 << offset)) >> offset);
// cout << sequencelist[count][n];
n++;
}
// cout << std::endl;
}
/* Change '?' in former sequence into all possible bits
* 1??0
* ?? -> 00, 01, 10, 11
* -> 1000, 1010, 1100, 1110
*/
for(int d = 0; d<seqLength; d++)
{
int seqCount = 0;
for(int e = 0; e<strlen(input); e++)
{
if(*(input+e) == '1')
{
allSequences[d][e] = 1;
}
else if(*(input+e) == '0')
{
allSequences[d][e] = 0;
}
else
{
allSequences[d][e] = sequencelist[d][seqCount];
seqCount++;
}
}
}
/*
* Sort each sequences to increasing mode
*
*/
// cout<<endl;
int totalNum[seqLength];
for(int i=0; i<seqLength; i++){
int num = 0;
for(int j=0; j<strlen(input); j++){
if(j==strlen(input)-1){
break;
}
if(allSequences[i][j] > allSequences[i][j+1]){
int temp = allSequences[i][j];
allSequences[i][j] = allSequences[i][j+1];
allSequences[i][j+1] = temp;
num++;
j = -1;
}//endif
}//endfor
totalNum[i] = num;
}//endfor
/*
* Sum of all Num of Inversions
*/
int sum = 0;
for(int i=0;i<seqLength;i++){
sum = sum + totalNum[i];
}
// cout<<"Output: "<<endl;
int out = sum%1000000007;
cout<< out <<endl;
} //end of ProcessSequences method
int main()
{
// Get Input
char seq[500000];
// cout << "Input: "<<endl;
cin >> seq;
char *p = &seq[0];
ProcessSequences(p);
return 0;
}
The results were right for small inputs, but for bigger inputs I exceeded the CPU time limit of 1 second. I also exceeded the memory limit. How can I make it faster and use memory optimally? What algorithm and what better data structure should I use? Thank you.
Dynamic programming is the way to go. Imagine you are adding the last character to all sequences.
If it is 1, you get XXXXXX1. The number of swaps is obviously the same as it was for every sequence so far.
If it is 0, you need to know the number of ones already in every sequence. The number of swaps increases by the number of ones in each sequence.
If it is ?, you just add the two previous cases together.
You need to calculate how many sequences there are, for every length and for every number of ones (the number of ones in a sequence cannot be greater than its length, naturally). You start with length 1, which is trivial, and continue with longer prefixes. The numbers can get really big, so you should reduce modulo 1000000007 all the time. The program below is not in C++, but should be easy to rewrite (the array should be initialized to 0, int is 32-bit, long is 64-bit).
long Mod(long x)
{
return x % 1000000007;
}
long Calc(string s)
{
int len = s.Length;
long[,] nums = new long[len + 1, len + 1];
long sum = 0;
nums[0, 0] = 1;
for (int i = 0; i < len; ++i)
{
if(s[i] == '?')
{
sum = Mod(sum * 2);
}
for (int j = 0; j <= i; ++j)
{
if (s[i] == '0' || s[i] == '?')
{
nums[i + 1, j] = Mod(nums[i + 1, j] + nums[i, j]);
sum = Mod(sum + j * nums[i, j]);
}
if (s[i] == '1' || s[i] == '?')
{
nums[i + 1, j + 1] = nums[i, j];
}
}
}
return sum;
}
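For convenience, here is one possible straight C++ translation of the code above (a sketch, using std::vector for the 2D array):
#include <string>
#include <vector>

long long Mod(long long x) { return x % 1000000007; }

long long Calc(const std::string& s)
{
    int len = (int)s.length();
    // nums[i][j]: number of ways the first i characters contain exactly j ones
    std::vector<std::vector<long long>> nums(len + 1, std::vector<long long>(len + 1, 0));
    long long sum = 0;
    nums[0][0] = 1;
    for (int i = 0; i < len; ++i)
    {
        if (s[i] == '?')
            sum = Mod(sum * 2);                        // every existing sequence splits into two
        for (int j = 0; j <= i; ++j)
        {
            if (s[i] == '0' || s[i] == '?')
            {
                nums[i + 1][j] = Mod(nums[i + 1][j] + nums[i][j]);
                sum = Mod(sum + j * nums[i][j]);       // this 0 must pass all j ones
            }
            if (s[i] == '1' || s[i] == '?')
                nums[i + 1][j + 1] = nums[i][j];
        }
    }
    return sum;
}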
Optimization
The code above is written to be as clear as possible and to show the dynamic programming approach. You do not actually need a [len+1, len+1] array. You calculate column i+1 from column i and never go back, so two columns are enough: old and new. If you dig into it more, you find that row j of the new column depends only on rows j and j-1 of the old column. So you can get by with one column if you update the values in the right direction (and do not overwrite values you still need).
The code above uses 64-bit integers. You really need them only in j * nums[i, j]. The nums array contains numbers less than 1000000007, so a 32-bit integer is enough. Even 2*1000000007 fits into a 32-bit signed int, and we can make use of that.
We can optimize the code by nesting the loop inside the conditions instead of the conditions inside the loop. Maybe it is even the more natural approach; the only downside is repeating the code.
The % operator, like every division, is quite expensive. j * nums[i, j] is typically far smaller than the capacity of a 64-bit integer, so we do not have to take the modulus at every step; just watch the actual value and reduce when needed. The Mod(nums[i + 1, j] + nums[i, j]) can also be optimized, as nums[i + 1, j] + nums[i, j] is always smaller than 2*1000000007.
And finally the optimized code. I switched to C++; since int and long can mean different sizes there, let me make the types explicit:
long CalcOpt(string s)
{
long len = s.length();
vector<long> nums(len + 1);
long long sum = 0;
nums[0] = 1;
const long mod = 1000000007;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
for (long j = i + 1; j > 0; --j)
{
nums[j] = nums[j - 1];
}
nums[0] = 0;
}
else if (s[i] == '0')
{
for (long j = 1; j <= i; ++j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
}
}
else
{
sum *= 2;
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
for (long j = i + 1; j > 0; --j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
long add = nums[j] + nums[j - 1];
if (add >= mod) { add -= mod; }
nums[j] = add;
}
}
}
return (long)(sum % mod);
}
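A quick way to sanity-check it against the examples from the problem statement (expected outputs 5 and 3); this assumes the function above is in scope together with <iostream>, <limits>, <string> and <vector>:
#include <iostream>
#include <limits>
#include <string>
#include <vector>
using namespace std;

int main()
{
    cout << CalcOpt("??01") << '\n';   // expected: 5
    cout << CalcOpt("?0?") << '\n';    // expected: 3
}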
Simplification
Time limit still exceeded? There is probably a better way to do it. You can either
go back to the beginning and find a mathematically different way to calculate the result,
or simplify the actual solution using math.
I went the second way. What we are doing in the loop is in fact a convolution of two sequences, for example:
0, 0, 0, 1, 4, 6, 4, 1, 0, 0,... and 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,...
0*0 + 0*1 + 0*2 + 1*3 + 4*4 + 6*5 + 4*6 + 1*7 + 0*8...= 80
The first sequence is symmetric and the second is linear. In this case, the sum of the convolution can be calculated from the sum of the first sequence, which is 16 (numSum), and the value of the second sequence corresponding to the center of the first sequence, which is 5 (numMult). numSum*numMult = 16*5 = 80. We can replace the whole loop with one multiplication if we are able to update those two numbers at each step, which fortunately seems to be the case.
If s[i] == '0' then numSum does not change and numMult does not change.
If s[i] == '1' then numSum does not change, only numMult increments by 1, as we shift the whole sequence by one position.
If s[i] == '?' we add the original and the shifted sequence together. numSum is multiplied by 2 and numMult increments by 0.5.
The 0.5 is a bit of a problem, as it is not a whole number. But we know that the result will be a whole number. Fortunately, in modular arithmetic the inverse of two (= 1/2) exists here as a whole number: it is h = (mod+1)/2. As a reminder, the inverse of 2 is a number h such that h*2 = 1 modulo mod (indeed, 2*(mod+1)/2 = mod+1 ≡ 1). Implementation-wise it is easier to multiply numMult by 2 and divide numSum by 2, but that is just a detail; we would need the 0.5 either way. The code:
long CalcOptSimpl(string s)
{
long len = s.length();
long long sum = 0;
const long mod = 1000000007;
long numSum = (mod + 1) / 2;
long long numMult = 0;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
numMult += 2;
}
else if (s[i] == '0')
{
sum += numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
}
else
{
sum = sum * 2 + numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
numSum = (numSum * 2) % mod;
numMult++;
}
}
return (long)(sum % mod);
}
I am pretty sure there is some simple way to arrive at this code, yet I am still unable to see it. But sometimes the path is the goal :-)
If a sequence has N zeros with indexes zero[0], zero[1], ... zero[N - 1], the number of inversions for it would be (zero[0] + zero[1] + ... + zero[N - 1]) - (N - 1) * N / 2. (you should be able to prove it)
For example, 11010 has two zeros with indexes 2 and 4, so the number of inversions would be 2 + 4 - 1 * 2 / 2 = 5.
For all 2^k sequences, you can calculate the two parts separately and then combine them (the total of the second part is subtracted from the total of the first).
1) The first part is zero[0] + zero[1] + ... + zero[N - 1]. Each 0 in the given sequence contributes index * 2^k and each ? contributes index * 2^(k-1) (see the sketch below).
2) The second part is (N - 1) * N / 2. You can calculate this using dynamic programming (maybe you should google and learn this first). In short, use f[i][j] to represent the number of sequences with j zeros among the first i characters of the given sequence.
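To make part 1) concrete, here is a small sketch of just that contribution (powMod and sumOfZeroIndexes are illustrative names; part 2) still needs the DP described above):
#include <string>

long long powMod(long long base, long long exp, long long mod)
{
    long long result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

// sum over all 2^k completions of (zero[0] + ... + zero[N-1]), modulo 1e9+7
long long sumOfZeroIndexes(const std::string& s)
{
    const long long mod = 1000000007;
    long long k = 0;
    for (char c : s)
        if (c == '?') ++k;

    long long all  = powMod(2, k, mod);                    // every fixed '0' appears in all 2^k sequences
    long long half = (k > 0) ? powMod(2, k - 1, mod) : 0;  // a '?' is a '0' in half of them
    long long total = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '0')      total = (total + (long long)i % mod * all) % mod;
        else if (s[i] == '?') total = (total + (long long)i % mod * half) % mod;
    }
    return total;
}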
I'm making a BigInt class in C++ as an exercise. I'm currently working on the multiplication functionality. My BigInts are represented as a fixed-length (and very big) int[], with each entry being a digit of the number entered.
So, BigInt = 324, will result in [0,0,0,..,3,2,4].
I'm currently trying to multiply using this code:
// multiplication
BigInt BigInt::operator*(BigInt const& other) const {
BigInt a = *this;
BigInt b = other;
cout << a << b << endl;
BigInt product = 0;
for(int i = 0; i < arraySize; i++){
int carry = 0;
for(int j = 0; j < arraySize; j++){
product.digits[arraySize - (j + i)] += (carry + (a.digits[j] * b.digits[i]));
carry = (product.digits[arraySize - (j + i)] / 10);
product.digits[arraySize - (j + i)] = (product.digits[arraySize - (j + i)] % 10);
}
product.digits[arraySize - i] += carry;
}
return product;
}
My answer keeps returning 0. For example, 2 * 2 = 0.
It is not certain that this alone will fix your program, but you have undefined behavior because of this:
product.digits[arraySize - (j + i)]
This index arraySize - (j + i) becomes negative when i + j > arraySize, which will obviously occur in your loop.
Basically, when multiplying two numbers with n digits, the result may be as wide as 2n digits. Since you encode all your numbers in a fixed length arraySize, you have to take measures to avoid going out of bounds.
A simple test like if (i + j <= arraySize) would do, or you can change the second loop:
for(int j = 0; j < arraySize - i; j++)
Alternatively, it would be better to use std::vector as the internal representation of your BigInt. It can be sized dynamically to fit your result beforehand.
It is not certain that this will completely fix your code, but it has to be fixed before proceeding with the debugging; it will be easier after removing the UB. Here I agree with @Dúthomhas's note that your indexing through the arrays looks irregular: you go from right to left with the result, but from left to right with the inputs.
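For comparison, a minimal sketch of schoolbook multiplication with one consistent convention (least-significant digit first, std::vector as suggested above; multiplyDigits is just an illustrative name, not the asker's class):
#include <vector>

// digits are stored least-significant first, one decimal digit per element
std::vector<int> multiplyDigits(const std::vector<int>& a, const std::vector<int>& b)
{
    // an n-digit times an m-digit number needs at most n + m digits
    std::vector<long long> acc(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            acc[i + j] += (long long)a[i] * b[j];   // same significance accumulates at i + j

    std::vector<int> product(acc.size());
    long long carry = 0;
    for (std::size_t k = 0; k < acc.size(); ++k) {  // single carry pass at the end
        long long cur = acc[k] + carry;
        product[k] = (int)(cur % 10);
        carry = cur / 10;
    }
    while (product.size() > 1 && product.back() == 0)
        product.pop_back();                          // drop leading zeros
    return product;
}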
I am using this code to convert binary to decimal. But this code will not work for more than 64 bits, as __int64 holds only 8 bytes. Could you please suggest an algorithm to convert more than 64 bits to decimal values? Also, my end result has to be a string. Help is appreciated. Thanks.
int bin2dec(char *bin)
{
__int64 b, k, m, n;
__int64 len, sum = 0;
len = strlen(bin) - 1;
for(k = 0; k <= len; k++)
{
n = (bin[k] - '0'); // char to numeric value
if ((n > 1) || (n < 0))
{
puts("\n\n ERROR! BINARY has only 1 and 0!\n");
return (0);
}
for(b = 1, m = len; m > k; m--)
{
// 1 2 4 8 16 32 64 ... place-values, reversed here
b *= 2;
}
// sum it up
sum = sum + n * b;
}
return(sum);
}
Typically, when dealing with data bigger than what you can store in one integer unit, the solution is one of two things:
Use a character array/string to store the value as "ASCII" (here, one digit per char)
Use multiple integers in an array to store the values, using X bits per element.
There is nothing particularly different about the conversion, just that once you have done X bits, you shift to the next element.
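A minimal sketch of the second option, packing the bits of the input string into 64-bit elements (least-significant element first; parseBinary is just an illustrative name):
#include <cstdint>
#include <string>
#include <vector>

std::vector<std::uint64_t> parseBinary(const std::string& bin)
{
    std::size_t nbits = bin.size();
    std::vector<std::uint64_t> limbs((nbits + 63) / 64, 0);  // 64 bits per element
    for (std::size_t i = 0; i < nbits; ++i) {
        if (bin[nbits - 1 - i] == '1')                       // bit i of the value
            limbs[i / 64] |= std::uint64_t(1) << (i % 64);   // once 64 bits are done, i/64 moves on
    }
    return limbs;
}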
By the way:
int bin2dec(char *bin)
{
int k, n;
int len;
__int64 sum = 0;
len = strlen(bin);
for(k = 0; k < len; k++)
{
n = (bin[k] - '0'); // char to numeric value
if ((n > 1) || (n < 0))
{
puts("\n\n ERROR! BINARY has only 1 and 0!\n");
return (0);
}
// sum it up
sum <<= 1;
sum += n;
}
return(sum);
}
is a bit simpler.
The algorithm is simple: keep dividing by powers of 10 in order to get each decimal place of the value. The trick is being able to store and divide by powers of 10 for numbers bigger than 64 bits. Algorithms for storing big numbers exist and you should find one; though they are not hard to write, they are bigger than is appropriate to type into an answer here on Stack Overflow.
But basically, you create an accumulator bignum, set it to 1 and start multiplying it by 10 until it is bigger in value than your target bignum. Then you divide it by 10 and start the algorithm:
while accum >= 1
    divide source by accum and place the quotient digit in your output string
    subtract that digit times accum from source
    divide accum by 10 and loop
Do you recognize that algorithm? It is probably how you were taught to do long division in grade school. Well, that's how you "print" a binary number in decimal.
There are lots of ways to improve the performance of this. (Hint: you don't have to work in base 10. Work in base 10^8 for 32-bit ints or base 10^17 for 64-bit ints.) But first you need a library that will subtract, add, multiply, divide and compare bignums.
Of course a bignum library probably already has a toString function.
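A related, hedged sketch of the conversion itself, using the simplest variant (repeatedly divide the limb array by 10 and collect the remainders) rather than the power-of-10 accumulator described above. Here the limbs are base 2^32, most-significant first, so every intermediate value fits in 64 bits; limbsToDecimal is an illustrative name.
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

std::string limbsToDecimal(std::vector<std::uint32_t> limbs)
{
    std::string digits;
    while (!limbs.empty()) {
        std::uint64_t rem = 0;
        for (std::size_t i = 0; i < limbs.size(); ++i) {    // one step of long division by 10
            std::uint64_t cur = (rem << 32) | limbs[i];
            limbs[i] = (std::uint32_t)(cur / 10);
            rem = cur % 10;
        }
        digits.push_back((char)('0' + rem));                // remainders are the decimal digits, low first
        while (!limbs.empty() && limbs.front() == 0)
            limbs.erase(limbs.begin());                     // drop leading zero limbs
    }
    if (digits.empty()) digits = "0";
    std::reverse(digits.begin(), digits.end());
    return digits;
}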
You can readily store big numbers (in any base) as a std::deque of digits -- using a deque makes it easy to add digits on either end. You can implement basic arithmetic operations on them, which makes it easy to convert binary to decimal using the standard multiply and add digits algorithm:
std::deque<char> &operator *=(std::deque<char> &a, unsigned b)
{
unsigned carry = 0;
for (auto d = a.rbegin(); d != a.rend(); d++) {
carry += (*d - '0') * b;
*d = (carry % 10) + '0';
carry /= 10; }
while (carry > 0) {
a.push_front((carry % 10) + '0');
carry /= 10; }
return a;
}
std::deque<char> &operator +=(std::deque<char> &a, unsigned b)
{
for (auto d = a.rbegin(); b > 0 && d != a.rend(); d++) {
b += (*d - '0');
*d = (b % 10) + '0';
b /= 10; }
while (b > 0) {
a.push_front((b % 10) + '0');
b /= 10; }
return a;
}
std::string bin2dec(char *bin) {
std::deque<char> tmp{'0'};
while (*bin) {
if (*bin != '0' && *bin != '1') {
puts("\n\n ERROR! BINARY has only 1 and 0!\n");
return ""; }
tmp *= 2;
if (*bin++ == '1')
tmp += 1; }
return std::string(tmp.begin(), tmp.end());
}
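For example, calling it with a value well past 64 bits might look like this (the expected output shown in the comment is 2^69 - 1):
#include <iostream>
#include <string>

int main()
{
    std::string bits(69, '1');                 // 69 one-bits, too big for any built-in integer
    std::cout << bin2dec(&bits[0]) << '\n';    // expected: 590295810358705651711
}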
manually:
int binaryToDec(char *bin)
{
int k, n;
int len=strlen(bin);
int dec = 0;
for(k = 0; k < len; k++)
{
n = (bin[k] - '0');
dec <<= 1;
dec += n;
}
return(dec);
}
you can also consider a bitset:
std::bitset<64> input(bin);
std::cout << input.to_ullong();