C++ exit code 3221225725, Karatsuba multiplication recursive algorithm

The Karatsuba multiplication algorithm implementation does not output any result and exits with code=3221225725.
Here is the message displayed on the terminal:
[Running] cd "d:\algorithms_cpp\" && g++ karatsube_mul.cpp -o karatsube_mul && "d:\algorithms_cpp\"karatsube_mul
[Done] exited with code=3221225725 in 1.941 seconds
Here is the code:
#include <bits/stdc++.h>
using namespace std;
string kara_mul(string n, string m)
{
int len_n = n.size();
int len_m = m.size();
if (len_n == 1 && len_m == 1)
{
return to_string((stol(n) * stol(m)));
}
string a = n.substr(0, len_n / 2);
string b = n.substr(len_n / 2);
string c = m.substr(0, len_m / 2);
string d = m.substr(len_m / 2);
string p1 = kara_mul(a, c);
string p2 = kara_mul(b, d);
string p3 = to_string((stol(kara_mul(a + b, c + d)) - stol(p1) - stol(p2)));
return to_string((stol(p1 + string(len_n, '0')) + stol(p2) + stol(p3 + string(len_n / 2, '0'))));
}
int main()
{
cout << kara_mul("15", "12") << "\n";
return 0;
}
And after fixing this I would also like to know how to multiply two 664 digit integers using this technique.

There are several issues:
The crash you got (exit code 3221225725 = 0xC00000FD, a stack overflow on Windows) is caused by infinite recursion at this call:
kara_mul(a + b, c + d)
As these variables are strings, the + is a string concatenation. This means these arguments evaluate to
n and m, which were the arguments to the current execution of the function.
The correct algorithm would perform a numerical addition here, for which you need to provide an implementation (adding two string representations of potentially very long integers)
if (len_n == 1 && len_m == 1) detects the base case, but the base case should kick in when either of these sizes is 1, not necessarily both. So this should be an || operator, or it should be written as two separate if statements.
The input strings should be split such that b and d are equal in size. This is not what your code does. Note how the Wikipedia article stresses this point:
The second argument of the split_at function specifies the number of digits to extract from the right
stol should never be called on strings that could potentially be too long for conversion to long. So for example, stol(p1) is not safe, as p1 could have 20 or more digits.
As a consequence of the previous point, you'll need to implement functions that add or subtract two string representations of numbers, and also one that can multiply a string representation with a single digit (the base case).
Here is an implementation that corrects these issues:
#include <iostream>
#include <algorithm>
#include <string>
#include <stdexcept>

// i-th decimal digit of n counted from the right; 0 when i is out of range
int digit(const std::string& n, int i) {
    return i >= (int)n.size() ? 0 : n[n.size() - i - 1] - '0';
}

// add two non-negative decimal digit strings
std::string add(const std::string& n, const std::string& m) {
    int len = (int)std::max(n.size(), m.size());
    std::string result;
    int carry = 0;
    for (int i = 0; i < len; i++) {
        int sum = digit(n, i) + digit(m, i) + carry;
        result += (char) (sum % 10 + '0');
        carry = sum >= 10;
    }
    if (carry) result += '1';
    std::reverse(result.begin(), result.end());
    return result;
}

// subtract m from n; n must be >= m
std::string subtract(const std::string& n, const std::string& m) {
    int len = (int)n.size();
    if ((int)m.size() > len) throw std::invalid_argument("subtraction overflow");
    if (n == m) return "0";
    std::string result;
    int carry = 0;
    for (int i = 0; i < len; i++) {
        int diff = digit(n, i) - digit(m, i) - carry;
        carry = diff < 0;
        result += (char) (diff + carry * 10 + '0');
    }
    if (carry) throw std::invalid_argument("subtraction overflow");
    result.erase(result.find_last_not_of('0') + 1); // drop what would be leading zeros (digits are still reversed here)
    std::reverse(result.begin(), result.end());
    return result;
}

// multiply a digit string by a single digit (0..9) via repeated doubling
std::string simple_mul(const std::string& n, int coefficient) {
    if (coefficient < 2) return coefficient ? n : "0";
    std::string result = simple_mul(add(n, n), coefficient / 2);
    return coefficient % 2 ? add(result, n) : result;
}

std::string kara_mul(const std::string& n, const std::string& m) {
    int len_n = n.size();
    int len_m = m.size();
    if (len_n == 1) return simple_mul(m, digit(n, 0)); // base case: either factor is a single digit
    if (len_m == 1) return simple_mul(n, digit(m, 0));
    int len_min2 = std::min(len_n, len_m) / 2; // b and d get the same number of low-order digits
    std::string a = n.substr(0, len_n - len_min2);
    std::string b = n.substr(len_n - len_min2);
    std::string c = m.substr(0, len_m - len_min2);
    std::string d = m.substr(len_m - len_min2);
    std::string p1 = kara_mul(a, c);
    std::string p2 = kara_mul(b, d);
    std::string p3 = subtract(kara_mul(add(a, b), add(c, d)), add(p1, p2));
    return add(add(p1 + std::string(len_min2 * 2, '0'), p2), p3 + std::string(len_min2, '0'));
}
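To the follow-up about 664-digit numbers: nothing extra is needed, because the routine works directly on decimal digit strings of any length (there is no stol anywhere). A minimal usage sketch (the main below is mine, not part of the original answer):
int main()
{
    // the original example ...
    std::cout << kara_mul("15", "12") << "\n";   // prints 180

    // ... and arbitrarily long operands, e.g. two 664-digit numbers read as strings
    std::string x, y;
    std::cin >> x >> y;
    std::cout << kara_mul(x, y) << "\n";
    return 0;
}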

Related

Looking for nbit adder in c++

I was trying to build a 17-bit adder; when overflow occurs it should wrap around, just like int32.
e.g. in int32 addition, if a = 2^31 - 1 and
int res = a + 1
then res = -2^31.
Here is the code I tried; it is not working. Is there a better way? Do I need to convert decimal to binary and then perform the 17-bit operation?
int addOvf(int32_t result, int32_t a, int32_t b)
{
int max = (-(0x01<<16));
int min = ((0x01<<16) -1);
int range_17bit = (0x01<<17);
if (a >= 0 && b >= 0 && (a > max - b)) {
printf("...OVERFLOW.........a=%0d b=%0d",a,b);
}
else if (a < 0 && b < 0 && (a < min - b)) {
printf("...UNDERFLOW.........a=%0d b=%0d",a,b);
}
result = a+b;
if(result<min) {
while(result<min){ result=result + range_17bit; }
}
else if(result>min){
while(result>max){ result=result - range_17bit; }
}
return result;
}
int main()
{
int32_t res,x,y;
x=-65536;
y=-1;
res =addOvf(res,x,y);
printf("Value of x=%0d y=%0d res=%0d",x,y,res);
return 0;
}
You have your constants for max/min int17 reversed and off by one. They should be
max_int17 = (1 << 16) - 1 = 65535
and
min_int17 = -(1 << 16) = -65536.
Then I believe that max_int_n + m == min_int_n + (m-1) and min_int_n - m == max_int_n - (m-1), where n is the bit count and m is some integer in [min_int_n, ... ,max_int_n]. So putting that all together the function to treat two int32's as though they are int17's and add them would be like
int32_t add_as_int17(int32_t a, int32_t b) {
static const int32_t max_int17 = (1 << 16) - 1;
static const int32_t min_int17 = -(1 << 16);
auto sum = a + b;
if (sum < min_int17) {
auto m = min_int17 - sum;
return max_int17 - (m - 1);
} else if (sum > max_int17) {
auto m = sum - max_int17;
return min_int17 + (m - 1);
}
return sum;
}
There is probably some more clever way to do that but I believe the above is correct, assuming I understand what you want.
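For what it's worth, a more compact alternative (just a sketch, assuming two's-complement int32_t, which holds on all mainstream platforms) is to mask the sum to 17 bits and sign-extend bit 16. For the question's example (x = -65536, y = -1) both versions return 65535:
#include <cstdint>

int32_t add_as_int17_masked(int32_t a, int32_t b) {
    // do the addition on unsigned so the intermediate overflow is well defined
    uint32_t sum = static_cast<uint32_t>(a) + static_cast<uint32_t>(b);
    sum &= 0x1FFFFu;                      // keep the low 17 bits
    if (sum & 0x10000u)                   // bit 16 set -> negative in 17-bit two's complement
        return static_cast<int32_t>(sum | ~0x1FFFFu);
    return static_cast<int32_t>(sum);
}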

Input 300 000 sets of numbers with cin or scanf

Hi,
I am participating in a programming contest. My algorithm is fine with up to 5000 sets.
Each set of values consists of three integers.
But when I enter 300 000 sets of numbers, it takes too long.
Limit of running program: 14s.
Fetching data: 576s. (Way too long)
My formatted input is:
300000
a b c
300000 - number of sets
a, b, c - elements of the set
My algorithm (don't judge the code):
#include <iostream>
using namespace std;
int min_replacements(int n, int *ds, int *ps, int *rs);
int max(int a, int b, int c);
bool ot(int a, int b, int c);
bool ooo(int a, int b, int c);
bool to(int a, int b, int c);
int main()
{
int n = 0;
cin >> n;
int *ds, *ps, *rs;
ds = new int[n];
ps = new int[n];
rs = new int[n];
int d{}, p{}, r{};
for (int i = 0; i < n; i++)
{
scanf("%d %d %d", &ds[i], &ps[i], &rs[i]);
printf("%d", i);
}
int t = min_replacements(n, ds, ps, rs);
printf("%d\n", t);
delete[] ds;
delete[] ps;
delete[] rs;
}
bool ot(int a, int b, int c)
{
return (a != 0 && b == 0 && c == 0);
}
bool ooo(int a, int b, int c)
{
return (a == 0 && b != 0 && c == 0);
}
bool to(int a, int b, int c)
{
return (a == 0 && b == 0 && c != 0);
}
int max(int a, int b, int c)
{
int m = 0;
if (a == b && c < a)
{
m = a;
}
if (b == c && a < b)
{
m = b;
}
if (a == c && b < c)
{
m = c;
}
if (b < a && c < a)
{
m = a;
}
if (a < b && c < b)
{
m = b;
}
if (a < c && b < c)
{
m = c;
}
if (a == b && b == c)
{
m = a;
}
return m;
}
int min_replacements(int n, int *ds, int *ps, int *rs)
{
int t = 0;
if (ds[0] == ps[0] && ps[0] == rs[0] && ds[0] == rs[0])
{
return (n + ps[0]) * rs[0];
}
bool loop = true;
while (loop)
{
for (int i = 0; i < n - 1; ++i)
{
if (ot(*(ds + i), *(ps + i), *(rs + i)) || ooo(*(ds + i), *(ps + i), *(rs + i)) || to(*(ds + i), *(ps + i), *(rs + i)))
{
continue;
}
int m = max(*(ds + i), *(ps + i), *(rs + i));
if (m == *(ds + i))
{
*(ps + i + 1) += *(ps + i);
*(rs + i + 1) += *(rs + i);
*(ps + i) = *(rs + i) = 0;
t += 2;
}
if (m == *(ps + i))
{
*(ds + i + 1) += *(ds + i);
*(rs + i + 1) += *(rs + i);
*(ds + i) = *(rs + i) = 0;
t += 2;
}
if (m == *(rs + i))
{
*(ds + i + 1) += *(ds + i);
*(ps + i + 1) += *(ps + i);
*(ps + i) = *(ds + i) = 0;
t += 2;
}
}
for (int i = 0; i < n; ++i)
{
if (ot(*(ds + i), *(ps + i), *(rs + i)) || ooo(*(ds + i), *(ps + i), *(rs + i)) || to(*(ds + i), *(ps + i), *(rs + i)))
{
loop = false;
}
else
{
loop = true;
}
}
if (loop)
{
*ds += *(ds + n - 1);
*ps += *(ps + n - 1);
*rs += *(rs + n - 1);
*(ds + n - 1) = *(ps + n - 1) = *(rs + n - 1) = 0;
t -= 2;
}
}
if (t == 0)
return 0;
return t + 1;
}
I used cin in this algorithm.
Can you help me? Thank you so much.
How do you know the std::cin part is the problem? Did you profile your code? If not, I suggest doing that; it's often surprising which part of the code is taking up the most time. See e.g. How can I profile C++ code running on Linux?.
You're doing a lot of unnecessary work in various parts of the code. For example, your max function does at least 7 comparisons, and looks extremely error-prone to write. You could simply replace the whole function by:
std::max({ a, b, c })
I would also take a look at your min_replacements function and see if it can be simplified. Unfortunately, you're using variable names which are super vague, so it's pretty much impossible to understand what the code should be doing. I suggest using much more descriptive variable names. That way the code will become much easier to reason about. The way it's currently written, there's a very good chance even you yourself won't be able to make sense of it in a month's time.
Just glancing over the min_replacements function though, there's definitely a lot more work going on than necessary. E.g. the last for-loop:
for (int i = 0; i < n; ++i)
{
if (ot(*(ds + i), *(ps + i), *(rs + i)) || ooo(*(ds + i), *(ps + i), *(rs + i)) || to(*(ds + i), *(ps + i), *(rs + i)))
{
loop = false;
}
else
{
loop = true;
}
}
Each loop iteration sets the loop variable. Assuming this code is correct, you don't need the loop at all; just do the check once for i = n - 1. That's O(n) changed to O(1).
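If profiling does show that reading the input dominates, the usual first step in contest code is to pick one I/O family, desynchronize the C++ streams from C stdio, and untie cin from cout. A minimal sketch of just the reading part (same variable names as the question; whether this is actually your bottleneck is an assumption until you measure):
#include <iostream>
#include <vector>

int main()
{
    std::ios_base::sync_with_stdio(false); // stop syncing C++ streams with C stdio
    std::cin.tie(nullptr);                 // don't flush cout before every cin read

    int n = 0;
    std::cin >> n;
    std::vector<int> ds(n), ps(n), rs(n);
    for (int i = 0; i < n; i++)
        std::cin >> ds[i] >> ps[i] >> rs[i]; // note: no per-iteration printf of i here,
                                             // which the original loop did 300 000 times

    // ... then call min_replacements(n, ds.data(), ps.data(), rs.data()) as before ...
    return 0;
}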

Convert string to float or integer without using built in functions (like atoi or atof)

I'm new to C++ and our teacher asked us to write a function that does what the title says. So far I've got a function that converts a string to an integer, but I have no idea how to modify it to make it work if the number in the string represents a float.
int convert(char str[], int size) {
int number = 0;
for (int i = 0; i < size; ++i) {
number += (str[i] - 48)*pow(10, (size - i - 1));
}
return number;
}
If I run:
char myString[] = "12345";
convert(myString, 5);
I get:
12345
But if I run:
char myString[] = "123.45";
convert(myString, 5);
I get:
122845
How could I modify my program to work with floats too? I know the convert function is meant to return an int, so should I use two more functions?
I was thinking about one that determines whether the string is intended to be converted to an integer or a float, and another that'll actually convert the string to a float.
Here is the function for doing so...
#include <cmath>
#include <stdexcept>
#include <string>
#include <type_traits>

template<class T, class S>
T convert_string_to_number(S s)
{
    if (s.empty())
        throw std::invalid_argument("Invalid numerical string!");
    auto result = T(0.l);
    // strip a trailing float suffix such as "1.5f"
    if (s.back() == L'F' || s.back() == L'f')
        s = s.substr(0u, s.size() - 1u);
    // strip a leading sign and remember it; the digit loop below accepts digits only
    const bool negative = s.front() == L'-';
    auto temp = (s.front() == L'-' || s.front() == L'+') ? s.substr(1u) : s;
    auto should_add = false;
    const auto dot = temp.find_first_of(L'.');
    if (!std::is_floating_point<T>::value)
    {
        if (dot != S::npos)
        {
            // integral target: drop the fraction but remember whether to round up
            should_add = dot + 1u < temp.size() && temp.at(dot + 1u) >= L'5';
            temp.erase(temp.begin() + dot, temp.end());
        }
    }
    else if (dot != S::npos)
        temp.erase(temp.begin() + dot); // floating target: remove the dot, rescale at the end
    for (int i = int(temp.size()) - 1; i >= 0; --i)
        if (temp[i] >= L'0' && temp[i] <= L'9')
            result += T(std::pow(10.l, temp.size() - i - 1.l) * (temp[i] - L'0'));
        else
            throw std::invalid_argument("Invalid numerical string!");
    if (dot != S::npos && std::is_floating_point<T>::value)
        result /= T(std::pow(10.l, static_cast<long double>(temp.size() - dot))); // count of fractional digits
    if (!std::is_floating_point<T>::value)
        result = T(result + T(should_add)); // apply the rounding to the magnitude
    return negative ? T(-result) : result;
}
Just use it like you typically would (it expects a std::string or std::wstring)...
auto some_number = convert_string_to_number<float>(std::string(myString));
For the floating point part of the assignment: what about regular expressions? It is also kind of built-in functionality, but general purpose, not designed for your particular task, so I hope your teacher will be fine with this idea.
You can use the following regex: [+-]?([0-9]*[.])?[0-9]+ (I got it from this answer) to detect whether the provided string is a floating-point number. Then you can modify the expression a little bit to capture the +/- sign and the parts before/after the dot separator. Once you extract these features, the task should be relatively simple.
Also please change your method signature to: float convert(const std::string& str).
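A rough sketch of that idea (my own helper, assuming <regex> is acceptable for the assignment): capture the sign, the integer digits and the fractional digits separately, then assemble the value with plain arithmetic, so no atof/stof is involved.
#include <iostream>
#include <regex>
#include <stdexcept>
#include <string>

float convert(const std::string& str) {
    static const std::regex re(R"(([+-]?)([0-9]*)\.?([0-9]*))");
    std::smatch m;
    if (!std::regex_match(str, m, re))
        throw std::invalid_argument("not a number: " + str);
    float value = 0.0f;
    for (char c : m[2].str())           // integer digits
        value = value * 10.0f + (c - '0');
    float scale = 1.0f;
    for (char c : m[3].str()) {         // fractional digits
        scale /= 10.0f;
        value += (c - '0') * scale;
    }
    return m[1].str() == "-" ? -value : value;
}

int main() {
    std::cout << convert("123.45") << " " << convert("-0.5") << "\n"; // 123.45 -0.5
    return 0;
}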
Try this:
#include <iostream>
#include <cmath>
using namespace std;
int convert(char str[], int size) {
int number = 0;
for (int i = 0; i < size; ++i) {
number += (str[i] - 48)*pow(10, (size - i - 1));
}
return number;
}
int pow10(int radix)
{
int r = 1;
for (int i = 0; i < radix; i++)
r *= 10;
return r;
}
float convert2float(char str[], int size) { //size =6
// convert to string_without_decimal
char str_without_decimal[10];
int c = 0;
for (int i = 0; i < size; i++)
{
if (str[i] >= 48 && str[i] <= 57) {
str_without_decimal[c] = str[i];
c++;
}
}
str_without_decimal[c] = '\0'; //str_without_decimal = "12345"
//adjust size if dot present or not. If no dot present => size = c
size = (size != c) ? size - 1 : size; //size = 5 = 6-1 since dot is present
//convert to decimal
int decimal = convert(str_without_decimal, size); //decimal = 12345
//get divisor
int i;
for (i = size; i >= 0; i--) {
if (str[i] == '.') break;
}
int divisor = pow10(size - i); //divisor = 10;
return (float)decimal/(float) divisor; // result = 12345 /10
}
int main()
{
char str[] = "1234.5";
float f = convert2float(str, 6);
cout << f << endl;
return 0;
}

My code gets a TLE (time limit exceeded) on an online judge even though I have coded according to the editorial

This is the question link - QSET - Codechef
This is the editorial link - QSET - Editorial
Basically, each query asks for the number of substrings in some range [L, R]. I have implemented a segment tree to solve this question. I have closely followed the editorial.
I have created a struct to represent a node of the segment tree.
Can someone explain to me how to make this program faster? I'm guessing faster I/O is the key here. Is that so?
#include <iostream>
#include <vector>
#include <algorithm>
#include <string>
#define ll long long
using namespace std;
struct stnode
{
ll ans; // the answer for this interval
ll pre[3]; // pre[i] denotes number of prefixes of interval which modulo 3 give i
ll suf[3]; // suf[i] denotes number of suffixes of interval which modulo 3 give i
ll total; // sum of interval modulo 3
void setLeaf(int value)
{
if (value % 3 == 0) ans = 1;
else ans = 0;
pre[0] = pre[1] = pre[2] = 0;
suf[0] = suf[1] = suf[2] = 0;
pre[value % 3] = 1;
suf[value % 3] = 1;
total = value % 3;
}
void merge(stnode leftChild, stnode rightChild)
{
ans = leftChild.ans + rightChild.ans;
for (int i = 0; i < 3; i++)
for (int j = 0; j < 3; j++)
if ((i + j) % 3 == 0) ans += leftChild.suf[i] * rightChild.pre[j];
pre[0] = pre[1] = pre[2] = 0;
suf[0] = suf[1] = suf[2] = 0;
for (int i = 0; i < 3; i++)
{
pre[i] += leftChild.pre[i] + rightChild.pre[(3 - leftChild.total + i) % 3];
suf[i] += rightChild.suf[i] + leftChild.suf[(3 - rightChild.total + i) % 3];
}
total = (leftChild.total + rightChild.total) % 3;
}
} segtree[400005];
void buildST(string digits, int si, int ss, int se)
{
if (ss == se)
{
segtree[si].setLeaf(digits[ss] - '0');
return;
}
long left = 2 * si + 1, right = 2 * si + 2, mid = (ss + se) / 2;
buildST(digits, left, ss, mid);
buildST(digits, right, mid + 1, se);
segtree[si].merge(segtree[left], segtree[right]);
}
stnode getValue(int qs, int qe, int si, int ss, int se)
{
if (qs == ss && se == qe)
return segtree[si];
stnode temp;
int mid = (ss + se) / 2;
if (qs > mid)
temp = getValue(qs, qe, 2 * si + 2, mid + 1, se);
else if (qe <= mid)
temp = getValue(qs, qe, 2 * si + 1, ss, mid);
else
{
stnode temp1, temp2;
temp1 = getValue(qs, mid, 2 * si + 1, ss, mid);
temp2 = getValue(mid + 1, qe, 2 * si + 2, mid + 1, se);
temp.merge(temp1, temp2);
}
return temp;
}
void updateTree(int si, int ss, int se, int index, int new_value)
{
if (ss == se)
{
segtree[si].setLeaf(new_value);
return;
}
int mid = (ss + se) / 2;
if (index <= mid)
updateTree(2 * si + 1, ss, mid, index, new_value);
else
updateTree(2 * si + 2, mid + 1, se, index, new_value);
segtree[si].merge(segtree[2 * si + 1], segtree[2 * si + 2]);
}
int main()
{
ios_base::sync_with_stdio(false);
int n, m; cin >> n >> m;
string digits; cin >> digits;
buildST(digits, 0, 0, n - 1);
while (m--)
{
int q; cin >> q;
if (q == 1)
{
int x; int y; cin >> x >> y;
updateTree(0, 0, n - 1, x - 1, y);
}
else
{
int c, d; cin >> c >> d;
cout << getValue(c-1, d-1, 0, 0, n - 1).ans << '\n';
}
}
}
I am getting TLE for larger test cases, ie subtasks 3 and 4 (check the problem page). For subtasks 1 and 2, it gets accepted.
[www.codechef.com/viewsolution/5909107] is an accepted solution. It has pretty much the same code structure except that scanf is used instead of cin. But, I turned off the sync_with_stdio so that shouldn't be a differentiator, right?
I found out what was making this program slow. In the buildST function, I pass the string digits by value. Since the function is recursive and the input is fairly large, this creates many copies of the string digits, thus incurring a large overhead.
I declared a char digits[] at the start of the program and modified the method buildST as follows (basically the same, but without string digits as a parameter):
void buildST(int si, int ss, int se)
{
if (ss == se)
{
segtree[si].setLeaf(digits[ss] - '0');
return;
}
long left = 2 * si + 1, right = 2 * si + 2, mid = (ss + se) / 2;
buildST(left, ss, mid);
buildST(right, mid + 1, se);
segtree[si].merge(segtree[left], segtree[right]);
}
This solution got accepted.
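An equivalent fix, without introducing a global array, is to keep the parameter but pass it by const reference, so the recursion never copies the string; the call in main stays exactly as it was:
void buildST(const std::string& digits, int si, int ss, int se)
{
    if (ss == se)
    {
        segtree[si].setLeaf(digits[ss] - '0');
        return;
    }
    int left = 2 * si + 1, right = 2 * si + 2, mid = (ss + se) / 2;
    buildST(digits, left, ss, mid);
    buildST(digits, right, mid + 1, se);
    segtree[si].merge(segtree[left], segtree[right]);
}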

How to convert a decimal string to binary string?

I have a decimal string like this (length < 5000):
std::string decimalString = "555";
Is there a standard way to convert this string to binary representation? Like this:
std::string binaryString = "1000101011";
Update.
This post helps me.
As the number is very large, you can use a big integer library (boost, maybe?), or write the necessary functions yourself.
If you decide to implement the functions yourself, one way is to implement the old pencil-and-paper long division method in your code, where you'll need to divide the decimal number repeatedly by 2 and accumulate the remainders in another string. May be a little cumbersome, but division by 2 should not be so hard.
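To illustrate the pencil-and-paper idea, here is a sketch (the helper names are mine): divide the digit string by 2 in place and collect the remainders, least significant bit first.
#include <algorithm>
#include <iostream>
#include <string>

// Divide a decimal digit string by 2 in place (long division), returning the remainder (0 or 1).
int divideByTwo(std::string& decimal) {
    int remainder = 0;
    for (char& c : decimal) {
        int current = remainder * 10 + (c - '0');
        c = char('0' + current / 2);
        remainder = current % 2;
    }
    // strip leading zeros produced by the division, but keep at least one digit
    auto firstNonZero = decimal.find_first_not_of('0');
    decimal.erase(0, firstNonZero == std::string::npos ? decimal.size() - 1 : firstNonZero);
    return remainder;
}

std::string decimalToBinary(std::string decimal) {
    std::string binary;
    while (decimal != "0")
        binary += char('0' + divideByTwo(decimal)); // remainders come out least significant first
    std::reverse(binary.begin(), binary.end());
    return binary.empty() ? "0" : binary;
}

int main() {
    std::cout << decimalToBinary("555") << "\n"; // 1000101011
    return 0;
}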
Since 10 is not a power of two (nor the other way round), you're out of luck. You will have to implement arithmetic in base 10. You need the following two operations:
Integer division by 2
Checking the remainder after division by 2
Both can be computed by the same algorithm.
Alternatively, you can use one of the various big integer libraries for C++, such as GNU MP or Boost.Multiprecision.
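For the library route, a short sketch with Boost.Multiprecision (assuming Boost is available; cpp_int parses the decimal string directly, at any length):
#include <boost/multiprecision/cpp_int.hpp>
#include <algorithm>
#include <iostream>
#include <string>

int main() {
    boost::multiprecision::cpp_int n("555"); // parses the decimal string
    std::string binary;
    if (n == 0) binary = "0";
    while (n > 0) {
        binary += (n % 2 != 0) ? '1' : '0';  // least significant bit first
        n /= 2;
    }
    std::reverse(binary.begin(), binary.end());
    std::cout << binary << "\n";             // 1000101011
    return 0;
}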
I tried to do it.. I don't think my answer is right but here is the IDEA behind what I was trying to do..
Let's say we have 2 decimals:
100 and 200..
To concatenate these, we can use the formula:
a * CalcPower(b) + b where CalcPower is defined below..
Knowing this, I tried to split the very long decimal string into chunks of 4. I convert each string to binary and store them in a vector..
Finally, I go through each string and apply the formula above to concatenate each binary string into one massive one..
I didn't get it working, but here is the code... maybe someone else can see where I went wrong. BinaryAdd, BinaryMulDec and CalcPower work perfectly fine; the problem is actually in ToBinary.
#include <iostream>
#include <bitset>
#include <limits>
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>
std::string BinaryAdd(std::string First, std::string Second)
{
int Carry = 0;
std::string Result;
while(Second.size() > First.size())
First.insert(0, "0");
while(First.size() > Second.size())
Second.insert(0, "0");
for (int I = First.size() - 1; I >= 0; --I)
{
int FirstBit = First[I] - 0x30;
int SecondBit = Second[I] - 0x30;
Result += static_cast<char>((FirstBit ^ SecondBit ^ Carry) + 0x30);
Carry = (FirstBit & SecondBit) | (SecondBit & Carry) | (FirstBit & Carry);
}
if (Carry)
Result += 0x31;
std::reverse(Result.begin(), Result.end());
return Result;
}
std::string BinaryMulDec(std::string value, int amount)
{
if (amount == 0)
{
for (auto &s : value)
{
s = 0x30;
}
return value;
}
std::string result = value;
for (int I = 0; I < amount - 1; ++I)
result = BinaryAdd(result, value);
return result;
}
std::int64_t CalcPowers(std::int64_t value)
{
std::int64_t t = 1;
while(t < value)
t *= 10;
return t;
}
std::string ToBinary(const std::string &value)
{
std::vector<std::string> sets;
std::vector<int> multipliers;
int Len = 0;
int Rem = value.size() % 4;
for (auto it = value.end(), jt = value.end(); it != value.begin() - 1; --it)
{
if (Len++ == 4)
{
std::string t = std::string(it, jt);
sets.push_back(std::bitset<16>(std::stoull(t)).to_string());
multipliers.push_back(CalcPowers(std::stoull(t)));
jt = it;
Len = 1;
}
}
if (Rem != 0 && Rem != value.size())
{
sets.push_back(std::bitset<16>(std::stoull(std::string(value.begin(), value.begin() + Rem))).to_string());
}
auto formula = [](std::string a, std::string b, int mul) -> std::string
{
return BinaryAdd(BinaryMulDec(a, mul), b);
};
std::reverse(sets.begin(), sets.end());
std::reverse(multipliers.begin(), multipliers.end());
std::string result = sets[0];
for (std::size_t i = 1; i < sets.size(); ++i)
{
result = formula(result, sets[i], multipliers[i]);
}
return result;
}
void ConcatenateDecimals(std::int64_t* arr, int size)
{
auto formula = [](std::int64_t a, std::int64_t b) -> std::int64_t
{
return (a * CalcPowers(b)) + b;
};
std::int64_t val = arr[0];
for (int i = 1; i < size; ++i)
{
val = formula(val, arr[i]);
}
std::cout<<val;
}
int main()
{
std::string decimal = "64497387062899840145";
//6449738706289984014 = 0101100110000010000100110010111001100010100000001000001000001110
/*
std::int64_t arr[] = {644, 9738, 7062, 8998, 4014};
ConcatenateDecimals(arr, 5);*/
std::cout<<ToBinary(decimal);
return 0;
}
I found my old code that solves a sport programming task:
a_i -> a_j (convert the number a from base i to base j)
2 <= i, j <= 36; 0 <= a <= 10^1000
time limit: 1sec
Execution time was ~0.039 s in the worst case. The multiplication, addition and division routines are very fast because they work in base 10^9, but the implementation can still be optimized quite a bit, I think.
source link
#include <iostream>
#include <string>
#include <vector>
using namespace std;
#define sz(x) (int((x).size()))
typedef vector<int> vi;
typedef long long llong;
int DigToNumber(char c) {
if( c <= '9' && c >= '0' )
return c-'0';
return c-'A'+10;
}
char NumberToDig(int n) {
if( n < 10 )
return '0'+n;
return n-10+'A';
}
const int base = 1000*1000*1000;
void mulint(vi& a, int b) { //a*= b
for(int i = 0, carry = 0; i < sz(a) || carry; i++) {
if( i == sz(a) )
a.push_back(0);
llong cur = carry + a[i] * 1LL * b;
a[i] = int(cur%base);
carry = int(cur/base);
}
while( sz(a) > 1 && a.back() == 0 )
a.pop_back();
}
int divint(vi& a, int d) { // carry = a%d; a /= d; return carry;
int carry = 0;
for(int i = sz(a)-1; i >= 0; i--) {
llong cur = a[i] + carry * 1LL * base;
a[i] = int(cur/d);
carry = int(cur%d);
}
while( sz(a) > 1 && a.back() == 0 )
a.pop_back();
return carry;
}
void add(vi& a, vi& b) { // a += b
for(int i = 0, c = 0, l = max(sz(a),sz(b)); i < l || c; i++) {
if( i == sz(a) )
a.push_back(0);
a[i] += ((i<sz(b))?b[i]:0) + c;
c = a[i] >= base;
if( c ) a[i] -= base;
}
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int from, to; cin >> from >> to;
string s; cin >> s;
vi res(1,0); vi m(1,1); vi tmp;
for(int i = sz(s)-1; i >= 0; i--) {
tmp.assign(m.begin(), m.end());
mulint(tmp,DigToNumber(s[i]));
add(res,tmp); mulint(m,from);
}
vi ans;
while( sz(res) > 1 || res.back() != 0 )
ans.push_back(divint(res,to));
if( sz(ans) == 0 )
ans.push_back(0);
for(int i = sz(ans)-1; i >= 0; i--)
cout << NumberToDig(ans[i]);
cout << "\n";
return 0;
}
How "from -> to" works for string "s":
accumulate the big number (vector<int>) "res" with s[i]*from^(|s|-i-1), for i = |s|-1..0
compute the digits by repeatedly dividing "res" by "to" while res > 0, saving the remainders to another vector
send it to output digit-by-digit (you can use ostringstream instead)
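As a usage example tying this back to the question: feeding the program
10 2
555
on standard input should print 1000101011, i.e. decimal 555 rendered in base 2.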
PS: I've noticed that the nickname of the thread starter is Denis, and I think this link may be useful too.