So I am learning C++, and in one of the books I'm reading, there is an example for finding GCF (greatest common factor). The function is as follows:
int gcf(int a, int b) {
    if (b == 0) {
        return a;
    }
    else {
        return gcf(b, a % b);
    }
}
What I don't understand is that if I put in 15 and 5 for example, then
a = 15
b = 5
b is not 0 so then the else statement executes
(5, 15%5 = 0) so since b is now 0 it returns a, which is 5.
That makes sense, but if I reverse the numbers, why/how do I get the same answer?
a = 5
b = 15
b is not 0 so then the else statement executes
(15, 5%15) but 5%15 is .3 or 1/3, but in C++, 5%15 returns 5.
I don't understand where the 5 comes from; if anything, since it's an integer, I thought it might return 0, but the answer isn't 15 either, so that can't be it.
What you're doing is integer calculation - no floating points or fractions involved.
5 % 15 is actually the remainder you get after dividing 5 by 15, and that is, of course, 5 (the quotient would be 0).
First call, gcf(5, 15):            5 divided by 15 is 0 with remainder 5
First recursive call, gcf(15, 5): 15 divided by 5 is 3 with remainder 0
Second recursive call, gcf(5, 0):  b is 0, so it returns a, which is 5
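If it helps to see those values flow through the code, here is a small runnable trace of the same recursion (just a sketch; the printing is only there to show the calls):

#include <iostream>
int gcf(int a, int b) {
    std::cout << "gcf(" << a << ", " << b << ")\n";
    if (b == 0)
        return a;              // base case: b is 0, so a is the answer
    return gcf(b, a % b);      // a % b is the remainder of dividing a by b
}
int main() {
    std::cout << gcf(5, 15) << "\n";   // prints gcf(5, 15), gcf(15, 5), gcf(5, 0), then 5
    return 0;
}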
The modulo operator is different from division: division gives you the quotient, but the modulo operator gives you the remainder.
So in your case, when a = 5 and b = 15, a % b is 5 (the quotient 5/15 is 0 and the remainder is 5), so the next call is gcf(15, 5), and that is why you still end up with 5. Check the following links for greater clarity on the modulo operator:
http://www.cplusplus.com/doc/tutorial/operators/
http://www.cprogramming.com/tutorial/modulus.html
In integer division, 5/15 = 0. Since 5%15 is the remainder, it needs to be 5. C and C++ mandate that for any a and nonzero b, a/b*b + a%b == a.
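A quick way to convince yourself of that identity is to check it for a few values; a minimal sketch:

#include <cassert>
int main() {
    // for any a and nonzero b, C and C++ guarantee a/b*b + a%b == a
    assert( 5 / 15 * 15 +  5 % 15 ==  5);   //  0*15 + 5
    assert(15 /  5 *  5 + 15 %  5 == 15);   //  3*5  + 0
    assert( 9 / -5 * -5 +  9 % -5 ==  9);   // -1*-5 + 4
    return 0;
}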
If you are interested, the piece of code you have written there is called Euclid's Algorithm which is based upon Euclid's Lemma (big surprise there). Although I heard from a professor that some people might refer to different formulations of Euclid's Lemma. My Higher Algebra book particularly refers to it as "Equal gcd's".
It states:
Let a, b, q, and c be integers with a=qb+c. Then gcd(a,b)=gcd(b,c)
gcd(a,b) refers to the greatest common divisor of a and b.
This seems to be precisely what you are doing in your program.
Any integer a can be written as qb + c for any nonzero b. This means that a is a product qb plus some remainder c. The remainder here is what you are calculating when you use the % operator. If we let a = 12 and b = 5 then we can write 12 = 5q + c. Let q be 2. Then our remainder c is 2. Perhaps these things are elementary, but hopefully this is nice background to supplement your book's explanation.
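To tie the lemma back to the code: each recursive call replaces gcd(a, b) with gcd(b, c), where c = a % b. An equivalent iterative sketch (my own phrasing of the same idea, not the book's code):

int gcd(int a, int b) {
    while (b != 0) {
        int c = a % b;   // a = q*b + c, and by the lemma gcd(a, b) == gcd(b, c)
        a = b;
        b = c;
    }
    return a;            // once the remainder hits 0, the last nonzero value is the gcd
}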
Related
I guessed that 5 % 2 is 1 and -5 % 2 is -1.
But in Python, -5 % 2 gives me the same result as 5 % 2.
I don't think it's a math problem.
>>> -5 % 2
1    (I think this should be -1)
>>> 5 % 2
1
>>> -7 % 6
5    (I think this should be -1)
>>> 7 % 6
1
Why? Because the modulo operator is defined that way in Python.
The documentation states:
The modulo operator always yields a result with the same sign as its
second operand (or zero); [...]
And:
The function math.fmod() returns a result whose sign matches the
sign of the first argument instead, [...] Which approach is more
appropriate depends on the application.
You can look at the % operation in at least a couple of different ways. One important point of view is that m % n finds the element of Z[n] which is congruent to m, where Z[n] is an algebraic representation of the integers restricted to 0, 1, 2, ..., n-1, called the ring of integers modulo n. Note that all integers, positive, negative, and 0, are congruent to some element 0, 1, 2, ..., n-1 in Z[n].
This ring (that is, this set plus certain operations on it) has many well-known and useful properties. For that reason, it's often advantageous to try to cast a problem in a form that leads to Z[n], where it may be easier to work. This, ultimately, is the reason the Python % was given its definition -- in the end, it has to do with operations in the ring of integers modulo n.
This article about modular arithmetic (in particular, the part about integers modulo n) could be a good starting point if you'd like to know more about this topic.
I have seen many answers for questions concerning modulo of negative numbers. Every answer placed the standard
(a/b)*b + a%b is equal to a
explanation. I can calculate any modulo with this method, and I understand it's necessary to employ a modulo function that adds b to the value of a%b if it is negative for the modulo to make sense.
I am trying to make sense of this in layman's terms. Just what is the modulo of a negative number? I read somewhere that you can calculate the proper modulo of negative numbers by hand using some layman's method of just adding numbers together. This would be helpful, because the (a/b)*b + a%b method is a little tedious.
To clarify, the modulo of a positive number can be interpreted in laymen's terms as the remainder when you divide the numbers. Obviously this isn't true in the case of negative numbers, so how do you properly "make sense" of the result?
This used to be implementation-defined in older revisions of C++, but now it's all fully specified:
Division truncates, i.e. a / b is the mathematical value with the fractional part discarded. For example, 9 / -5 is the truncation of −1.8, so it's -1.
The remainder operation a % b is defined by the identity you presented. So let's compute: (a / b) * b is -1 * -5, which is 5, so 9 % -5 is 9 - 5 = 4.
By contrast, -9 % 5 is -4. So even though a / -b is the same as -a / b, a % -b is in general different from -a % b. (Similarly, the mathematical notion of modular equivalence, where two integers are congruent modulo n if they differ by an integral multiple of n, is invariant under replacing n with -n.)
TL;DR: There is a difference between modulo operator which is used in math and C++ % operator.
For example, let f(x) = x % 4. Then:
x            : -9 -8 -7 -6 -5 -4 -3 -2 -1  0  1  2  3  4  5  6  7  8  9
f(x) in math :  3  0  1  2  3  0  1  2  3  0  1  2  3  0  1  2  3  0  1
f(x) in C    : -1  0 -3 -2 -1  0 -3 -2 -1  0  1  2  3  0  1  2  3  0  1
               ^~~~~~~~~~~~~~~~~~~~~~~~~^
               This part is different
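If you want to reproduce that table yourself, here is a quick sketch; the "math" row is obtained by forcing the result into 0..3:

#include <iostream>
int main() {
    for (int x = -9; x <= 9; ++x) {
        int c = x % 4;                    // C/C++ remainder: sign follows x
        int m = ((x % 4) + 4) % 4;        // always lands in 0..3, like the math row
        std::cout << x << ": C " << c << ", math " << m << "\n";
    }
    return 0;
}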
You don't need any special tricks to compute C++-style % of a negative number.
Just use a%b == a - (a/b)*b, which is derived from (a/b)*b + a%b == a.
Long answer:
Cppreference says following:
The binary operator % yields the remainder of the integer division of the first operand by the second (after usual arithmetic conversions; note that the operand types must be integral types). If the quotient a/b is representable in the result type, (a/b)*b + a%b == a. If the second operand is zero, the behavior is undefined. If the quotient a/b is not representable in the result type, the behavior of both a/b and a%b is undefined (that means INT_MIN%-1 is undefined on 2's complement systems)
Note: Until C++11, if one or both operands to binary operator % were negative, the sign of the remainder was implementation-defined, as it depends on the rounding direction of integer division. The function std::div provided well-defined behavior in that case.
The important parts are:
(a/b)*b + a%b == a.
"Until C++11, if one or both operands to binary operator % were negative, the sign of the remainder was implementation-defined."
This implies that since C++11 the operator is well-defined for negative operands too.
There is no mention of any special handling of negative operands, thus we can say that the above identity holds for them too.
From (a/b)*b + a%b == a we can easily derive a formula for a%b:
a%b == a - (a/b)*b
If you think about it, this formula effectively ignores the sign of b: it works as if the modulo were computed with the absolute values, and the sign of a is then applied to the result.
If you want to compute the "classical" modulo, you may use something like the following function:
#include <type_traits>

template <typename T, typename TT> constexpr T true_mod(T a, TT b)
{
    static_assert(std::is_integral<T>::value &&
                  std::is_integral<TT>::value, "Argument types must be integral.");
    if (a >= 0)
        return a % b;                                // non-negative a: plain % already works
    else
        return (b >= 0 ? b : -b) - 1 + (a + 1) % b;  // shift a negative a into [0, |b|)
}
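A couple of quick checks of the function above (my own examples):

#include <cassert>
int main() {
    assert(true_mod(-3, 4) == 1);   // plain -3 % 4 would give -3
    assert(true_mod(-8, 4) == 0);
    assert(true_mod( 7, 4) == 3);
    return 0;
}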
(a/b)*b + a%b is equal to a
Even if this identity is true, the results of / and % can change from one language to another.
The difference comes from how the division result is rounded.
For example, in Python, I have:
>>> # in python, use "//" for the floor division
>>> 3 // 4 # 0.75 is rounded to 0 : OK
0
>>> 3 % 4 # still as expected
3
>>> 0 * 4 + 3 # standard validated
3
>>> (-3) // 4 # -0.75 is rounded to -1, not 0 (the floor)
-1
>>> (-3) % 4 # the result is consistent; a modulo guaranteed to be between 0 and b is very useful
1
>>> (-1) * 4 + 1 # standard validated
-3
>>> 3 // (-4) # -0.75 is rounded to -1, not 0 (the floor)
-1
>>> 3 % (-4) # still a number between 0 and b
-1
>>> (-1) * (-4) + (-1) # standard validated
3
SUMMARY:
MODULO TEST: language=python
a=3 b=4 a/b=0 a%b=3 standard:true
a=-3 b=4 a/b=-1 a%b=1 standard:true
a=3 b=-4 a/b=-1 a%b=-1 standard:true
a=-3 b=-4 a/b=0 a%b=-3 standard:true
If I remember correctly, the modulo doesn't work like that in C, even though the identity is still satisfied. It can be very confusing.
I've just written a little program to test the results in C:
#include <stdio.h>
void test(int a, int b) {
    int q = a/b;
    int r = a%b;
    int ok = q*b+r == a;
    printf("a=%-2d b=%-2d a/b=%-2d a%%b=%-2d standard:%s\n", a, b, q, r, ok?"true":"false");
}

int main(int argc, char const *argv[]) {
    printf("MODULO TEST: language=c\n");
    test( 3, 4);
    test(-3, 4);
    test( 3,-4);
    test(-3,-4);
    return 0;
}
which gives:
MODULO TEST: language=c
a=3 b=4 a/b=0 a%b=3 standard:true
a=-3 b=4 a/b=0 a%b=-3 standard:true
a=3 b=-4 a/b=0 a%b=3 standard:true
a=-3 b=-4 a/b=0 a%b=-3 standard:true
So yes, the identity alone is not enough to pin down a unique definition of the modulo of two (possibly negative) numbers.
You could use this code when the left number has an unknown sign:
int mod = a % b;
if (mod*b < 0) mod += b;
This code will give you a number between 0 and b all the time, like in Python (0 <= mod < b, or b < mod <= 0 if b is negative).
The multiplication by b is unnecessary if b is known to be strictly positive (the most common case).
EDIT
Using an XOR is better than a multiplication, as it avoids the risk of overflow in mod*b (but see EDIT 2 below: this version adds b even when the remainder is 0 and b is negative).
int mod = a % b;
if ((mod < 0) ^ (b < 0)) mod += b;
And when b is strictly positive:
int mod = a % b;
if (mod < 0) mod += b;
EDIT 2 (2018-10-09)
Better, use this to get Python-style division (modulo between 0 included and b excluded) in C:
int q = a / b;
int r = a % b;
if ((b<0) ? (r>0) : (r<0)) {  /* the remainder has the wrong sign for floored division */
q -= 1;
r += b;
}
It handles the "extreme" cases correctly, like when b is negative and divides a exactly (e.g. 6 % (-3)): the result must be 0.
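Wrapped in a function so the four sign combinations can be checked against Python's results (the name floordiv is just for illustration):

#include <cassert>
void floordiv(int a, int b, int &q, int &r) {
    q = a / b;
    r = a % b;
    if ((b < 0) ? (r > 0) : (r < 0)) {   // remainder has the wrong sign: shift one step
        q -= 1;
        r += b;
    }
}
int main() {
    int q, r;
    floordiv(-3, 4, q, r);  assert(q == -1 && r ==  1);   // Python: -3 // 4 == -1, -3 % 4 == 1
    floordiv( 3, -4, q, r); assert(q == -1 && r == -1);   // Python:  3 // -4 == -1, 3 % -4 == -1
    floordiv( 6, -3, q, r); assert(q == -2 && r ==  0);   // exact division stays exact
    return 0;
}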
The division algorithm states that given two integers a and d, with d ≠ 0, there exist unique integers q and r such that a = qd + r and 0 ≤ r < |d|, where |d| denotes the absolute value of d. The integer q is the quotient, r is the remainder, d is the divisor, and a is the dividend.
Prompt the user for a dividend and divisor and then display the division algorithm's results:
If a = 17 and d = 3, then q = 5 and r = 2, since 17 = 5 * 3 + 2.
If a = 17 and d = -3, then q = -5 and r = 2, since 17 = -5 * -3 + 2.
The C++ operators for integer division do not conform to the division algorithm. Explain in output displayed to the user of the program when to expect results that disagree with the division algorithm. The program should not attempt to resolve this issue.
Ok, so I've been trying to figure this out for a few days now and I just can't crack it. The only way I can think of solving this is by using mod to find r or using successive subtraction to find q. However, I'm pretty sure both of those solutions don't really count. Is there some other way to solve this problem?
[Edit] I don't think successive subtraction works because that's just Euclid's algorithm, so I really wouldn't be using this algorithm, and using modulus would just be like using the C++ division operator.
Here's a hint. For positive numbers, everything works out ok.
The C++ expression q = 17/3 results in q == 5.
But for negative numbers:
The expression q = -17/3 results in q == -5
With the way the question is worded, I'm pretty sure you are supposed to use mod to find r and the division operator to find q. It says straight up "The C++ operators for integer division do not conform to the division algorithm. Explain in output displayed to the user of the program when to expect results that disagree with the division algorithm. The program should not attempt to resolve this issue." This means, don't over-think it, and instead just demonstrate how the operators don't conform to the algorithm without trying to fix anything.
Suppose your code naively uses C++ operators to calculate q and r as follows:
int q = a / d;
int r = a % d;
You then get wrong (in this case, wrong just means they don't "conform" to your algorithm) values for both q and r in the following two cases:
a = -17, d = 3
The code will result in:
q = -5, r = -2
This does not conform to the division algorithm because the rule 0 <= r < |d| requires a non-negative remainder; the conforming values are q = -6, r = 1, since -6 * 3 + 1 = -17.
a = -17, d = -3
The code will result in:
q = 5, r = -2
Again this does not conform, because r must be non-negative; the conforming values are q = 6, r = 1, since 6 * -3 + 1 = -17.
In the other cases, i.e. where both a and d are positive or when a is positive and d is negative, the C++ operators will correctly "conform" to your algorithm.
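If it helps, here is a rough sketch of the kind of program the assignment seems to describe: compute q and r with the C++ operators and simply report whether they satisfy the division algorithm, without trying to fix them (the wording of the messages is mine):

#include <iostream>
int main() {
    int a, d;
    std::cout << "Enter dividend a and divisor d: ";
    std::cin >> a >> d;
    if (d == 0) { std::cout << "The divisor must be nonzero.\n"; return 1; }
    int q = a / d;                       // C++ truncates toward zero
    int r = a % d;                       // so r takes the sign of a
    int absd = d < 0 ? -d : d;
    std::cout << "C++ gives q = " << q << ", r = " << r << "\n";
    if (r >= 0 && r < absd)
        std::cout << "This agrees with the division algorithm.\n";
    else
        std::cout << "This disagrees with the division algorithm (it requires 0 <= r < |d|);\n"
                     "expect this whenever the dividend is negative and the divisor does not divide it.\n";
    return 0;
}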
The C++ integer division operator matches the division algorithm only when the dividend is non-negative; for a negative dividend the remainder comes out negative, because the division always rounds toward zero.
As the question doesn't clearly state the problem, I presume you want to output the division like the examples provided, so simple code would be:
int a = 17;
int b = -3;
cout << "If a=" << a << " and b=" << b << " then q=" << a/b << " and r=" << a%b;
Math:
If you have an equation like this:
x ≡ 3 (mod 7)
x could be ... -4, 3, 10, 17, ..., or more generally:
x = 3 + k * 7
where k can be any integer. I don't know whether a modulo operation is defined in math, but the factor ring certainly is.
Python:
In Python, you will always get non-negative values when you use % with a positive m:
#!/usr/bin/python
# -*- coding: utf-8 -*-
m = 7
for i in xrange(-8, 10 + 1):
    print(i % m)
Results in:
6 0 1 2 3 4 5 6 0 1 2 3 4 5 6 0 1 2 3
C++:
#include <iostream>
using namespace std;
int main() {
    int m = 7;
    for (int i = -8; i <= 10; i++) {
        cout << (i % m) << endl;
    }
    return 0;
}
Will output:
-1 0 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 0 1 2 3
ISO/IEC 14882:2003(E) - 5.6 Multiplicative operators:
The binary / operator yields the quotient, and the binary % operator
yields the remainder from the division of the first expression by the
second. If the second operand of / or % is zero the behavior is
undefined; otherwise (a/b)*b + a%b is equal to a. If both operands are
nonnegative then the remainder is nonnegative; if not, the sign of the
remainder is implementation-defined 74).
and
74) According to work underway toward the revision of ISO C, the
preferred algorithm for integer division follows the rules defined in
the ISO Fortran standard, ISO/IEC 1539:1991, in which the quotient is
always rounded toward zero.
Source: ISO/IEC 14882:2003(E)
(I couldn't find a free version of ISO/IEC 1539:1991. Does anybody know where to get it from?)
The operation seems to be defined like this: a % b == a - (a/b)*b, with the quotient a/b rounded toward zero.
Question:
Does it make sense to define it like that?
What are the arguments for this specification? Is there a place where the people who create such standards discuss it? Where can I read something about the reasons why they decided to make it this way?
Most of the time when I use modulo, I want to access elements of a data structure. In this case, I have to make sure that mod returns a non-negative value. So, for this case, it would be good if mod always returned a non-negative value.
(Another usage is the Euclidean algorithm. As you could make both numbers positive before using this algorithm, the sign of modulo would not matter there.)
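For example, wrapping an index into a fixed-size buffer is the kind of code I mean (just a sketch):

#include <array>
#include <iostream>
int main() {
    std::array<int, 5> buf = {10, 20, 30, 40, 50};
    int i = -2;                          // e.g. "two positions before the start"
    int wrapped = ((i % 5) + 5) % 5;     // forced non-negative: 3
    std::cout << buf[wrapped] << "\n";   // prints 40
    // buf[i % 5] would use index -2, which is out of bounds
    return 0;
}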
Additional material:
See Wikipedia for a long list of what modulo does in different languages.
On x86 (and other processor architectures), integer division and modulo are carried out by a single operation, idiv (div for unsigned values), which produces both quotient and remainder (for word-sized arguments, in AX and DX respectively). This is exposed by the C library function div (std::div in C++), and the compiler can optimise a combined division and remainder down to a single instruction!
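For example, std::div from <cstdlib> returns both values at once (whether it compiles down to one idiv is of course up to the compiler):

#include <cstdlib>
#include <iostream>
int main() {
    std::div_t d = std::div(-17, 5);               // quotient and remainder from one call
    std::cout << d.quot << " " << d.rem << "\n";   // prints -3 -2 (rounded toward zero)
    return 0;
}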
Integer division respects two rules:
Non-integer quotients are rounded towards zero; and
the equation dividend = quotient*divisor + remainder is satisfied by the results.
Accordingly, when dividing a negative number by a positive number, the quotient will be negative (or zero).
So this behaviour can be seen as the result of a chain of local decisions:
Processor instruction set design optimises for the common case (division) over the less common case (modulo);
Consistency (rounding towards zero, and respecting the division equation) is preferred over mathematical correctness;
C prefers efficiency and simplicity (especially given the tendency to view C as a "high level assembler"); and
C++ prefers compatibility with C.
Back in the day, someone designing the x86 instruction set decided it was right and good to round integer division toward zero rather than round down. (May the fleas of a thousand camels nest in his mother's beard.) To keep some semblance of math-correctness, operator REM, which is pronounced "remainder", had to behave accordingly. DO NOT read this: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/rzatk/REM.htm
I warned you. Later someone doing the C spec decided it would be conforming for a compiler to do it either the right way or the x86 way. Then a committee doing the C++ spec decided to do it the C way. Then later yet, after this question was posted, a C++ committee decided to standardize on the wrong way. Now we are stuck with it. Many a programmer has written the following function or something like it. I have probably done it at least a dozen times.
inline int mod(int a, int b) {int ret = a%b; return ret>=0? ret: ret+b; }
There goes your efficiency.
These days I use essentially the following, with some type_traits stuff thrown in. (Thanks to Clearer for a comment that gave me an idea for an improvement using latter day C++. See below.)
(older version, superseded by the EDIT below)
template<class T>
inline T mod(T a, T b) {
    assert(b > 0);
    T ret = a%b;
    return (ret>=0)?(ret):(ret+b);
}
template<>
inline unsigned mod(unsigned a, unsigned b) {
assert(b > 0);
return a % b;
}
True fact: I lobbied the Pascal standards committee to do mod the right way until they relented. To my horror, they did integer division the wrong way. So they do not even match.
EDIT: Clearer gave me an idea. I am working on a new one.
#include <cassert>
#include <type_traits>

template<class T1, class T2>
inline T1 mod(T1 a, T2 b) {
    assert(b > 0);
    T1 ret = a % b;
    if constexpr (std::is_unsigned_v<T1>)
    {
        return ret;                              // unsigned remainder is already non-negative
    } else {
        return (ret >= 0) ? (ret) : (ret + b);   // shift a negative remainder into [0, b)
    }
}
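Example usage of the function above:

#include <cassert>
int main() {
    assert(mod(-7, 3) == 2);     // -7 % 3 alone would give -1
    assert(mod( 7, 3) == 1);
    assert(mod(7u, 3u) == 1u);   // unsigned path: no adjustment needed
    return 0;
}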
What are arguments for this specification?
One of the design goals of C++ is to map efficiently to hardware. If the underlying hardware implements division in a way that produces negative remainders, then that's what you'll get if you use % in C++. That's all there is to it really.
Is there a place where the people who create such standards discuss about it?
You will find interesting discussions on comp.lang.c++.moderated and, to a lesser extent, comp.lang.c++
Others have described the "why" well enough; unfortunately, the question that asks for a solution is marked as a duplicate of this one, and a comprehensive answer on that aspect seems to be missing. There seem to be two commonly used general solutions, plus one special case I would like to include:
// 724ms
inline int mod1(int a, int b)
{
    const int r = a % b;
    return r < 0 ? r + b : r;
}

// 759ms
inline int mod2(int a, int b)
{
    return (a % b + b) % b;
}

// 671ms (see NOTE1!)
inline int mod3(int a, int b)
{
    return (a + b) % b;
}

int main(int argc, char** argv)
{
    volatile int x;
    for (int i = 0; i < 10000000; ++i) {
        for (int j = -argc + 1; j < argc; ++j) {
            x = modX(j, argc);    // substitute mod1, mod2, or mod3 here
            if (x < 0) return -1; // Sanity check
        }
    }
}
NOTE1: This is not generally correct (i.e. if a < -b). The reason I included it is because almost every time I find myself taking the modulus of a negative number is when doing math with numbers that are already modded, for example (i1 - i2) % n where the 0 <= iX < n (e.g. indices of a circular buffer).
As always, YMMV with regards to timing.
The problem is to derive a formula for determining number of digits a given decimal number could have in a given base.
For example: the decimal number 100006 can be represented by 17, 11, 9, 8, 7, 6, 6 digits in bases 2, 3, 4, 5, 6, 7, 8 respectively.
Well, the formula I derived so far is like this: (log10(num) / log10(base)) + 1.
in C/C++ I used this formula to compute the above given results.
long long int size = ((double)log10(num) / (double)log10(base)) + 1.0;
But sadly the formula is not giving the correct answer in some cases, like these:
Number 8 in base 2 : 1,0,0,0
Number of digits: 4
Formula returned: 3
Number 64 in base 2 : 1,0,0,0,0,0,0
Number of digits: 7
Formula returned: 6
Number 64 in base 4 : 1,0,0,0
Number of digits: 4
Formula returned: 3
Number 125 in base 5 : 1,0,0,0
Number of digits: 4
Formula returned: 3
Number 128 in base 2 : 1,0,0,0,0,0,0,0
Number of digits: 8
Formula returned: 7
Number 216 in base 6 : 1,0,0,0
Number of digits: 4
Formula returned: 3
Number 243 in base 3 : 1,0,0,0,0,0
Number of digits: 6
Formula returned: 5
Number 343 in base 7 : 1,0,0,0
Number of digits: 4
Formula returned: 3
So the result is off by 1 digit. I just want somebody to help me correct the formula so that it works for every possible case.
Edit: As per the input specification I have to deal with cases like 10000000000, i.e. 10^10. I don't think log10() in either C or C++ can handle such cases, so any other procedure/formula for this problem will be highly appreciated.
Your compiler settings may enable fast (imprecise) floating-point operations; you need precise ones. The thing is that log10(8)/log10(2) is exactly 3 in math, but your computed result may be 2.99999, for example, which is bad. You can add a small correction term, but not 0.5; it should be about .00001 or something like that.
Almost-correct formula:
int size = static_cast<int>((log10((double)num) / log10((double)base)) + 1.00000001);
Really correct solution
You should check the result of your formula with exact integer arithmetic. The complexity is O(log log n), i.e. O(log result)!
#include <cmath>

// exponentiation by squaring: computes base^s in O(log s) steps
// (note: the result can overflow int for large inputs; use a wider type if needed)
int fast_power(int base, int s)
{
    int res = 1;
    while (s) {
        if (s%2) {
            res *= base;
            s--;
        } else {
            s /= 2;
            base *= base;
        }
    }
    return res;
}

int digits_size(int n, int base)
{
    // estimate the digit count from the logarithms, then correct it with an exact integer check
    int s = int(log10(1.0*n)/log10(1.0*base)) + 1;
    return fast_power(base, s) > n ? s : s+1;
}
This check is better than a brute-force test with repeated multiplications by the base.
Either of the following will work:
>>> from math import *
>>> def digits(n, b=10):
... return int(1 + floor(log(n, b))) if n else 1
...
>>> def digits(n, b=10):
... return int(ceil(log(n + 1, b))) if n else 1
...
The first version is explained at mathpath.org. In the second version the + 1 is necessary to yield the correct answer for any number n that is the smallest number with d digits in base b. That is, those numbers which are written 10...0 in base b. Observe that input 0 must be treated as a special case.
Decimal examples:
>>> digits(1)
1
>>> digits(9)
1
>>> digits(10)
2
>>> digits(99)
2
>>> digits(100)
3
Binary:
>>> digits(1, 2)
1
>>> digits(2, 2)
2
>>> digits(3, 2)
2
>>> digits(4, 2)
3
>>> digits(1027, 2)
11
Edit: The OP states that the log solution may not work for large inputs. I don't know about that, but if so, the following code should not break down, because it uses integer arithmetic only (this time in C):
unsigned int
digits(unsigned long long n, unsigned long long b)
{
unsigned int d = 0;
while (d++, n /= b);
return d;
}
This code will probably be less efficient. And yes, it was written for maximum obscurity points. It simply uses the observation that every number has at least one digit, and that every division by b which does not yield 0 implies the existence of an additional digit. A more readable version is the following:
unsigned int
digits(unsigned long long n, unsigned long long b)
{
unsigned int d = 1;
while (n /= b) {
d++;
}
return d;
}
Number of digits of a numeral in a given base
Since your formula is correct (I just tried it), I would think that it's a rounding error in your division, causing the number to be just slightly less than the integer value it should be. So when you truncate to an integer, you lose 1. Try adding an additional 0.5 to your final value (so that truncating is actually a round operation).
What you want is ceiling (= smallest integer not less than) of log_b(n+1), rather than what you're calculating right now, floor(1 + log_b(n)).
You might try:
int digits = (int) ceil( log((double)(n+1)) / log((double)base) );
As others have pointed out, you have rounding error, but the proposed solutions simply move the danger zone or make it smaller, they don't eliminate it. If your numbers are integers then you can verify -- using integer arithmetic -- that one power of the base is less than or equal to your number, and the next is above it (the first power is the number of digits). But if you use floating point arithmetic anywhere in the chain then you will be vulnerable to error (unless your base is a power of two, and maybe even then).
EDIT:
Here is a crude but effective solution in integer arithmetic. If your integer classes can hold numbers as big as base*num, this will give the correct answer.
long long size = 0, k = 1;   // k must be able to hold base * num
while (k <= num)
{
    k *= base;
    size += 1;
}
Using your formula,
log(8)/log(2) + 1 = 4
the problem is in the precision of the logarithm calculation. Using
ceil(log(n+1)/log(b))
ought to resolve that problem. This isn't quite the same as
ceil(log(n)/log(b))
because this gives the answer 3 for n=8 b=2, nor is it the same as
log(n+1)/log(b) + 1
because this gives the answer 4 for n=7 b=2 (when calculated to full precision).
I actually get some curious results implementing and compiling the first form with g++:
double n = double(atoi(argv[1]));
double b = double(atoi(argv[2]));
int i = int(std::log(n)/std::log(b) + 1.0);
fails (i.e. gives the answer 3), while
double v = std::log(n)/std::log(b) + 1.0;
int i = int(v);
succeeds (gives the answer 4). Looking at it some more I think a third form
ceil(log(n+0.5)/log(b))
would be more stable, because it avoids the "critical" case when n (or n+1 for the second form) is an integer power of b (for integer values of n).
It may be beneficial to wrap a rounding function (e.g. + 0.5) into your code somewhere: it's quite likely that the division is producing (e.g.) 2.99989787, to which 1.0 is added, giving 3.99989787 and when that's converted to an int, it gives 3.
Looks like the formula is right to me:
Number 8 in base 2 : 1,0,0,0
Number of digits: 4
Formula returned: 3
log10(8) = 0.903089
log10(2) = 0.301029
Division => 3
+1 => 4
So it's definitely just a rounding error.
Floating point rounding issues.
log10(216) / log10(6) = 2.9999999999999996
But you cannot add 0.5 as suggested, because it would not work for the following
log10(1295) / log10(6) = 3.9995691928566091 // 5, 5, 5, 5
log10(1296) / log10(6) = 4.0 // 1, 0, 0, 0, 0
Maybe using the log(value, base) function would avoid these rounding errors.
I think that the only way to get the rounding error eliminated without producing other errors is to use or implement integer logarithms.
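For example, a purely integral digit counter that also avoids overflow when forming powers of the base (a sketch; it assumes base >= 2 and treats 0 as having one digit):

#include <iostream>
unsigned digits(unsigned long long n, unsigned long long base) {
    unsigned d = 1;
    unsigned long long power = 1;
    while (power <= n / base) {   // same as power * base <= n, but cannot overflow
        power *= base;
        ++d;
    }
    return d;
}
int main() {
    std::cout << digits(8, 2) << " " << digits(10000000000ULL, 42) << "\n";   // prints 4 7
    return 0;
}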
Here is a solution in bash:
% digits() { echo $1 $2 opq | dc | sed 's/ .//g;s/.//' | wc -c; }
% digits 10000000000 42
7
static int numInBase(int num, int theBase)
{
    if (num < theBase) return 1;   // a single digit (this also covers num == 0)
    return 1 + numInBase(num/theBase, theBase);
}