Why is there a loop in this division as multiplication code? - c++

I got the JS code below from an archive of Hacker's Delight (view the source).
The code takes in a divisor (such as 7) and spits out a magic number to multiply with. Then you bitshift to get the result. I don't remember much assembly or math, so I'm sure I'm wrong somewhere, but I can't find the reason why.
From my understanding, you could get a magic number by writing ceil(1/divide * 1<<32) (or <<64 for 64-bit values, but you'd need bigger ints). If you multiply an integer with imul, you get the low half of the product in one register and the high half in another. The high register magically holds the correct result of the division when you use the magic number from my formula.
I wrote some C++ code to show what I mean. However, I've only tested it with the values below, and it seems correct. The JS code has a loop and more, and I was wondering: why? Am I missing something? What values would give my version an incorrect result that the JS code gets right? I'm not very good at math, so I didn't understand any of the comments.
#include <cstdio>
#include <cassert>

int main(int argc, char *argv[])
{
    auto test_divisor = 7;
    auto test_value = 43;
    auto a = test_value * test_divisor;
    auto b = a - 1; // one less, to test truncation

    auto magic = (1ULL << 32) / test_divisor;
    if (((1ULL << 32) % test_divisor) != 0) {
        magic++; // round up
    }

    auto answer1 = (a * magic) >> 32;
    auto answer2 = (b * magic) >> 32;
    assert(answer1 == test_value);
    assert(answer2 == test_value - 1);
    printf("%llu %llu\n", answer1, answer2); // %llu: the results are unsigned long long
}
JS code from Hacker's Delight:
var two31 = 0x80000000
var two32 = 0x100000000

function magic_signed(d) { with(Math) {
    if (d >= two31) d = d - two32 // Treat large positive as short for negative.
    var ad = abs(d)
    var t = two31 + (d >>> 31)
    var anc = t - 1 - t%ad        // Absolute value of nc.
    var p = 31                    // Init p.
    var q1 = floor(two31/anc)     // Init q1 = 2**p/|nc|.
    var r1 = two31 - q1*anc       // Init r1 = rem(2**p, |nc|).
    var q2 = floor(two31/ad)      // Init q2 = 2**p/|d|.
    var r2 = two31 - q2*ad        // Init r2 = rem(2**p, |d|).
    do {
        p = p + 1;
        q1 = 2*q1;                // Update q1 = 2**p/|nc|.
        r1 = 2*r1;                // Update r1 = rem(2**p, |nc|).
        if (r1 >= anc) {          // (Must be an unsigned
            q1 = q1 + 1;          //  comparison here).
            r1 = r1 - anc;
        }
        q2 = 2*q2;                // Update q2 = 2**p/|d|.
        r2 = 2*r2;                // Update r2 = rem(2**p, |d|).
        if (r2 >= ad) {           // (Must be an unsigned
            q2 = q2 + 1;          //  comparison here).
            r2 = r2 - ad;
        }
        var delta = ad - r2;
    } while (q1 < delta || (q1 == delta && r1 == 0))
    var mag = q2 + 1
    if (d < 0) mag = two32 - mag  // Magic number and
    shift = p - 32                // shift amount to return.
    return mag
}}

In the C code:
auto magic = (1ULL<<32)/test_divisor;
gives an integer value in magic, because both (1ULL<<32) and test_divisor are integers. The algorithm requires incrementing magic under a certain condition, which is the next conditional statement. The multiplications are integer operations too:
auto answer1 = (a*magic) >> 32;
auto answer2 = (b*magic) >> 32;
and the C code is done.
In the JS code, all variables are var: no data types, so no integer division and no integer multiplication. Bitwise operations are not easy there and not well suited to this algorithm. Numeric data is Number (or BigInt), which is nothing like a C int or unsigned long long. Hence the algorithm uses a loop to iteratively update and compare until "division and multiplication" have been matched to within the nearest integer.
Both versions try to implement the same algorithm, and both "should" give the same answer, but the JS version is buggy and non-standard. While there are many issues with the JS version, I will highlight only three:
(1) In the loop, while trying to find the best power of 2, we have these two statements:
p = p + 1;
q1 = 2*q1; // Update q1 = 2**p/|nc|.
This is just incrementing a counter and multiplying a number by 2, which is a left shift in C++. The C++ version does not require this rigmarole.
(2) The while condition has two equality comparisons on the right-hand side of the ||:
while (q1 < delta || (q1 == delta && r1 == 0))
But both of these will generally be false in floating-point arithmetic [e.g. check Math.sqrt(2)*Math.sqrt(0.5) == 1: even though this should mathematically be true, it will almost always be false], so the while condition is effectively just the left-hand side of the ||, because the right-hand side will almost always be false.
(3) The JS version returns only the variable mag, but the caller is also supposed to receive (and use) the variable shift, which is handed back through a global. Inconsistent and bad!
Comparing the two, we see that the C version is the more standard one, but the point is to not use auto and instead use int64_t with a known number of bits.
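For illustration, here is the question's snippet rewritten with fixed-width types (a sketch, same values as the original):

#include <cstdint>
#include <cstdio>
#include <cassert>

int main()
{
    const uint64_t test_divisor = 7;
    const uint64_t test_value = 43;
    uint64_t a = test_value * test_divisor;
    uint64_t b = a - 1; // one less, to check truncation

    uint64_t magic = (1ULL << 32) / test_divisor;
    if ((1ULL << 32) % test_divisor != 0)
        magic++; // round up

    uint64_t answer1 = (a * magic) >> 32;
    uint64_t answer2 = (b * magic) >> 32;
    assert(answer1 == test_value);
    assert(answer2 == test_value - 1);
    printf("%llu %llu\n", (unsigned long long)answer1, (unsigned long long)answer2);
}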

First, I think ceil(1/divide * 1<<32) can, depending on the divisor, be off by one. So you don't need a loop, but you sometimes need a corrective factor.
Secondly, the JS code seems to allow for shifts other than 32: shift = p - 32 // shift amount to return. But then it never returns that, so I'm not sure what is going on there.
Why not implement the JS code in C++ as well and then run a loop over all int32_t values and see if the two give the same result? That shouldn't take too long.
And when you find a d where they differ, you can then test a / d for all int32_t a using both magic numbers and compare a / d, a * m_ceil and a * m_js.
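Along those lines, here is a rough C++ port of the JS routine plus a brute-force spot check over a limited range (a sketch; the Magic struct is my addition so the shift is returned instead of stored in a global, and the division sequence assumes arithmetic right shifts on signed values):

#include <cstdint>
#include <cstdio>

struct Magic { uint32_t m; int s; };

// Port of the signed magic-number routine above, in unsigned 32-bit arithmetic.
Magic magic_signed(int32_t d) {
    const uint32_t two31 = 0x80000000u;
    uint32_t ad = d < 0 ? 0u - (uint32_t)d : (uint32_t)d;
    uint32_t t = two31 + ((uint32_t)d >> 31);
    uint32_t anc = t - 1 - t % ad;      // absolute value of nc
    int p = 31;
    uint32_t q1 = two31 / anc, r1 = two31 - q1 * anc;
    uint32_t q2 = two31 / ad,  r2 = two31 - q2 * ad;
    uint32_t delta;
    do {
        ++p;
        q1 *= 2; r1 *= 2;               // q1 = 2**p/|nc|, r1 = remainder
        if (r1 >= anc) { ++q1; r1 -= anc; }
        q2 *= 2; r2 *= 2;               // q2 = 2**p/|d|,  r2 = remainder
        if (r2 >= ad) { ++q2; r2 -= ad; }
        delta = ad - r2;
    } while (q1 < delta || (q1 == delta && r1 == 0));
    Magic mg;
    mg.m = q2 + 1;
    if (d < 0) mg.m = 0u - mg.m;        // two32 - mag, modulo 2**32
    mg.s = p - 32;
    return mg;
}

// Divide via the magic number, the way a compiler would emit it (d > 0 only here).
int32_t magic_divide(int32_t a, Magic mg) {
    int32_t q = (int32_t)(((int64_t)(int32_t)mg.m * a) >> 32); // high half of product
    if ((int32_t)mg.m < 0) q += a;      // fixup when the magic is "negative"
    q >>= mg.s;
    q += (int32_t)((uint32_t)q >> 31);  // push negative quotients toward zero
    return q;
}

int main() {
    for (int32_t d = 2; d <= 100; ++d) {
        Magic mg = magic_signed(d);
        for (int32_t a = -100000; a <= 100000; ++a)
            if (magic_divide(a, mg) != a / d) {
                printf("mismatch: d=%d a=%d\n", d, a);
                return 1;
            }
    }
    puts("ok");
}

If the spot check passes on this limited range, widening the loops toward all int32_t (and comparing against the ceil-based magic where the two differ) is exactly the experiment suggested above.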


Incorrect variable range behavior in RooFit

The RooFit package allows me to import some TTree branches, but it constrains values included in those branches between the min and max value set by a RooRealVar. For instance:
RooRealVar t1("t1", "Some variable", 0.0, 1.0);
RooDataSet data("data", "My Dataset", RooArgSet(t1), Import(*myttree));
so long as the TTree myttree contains a branch called t1. This is all fine and good, until you start getting close to floating point values in the range. My particular problem occurs because I have a variable like t1 which maps to some variable with an exponential distribution. I'm trying to fit this distribution, but the fits fail for values of t1 ~ 0.0. My solution was to just change the range a bit to cut off potential events where the stored value of t1 in the tree is actually zero or close to it (all the following code is run in the ROOT interpreter, but I have confirmed it works in compiled code as well):
root[0] RooRealVar t1("t1", "Some variable", 0.001, 1.0);
However, note the following annoying behavior:
root[1] t1 = 0.000998;
root[2] t1.getVal();
(double) 0.0010000000
// this is correct, as 0.000998 < 0.001 so RooFit set it as the lower limit
root[3] t1 = 0.000999;
root[4] t1.getVal();
(double) 0.00099900000
// this is incorrect.
Yes, the extra zero is printed in the second case, which I also don't understand, but I'm mostly concerned with the failure to recognize that 0.000999 < 0.001. When I compare these values later in an if-statement, I find that C++ can tell the difference. Everything appears to be double precision here, and I've been tracing through the code to see where the precision error seems to crop up. Correct me if I'm wrong, but a float should still hold these numbers up to comparison precision, right? What's going on here? If this is some floating point error problem, what's the best way to resolve it? I have several events with values like t1 = 0.000999874, and changing the bounds to something like 0.0001 doesn't really help either, there are still events which live on this edge.
Edit: I want to emphasize that while this is probably a floating point problem, it really shouldn't be. For instance, the following code works:
root[0] RooRealVar t1("t1", "Some variable", 0.001, 1.0);
root[1] t1 = 0.000999;
root[2] t1.getVal() < 0.001;
(bool) true
Well everyone, I found the answer (and I hate it). It actually has very little to do with floating point arithmetic, and I'm honestly not sure why the code was written this way. From the source code that determines if a value is "in range":
bool RooAbsRealLValue::inRange(double value, const char* rangeName, double* clippedValPtr) const
{
  // double range = getMax() - getMin() ; // ok for +/-INIFINITY
  double clippedValue(value);
  bool isInRange(true) ;

  const RooAbsBinning& binning = getBinning(rangeName) ;
  double min = binning.lowBound() ;
  double max = binning.highBound() ;

  // test this value against our upper fit limit
  if(!RooNumber::isInfinite(max) && value > (max+1e-6)) {
    clippedValue = max;
    isInRange = false ;
  }
  // test this value against our lower fit limit
  if(!RooNumber::isInfinite(min) && value < min-1e-6) {
    clippedValue = min ;
    isInRange = false ;
  }

  if (clippedValPtr) *clippedValPtr=clippedValue ;

  return isInRange ;
}
As we can see here, RooFit doesn't actually check if min < val < max or even min <= val <= max but rather min - 1e-6 < value < max + 1e-6! I couldn't find a single place where this was documented explicitly, but I'm even more concerned that there is a separate implementation of inRange which takes a variable name (or comma separated list of variable names) and returns a result which is incompatible with the prior implementation:
bool RooAbsRealLValue::inRange(const char* name) const
{
  const double val = getVal() ;
  const double epsilon = 1e-8 * fabs(val) ;
  if (!name || name[0] == '\0') {
    const auto minMax = getRange(nullptr);
    return minMax.first - epsilon <= val && val <= minMax.second + epsilon;
  }

  const auto& ranges = ROOT::Split(name, ",");
  return std::any_of(ranges.begin(), ranges.end(), [val,epsilon,this](const std::string& range){
    const auto minMax = this->getRange(range.c_str());
    return minMax.first - epsilon <= val && val <= minMax.second + epsilon;
  });
}
Here, we can see the creation of an epsilon = 1e-8 * fabs(val) rather than the arbitrary 1e-6 given in the first definition. This comparison uses a <= rather than a < also. It should be noted that the method used to filter trees when imported in this way uses the first implementation (source here).
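To see the two tolerances side by side, here is a minimal standalone sketch (my own illustration, just the two comparisons without RooFit) for the boundary value from the question:

#include <cmath>
#include <cstdio>

int main() {
    const double min = 0.001, max = 1.0, val = 0.000999;

    // First overload: rejected only if val < min - 1e-6 (fixed absolute slop).
    // On IEEE-754 doubles, 0.001 - 1e-6 rounds to the same value as the
    // literal 0.000999 here, so the strict < is false and val counts as in range.
    bool in1 = !(val < min - 1e-6) && !(val > max + 1e-6);

    // Second overload: relative slop of 1e-8 * |val|, with <= comparisons.
    double eps = 1e-8 * std::fabs(val);
    bool in2 = (min - eps <= val) && (val <= max + eps);

    printf("first overload: %d, second overload: %d\n", in1, in2); // 1, 0
}

The first overload accepts 0.000999 against a lower bound of 0.001 while the second rejects it, which is exactly the kind of inconsistency described above.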
Somewhere along the way (I'm not entirely sure where actually), some of these arbitrary comparisons lead to the following paradoxical behavior:
root[0] RooRealVar t1("t1", "Some variable", 0.001, 1.0);
root[1] t1 = 0.001 - 1e-6;
(RooAbsArg &) RooRealVar::t1 = 0.000999 L(0.001 - 1)
root[2] t1 = 0.001 - 1e-8 * 0.001;
(RooAbsArg &) RooRealVar::t1 = 0.001 L(0.001 - 1)
root[3] t1 = 0.00099999999;
(RooAbsArg &) RooRealVar::t1 = 0.001 L(0.001 - 1)
I would classify this as a bug. Under no circumstances should 0.00099900000 be classified as within the range of (0.001 - 1) where 0.00099999999 is not!

Variable changes on its own in C++

I have a loop going through an array trying to find which index holds a string. It should then solve for what that value should be.
I can't figure out why, but as soon as the if statements start, i becomes 1, which gives my code an error.
I'm not very fluent in C++.
for(int i = 0; i < 4; i++) {
    if(auto value = std::get_if<std::string>(&varArr[i])) {
        solvedIndex = i;
        auto value0 = std::get_if<float>(&varArr[0]);
        auto value1 = std::get_if<float>(&varArr[1]);
        auto value2 = std::get_if<float>(&varArr[2]);
        auto value3 = std::get_if<float>(&varArr[3]);
        //i changes to 1 when this if starts??
        if(i = 0) {
            solvedVar = (*value3 / *value1) * *value2;
        } else if (i = 1) {
            solvedVar = *value3 / (*value0 / *value2);
        } else if (i = 2) {
            solvedVar = *value0 / (*value3 / *value1);
        } else {
            solvedVar = *value1 * (*value0 / *value2);
        }
        break;
    }
}
Note that these variables are declared above. Also, varArr is filled with values:
std::variant<std::string, float> varArr[4];
int solvedIndex;
float solvedVar;
As has been noted, in your if statements, you are using the assignment operator (=) but want the equality comparison operator (==). For your variable i the first if statement sets i equal to 0 and if(0) is the same as if(false). So your program goes to the first else-if which sets i equal to 1 and if(1) evaluates to true. Your code then finishes the block within else if (i = 1) {...} and then ends.
That's because operator= is the assignment operator in C++ (and most languages, actually). That changes the value of the variable to the value on the other side. So, for instance:
x = 0
will change the value of x to 0. Doesn't matter if it's in an if statement. It will always change the value to 0 (or whatever the right hand side value is).
What you are looking for is operator==, which is the comparison (aka relational) operator in C++. That asks the question "Are these two things equal?" So, for instance:
x == 0
asks whether x is equal to 0.
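For completeness, here is a sketch of the corrected loop (assuming the same varArr, solvedIndex, and solvedVar declarations from the question; the solve() wrapper is just for illustration):

#include <string>
#include <variant>

std::variant<std::string, float> varArr[4];
int solvedIndex = -1;
float solvedVar = 0.0f;

void solve() {
    for (int i = 0; i < 4; i++) {
        if (std::get_if<std::string>(&varArr[i])) { // this slot holds the unknown
            solvedIndex = i;
            auto value0 = std::get_if<float>(&varArr[0]);
            auto value1 = std::get_if<float>(&varArr[1]);
            auto value2 = std::get_if<float>(&varArr[2]);
            auto value3 = std::get_if<float>(&varArr[3]);
            if (i == 0) {        // == compares; = assigns
                solvedVar = (*value3 / *value1) * *value2;
            } else if (i == 1) {
                solvedVar = *value3 / (*value0 / *value2);
            } else if (i == 2) {
                solvedVar = *value0 / (*value3 / *value1);
            } else {
                solvedVar = *value1 * (*value0 / *value2);
            }
            break;
        }
    }
}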

How to store output of very large Fibonacci number?

I am making a program for the nth Fibonacci number. I made the following program using recursion and memoization.
The main problem is that the value of n can go up to 10000, which means that the 10000th Fibonacci number is more than 2000 digits long.
With a little bit of googling, I found that I could use arrays and store every digit of the solution in an element of the array, but I am still not able to figure out how to implement this approach in my program.
#include<iostream>
using namespace std;

long long int memo[101000];
long long int n;

long long int fib(long long int n)
{
    if(n==1 || n==2)
        return 1;
    if(memo[n]!=0)
        return memo[n];
    return memo[n] = fib(n-1) + fib(n-2);
}

int main()
{
    cin>>n;
    long long int ans = fib(n);
    cout<<ans;
}
How do I implement that approach or if there is another method that can be used to achieve such large values?
One thing that I think should be pointed out is that there are other ways to implement fib that are much easier for something like C++ to compute.
Consider the following pseudocode:
function fib (n) {
    let a = 0, b = 1, _;
    while (n > 0) {
        _ = a;
        a = b;
        b = b + _;
        n = n - 1;
    }
    return a;
}
This doesn't require memoisation and you don't have to worry about blowing up your stack with too many recursive calls. Recursion is a really powerful looping construct, but it's best left to languages like Lisp, Scheme, Kotlin, or Lua (and a few others) that support it so elegantly.
That's not to say tail-call elimination is impossible in C++, but unless you're explicitly optimising/compiling for it, I'm doubtful that whatever compiler you're using applies it by default.
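A direct C++ translation of the pseudocode might look like this (a sketch; note that fixed-width integers overflow long before n = 10000, which is what the rest of this answer is about):

#include <cstdint>

// Iterative Fibonacci. uint64_t already overflows past fib(93),
// which is why arbitrary precision is needed for fib(10000).
uint64_t fib(unsigned n) {
    uint64_t a = 0, b = 1;
    while (n > 0) {
        uint64_t t = a;
        a = b;
        b = b + t;
        n = n - 1;
    }
    return a;
}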
As for computing the exceptionally large numbers, you'll have to either get creative and do the adding The Hard Way, or rely upon an arbitrary-precision arithmetic library like GMP. I'm sure there are other libraries for this too.
Adding The Hard Way™
Remember how you used to add big numbers when you were a little tater tot, fresh off the aluminum foil?
5-year-old math:

  1259601512351095520986368
+   50695640938240596831104
---------------------------
                          ?

Well you gotta add each column, right to left. And when a column overflows into the double digits, remember to carry that 1 over to the next column.

                   ... <-001
  1259601512351095520986368
+   50695640938240596831104
---------------------------
                   ... <-472
The 10,000th Fibonacci number is thousands of digits long, so there's no way that's going to fit in any integer C++ provides out of the box. So without relying upon a library, you could use a string or an array of single-digit numbers. To output the final number, you'll have to convert it to a string though.
(wolfram alpha: fibonacci 10000)
Doing it this way, you'll perform a couple million single-digit additions; it might take a while, but it should be a breeze for any modern computer to handle. Time to get to work!
Here's an example of a Bignum module in JavaScript:
const Bignum =
  { fromInt: (n = 0) =>
      n < 10
        ? [ n ]
        : [ n % 10, ...Bignum.fromInt (n / 10 >> 0) ]
  , fromString: (s = "0") =>
      Array.from (s, Number) .reverse ()
  , toString: (b) =>
      b .reverse () .join ("")
  , add: (b1, b2) =>
    {
      const len = Math.max (b1.length, b2.length)
      let answer = []
      let carry = 0
      for (let i = 0; i < len; i = i + 1) {
        const x = b1[i] || 0
        const y = b2[i] || 0
        const sum = x + y + carry
        answer.push (sum % 10)
        carry = sum / 10 >> 0
      }
      if (carry > 0) answer.push (carry)
      return answer
    }
  }
We can verify that the Wolfram Alpha answer above is correct
const { fromInt, toString, add } =
  Bignum

const bigfib = (n = 0) =>
{
  let a = fromInt (0)
  let b = fromInt (1)
  let _
  while (n > 0) {
    _ = a
    a = b
    b = add (b, _)
    n = n - 1
  }
  return toString (a)
}

bigfib (10000)
// "336447 ... 366875"
Try not to use recursion for a simple problem like Fibonacci. And if you'll only use it once, don't use an array to store all the results: an array of 2 elements containing the 2 previous Fibonacci numbers will be enough, and in each step you then only have to sum up those 2 numbers.
How can you save 2 consecutive Fibonacci numbers? Well, you know that of 2 consecutive indices one is even and one is odd, so you can use that property to know where to get/place a Fibonacci number: for fib(i), if i is even (i%2 is 0), place it in the first element of the array (index 0); otherwise (i%2 is then 1), place it in the second element (index 1). Why can you just place it there? When you're calculating fib(i), the value sitting in the slot where fib(i) should go is fib(i-2) (because (i-2)%2 is the same as i%2), and you won't need fib(i-2) any more: fib(i+1) only needs fib(i-1) (that's still in the array) and fib(i) (that just got inserted into the array).
So you could replace the recursion calls with a for loop like this:
int fibonacci(int n){
    if( n <= 0){
        return 0;
    }
    int previous[] = {0, 1}; // start with fib(0) and fib(1)
    for(int i = 2; i <= n; ++i){
        // modulo can be implemented with bit operations (much faster): i % 2 == (i & 1)
        previous[i&1] += previous[(i-1)&1]; // shorter way to say: previous[i&1] = previous[i&1] + previous[(i-1)&1]
    }
    // result is in previous[n&1]
    return previous[n&1];
}
Recursion is actually discouraged while programming because of the time (function calls) and resources (stack) it consumes. So each time you use recursion, try to replace it with a loop and, if needed, a stack with simple pop/push operations to save the "current position" (in C++ one can use a vector). In the case of Fibonacci the stack isn't even needed, but if you are iterating over a tree data structure, for example, you'll need a stack (depending on the implementation). As I was writing up my solution, I saw @naomik provided one with a while loop. That one is fine too, but I prefer the array with the modulo operation (a bit shorter).
Now, concerning the size problem of long long int: it can be solved by using external libraries that implement operations for big numbers (like the GMP library or Boost.Multiprecision). But you could also create your own version of a BigInteger-like class from Java and implement the basic operations, like the addition described below (try to implement the others; they are quite similar).
The main idea is simple: a BigInt represents a big decimal number by cutting its little-endian representation into pieces (I'll explain why little-endian at the end). The length of those pieces depends on the base you choose. If you want to work with decimal representations, it will only work if your base is a power of 10: if you choose 10 as the base, each piece will represent one digit; if you choose 100 (= 10^2), each piece will represent two consecutive digits, starting from the end (see little-endian); if you choose 1000 (= 10^3), each piece will represent three consecutive digits; and so on. Let's say the base is 100: 12765 will then be [65, 27, 1], 1789 will be [89, 17], 505 will be [5, 5] (= [05, 5]), ... With base 1000: 12765 would be [765, 12], 1789 would be [789, 1], 505 would be [505]. It's not the most efficient, but it is the most intuitive (I think...).
The addition is then a bit like the addition on paper we learned at school:

1. Begin with the lowest piece of the BigInt.
2. Add it to the corresponding piece of the other one.
3. The lowest piece of that sum (= the sum modulo the base) becomes the corresponding piece of the final result.
4. The "bigger" pieces of that sum are added ("carried") to the sum of the following pieces.
5. Go to step 2 with the next piece.
6. If no piece is left, add the carry and the remaining bigger pieces of the other BigInt (if it has pieces left).
For example:
9542 + 1097855 = [42, 95] + [55, 78, 09, 1]
lowest piece = 42 and 55 --> 42 + 55 = 97 = [97]
---> lowest piece of result = 97 (no carry, carry = 0)
2nd piece = 95 and 78 --> (95+78) + 0 = 173 = [73, 1]
---> 2nd piece of final result = 73
---> remaining: [1] = 1 = carry (will be added to sum of following pieces)
no piece left in first `BigInt`!
--> add carry ( [1] ) and remaining pieces from second `BigInt`( [9, 1] ) to final result
--> first additional piece: 9 + 1 = 10 = [10] (no carry)
--> second additional piece: 1 + 0 = 1 = [1] (no carry)
==> 9542 + 1 097 855 = [42, 95] + [55, 78, 09, 1] = [97, 73, 10, 1] = 1 107 397
Here is a demo where I used the class above to calculate the fibonacci of 10000 (result is too big to copy here)
Good luck!
PS: Why little-endian? For ease of implementation: it allows using push_back when adding digits, and iteration in the operations starts from the first piece of the array instead of the last.
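To make the scheme concrete, here is a minimal C++ sketch of the idea (my own illustration, not the original demo class; it uses base 10 for simplicity, where the larger bases described above would be more efficient):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Little-endian, base-10 bignum: digits[0] is the least significant digit.
using BigInt = std::vector<int>;

BigInt add(const BigInt& a, const BigInt& b) {
    BigInt sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; ++i) {
        int col = carry;
        if (i < a.size()) col += a[i];
        if (i < b.size()) col += b[i];
        sum.push_back(col % 10); // lowest digit of this column
        carry = col / 10;        // carried into the next column
    }
    return sum;
}

std::string to_string(const BigInt& n) {
    std::string s;
    for (auto it = n.rbegin(); it != n.rend(); ++it)
        s += static_cast<char>('0' + *it);
    return s.empty() ? std::string("0") : s;
}

int main() {
    BigInt a{0}, b{1}; // fib(0) and fib(1)
    for (int i = 2; i <= 10000; ++i) {
        BigInt next = add(a, b);
        a = std::move(b);
        b = std::move(next);
    }
    std::cout << to_string(b) << '\n'; // fib(10000), roughly 2090 digits
}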

Fast inner product of ternary vectors

Consider two vectors, A and B, of size n, 7 <= n <= 23. Both A and B consist of -1s, 0s, and 1s only.
I need a fast algorithm which computes the inner product of A and B.
So far I've thought of storing the signs and values in separate uint32_ts using the following encoding:
sign 0, value 0 → 0
sign 0, value 1 → 1
sign 1, value 1 → -1.
The C++ implementation I've thought of looks like the following:
#include <cstdint>

struct ternary_vector {
    uint32_t sign, value;
};

int inner_product(const ternary_vector & a, const ternary_vector & b) {
    uint32_t psign = a.sign ^ b.sign;
    uint32_t pvalue = a.value & b.value;
    psign &= pvalue;
    pvalue ^= psign;
    return __builtin_popcount(pvalue) - __builtin_popcount(psign);
}
This works reasonably well, but I'm not sure whether it is possible to do it better. Any comment on the matter is highly appreciated.
I like having the 2 uint32_t, but I think your actual calculation is a bit wasteful
Just a few minor points:
I'm not sure about the reference (getting a and b by const &) - this adds a level of indirection compared to putting them on the stack. When the code is this small (a couple of clocks maybe) this is significant. Try passing by value and see what you get
__builtin_popcount can be, unfortunately, very inefficient. I've used it myself, but found that even a very basic implementation I wrote was far faster than this. However - this is dependent on the platform.
Basically, if the platform has a hardware popcount implementation, __builtin_popcount uses it. If not - it uses a very inefficient replacement.
The one serious problem here is the reuse of the psign and pvalue variables for the positive and negative vectors. You are doing neither your compiler nor yourself any favors by obfuscating your code in this way.
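Putting the points above together, one equivalent formulation with by-value parameters and separate masks might be (a sketch; std::popcount needs C++20, otherwise keep __builtin_popcount):

#include <bit>
#include <cstdint>

struct ternary_vector {
    uint32_t sign, value;
};

// Same arithmetic as the original, with distinct names for the
// positive-product and negative-product position masks.
int inner_product(ternary_vector a, ternary_vector b) {
    uint32_t nonzero  = a.value & b.value;           // both elements are +-1 here
    uint32_t negative = nonzero & (a.sign ^ b.sign); // signs differ -> product is -1
    uint32_t positive = nonzero & ~negative;         // signs agree  -> product is +1
    return std::popcount(positive) - std::popcount(negative);
}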
Would it be possible for you to encode your ternary state in a std::bitset<2> and define the product in terms of and? For example, if your ternary types are:
1 = P = (1, 1)
0 = Z = (0, 0)
-1 = M = (1, 0) or (0, 1)
I believe you could define their product as:
1 * 1 = 1 => P * P = P => (1, 1) & (1, 1) = (1, 1) = P
1 * 0 = 0 => P * Z = Z => (1, 1) & (0, 0) = (0, 0) = Z
1 * -1 = -1 => P * M = M => (1, 1) & (1, 0) = (1, 0) = M
Then the inner product could start by taking the and of the bits of the elements and... I am working on how to add them together.
Edit:
My foolish suggestion did not consider that (-1)(-1) = 1, which cannot be handled by the representation I proposed. Thanks to @user92382 for bringing this up.
Depending on your architecture, you may want to optimize away the temporary bit vectors -- e.g. if your code is going to be compiled to FPGA, or laid out to an ASIC, then a sequence of logical operations will be better in terms of speed/energy/area than storing and reading/writing to two big buffers.
In this case, you can do:
int inner_product(const ternary_vector & a, const ternary_vector & b) {
    return __builtin_popcount( a.value & b.value & ~(a.sign ^ b.sign))
         - __builtin_popcount( a.value & b.value &  (a.sign ^ b.sign));
}
This will lay out very well -- the (a.value & b.value & ... ) can enable/disable an XOR gate, whose output splits into two signed accumulators, with the first pathway NOTed before accumulation.
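As a sanity check, this version can be compared against a naive decode-and-multiply reference built on the question's encoding (a quick sketch):

#include <cassert>
#include <cstdint>

struct ternary_vector {
    uint32_t sign, value;
};

// Decode element i to -1, 0, or 1 under the question's encoding.
int element(const ternary_vector& v, int i) {
    int val = (v.value >> i) & 1;
    return ((v.sign >> i) & 1) ? -val : val;
}

int inner_product_naive(const ternary_vector& a, const ternary_vector& b, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum += element(a, i) * element(b, i);
    return sum;
}

int inner_product(const ternary_vector& a, const ternary_vector& b) {
    return __builtin_popcount(a.value & b.value & ~(a.sign ^ b.sign))
         - __builtin_popcount(a.value & b.value &  (a.sign ^ b.sign));
}

int main() {
    // A = (1, -1, 0, 1), B = (-1, -1, 1, 1): A.B = -1 + 1 + 0 + 1 = 1
    ternary_vector A{0b0010u, 0b1011u};
    ternary_vector B{0b0011u, 0b1111u};
    assert(inner_product_naive(A, B, 4) == 1);
    assert(inner_product(A, B) == 1);
}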

calculating square root for implementing a fixed point function

I am trying to find the square root of a fixed-point number, and I used the following calculation to find an approximation of the square root using an integer algorithm. The algorithm is described in Wikipedia:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
uint32_t SquareRoot(uint32_t a_nInput)
{
    uint32_t op = a_nInput;
    uint32_t res = 0;
    uint32_t one = 1uL << 30; // The second-to-top bit is set: use 1u << 14 for uint16_t type; use 1uL << 30 for uint32_t type

    // "one" starts at the highest power of four <= than the argument.
    while (one > op)
    {
        one >>= 2;
    }

    while (one != 0)
    {
        if (op >= res + one)
        {
            op = op - (res + one);
            res = res + 2 * one;
        }
        res >>= 1;
        one >>= 2;
    }
    return res;
}
but I am unable to follow what's happening in the code. What does the comment // "one" starts at the highest power of four <= than the argument exactly mean? Can someone please hint at what's happening in this code to calculate the square root of the argument a_nInput?
Thanks much
Note how one is initialized.
uint32_t one = 1uL << 30;
That's 2^30, or 1073741824, which is also 4^15.
This line:
one >>= 2;
Is equivalent to
one = one / 4;
So the pseudocode for what's happening is:
one = 4^15
if one is more than a_nInput
    one = 4^14
if one is still more than a_nInput
    one = 4^13
(and so on...)
Eventually, one will not be more than a_nInput.
// "one" starts at the highest power of four less than or equal to a_nInput