PHP's unpack in ColdFusion

I need to convert this PHP function into a ColdFusion function for an API and I'm not having much luck. I'm not familiar enough with PHP or a ColdFusion unpack equivalent, and I've just hit a brick wall.
function i32hash($str) {
    $h = 0;
    foreach (unpack('C*', $str) as &$p) {
        $h = (37 * $h + $p) % 4294967296;
    }
    return ($h - 2147483648);
}
The end result should be i32hash('127.0.0.1:1935/vod/sample.mp4') = 565817233
This is the code I've been working with, but it's not working. I get an error back of "Cannot convert the value 4.294967296E9 to an integer because it cannot fit inside an integer." This happens at the modulus.
function i32hash(str) {
    var h = 0;
    // php unpack equivalent
    str = toBinary(toBase64(str));
    for (p in str) {
        h = (37 * h + p) % 4294967296;
    }
    return h - 2147483648;
}
Thanks for the help.
Updated answer, supplied by @Leigh in the comments below:
function i32hash(str) {
    var h = 0;
    var strArray = charsetDecode(arguments.str, "us-ascii");
    for (var p in strArray) {
        h = precisionEvaluate((37 * h + p));
        h = h.remainder( javacast("bigdecimal", 4294967296) );
    }
    return precisionEvaluate(h - 2147483648);
}

I am not a PHP guy, but my understanding is that unpack('C*', ...) should translate to decoding the string using ASCII encoding, i.e. charsetDecode(theString, "us-ascii").
I get an error back of "Cannot convert the value 4.294967296E9 to an integer because it cannot fit inside an integer."
Unfortunately, CF's modulus operator requires a 32-bit integer on the right side. The value 4294967296 exceeds the maximum allowed for integers, so you will need to use a BigDecimal instead. The precisionEvaluate() function returns a BigDecimal. Use it on the first half of the expression:
firstPart = precisionEvaluate((37 * h + p));
Then obtain the modulus using the BigDecimal.remainder() method instead:
h = firstPart.remainder( javacast("bigdecimal", 4294967296) );
Finally, return the result:
precisionEvaluate(h - 2147483648)
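As a cross-check outside ColdFusion, the same rolling hash can be written in a language with native unsigned 32-bit arithmetic, where the % 4294967296 happens for free through wrap-around. A minimal C++ sketch, assuming the input is plain ASCII bytes; per the question's expected result it should print 565817233 for the sample URL:
#include <cstdint>
#include <cstdio>
#include <string>

// Rolling hash: h = (37*h + byte) mod 2^32, then shift into the signed range.
long long i32hash(const std::string& str) {
    uint32_t h = 0;                   // uint32_t wraps, so no explicit % 4294967296 is needed
    for (unsigned char p : str) {
        h = 37u * h + p;
    }
    return static_cast<long long>(h) - 2147483648LL;
}

int main() {
    std::printf("%lld\n", i32hash("127.0.0.1:1935/vod/sample.mp4"));   // expected: 565817233
}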

Related

Why is there a loop in this division as multiplication code?

I got the JS code below from an archive of Hacker's Delight (view the page source).
The code takes in a value (such as 7) and spits out a magic number to multiply with. Then you bit-shift to get the result. I don't remember assembly or much of the math, so I'm sure I'm wrong, but I can't find the reason why I'm wrong.
From my understanding you could get a magic number by writing ceil(1/divide * 1<<32) (or <<64 for 64-bit values, but you'd need bigger ints). If you multiply an integer with imul you get the low half of the product in one register and the high half in another, and the high register magically holds the correct result of a division by this magic number from my formula.
I wrote some C++ code to show what I mean. However, I only tested with the values below, and it seems correct. The JS code has a loop and more, and I was wondering: why? Am I missing something? What values can I use to get an incorrect result that the JS code would get correctly? I'm not very good at math, so I didn't understand any of the comments.
#include <cstdio>
#include <cassert>

int main(int argc, char *argv[])
{
    auto test_divisor = 7;
    auto test_value = 43;
    auto a = test_value * test_divisor;
    auto b = a - 1;                               // One less test
    auto magic = (1ULL << 32) / test_divisor;
    if (((1ULL << 32) % test_divisor) != 0) {
        magic++;                                  // Round up
    }
    auto answer1 = (a * magic) >> 32;
    auto answer2 = (b * magic) >> 32;
    assert(answer1 == test_value);
    assert(answer2 == test_value - 1);
    printf("%llu %llu\n", answer1, answer2);
}
JS code from Hacker's Delight:
var two31 = 0x80000000
var two32 = 0x100000000

function magic_signed(d) { with(Math) {
    if (d >= two31) d = d - two32     // Treat large positive as short for negative.
    var ad = abs(d)
    var t = two31 + (d >>> 31)
    var anc = t - 1 - t%ad            // Absolute value of nc.
    var p = 31                        // Init p.
    var q1 = floor(two31/anc)         // Init q1 = 2**p/|nc|.
    var r1 = two31 - q1*anc           // Init r1 = rem(2**p, |nc|).
    var q2 = floor(two31/ad)          // Init q2 = 2**p/|d|.
    var r2 = two31 - q2*ad            // Init r2 = rem(2**p, |d|).
    do {
        p = p + 1;
        q1 = 2*q1;                    // Update q1 = 2**p/|nc|.
        r1 = 2*r1;                    // Update r1 = rem(2**p, |nc|).
        if (r1 >= anc) {              // (Must be an unsigned
            q1 = q1 + 1;              //  comparison here).
            r1 = r1 - anc;}
        q2 = 2*q2;                    // Update q2 = 2**p/|d|.
        r2 = 2*r2;                    // Update r2 = rem(2**p, |d|).
        if (r2 >= ad) {               // (Must be an unsigned
            q2 = q2 + 1;              //  comparison here).
            r2 = r2 - ad;}
        var delta = ad - r2;
    } while (q1 < delta || (q1 == delta && r1 == 0))
    var mag = q2 + 1
    if (d < 0) mag = two32 - mag      // Magic number and
    shift = p - 32                    // shift amount to return.
    return mag
}}
In the C code:
auto magic = (1ULL<<32)/test_divisor;
We get an integer value in magic because both (1ULL<<32) and test_divisor are integers. The algorithm requires incrementing magic under certain conditions, which is what the next conditional statement does. The multiplications then also yield integers:
auto answer1 = (a*magic) >> 32;
auto answer2 = (b*magic) >> 32;
and the C code is done.
In the JS code:
All variables are var; there are no data types, no integer division and no integer multiplication. Bitwise operations are not easy to use and not well suited to this algorithm. Numeric data lives in Number (and BigInt), which is nothing like a "C int" or "C unsigned long long". Hence the algorithm uses a loop to iteratively add and compare until the "division and multiplication" has converged to within the nearest integer.
Both versions try to implement the same algorithm, and both "should" give the same answer, but the JS version is buggy and non-standard. While there are many issues with the JS version, I will highlight only three:
(1) In the loop, while trying to find the best power of 2, we have these two statements:
p = p + 1;
q1 = 2*q1; // Update q1 = 2**p/|nc|.
This is just incrementing a counter and doubling a number, i.e. a left shift in C++. The C++ version will not require this rigmarole.
(2) The while condition has two equality comparisons on the right-hand side of ||:
while (q1 < delta || (q1 == delta && r1 == 0))
But both of these will be false in floating-point calculations [e.g. check "Math.sqrt(2)*Math.sqrt(0.5) == 1": even though this must be true, it will almost always be false], so the while condition is effectively just the left-hand side of ||.
(3) The JS version returns only the variable mag, but the caller is also supposed to use the variable shift, which is handed back through a global. Inconsistent and bad.
Comparing the two, the C version is the more standard one, but the point is to not use auto and instead use fixed-width types such as int64_t, with a known number of bits.
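As an illustration of that last point, here is roughly what the question's check looks like with explicit fixed-width types instead of auto (same sample values as the question; nothing else changed):
#include <cstdint>
#include <cstdio>

int main()
{
    // Same check as in the question, but with fixed-width types spelled out.
    uint32_t test_divisor = 7;
    uint32_t test_value   = 43;
    uint64_t a = uint64_t(test_value) * test_divisor;
    uint64_t b = a - 1;                                   // one less, to test rounding
    uint64_t magic = (1ULL << 32) / test_divisor;
    if ((1ULL << 32) % test_divisor != 0) {
        magic++;                                          // round up
    }
    uint64_t answer1 = (a * magic) >> 32;
    uint64_t answer2 = (b * magic) >> 32;
    std::printf("%llu %llu\n",
                (unsigned long long)answer1, (unsigned long long)answer2);
}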
First, I think ceil(1/divide * 1<<32) can, depending on the divide, be off by one in some cases. So you don't need a loop, but you do sometimes need a corrective factor.
Secondly, the JS code seems to allow for shifts other than 32: shift = p - 32 // shift amount to return. But then it never returns that, so I'm not sure what is going on there.
Why not implement the JS code in C++ as well and then run a loop over all int32_t values and see if they give the same result? That shouldn't take too long.
And when you find a d where they differ, you can then test a / d for all int32_t a using both magic numbers and compare a / d, a * m_ceil and a * m_js.
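A sketch of that brute-force idea, assuming unsigned division; the divisor 7 is just the question's sample value, and the scan covers every 32-bit a for the ceil-style magic number (porting the JS magic and comparing it would slot into the same loop):
#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t d = 7;                                // divisor under test (sample value)
    const uint64_t magic = ((1ULL << 32) + d - 1) / d;   // ceil(2^32 / d)

    for (uint64_t a = 0; a <= 0xFFFFFFFFULL; ++a) {
        uint32_t expected = static_cast<uint32_t>(a / d);
        uint32_t got      = static_cast<uint32_t>((a * magic) >> 32);
        if (got != expected) {
            std::printf("mismatch at a=%llu: got %u, expected %u\n",
                        (unsigned long long)a, (unsigned)got, (unsigned)expected);
            return 1;
        }
    }
    std::printf("ceil-style magic matched a/d for every 32-bit a with d=%u\n", (unsigned)d);
    return 0;
}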

How to use fold statement index in function call

The fold manual gives an example:
input price = close;
input length = 9;
plot SMA = (fold n = 0 to length with s do s + getValue(price, n, length - 1)) / length;
This effectively calls a function iteratively like in a for loop body.
When I use this statement to call my own function as follows, then it breaks because the loop index variable is not recognized as a variable that can be passed to my function:
script getItem {
    input index = 0;
    plot output = index * index;
}
script test {
    def total = fold index = 0 to 10 with accumulator = 0 do
        accumulator + getItem(index);   # Error: No such variable: index
}
It is a known bug / limitation. It has been acknowledged, but without a timeline for a fix. No workaround is available.
Have you tried adding a small remainder to your defined variable within the fold and then passing that variable? You can strip the integer value and then use the remainder as your counter value. I've been playing around with something similar, but it isn't working (yet). Here's an example:
script TailOverlap {
    input i = 0;
    def ii = (Round(i, 1) - i) * 1000;
    ... more stuff
    plot result = result;
};

def _S = (
    fold i = displace to period
    with c = 0
    do if
        TailOverlap(i = _S)   # send cur val of _S to script
    then _S[1] + 1.0001       # increment variable and counter
    else _S[1] + 0.0001       # increment the counter only
);
I'm going to continue playing around with this. If I get it to work, I'll post the final solution. If you're able to get this working (or have discovered another solution), please do post it here so I know.
Thanks!

How to convert a decimal (with floating point) to binary in Swift 3? (self-written code, without third-party libraries or Foundation)

I am looking for a simple way to convert a decimal number with a floating point to binary with a floating point in Swift 3. For example, this code converts a decimal integer to binary without any problems.
func convertToBinary(decimal: Int) -> String {
    var n = 0, c = 0, k: [String] = [], fs: String = ""
    n = decimal
    while n > 0 {
        c = n % 2
        n = n / 2
        k.append("\(c)")
    }
    for i in k.reversed() {
        fs += "\(i)"
    }
    return fs
}
Unfortunately, if I change decimal to Float it shows the error message
"Cannot assign value of type 'Float' to type 'Int'"
at
c = n % 2
If I change the variable c to Float it shows another error message:
"'%' is unavailable: Use truncatingRemainder instead"
Okay, so I replaced '%' with:
c = n.truncatingRemainder(dividingBy: 2)
And everything worked. Unfortunately, the program then divides the decimal number indefinitely (example):
0.0
0.0
1.0
0.5
0.25
1.125
1.5625
0.78125
0.390625
0.195312
0.0976562
0.0488281
0.0244141
0.012207
0.00610352
0.00305176
0.00152588
0.000762939
0.00038147
0.000190735
and etc.
After decimal to binary conversion:
1.4013e-452.8026e-454.2039e-458.40779e-451.68156e-443.50325e-447.00649e-441.4013e-432.8026e-435.60519e-431.12104e-422.24208e-424.48416e-428.96831e-421.79366e-413.58732e-417.17465e-411.43493e-402.86986e-405.73972e-401.14794e-392.29589e-394.59177e-399.18355e-391.83671e-383.67342e-387.34684e-381.46937e-372.93874e-375
and etc.
Maybe there's some kind of workaround?
Regarding this: func convertToBinary(dataName: dataType)
"Unfortunately, if I'm changing decimal to float it shows error message: "Cannot assign value of type 'Float' to type 'Int'""
Look at your code:
func convertToBinary(decimal: Int) -> String {
The function parameter's data type must be changed to match the provided input's data type:
func convertToBinary(decimal: Float) -> String {
    // your float (fraction) processing code
}
If you want to call the function with either an Int or a Float, note that Swift has no wildcard type; in practice you would overload the function (or constrain a generic), but the idea looks like this pseudo-code:
func convertToBinary(decimal: *) -> String {
    // your Integer or Float processing code
    // consider using an IF condition...
    //   if (decimal.type == Int) { ..do Integer stuff here.. }
    //   else if (decimal.type == Float) { ..do Float stuff here.. }
}
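For the fractional part itself (the reason the question's loop never terminates), the usual textbook method is to repeatedly multiply the fraction by 2 and emit the integer bit, capping the number of bits because many decimal fractions, such as 0.1, have no finite binary expansion. A sketch of that idea for non-negative values, written in C++ here purely as an illustration; both loops translate directly to Swift:
#include <cmath>
#include <cstdio>
#include <string>

// Integer part: repeated division by 2. Fractional part: repeated
// multiplication by 2, emitting the integer bit each time, capped at maxBits.
std::string toBinaryString(double value, int maxBits = 16) {
    double intPart = 0.0;
    double frac = std::modf(value, &intPart);

    unsigned long long n = static_cast<unsigned long long>(intPart);
    std::string whole = (n == 0) ? "0" : "";
    while (n > 0) {
        whole = char('0' + (n % 2)) + whole;
        n /= 2;
    }

    std::string fracBits;
    for (int i = 0; i < maxBits && frac > 0.0; ++i) {
        frac *= 2.0;
        if (frac >= 1.0) { fracBits += '1'; frac -= 1.0; }
        else             { fracBits += '0'; }
    }
    return fracBits.empty() ? whole : whole + "." + fracBits;
}

int main() {
    std::printf("%s\n", toBinaryString(5.625).c_str());   // prints 101.101
}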

C/C++ - "Negative" string converted to zero

I have the following .txt file:
{{1,2,3,0}, {1,1,1,2}, {0,−1,3,9}}
This is a 3x4 matrix. I'm using strtok to extract the numbers and saving them into a float matrix. The problem is that when p reaches the -1, it is converted to zero when saved into the matrix. How can I fix it?
p = strtok(&matrix[0u], " {},");
for (i = 0; i < m + 1; i++) {
    for (j = 0; j < n + 1; j++) {
        aux[i][j] = atoi(p);
        if (p)
            p = strtok(NULL, " {},");
    }
}
Is there a better way to extract the numbers, one at a time? How?
Your minus sign doesn't work. Compare:
this - is the ASCII minus sign
this − is your character, which might be called "minus sign" by Unicode, but it is not normally recognised as such by C++ library functions
Don't copy code from Word documents and similar places. If in doubt, convert to ASCII with iconv or a similar utility.
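To make the failure visible instead of silent, one option is to parse each token with strtof and check its end pointer, so a token containing an unrecognised character is reported as an error rather than quietly becoming 0. A minimal sketch; the hard-coded text and the 3x4 bookkeeping are just stand-ins for the question's setup:
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    // The minus below is the ASCII '-'; with the Unicode U+2212 sign the
    // conversion stops early and is reported instead of silently storing 0.
    char text[] = "{{1,2,3,0}, {1,1,1,2}, {0,-1,3,9}}";
    float aux[3][4];
    int i = 0, j = 0;

    for (char *p = std::strtok(text, " {},"); p != nullptr;
         p = std::strtok(nullptr, " {},")) {
        char *end = nullptr;
        float value = std::strtof(p, &end);
        if (end == p || *end != '\0') {
            std::fprintf(stderr, "could not parse token '%s'\n", p);
            return 1;
        }
        aux[i][j] = value;
        if (++j == 4) { j = 0; ++i; }
    }
    std::printf("aux[2][1] = %g\n", aux[2][1]);   // expected: -1
    return 0;
}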

Finding a Specific Digit of a Number

I'm trying to find the nth digit of an integer of an arbitrary length. I was going to convert the integer to a string and use the character at index n...
char Digit = itoa(Number).at(n);
...But then I realized the itoa function isn't standard. Is there any other way to do this?
(number/intPower(10, n))%10
just define the function intPower.
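A minimal sketch of such an intPower (plain repeated multiplication, which avoids pow()'s floating-point rounding):
// Integer power by repeated multiplication; fine for the small exponents used here.
long long intPower(long long base, unsigned exp) {
    long long result = 1;
    while (exp-- > 0) {
        result *= base;
    }
    return result;
}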
You can also use the % operator and / for integer division in a loop. (Given an integer n >= 0, n % 10 gives the units digit, and n / 10 chops off the units digit.) In pseudo-code, with the digit position counted 1-based from the right:
number = 123456789
n = 5
tmp1 = (int)(number / 10^(n-1)); // tmp1 = 12345 (^ meaning power here, not XOR)
tmp2 = ((int)(tmp1/10))*10;      // tmp2 = 12340
digit = tmp1 - tmp2;             // digit = 5
You can use ostringstream to convert to a text string, but a function along the lines of:
char nthDigit(unsigned v, int n)
{
    while ( n > 0 ) {
        v /= 10;
        -- n;
    }
    return "0123456789"[v % 10];
}
should do the trick with a lot fewer complications. (For starters, it handles the case where n is greater than the number of digits correctly.)
--
James Kanze
itoa is in stdlib.h (on compilers that provide it; it is not part of the standard).
You can also use an alternative itoa:
Alternative to itoa() for converting integer to string C++?
or
ANSI C, integer to string without variadic functions
It is also possible to avoid the conversion to string by means of the function log10, in <cmath>, which returns the base-10 logarithm of a number (roughly its length if it were a string):
unsigned int getIntLength(int x)
{
    if ( x == 0 )
        return 1;
    else
        return std::log10( std::abs( x ) ) + 1;
}

char getCharFromInt(int n, int x)
{
    char toret = 0;

    x = std::abs( x );
    n = getIntLength( x ) - n - 1;

    for(; n >= 0; --n) {
        toret = x % 10;
        x /= 10;
    }

    return '0' + toret;
}
I have tested it, and it works perfectly well (negative numbers are a special case). Also, it must be taken into account that, in order to find the nth element, you have to "walk" backwards in the loop, subtracting from the total int length.
Hope this helps.
A direct answer is:
char Digit = 48 + ((int)(Number/pow(10,N)) % 10 );
You should include the <cmath> header (<math.h> in C).
const char digit = '0' + number.at(n);
Assuming number.at(n) returns a decimal digit in the range 0...9, that is.
A more general approach:
template<int base>
int nth_digit(int value, int digit)
{
    return (value / (int)pow((double)base, digit)) % base;
}
This just lets you do the same thing for different bases (e.g. 16, 32, 64, etc.).
An alternative to itoa is the std::to_string function. So, you could simply do:
char digit = std::to_string(number)[index];