When using Adobe ColdFusion to convert a hex string to a decimal, we aren't getting the result we want/expect.
<cfset i = #InputBaseN("A1000050", 16)# >
<cfdump var="#i#">
It outputs -1593835440.
We were expecting 2701131856
In Windows Calculator, converting A1000050 to decimal as a QWORD gives our expected result. However, as a DWORD it gives the same value ColdFusion gives us.
In ColdFusion what are we doing wrong? How can we get the expected value?
Binary of the expected value (according to Windows Calculator's programmer mode):
0000 0000 0000 0000 0000 0000 0000 0000
1010 0001 0000 0000 0000 0000 0101 0000
= 2701131856
Binary of the value we are actually getting:
1010 0001 0000 0000 0000 0000 0101 0000
= -1593835440
My guess is you are using CF10 or 11? This appears to be a bug in those versions that was fixed in CF2016; the fix was not back-ported because it would break backwards compatibility in 10/11.
https://tracker.adobe.com/#/view/CF-3712098
https://tracker.adobe.com/#/view/CF-4175842
Those bug logs do contain workarounds that may work for you.
I was able to verify the behavior.
-1593835440
CF10: https://trycf.com/gist/ab0e93b1d690401778a57b443ff42a3e/acf?theme=monokai
CF11: https://trycf.com/gist/45db48930b2cfbeec600d6d840521470/acf11?theme=monokai
Railo 4.2: https://trycf.com/gist/dee04bec7b7983bfd97dac69ea3bc930/railo?theme=monokai
Lucee 4.5: https://trycf.com/gist/31497d2b3a35ed69e9c95081ea5bd83d/lucee?theme=monokai
2701131856
CF2016: https://trycf.com/gist/73b81b7184f47275503ab57d5ee5eeaa/acf2016?theme=monokai
Lucee 5: https://trycf.com/gist/f73bd8fbe652f5c5675c658d5cd356f3/lucee5?theme=monokai
Just to expand on Shawn's answer that it is a bug... Traditionally, most (though not all) CF numeric functions were limited to 32-bit signed integers. CF2016 changed that by having inputBaseN() return a 64-bit integer, or Long. Most of the workarounds mentioned in the bug reports do the opposite of what you need (they replicate the old behavior under CF2016). To replicate the new behavior under CF10/11, try using Long.parseLong() instead:
// Option 1. Using javacast. Returns 2701131856
value = javacast("long", 0).parseLong("A1000050", 16);
// Option 2. Using createObject. Returns 2701131856
value = createObject("java", "java.lang.Long").parseLong("A1000050", 16);
For CF servers running Java 8, technically you could also invoke toUnsignedLong() on the resulting Integer, but it's brittle: it only works with CF10/11 on Java 8+.
// ONLY works under CF10/11 and Java 8+. Returns 2701131856
input = "A1000050";  // the hex string from the question
origValue = InputBaseN(input, 16);
newValue = origValue.toUnsignedLong(origValue);
Example on trycf.com
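To sanity-check the arithmetic outside ColdFusion, here is a minimal sketch (Python used purely as a calculator, nothing CF-specific) showing how the same hex string yields both numbers, depending on whether it is read as an unsigned value or wrapped into a signed 32-bit integer:
n = int("A1000050", 16)                # 2701131856 -- the unsigned value (CF2016+ behavior)
signed32 = n - 0x100000000 if n >= 0x80000000 else n
print(n)                               # 2701131856
print(signed32)                        # -1593835440 -- the signed 32-bit wrap-around (CF10/11 behavior)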
This is an old one. After some recent legacy work, I had to refactor some window.atob calls on the client side and ran into this on the server. The CF parsing also fails with extra spaces, so I used trim():
<cfset strBase64Value = ToString( ToBase64( trim(myvalue) ) ) />
I have a Python program that accesses one of the devices on our solar power system. I can read the registers, which are supposed to conform to the SunSpec conventions. I have been able to decode most of the values, but I'm stuck on decoding the TCP_Address and gateway, which are sourced from these registers:
TCP Address:
reg 22 value 49320 in HEX 0xc0a8
reg 23 value 64 in HEX 0x40
Gateway Address:
reg 24 value 49320 in HEX 0xc0a8
reg 25 value 1 in HEX 0x1
The documentation says that the format for these values is "uint32", which I interpret to mean an unsigned 32-bit integer. The result of decoding should be something like 192.168.0.?.
Can anyone help me understand how to convert the above into that format in Python? Thanks...RDK
I would say that
0xc0 0xa8 (0x00) 0x01
is 192.168.0.1, your gateway. It seems you've just missed that both registers are 16 bits wide, so you've neglected the high byte of each.
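For reference, here is a minimal sketch of that idea in Python (assuming big-endian registers and the gateway values 49320 and 1 from the question):
import socket
import struct

# Pack the two 16-bit register values back-to-back and render them as a dotted quad.
packed = struct.pack(">HH", 49320, 1)   # b'\xc0\xa8\x00\x01'
print(socket.inet_ntoa(packed))         # 192.168.0.1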
Here is my solution to this problem:
def Decode_TCPIP(reg1, reg2):
    # Split each 16-bit register into its high and low byte (one octet each).
    # print("Reg1 =", reg1, "Reg2 =", reg2)
    UpperMask = 0xff00
    LowerMask = 0x00ff
    First = (reg1 & UpperMask) >> 8
    Second = reg1 & LowerMask
    Third = (reg2 & UpperMask) >> 8
    Fourth = reg2 & LowerMask
    return First, Second, Third, Fourth
The returned values are then the four octets of the IP address....RDK
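For example, feeding it the gateway registers from the question (a quick usage sketch, joining the octets into a dotted-quad string):
octets = Decode_TCPIP(49320, 1)               # registers 24 and 25
print(".".join(str(o) for o in octets))       # 192.168.0.1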
SetItemIcon(), a Carbon API function used to set an icon for a menu item, has apparently stopped working on macOS 10.13 High Sierra. It now displays a bunch of distorted pixels instead of the actual icon. It worked fine from 10.0 through 10.12. The function is defined in Menus.h:
/*
* SetItemIcon()
*
* Mac OS X threading:
* Not thread safe
*
* Availability:
* Mac OS X: in version 10.0 and later in Carbon.framework
* CarbonLib: in CarbonLib 1.0 and later
* Non-Carbon CFM: in InterfaceLib 7.1 and later
*/
extern void
SetItemIcon(
MenuRef theMenu,
MenuItemIndex item,
short iconIndex) AVAILABLE_MAC_OS_X_VERSION_10_0_AND_LATER;
It is used in code like this:
SetItemIcon((MenuHandle)hMenu, uItem+1, hbmp-256);
The project is built with Xcode version 3.2.5.
Is there a way to fix or workaround this issue?
P.S. The icons are stored in an .r resource file, in an old-style hex text format:
resource 'ICON' (300) {
$"0000 0000 0000 0000 0000 0000 0000 0000"
$"0000 0000 0000 0000 0000 0000 0000 0000"
$"0000 0000 0000 0000 0000 0000 0000 0F00"
$"0000 FE00 000F FC00 00FF F800 03FF FF80"
$"00FF F800 000F FC00 0000 FE00 0000 0F"
};
A working alternative to SetItemIcon() is SetMenuItemIconHandle(), which still works fine on 10.13.
SetMenuItemIconHandle((MenuHandle)hMenu, uItem+1, kMenuIconResourceType, (Handle) CFSTR("myicon.icns"));
The icons will need to be converted from the hex text format to .icns files and added to the project and the app bundle.
This has stumped a few of us. It's VS2013, and the code itself builds correctly, as you can see from the image. We've run this test on two different machines with the same results.
I did originally copy/paste the code into and out of MS OneNote, so possibly the cause lies there. But as you can see from Notepad++, there don't appear to be any special characters.
Ideas?
To expand on this, the following version also fails:
//Note: Why this does not pass is baffling
[TestMethod]
public void FunnyTestThatFailsForSomeReason()
{
const string expectedErrorMessage = "Web Session Not Found.";
var a = "Web Session Not Found.";
string b = "Web Session Not Found.";
Assert.AreEqual(expectedErrorMessage, a);
//Assert.AreEqual(expectedErrorMessage, b);
Assert.AreEqual(expectedErrorMessage.ToString(), b.ToString());
}
You're using Assert.AreEqual(Object, Object) which (in this case) is looking for reference equality. It's not going to work the way you want it to.
Verifies that two specified objects are equal. The assertion fails if the objects are not equal.
Use Assert.AreEqual(String, String, Boolean).
Verifies that two specified strings are equal, ignoring case or not as specified. The assertion fails if they are not equal.
Or, more simply, your strings are subtly different. Copying and pasting appears to have yielded different characters:
(Here for formatting purposes; the existing answer also explains what's happened. This is just the hex dump of your question's code.)
00000000: 2020 2020 7661 7220 6120 3d20 2257 6562 c2a0 5365 7373 696f : var a = "Web..Sessio
00000018: 6ec2 a04e 6f74 c2a0 466f 756e 642e 223b 0a20 2020 2020 2020 :n..Not..Found.";.
00000030: 2020 2020 2073 7472 696e 6720 6220 3d20 2257 6562 2053 6573 : string b = "Web Ses
00000048: 7369 6f6e 204e 6f74 2046 6f75 6e64 2e22 3b0a :sion Not Found.";.
The strings aren't the same.
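If you want to verify it yourself, the c2 a0 pairs in that dump are the UTF-8 encoding of U+00A0, a non-breaking space, not an ordinary space. A quick sketch (Python here, but any language shows the same thing; the strings are reconstructed from the dump above):
a = "Web\u00a0Session\u00a0Not\u00a0Found."   # pasted line, with non-breaking spaces
b = "Web Session Not Found."                  # hand-typed line, with ordinary spaces
print(a == b)                                 # False
print(a.encode("utf-8"))                      # b'Web\xc2\xa0Session\xc2\xa0Not\xc2\xa0Found.'
print(b.encode("utf-8"))                      # b'Web Session Not Found.'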
I have a string with some Spanish characters, and when I use U2's OCONV() function with the 'MCT' conversion code, it changes the Spanish character to something else. Does anyone know why?
STRING: T r L=16 `CITáN, MOR 32000'
TEST.MCT: 5: STR2 = OCONV(STR,'MCT')
:: S
TEST.MCT: 6: CRT STR2
:: S
Cit?9: Mor 32000
Note that I created the following program, and I do not see the issue.
CT BP SO
SO
0001 STR = "CIT":CHAR(225):"N, MOR 3200"
0002 STR2 = OCONV(STR, "MCT")
0003 PRINT STR
0004 PRINT STR2
0005 PRINT SEQ(STR2[4,1])
When I compile and run it, I get the following:
CITáN, MOR 3200
Citán, Mor 3200
225
>
Note that I tested on UniVerse 11.2.2 on Windows. Can you try the sample code I provided from the HS.SALES account, and let me know what it does?
If it still has an issue, please let us know the full UniVerse version and the OS you are running it on.
Added info:
Note: I also tested on UniVerse 11.1.1 on AIX 6.1 and it worked for me. If you are still having issues, I suggest you contact your UniVerse maintenance provider.
It is hard to read your output since it combines lines.
My run through RAID shows the correct information.
RAID BP SO
SO: 1: STR = "CIT":CHAR(225):"N, MOR 3200"
:: S
SO: 2: STR2 = OCONV(STR, "MCT")
:: S
SO: 3: PRINT STR
:: S
CITáN, MOR 3200
SO: 4: PRINT STR2
:: S
Citán, Mor 3200
SO: 5: PRINT SEQ(STR2[4,1])
:: S
225
Yet I have LANG=en_US set in my UNIX environment variables.
So there may be an issue with your environment depending on what LANG is set to; I suggest you contact your U2 maintenance provider.
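To illustrate the kind of mismatch a wrong locale can cause (a sketch in Python, just to show the encoding issue, nothing U2-specific): CHAR(225) is the single byte 0xE1, which is 'á' under a Latin-1 locale but is not valid on its own under UTF-8, so a UTF-8 terminal prints garbage instead.
b = bytes([225])                  # the byte produced by CHAR(225)
print(b.decode("latin-1"))        # á -- what a Latin-1 terminal displays
try:
    b.decode("utf-8")
except UnicodeDecodeError as e:
    print("not valid UTF-8:", e)  # why a UTF-8 terminal shows a replacement character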
I am trying to use a cent sign in my ColdFusion program. It appears to be ASCII 155. The Chr() function only interprets values up to 127, although the documentation says otherwise. I found a clue that I may need to enable high ASCII characters in the ColdFusion Administrator, but I could not find a place to do that. This code works:
<cfset x = Chr(127)>
<cfoutput> this is what you get with #x# </cfoutput>
I get a nice box. But this returns only a blank:
<cfset x = Chr(155)>
<cfoutput> this is what you get with #x# </cfoutput>
How do I get Chr() working with higher numbers?
The "cent sign" is ¢, which is chr(162) (which works fine) or ¢ as a HTML entity.
If you want the › symbol then use chr(8250) or ›.
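As a quick cross-check of those code points outside ColdFusion (a sketch in Python; Unicode code points are the same in any Unicode-aware language, and the last line shows where the old "ASCII 155" figure likely comes from, the DOS code page 437):
print(ord("\u00a2"), "\u00a2")    # 162 ¢ -- CENT SIGN
print(ord("\u203a"), "\u203a")    # 8250 › -- SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
print("\u00a2".encode("cp437"))   # b'\x9b' -- byte 155 in code page 437 is the cent sign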
It looks like the extended "ASCII" character numbering we are all used to and ColdFusion's chr() numbering do not match up.
According to the LiveDocs (version 8):
ColdFusion MX: Changed Unicode support: ColdFusion supports the Java UCS-2 representation of Unicode characters, up to a value of 65535. (Earlier releases supported 1-255.)
If you look here, this blog shows some of CF's character codes and their HTML equivalents, so you can find some of them more easily:
cf and html entities
Out of interest, I made a simple loop and had a look through them, and there are plenty of chars... the hard part is finding the right one.
162 is a cent though, as stated in another answer, but this might help explain why:
<cfoutput>
<cfloop index="i" from="1" to="10000">
<pre>Chr #i# = #chr(i)#</pre>
</cfloop>
</cfoutput>
Java UCS-2 has lots of weird characters, as you can see here.
Some Sample output:
Chr 2922 = ୪
Chr 2923 = ୫
Chr 2924 = ୬
Chr 2925 = ୭
Chr 2926 = ୮
Chr 2927 = ୯
Chr 2928 = ୰
Chr 2929 = ୱ
Chr 3207 = ಇ
Chr 3208 = ಈ
Chr 3209 = ಉ
Chr 3210 = ಊ
Chr 3211 = ಋ
Chr 3212 = ಌ