SetItemIcon(), a Carbon API function used to set an icon for a menu item, appears to have stopped working on macOS 10.13 High Sierra: it now displays a bunch of distorted pixels instead of the actual icon. It worked fine from 10.0 through 10.12. The function is declared in Menus.h:
/*
* SetItemIcon()
*
* Mac OS X threading:
* Not thread safe
*
* Availability:
* Mac OS X: in version 10.0 and later in Carbon.framework
* CarbonLib: in CarbonLib 1.0 and later
* Non-Carbon CFM: in InterfaceLib 7.1 and later
*/
extern void
SetItemIcon(
  MenuRef         theMenu,
  MenuItemIndex   item,
  short           iconIndex)                                  AVAILABLE_MAC_OS_X_VERSION_10_0_AND_LATER;
It is used in code like this:
SetItemIcon((MenuHandle)hMenu, uItem+1, hbmp-256);
The code is built with Xcode 3.2.5.
Is there a way to fix or workaround this issue?
P.S. The icons are stored in an .r (Rez) resource file in an old hex text format:
resource 'ICON' (300) {
$"0000 0000 0000 0000 0000 0000 0000 0000"
$"0000 0000 0000 0000 0000 0000 0000 0000"
$"0000 0000 0000 0000 0000 0000 0000 0F00"
$"0000 FE00 000F FC00 00FF F800 03FF FF80"
$"00FF F800 000F FC00 0000 FE00 0000 0F"
};
A working alternative to SetItemIcon() is SetMenuItemIconHandle(), which still works fine on 10.13.
SetMenuItemIconHandle((MenuHandle)hMenu, uItem+1, kMenuIconResourceType, (Handle) CFSTR("myicon.icns"));
The icons will need to be converted from the hex text format to .icns files and added to the project and the app bundle.
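As a possible first step for that conversion (my sketch, in Python, not from the original answer): the raw 'ICON' data is a 32x32, 1-bit bitmap (128 bytes), and the PBM P4 format uses exactly the same bit packing, so the resource data can be dumped to a .pbm image and then converted to PNG/.icns with the usual tools (for example an .iconset folder plus iconutil). The file name icon300.pbm is just illustrative.

def icon_hex_to_pbm(hex_text, out_path):
    # hex_text is the contents of the $"..." lines with the quotes stripped;
    # a full 'ICON' resource is 32 rows x 4 bytes = 128 bytes.
    data = bytes.fromhex("".join(hex_text.split()))
    assert len(data) == 128, "expected a complete 32x32 1-bit bitmap"
    # PBM P4 uses the same packing as 'ICON': 1 bit per pixel, MSB first, 1 = black
    with open(out_path, "wb") as f:
        f.write(b"P4\n32 32\n" + data)

# e.g. icon_hex_to_pbm(open("icon300.hex").read(), "icon300.pbm")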
When using Adobe ColdFusion to convert a hex string to a decimal, we don't seem to be getting the result we want/expect.
<cfset i = #InputBaseN("A1000050", 16)# >
<cfdump var="#i#">
It outputs -1593835440
We were expecting 2701131856
In Windows Calculator, when we convert A1000050 to decimal as a QWORD it gives us our expected result. However, if we use DWORD it gives us the same value ColdFusion gives us.
In ColdFusion what are we doing wrong? How can we get the expected value?
Binary of the expected value (according to Windows Calculator's programmer mode)
0000 0000 0000 0000 0000 0000 0000 0000
1010 0001 0000 0000 0000 0000 0101 0000
= 2701131856
Binary of the value we are actually getting
1010 0001 0000 0000 0000 0000 0101 0000
= -1593835440
My guess is you are using CF10 or 11? This appears to be a bug in those versions that was fixed in CF2016, but backporting the fix would break backwards compatibility in 10/11.
https://tracker.adobe.com/#/view/CF-3712098
https://tracker.adobe.com/#/view/CF-4175842
Those bug logs do contain workarounds that may work for you.
I was able to verify the behavior.
-1593835440
CF10: https://trycf.com/gist/ab0e93b1d690401778a57b443ff42a3e/acf?theme=monokai
CF11: https://trycf.com/gist/45db48930b2cfbeec600d6d840521470/acf11?theme=monokai
Railo 4.2: https://trycf.com/gist/dee04bec7b7983bfd97dac69ea3bc930/railo?theme=monokai
Lucee 4.5: https://trycf.com/gist/31497d2b3a35ed69e9c95081ea5bd83d/lucee?theme=monokai
2701131856
CF2016: https://trycf.com/gist/73b81b7184f47275503ab57d5ee5eeaa/acf2016?theme=monokai
Lucee 5: https://trycf.com/gist/f73bd8fbe652f5c5675c658d5cd356f3/lucee5?theme=monokai
Just to expand on Shawn's answer that it is a bug... Traditionally, most (though not all) CF numeric functions were limited to 32 bit signed integers. CF 2016 changed that by having inputBaseN() return a 64 bit integer, or Long. Most of the workarounds mentioned in the bug reports are trying to do the opposite of what you need (replicate the old behavior under CF2016). To replicate the new behavior under CF10/11, try using Long.parseLong() instead:
// Option 1. Using javacast. Returns 2701131856
value = javacast("long", 0).parseLong("A1000050", 16);
// Option 2. Using createObject. Returns 2701131856
value = createObject("java", "java.lang.Long").parseLong("A1000050", 16);
For CF servers running Java 8, technically you could also invoke toUnsignedLong() on the resulting Integer, but... it's brittle. It only works with CF10/11 and Java 8+.
// ONLY works under CF10/11 and Java8+. Returns 2701131856
origValue = InputBaseN(input, 16);
newValue = origValue.toUnsignedLong(origValue);
Example on trycf.com
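As a side note (mine, not from the answers above): the two numbers differ by exactly 2^32, which is the signature of a 32-bit signed overflow. A quick illustration in Python rather than CFML, just to show the arithmetic:

value = int("A1000050", 16)
print(value)                    # 2701131856  (the expected result)
print(value - 2**32)            # -1593835440 (what CF10/11 returns)
print((value - 2**32) % 2**32)  # 2701131856  (undoing the 32-bit wrap)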
This is an old one. After some legacy work recently, I had to refactor some window.atob calls on the client side and ran into this on the server. The CF parsing issue also occurs with extra spaces, so I used trim():
<cfset strBase64Value = ToString( ToBase64( trim(myvalue) ) ) />
I am trying to replace a hex value in one of my files.
However, I am not sure how to write it back out in its normal form, or whether the command below is the right way to do it. I am changing hex 25 to hex 15.
The file example_2018-02-02-14-51-47_US.txt is in UTF-8.
cat example_2018-02-02-14-51-47_US.txt | sed 's/\x25\x15/ /g' | od -x > example_2018-02-02-14-51-47_US_Convert.txt
Here is what the end of my example file looks like when I do xxd. I am trying to replace the hex 25 at the end with hex 15:
05ff020: a289 9585 a2a2 1ed3 f4c2 d7f5 f8d3 f1f3 ................
05ff030: e7c4 1e95 a493 931e d985 98a4 85a2 a396 ................
05ff040: 991e 95a4 9393 1e95 a493 931e 95a4 9393 ................
05ff050: 1ef2 f0f2 f2f2 f2f4 f0f4 f3f1 1ed2 d3e8 ................
05ff060: e6f8 f7f8 c2e6 c1d2 1ee2 8599 a589 8389 ................
05ff070: 9587 1ec2 9696 9240 9686 40c2 a4a2 8995 .......#..#.....
05ff080: 85a2 a21e c689 9985 1ec2 a4a2 8995 85a2 ................
05ff090: a240 d6a6 9585 99a2 1e95 a493 931e 95a4 .#..............
05ff0a0: 9393 1e95 a493 931e f0f0 f0f5 1ec9 d31e ................
05ff0b0: f0f0 f0f0 f0f0 f0f0 f0f0 f0f0 f3f5 f825 ...............%
EDIT: the question now reflects changing a binary file, so I've changed the answer accordingly:
We can replace bytes in a binary file using \x notation directly. I made a test file:
$ echo -ne "\xf0\xf0\xf3\xf5\xf8\x25" > binfile
$ cat binfile | xxd
00000000: f0f0 f3f5 f825 .....%
Ok, so, we just pipe this through sed and it works:
$ cat binfile | sed -e 's/\x25/\x15/g' > newbinfile
$ cat newbinfile | xxd
00000000: f0f0 f3f5 f815 ......
This is the approach I ended up taking, and it worked:
tr '\045' '\025' < example_2018-02-02-14-51-47_US.txt > example_2018-02-02-14-51-47_US_replaced.txt
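If the available sed/tr builds don't support the escape syntax, the same single-byte substitution can also be done with a few lines of Python (a sketch of mine, not from the thread); the file is handled as raw bytes, so its encoding doesn't matter:

# swap_byte.py - replace every 0x25 byte with 0x15, treating the file as raw bytes
import sys

in_path, out_path = sys.argv[1], sys.argv[2]

with open(in_path, "rb") as f:
    data = f.read()

with open(out_path, "wb") as f:
    f.write(data.replace(b"\x25", b"\x15"))

Run it as: python3 swap_byte.py example_2018-02-02-14-51-47_US.txt example_2018-02-02-14-51-47_US_replaced.txt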
Suppose I've written the following:
enum class Color { Red, Green, Blue, };
template <Color c> Color foo() { return c; }
template Color foo<Color::Green>();
and compiled it. When I look at an objdump of my compiled code, I get:
[einpoklum#myhost /tmp]$ objdump -t f.o | grep "\.text\." | sed 's/^.*\.text\.//;' | c++filt
Color foo<(Color)1>()
Color foo<(Color)1>() 000000000000000b Color foo<(Color)1>()
And if I use abi::__cxa_demangle() from <cxxabi.h> (GCC; maybe it's different with your compiler), the result is similar: (Color)0 or (Color)1 are the template parameters, not Red or Green, nor Color::Red or Color::Green.
Obviously, I can't have names mangled the way I like them. But I would really like to be able to obtain (or write?) a variant of the demangling call which, instead of "Color foo<(Color)1>()", returns "Color foo<Color::Green>()" (or "Color foo<Green>()"). Is this doable?
It might be possible for object files with debug info: the .debug_info section contains information about enum class Color, but it would take some tool to read the ELF debug info, parse the data semantically, and apply/pass that information to c++filt. I don't know whether such tools exist (maybe in GDB it is all glued together).
It is pretty much impossible in general with object files compiled with optimization, or with stripped debug info: the information about enum class Color is just NOT there...
From an optimized build:
objdump -s aaa.o
aaa.o: file format pe-x86-64
Contents of section .text$_Z3fooIL5Color1EES0_v:
0000 554889e5 b8010000 005dc390 90909090 UH.......]......
Contents of section .xdata$_Z3fooIL5Color1EES0_v:
0000 01040205 04030150 .......P
Contents of section .pdata$_Z3fooIL5Color1EES0_v:
0000 00000000 0b000000 00000000 ............
Contents of section .rdata$zzz:
0000 4743433a 20287838 365f3634 2d706f73 GCC: (x86_64-pos
0010 69782d73 65682d72 6576302c 20427569 ix-seh-rev0, Bui
0020 6c742062 79204d69 6e47572d 57363420 lt by MinGW-W64
0030 70726f6a 65637429 20352e33 2e300000 project) 5.3.0..
A debug build has these partial contents of section .debug_info:
0070 00000000 00000000 00000002 436f6c6f ............Colo
0080 720004a3 00000001 01a30000 00035265 r.............Re
0090 64000003 47726565 6e000103 426c7565 d...Green...Blue
00a0 00020004 0405696e 74000566 6f6f3c28 ......int..foo<(
00b0 436f6c6f 7229313e 0001065f 5a33666f Color)1>..._Z3fo
00c0 6f494c35 436f6c6f 72314545 53305f76 oIL5Color1EES0_v
00d0 007b0000 00000000 00000000 000b0000 .{..............
00e0 00000000 00019c06 63007b00 00000100 ........c.{.....
00f0 00
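If the enum definition is available to you anyway (from the source, or dug out of .debug_info), one workaround is to post-process the demangler's output yourself. A rough sketch in Python (mine, not part of the answer; the Color mapping is hard-coded here as an assumption):

import re
import subprocess

# Hand-maintained mapping from enum type name to enumerator names.
# This information is NOT in the mangled name; you have to supply it
# yourself (from the source, or by extracting it from .debug_info).
ENUMS = {
    "Color": {0: "Red", 1: "Green", 2: "Blue"},
}

def prettify(demangled):
    # Rewrite "(Color)1" -> "Color::Green" wherever the mapping is known.
    def repl(match):
        enum, value = match.group(1), int(match.group(2))
        name = ENUMS.get(enum, {}).get(value)
        return enum + "::" + name if name else match.group(0)
    return re.sub(r"\((\w+)\)(\d+)", repl, demangled)

mangled = "_Z3fooIL5Color1EES0_v"
demangled = subprocess.run(["c++filt"], input=mangled,
                           capture_output=True, text=True).stdout.strip()
print(prettify(demangled))   # Color foo<Color::Green>()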
My hope for this question (see the bottom) is to lay out as much as I know about the deflate process and to receive corrections on areas where I am (perhaps very) misinformed. Hopefully, at the end of it, this question can be a handy resource.
Zlib header
The first two bytes form the header for the zlib compression used, with the following layout (credit):
---CMF--- ---FLG---
0111.1000 1101.0101
CINF -CM- +-||
          | |+- FCHECK
          | +-- FDICT
          +---- FLEVEL
From RFC 1950, right to left:
FCHECK (1.0101) - check bits: CMF and FLG, read as a 16-bit unsigned integer (CMF*256 + FLG), must be a multiple of 31
FDICT (0) - if set, a preset dictionary ID (DICTID) follows immediately after FLG
FLEVEL (11) - compression "intensity" [0-3]
CM (1000) - for compression method, where CM = 8 == "deflate" compression method
CINF (0111) - indicates the size of the sliding window used, where CINF = 7 == 32K sliding window
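To make the bit positions concrete, here is a small Python sketch (mine, not part of the question) that extracts these fields from the two example bytes:

cmf, flg = 0b0111_1000, 0b1101_0101   # the example bytes above (0x78, 0xD5)

cm     = cmf & 0x0F         # 8  -> "deflate"
cinf   = cmf >> 4           # 7  -> window size = 2**(7 + 8) = 32K
fcheck = flg & 0x1F         # low 5 bits
fdict  = (flg >> 5) & 1     # 0  -> no preset dictionary
flevel = flg >> 6           # 3  -> "maximum compression" hint

# FCHECK must make CMF*256 + FLG a multiple of 31; note that with FLEVEL=3 and
# FDICT=0 a compliant encoder would emit FLG=0xDA (the common 0x78DA header),
# since 0x78D5 does not pass this check.
print(cm, cinf, fdict, flevel, (cmf * 256 + flg) % 31 == 0)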
Data block header
The next three bits (the low bits of the NEW BYTE) form the header for the Huffman-encoded block:
---CMF--- ---FLG--- NEW BYTE
0111.1000 1101.0101 11101100
                         |-|
                         | +- BFINAL
                         +--- BTYPE
From RFC 1951, right to left:
BFINAL (0) - is set (1) if this is the last block of data
BTYPE (10) - Huffman encoding : (00)none; (01)Fixed Huffman codes; (10) dynamic codes; (11) invalid
The Huffman Codes
From here I will work off the assumption of BTYPE = (10)
The following values immediately follow:
NEW BYTE NXT BYTE
(11101)100 -> 101)(11101) -> 0111111(1
       |-|
       | +- BFINAL
       +--- BTYPE
HLIT (11101) - 5-bit number of length/literal codes, 257 added (257-286)
HDIST (11101) - 5-bit number of distance codes, 1 added (1-32)
HCLEN (1111) - 4-bit number of code-length codes, 4 added (4-19)
Immediately following this are HCLEN + 4 three-bit fields, whose values are assigned to the symbols in this sequence, in order:
16 17 18 0 8 7 9 6 10 5 11 4 12 3 13 2 14 1 15
Since HCLEN + 4 = 19 here, the whole sequence is used.
A code length of 0 in that sequence means the corresponding symbol is not used.
As a graphical example, after reading the 19 x 3 bits we have six extra bits (extra bits in brackets):
NXT BYTE 00000000 00000000 00000000 00000000 00000000 00000000 [000000](00
My Question
Do the last bits in brackets above, get thrown away?
No. The only times in a deflate stream that you skip bits to go to a byte boundary are for a stored block (00), or when the end code in the last block is read. After the bits for the code length code lengths, you continue with the subsequent bits to use the generated Huffman code to read the code lengths.
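To make the bit order concrete, a rough sketch (mine, not part of the answer above) of an LSB-first bit reader that consumes the block header and then the HCLEN + 4 code-length-code lengths in the fixed order, where data is assumed to start right after the 2-byte zlib header:

CL_ORDER = [16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15]

class BitReader:
    def __init__(self, data):
        self.data, self.pos = data, 0          # pos counts bits, not bytes
    def bits(self, n):
        # Header fields in deflate are packed starting from each byte's LSB.
        v = 0
        for i in range(n):
            v |= ((self.data[self.pos >> 3] >> (self.pos & 7)) & 1) << i
            self.pos += 1
        return v

def read_dynamic_block_header(r):
    bfinal = r.bits(1)
    btype  = r.bits(2)                         # expect 0b10 = dynamic Huffman
    hlit   = r.bits(5) + 257                   # number of literal/length codes
    hdist  = r.bits(5) + 1                     # number of distance codes
    hclen  = r.bits(4) + 4                     # number of code-length codes
    cl_lengths = [0] * 19
    for sym in CL_ORDER[:hclen]:
        cl_lengths[sym] = r.bits(3)            # 0 means the symbol is unused
    return bfinal, btype, hlit, hdist, cl_lengths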
This has stumped a few of us. It's VS2013, and the code itself builds correctly, as you can see from the image. We've run this test on two different machines with the same results.
I did originally copy/paste the code into and from MS OneNote, so possibly that is the reason. But as you can see from Notepad++, there don't appear to be any special characters.
Ideas?
To expand on this, the following version also fails:
//Note: Why this does not pass is baffling
[TestMethod]
public void FunnyTestThatFailsForSomeReason()
{
const string expectedErrorMessage = "Web Session Not Found.";
var a = "Web Session Not Found.";
string b = "Web Session Not Found.";
Assert.AreEqual(expectedErrorMessage, a);
//Assert.AreEqual(expectedErrorMessage, b);
Assert.AreEqual(expectedErrorMessage.ToString(), b.ToString());
}
You're using Assert.AreEqual(Object, Object) which (in this case) is looking for reference equality. It's not going to work the way you want it to.
Verifies that two specified objects are equal. The assertion fails if the objects are not equal.
Use Assert.AreEqual(String, String, Boolean).
Verifies that two specified strings are equal, ignoring case or not as specified. The assertion fails if they are not equal.
Or, more simply, your strings are subtly different. Copy and pasting appears to have yielded different results:
(Here for formatting purposes; the existing answer also explains what's happened. This is just the hex dump of your question's code.)
00000000: 2020 2020 7661 7220 6120 3d20 2257 6562 c2a0 5365 7373 696f : var a = "Web..Sessio
00000018: 6ec2 a04e 6f74 c2a0 466f 756e 642e 223b 0a20 2020 2020 2020 :n..Not..Found.";.
00000030: 2020 2020 2073 7472 696e 6720 6220 3d20 2257 6562 2053 6573 : string b = "Web Ses
00000048: 7369 6f6e 204e 6f74 2046 6f75 6e64 2e22 3b0a :sion Not Found.";.
The strings aren't the same.
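In other words, the c2 a0 byte pairs in the dump are the UTF-8 encoding of U+00A0 (no-break space). A quick illustration of the difference (in Python, purely for demonstration; the fix in the test is simply to retype or normalize the literal):

a = "Web\u00a0Session\u00a0Not\u00a0Found."   # what the paste produced
b = "Web Session Not Found."                  # what it looks like on screen
print(a == b)                          # False
print(a.replace("\u00a0", " ") == b)   # True once the no-break spaces are normalized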