I'm using mingw's GDB.
When I hit a breakpoint, it gives me the argument names for the function from debug symbols:
Breakpoint 1, CApp::OnTextInput (this=0x81ab888, ch=97)
However, I can't find a way to get argument names for functions that I haven't set a breakpoint on. With info functions, I can get argument types and function names, but not argument names.
Is it possible or do I always have to hit a breakpoint to get argument names?
I could not find a way to do it with gdb, but you can do it with other tools: dwarfdump or objdump. For this simple program:
int main(int argc, char * argv[])
{
return 0;
}
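First build it with debug info; a minimal sketch, assuming the source sits at /tmp/1.c as the dump below suggests (gcc shown here, but any DWARF-producing compiler works):
$ gcc -g /tmp/1.c -o a.out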
Then you can dump the DWARF debug info with dwarfdump:
$ dwarfdump a.out | grep -A30 "DW_AT_name.*main"
DW_AT_name main
DW_AT_decl_file 0x00000001 /tmp/1.c
DW_AT_decl_line 0x00000001
DW_AT_decl_column 0x00000005
DW_AT_prototyped yes(1)
DW_AT_type <0x0000006e>
DW_AT_low_pc 0x00401106
DW_AT_high_pc <offset-from-lowpc>18
DW_AT_frame_base len 0x0001: 9c: DW_OP_call_frame_cfa
DW_AT_GNU_all_call_sites yes(1)
DW_AT_sibling <0x0000006e>
< 2><0x0000004f> DW_TAG_formal_parameter
DW_AT_name argc
DW_AT_decl_file 0x00000001 /tmp/1.c
DW_AT_decl_line 0x00000001
DW_AT_decl_column 0x0000000e
DW_AT_type <0x0000006e>
DW_AT_location len 0x0002: 916c: DW_OP_fbreg -20
< 2><0x0000005e> DW_TAG_formal_parameter
DW_AT_name argv
DW_AT_decl_file 0x00000001 /tmp/1.c
DW_AT_decl_line 0x00000001
DW_AT_decl_column 0x0000001b
DW_AT_type <0x00000075>
DW_AT_location len 0x0002: 9160: DW_OP_fbreg -32
< 1><0x0000006e> DW_TAG_base_type
DW_AT_byte_size 0x00000004
DW_AT_encoding DW_ATE_signed
DW_AT_name int
< 1><0x00000075> DW_TAG_pointer_type
DW_AT_byte_size 0x00000008
There you can see that the function main has two parameters, named argc and argv.
You can also use objdump to get the same information. Here is the relevant output for main and its parameters from objdump --dwarf=info a.out:
<1><2d>: Abbrev Number: 2 (DW_TAG_subprogram)
<2e> DW_AT_external : 1
<2e> DW_AT_name : (indirect string, offset: 0x58): main
<32> DW_AT_decl_file : 1
<33> DW_AT_decl_line : 1
<34> DW_AT_decl_column : 5
<35> DW_AT_prototyped : 1
<35> DW_AT_type : <0x6e>
<39> DW_AT_low_pc : 0x401106
<41> DW_AT_high_pc : 0x12
<49> DW_AT_frame_base : 1 byte block: 9c (DW_OP_call_frame_cfa)
<4b> DW_AT_GNU_all_call_sites: 1
<4b> DW_AT_sibling : <0x6e>
<2><4f>: Abbrev Number: 3 (DW_TAG_formal_parameter)
<50> DW_AT_name : (indirect string, offset: 0x5): argc
<54> DW_AT_decl_file : 1
<55> DW_AT_decl_line : 1
<56> DW_AT_decl_column : 14
<57> DW_AT_type : <0x6e>
<5b> DW_AT_location : 2 byte block: 91 6c (DW_OP_fbreg: -20)
<2><5e>: Abbrev Number: 3 (DW_TAG_formal_parameter)
<5f> DW_AT_name : (indirect string, offset: 0x0): argv
<63> DW_AT_decl_file : 1
<64> DW_AT_decl_line : 1
<65> DW_AT_decl_column : 27
<66> DW_AT_type : <0x75>
<6a> DW_AT_location : 2 byte block: 91 60 (DW_OP_fbreg: -32)
<2><6d>: Abbrev Number: 0
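To pull just the parameter names out of that dump programmatically, here is a rough, untested awk sketch that prints the first DW_AT_name following each DW_TAG_formal_parameter:
$ objdump --dwarf=info a.out | awk '/DW_TAG_formal_parameter/{p=1} p && /DW_AT_name/{print $NF; p=0}'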
I'm trying to investigate a memory leak in a particular process. For that I'm examining its memory footprint using the command below.
adb shell perfdump meminfo
The output I'm getting is
MEMORY OF /usr/bin/PuffinApp (pid 2376)
TOTAL MEMORY USAGE (kB):
Pss 61937
SwapPss 11576
Graphics 0
------
TOTAL (kB) 73513
OTHER MEMORY STATS (kB):
Vss 2276204
Rss 68440
Uss 60120
CachedPss 8149
NonCachedPss 53788
Swap 11576
SwapUss 11576
PROCESS STATS:
Maj faults 21475
Min faults 2453006
Threads 258
PROCESS MAPS:
PSS SwapPSS TotalPSS Private Private Shared Shared Referenced Name
Clean Dirty Clean Dirty
------ ------ ------ ------ ------ ------ ------ ------ ------
45660 4952 50612 64 45596 0 0 40924 [anon:rw-p]
4324 2076 6400 108 4216 0 0 3984 [heap]
1352 2184 3536 0 1352 0 0 1192 [anon:rwxp]
36 0 36 20 16 0 0 36 [anon:-w-p]
4 28 32 0 4 0 0 4 [stack]
1980 48 2028 1800 180 0 0 1980 /usr/lib/libavcodec.so.58.18.100
1276 108 1384 1128 148 0 0 1276 /usr/lib/libpryon.so
1112 156 1268 1020 92 0 0 1048 /usr/lib/libReggaeWidevine.so
774 4 778 432 88 576 0 1096 /usr/lib/libcrypto.so.1.1
552 0 552 524 20 16 0 552 /usr/lib/libReggaeMediaLib.so
500 0 500 0 468 0 64 532 /dev/shm/puffin-micStream
336 32 368 252 84 0 0 336 /usr/lib/libavformat.so.58.12.100
356 4 360 308 48 0 0 340 /usr/bin/PuffinApp
252 16 268 244 8 0 0 72 /usr/lib/libxml2.so.2.9.13
0 248 248 0 0 0 0 0 /usr/lib/libavfilter.so.7.16.100
219 4 223 172 28 76 0 276 /usr/lib/libssl.so.1.1
202 0 202 0 0 0 404 404 /dev/shm/audio_playback_stream_2
164 0 164 140 12 24 0 176 /usr/lib/libreggae-core.so
106 16 122 4 28 220 0 244 /usr/lib/libPuffinExternalCapabilityAPI.so
92 28 120 84 8 0 0 92 /usr/lib/libavutil.so.56.14.100
114 0 114 0 20 656 0 436 /usr/lib/libsqlite3.so.0.8.6
101 0 101 0 16 520 0 520 /usr/lib/libAVSCommon.so
93 0 93 16 12 228 0 256 /usr/lib/libcurl.so.4.7.0
80 4 84 76 4 0 0 80 /usr/lib/libtvm_modelops_ww_rhea_v3_50target_cnet.so
------ ------ ------ ------ ------ ------ ------ ------
61856 11576 73432 7036 53084 7676 640 62368 TOTAL
I'm not able to understand which components I should focus on to find memory leaks. Also, the sum of the components is not equal to the total PSS mentioned, i.e. 61856. I think TotalPSS can help, but for which component? Should I focus on the .so entries too, besides the heap PSS? Also, is it possible that CachedPss contributes to memory leaks?
Can someone help me understand this output?
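For reference, per-mapping PSS values like these ultimately come from the kernel's /proc/<pid>/smaps, so they can be cross-checked there; a rough sketch, assuming awk is available on the device and the pid is still 2376:
$ adb shell "awk '/^Pss:/ { s += \$2 } END { print s \" kB\" }' /proc/2376/smaps"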
I'm trying to do memory profiling on Linux; for that I'm using the command adb shell perfdump meminfo. I'm not able to understand the output of the PSS column, as TotalPSS is not equal to the sum of the individual PSS values. Below is the process map I got after running the above command.
MEMORY OF /usr/bin/PuffinApp (pid 2376)
TOTAL MEMORY USAGE (kB):
Pss 66695
SwapPss 11388
Graphics 0
------
TOTAL (kB) 78083
OTHER MEMORY STATS (kB):
Vss 2250440
Rss 76560
Uss 64132
CachedPss 12371
NonCachedPss 54324
Swap 11388
SwapUss 11388
PROCESS STATS:
Maj faults 33592
Min faults 4533273
Threads 253
PROCESS MAPS:
PSS SwapPSS TotalPSS Private Private Shared Shared Referenced Name
Clean Dirty Clean Dirty
------ ------ ------ ------ ------ ------ ------ ------ ------
45752 5208 50960 68 45684 0 0 45524 [anon:rw-p]
4452 1948 6400 224 4228 0 0 4380 [heap]
1612 1988 3600 0 1612 0 0 1568 [anon:rwxp]
36 0 36 20 16 0 0 36 [anon:-w-p]
4 28 32 0 4 0 0 4 [stack]
2116 108 2224 1968 148 0 0 1716 /usr/lib/libpryon.so
1984 4 1988 1936 48 0 0 1104 /usr/bin/PuffinApp
832 0 832 804 20 16 0 516 /usr/lib/libReggaeMediaLib.so
756 32 788 624 84 96 0 384 /usr/lib/libavformat.so.58.12.100
484 156 640 392 92 0 0 400 /usr/lib/libReggaeWidevine.so
522 48 570 280 180 124 0 568 /usr/lib/libavcodec.so.58.18.100
500 0 500 0 468 0 64 532 /dev/shm/puffin-micStream
344 0 344 332 12 0 0 136 /usr/lib/libLocaleWakewordAdapter.so
338 4 342 0 88 1148 0 1236 /usr/lib/libcrypto.so.1.1
280 8 288 276 4 0 0 132 /usr/lib/libSpotifyAdapter.so
264 0 264 184 12 136 0 208 /usr/lib/libreggae-core.so
247 8 255 8 28 976 0 784 /usr/lib/libPuffinExternalCapabilityAPI.so
4 248 252 4 0 0 0 4 /usr/lib/libavfilter.so.7.16.100
222 4 226 200 0 68 0 268 /usr/lib/libopus.so.0.8.0
202 0 202 0 0 0 404 404 /dev/shm/audio_playback_stream_2
176 16 192 168 8 0 0 96 /usr/lib/libxml2.so.2.9.13
183 0 183 120 4 128 0 64 /usr/lib/libDefaultClient.so
142 0 142 88 12 172 0 256 /usr/lib/libacsdkAudioPlayer.so
140 0 140 136 4 0 0 44 /usr/lib/libacsdkVisualCharacteristics.so
------ ------ ------ ------ ------ ------ ------ ------
66613 11388 78001 10728 53404 11912 512 71972 TOTAL
If I calculate the sum of the individual PSS values (1st column) I get 61492, which is around 5 MB less than the total PSS. Can someone explain why these two values are different and what is using the remaining memory?
I am working on a project where I need to parse the DWARF output of a compiler. I am working on Debian x64 (Windows WSL) with GCC 8.3.0.
I face a compiler behaviour that I think might be a bug, but I am unsure if there's a subtlety I don't get.
In order to pack a structure, I use the following directive: #pragma pack(push,1). I think that GCC doesn't produce the right debugging symbols when the following conditions occur (but not limited to them; see the sketch after this list):
struct declaration in .h file
pragma directive in the .cpp file only, before the include
instance of the struct declared in the .cpp file
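A minimal sketch of that arrangement (the file names here are hypothetical):
// StructD.h -- the header itself contains no pragma
struct StructD { ... };          // definition shown below

// file1.cpp -- pragma only here, before the include
#pragma pack(push,1)
#include "StructD.h"
StructD file1StructDInstance;    // instance declared in the .cpp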
Here's a structure:
struct StructD
{
unsigned int bitfieldA : 1;
unsigned int bitfieldB : 9;
unsigned int bitfieldC : 3;
unsigned int bitfieldD;
};
And here's a piece of code to test it:
#include <cstdio>
StructD file1StructDInstance;  // declared in the .cpp, as described above
int main()
{
    file1StructDInstance.bitfieldA = 1;
    file1StructDInstance.bitfieldB = 0b100111011;
    file1StructDInstance.bitfieldC = 0b11;
    file1StructDInstance.bitfieldD = 0b101001101;
    unsigned char* ptr = (unsigned char*)(&file1StructDInstance);
    printf("%02x %02x %02x %02x %02x %02x %02x %02x\n", ptr[0], ptr[1], ptr[2], ptr[3], ptr[4], ptr[5], ptr[6], ptr[7]);
    return 0;
}
Scenario 1 - No pragma
If I do not use the pragma anywhere, my struct is 32-bit aligned and the debugging symbols match. The software output looks like this: 77 0e 00 00 4d 01 00 00.
<2><150>: Abbrev Number: 3 (DW_TAG_member)
<151> DW_AT_name : (indirect string, offset: 0xc2): bitfieldD
<155> DW_AT_decl_file : 2
<156> DW_AT_decl_line : 33
<157> DW_AT_decl_column : 15
<158> DW_AT_type : <0x83>
<15c> DW_AT_data_member_location: 4
The debugging symbols report that bitfieldD is at byte offset 4 in the structure, with no bit offset, which is right.
Scenario 2 - Pragma before struct declaration
If I put the pragma at the top of the .h file, I get this software output:
77 0e 4d 01 00 00 00 00
And the debugging symbol is as follow
<2><150>: Abbrev Number: 3 (DW_TAG_member)
<151> DW_AT_name : (indirect string, offset: 0xc2): bitfieldD
<155> DW_AT_decl_file : 2
<156> DW_AT_decl_line : 33
<157> DW_AT_decl_column : 15
<158> DW_AT_type : <0x83>
<15c> DW_AT_data_member_location: 2
So bitfieldD is at byte offset 2 with no bit offset, which is again right and matches the memory layout: the three bitfields pack into 13 bits, so under pack(1) they occupy bytes 0-1 (1 + (315 << 1) + (3 << 10) = 3703 = 0x0e77, printed little-endian as 77 0e), and bitfieldD (0b101001101 = 0x14d) follows at byte 2 as 4d 01.
Scenario 3 - Pragma in .cpp, but omitted in .h
When I put the pragma in the .cpp file, before the include of the .h file that defines StructD, but omit the pragma in the .h file itself, I get a mismatch between the compiled code and the debugging symbols.
Software output: 77 0e 00 00 4d 01 00 00
And the debugging symbols:
<2><150>: Abbrev Number: 3 (DW_TAG_member)
<151> DW_AT_name : (indirect string, offset: 0xc2): bitfieldD
<155> DW_AT_decl_file : 2
<156> DW_AT_decl_line : 33
<157> DW_AT_decl_column : 15
<158> DW_AT_type : <0x83>
<15c> DW_AT_data_member_location: 2
Now the debugging symbols say that bitfieldD is at byte offset 2, but clearly the compiled code puts it at byte offset 4. I recognize that this usage of the pragma might not be proper, but I would expect GCC to produce debugging symbols that match the generated code.
Is this a bug in GCC or am I misunderstanding how DWARF works?
Ok, I'm probably doing something dumb, but I can't get libusb to let me transfer data to my device for the life of me.
Code:
#include <iostream>
#include <iomanip>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <signal.h>
#include <libusb-1.0/libusb.h>
#define EP_DATA_IN 0x83
#define EP_DATA_OUT 0x02
#define DEVICE_CONFIGURATION 0
int main(int argc, char **argv)
{
int rc;
libusb_context *ctx = NULL;
libusb_device_handle *dev_handle;
int actual = 0;
unsigned char *data = new unsigned char[4];
data[0]='a';data[1]='b';data[2]='c';data[3]='d';
rc = libusb_init(&ctx);
if(rc < 0) {
std::cout << "Init Error " << rc << std::endl;
return 1;
}
libusb_set_debug(ctx, 6);
dev_handle = libusb_open_device_with_vid_pid(ctx, 0x03eb, 0x2423);
if (!dev_handle) {
fprintf(stderr, "Error finding USB device\n");
return 2;
}
if(libusb_kernel_driver_active(dev_handle, DEVICE_CONFIGURATION) == 1) {
std::cout << "Kernel Driver Active" << std::endl;
if(libusb_detach_kernel_driver(dev_handle, DEVICE_CONFIGURATION) == 0)
std::cout << "Kernel Driver Detached!" << std::endl;
}
rc = libusb_claim_interface(dev_handle, DEVICE_CONFIGURATION);
if(rc != 0) {
std::cout << "Cannot Claim Interface" << std::endl;
return 3;
}
std::cout << "Data->" << data << "<-" << std::endl;
std::cout << "Writing Data..." << std::endl;
std::cout << "Trying endpoint " << EP_DATA_OUT << "." << std::endl;
rc = libusb_bulk_transfer(dev_handle, EP_DATA_OUT, data, 4, &actual, 100); // 4, not sizeof(data): data is a pointer, so sizeof(data) is the pointer size
if(rc == 0 && actual == 4)
{
std::cout << "Writing Successful!" << std::endl;
}
else
{
std::cout << "Write Error! Rc: " << rc << " Actual transfered bytes: " << actual << "." << std::endl;
std::cout << "Error code means: " << libusb_error_name(rc) << std::endl;
}
rc = libusb_release_interface(dev_handle, 0);
if(rc!=0) {
std::cout << "Cannot Release Interface" << std::endl;
return 1;
}
if (dev_handle)
libusb_close(dev_handle);
libusb_exit(ctx);
return 0;
}
Device in question:
pi#testpi:~$ sudo lsusb -d 03eb: -v
Bus 001 Device 004: ID 03eb:2423 Atmel Corp.
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x03eb Atmel Corp.
idProduct 0x2423
bcdDevice 1.00
iManufacturer 1 ATMEL ASF
iProduct 2 Vendor Class Example
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 69
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 100mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 0
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 255 Vendor Specific Subclass
bInterfaceProtocol 255 Vendor Specific Protocol
iInterface 0
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 1
bNumEndpoints 6
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 255 Vendor Specific Subclass
bInterfaceProtocol 255 Vendor Specific Protocol
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x04 EP 4 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x85 EP 5 IN
bmAttributes 1
Transfer Type Isochronous
Synch Type None
Usage Type Data
wMaxPacketSize 0x0100 1x 256 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x06 EP 6 OUT
bmAttributes 1
Transfer Type Isochronous
Synch Type None
Usage Type Data
wMaxPacketSize 0x0100 1x 256 bytes
bInterval 1
Device Status: 0x0001
Self Powered
And what I get when I run the code against the device:
pi#testpi:~/BatLogger/Interface/libusb_test$ ./make.sh
libusb: debug [libusb_get_device_list]
libusb: debug [libusb_get_device_descriptor]
libusb: debug [libusb_open] open 1.4
libusb: debug [usbi_add_pollfd] add fd 11 events 4
libusb: debug [libusb_kernel_driver_active] interface 0
libusb: debug [libusb_claim_interface] interface 0
Data->abcd<-
Writing Data...
Trying endpoint 2.
libusb: debug [add_to_flying_list] arm timerfd for timeout in 100ms (first in line)
libusb: debug [submit_bulk_transfer] need 1 urbs for new transfer with length 4
libusb: error [submit_bulk_transfer] submiturb failed error -1 errno=2
libusb: debug [submit_bulk_transfer] first URB failed, easy peasy
libusb: debug [disarm_timerfd]
Write Error! Rc: -1 Actual transfered bytes: 0.
Error code means: LIBUSB_ERROR_IO
libusb: debug [libusb_release_interface] interface 0
libusb: debug [libusb_close]
libusb: debug [usbi_remove_pollfd] remove fd 11
libusb: debug [libusb_exit]
libusb: debug [libusb_exit] destroying default context
As far as I know, I'm doing everything correctly. libusb_claim_interface returns OK, there isn't a pre-existing driver attached to the device since I'm using a custom VID/PID combo, and EP_DATA_OUT is an output endpoint (its direction bit is 0, though with respect to whom "out" is meant isn't described). Out of irritation, I've also tried every other possible endpoint (0-16, 0-16 | 1 << 7), with the exact same error for all of them.
Is there something silly I'm missing? Do I have to install a kernel module or something to make libusb play nice with me? I'm using libusb-1.0.
The error from the libusb debug message is error -1 errno=2, where errno=2 corresponds to ENOENT, but the few things I could find about that in combination with libusb didn't reach a decent conclusion about what's actually going on.
The code is built with g++ -std=c++11 -Wall -lrt -lusb-1.0 main.cpp -o main.bin, though the fact that I'm using C++ is probably not relevant to the issue, since I'm not using one of the C++ libusb wrappers.
Ok, so I figured out the issue.
Basically, apparently, for ~reasons~, the endpoints for my device are attached to interface 0, alternate setting 1.
I'm not sure how, or if it's even possible, to determine this from the output of lsusb, but I had a bit of scripting written against PyUSB that I had used for a different device, so I had a poke around with that, and it told me:
pi#testpi:~/BatLogger/Interface/libusb_test$ sudo python3 test.py
INFO:Main.Gui:Device: DEVICE ID 03eb:2423 on Bus 001 Address 004 =================
bLength : 0x12 (18 bytes)
bDescriptorType : 0x1 Device
bcdUSB : 0x200 USB 2.0
bDeviceClass : 0x0 Specified at interface
bDeviceSubClass : 0x0
bDeviceProtocol : 0x0
bMaxPacketSize0 : 0x40 (64 bytes)
idVendor : 0x03eb
idProduct : 0x2423
bcdDevice : 0x100 Device 1.0
iManufacturer : 0x1 ATMEL ASF
iProduct : 0x2 Vendor Class Example
iSerialNumber : 0x0
bNumConfigurations : 0x1
CONFIGURATION 1: 100 mA ==================================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x2 Configuration
wTotalLength : 0x45 (69 bytes)
bNumInterfaces : 0x1
bConfigurationValue : 0x1
iConfiguration : 0x0
bmAttributes : 0xc0 Self Powered
bMaxPower : 0x32 (100 mA)
INTERFACE 0: Vendor Specific ===========================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x4 Interface
bInterfaceNumber : 0x0
bAlternateSetting : 0x0
bNumEndpoints : 0x0
bInterfaceClass : 0xff Vendor Specific
bInterfaceSubClass : 0xff
bInterfaceProtocol : 0xff
iInterface : 0x0
INTERFACE 0, 1: Vendor Specific ========================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x4 Interface
bInterfaceNumber : 0x0
bAlternateSetting : 0x1
bNumEndpoints : 0x6
bInterfaceClass : 0xff Vendor Specific
bInterfaceSubClass : 0xff
bInterfaceProtocol : 0xff
iInterface : 0x0
ENDPOINT 0x81: Interrupt IN ==========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x81 IN
bmAttributes : 0x3 Interrupt
wMaxPacketSize : 0x40 (64 bytes)
bInterval : 0x1
ENDPOINT 0x2: Interrupt OUT ==========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x2 OUT
bmAttributes : 0x3 Interrupt
wMaxPacketSize : 0x40 (64 bytes)
bInterval : 0x1
ENDPOINT 0x83: Bulk IN ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x83 IN
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x40 (64 bytes)
bInterval : 0x0
ENDPOINT 0x4: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x4 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x40 (64 bytes)
bInterval : 0x0
ENDPOINT 0x85: Isochronous IN ========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x85 IN
bmAttributes : 0x1 Isochronous
wMaxPacketSize : 0x100 (256 bytes)
bInterval : 0x1
ENDPOINT 0x6: Isochronous OUT ========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x6 OUT
bmAttributes : 0x1 Isochronous
wMaxPacketSize : 0x100 (256 bytes)
bInterval : 0x1
The critical thing being that there are no endpoints under INTERFACE 0:, but there are endpoints under INTERFACE 0, 1:. This was enough to go on to figure out that there was more than one version of INTERFACE 0, and with that, it was pretty easy to figure out I needed to call libusb_set_interface_alt_setting() to select the right alternate setting thingie.
Basically, I wound up adding
rc = libusb_set_interface_alt_setting(dev_handle, DEVICE_CONFIGURATION, 1);
if(rc != 0) {
std::cout << "Cannot configure alternate setting" << std::endl;
return 3;
}
after the libusb_claim_interface() call in my C(++) code, and I can now write to the device.
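For what it's worth, the same information PyUSB printed is also reachable from libusb itself. A rough sketch using only standard libusb-1.0 descriptor calls (untested here; assumes dev_handle is open as in the code above), which lists each alternate setting and its endpoints:
libusb_config_descriptor *config = NULL;
if (libusb_get_active_config_descriptor(libusb_get_device(dev_handle), &config) == 0) {
    for (int i = 0; i < config->bNumInterfaces; i++) {
        const libusb_interface &intf = config->interface[i];
        for (int a = 0; a < intf.num_altsetting; a++) {
            const libusb_interface_descriptor &alt = intf.altsetting[a];
            // alt settings with bNumEndpoints == 0 (like alt 0 here) have nothing to transfer to
            printf("interface %d alt %d: %d endpoint(s)\n",
                   alt.bInterfaceNumber, alt.bAlternateSetting, alt.bNumEndpoints);
            for (int e = 0; e < alt.bNumEndpoints; e++)
                printf("  endpoint 0x%02x\n", alt.endpoint[e].bEndpointAddress);
        }
    }
    libusb_free_config_descriptor(config);
}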
I have an issue when I subclass a type which binds an Obj-C type. In some cases, it fails at construction time.
I can reproduce this right now with the cocos2d bindings and CCSprite. Here's my subclass
public class MySprite : CCSprite
{
public MySprite (string filename) : base (filename)
{}
}
When I instantiate it, it fails:
Stacktrace:
at (wrapper managed-to-native) MonoTouch.ObjCRuntime.Messaging.void_objc_msgSendSuper_IntPtr (intptr,intptr,intptr) <IL 0x00025, 0xffffffff>
at MonoTouch.Cocos2D.CCSprite.set_Texture (MonoTouch.Cocos2D.CCTexture2D) <IL 0x00048, 0x00137>
at (wrapper runtime-invoke) <Module>.runtime_invoke_void__this___object (object,intptr,intptr,intptr) <IL 0x00052, 0xffffffff>
at (wrapper managed-to-native) MonoTouch.ObjCRuntime.Messaging.IntPtr_objc_msgSendSuper_IntPtr (intptr,intptr,intptr) <IL 0x00027, 0xffffffff>
at MonoTouch.Cocos2D.CCSprite..ctor (string) <IL 0x00072, 0x001a3>
at Demo.MySprite..ctor (string) <IL 0x00002, 0x00027>
[...]
Native stacktrace:
0 Demo 0x00115b5c mono_handle_native_sigsegv + 284
1 Demo 0x00089c38 mono_sigsegv_signal_handler + 248
2 libsystem_c.dylib 0x962af86b _sigtramp + 43
3 ??? 0xffffffff 0x0 + 4294967295
4 Demo 0x0003b9d2 -[CCSprite setOpacityModifyRGB:] + 47
5 Demo 0x0003c18c -[CCSprite updateBlendFunc] + 267
6 Demo 0x0003c37c -[CCSprite setTexture:] + 488
7 ??? 0x11cadc94 0x0 + 298507412
8 ??? 0x11cada78 0x0 + 298506872
9 ??? 0x11cadbf6 0x0 + 298507254
10 Demo 0x0008dff2 mono_jit_runtime_invoke + 722
11 Demo 0x001f0b7e mono_runtime_invoke + 126
12 Demo 0x00293736 monotouch_trampoline + 3686
13 Demo 0x0003909e -[CCSprite initWithTexture:rect:rotated:] + 614
14 Demo 0x0003914d -[CCSprite initWithTexture:rect:] + 70
15 Demo 0x0003934c -[CCSprite initWithFile:] + 275
16 ??? 0x11cad803 0x0 + 298506243
17 ??? 0x11cad6ec 0x0 + 298505964
18 ??? 0x11cace30 0x0 + 298503728
19 ??? 0x11cac958 0x0 + 298502488
20 ??? 0x11ca7f04 0x0 + 298483460
21 ??? 0x0d7f7258 0x0 + 226456152
22 ??? 0x0d7f0a7c 0x0 + 226429564
23 ??? 0x0d7f0dc5 0x0 + 226430405
24 Demo 0x0008dff2 mono_jit_runtime_invoke + 722
25 Demo 0x001f0b7e mono_runtime_invoke + 126
26 Demo 0x00293736 monotouch_trampoline + 3686
27 UIKit 0x016c59d6 -[UIApplication _callInitializationDelegatesForURL:payload:suspended:] + 1292
28 UIKit 0x016c68a6 -[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 508
29 UIKit 0x016d5743 -[UIApplication handleEvent:withNewEvent:] + 1027
30 UIKit 0x016d61f8 -[UIApplication sendEvent:] + 68
31 UIKit 0x016c9aa9 _UIApplicationHandleEvent + 8196
32 GraphicsServices 0x042bafa9 PurpleEventCallback + 1274
33 CoreFoundation 0x037231c5 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 53
34 CoreFoundation 0x03688022 __CFRunLoopDoSource1 + 146
35 CoreFoundation 0x0368690a __CFRunLoopRun + 2218
36 CoreFoundation 0x03685db4 CFRunLoopRunSpecific + 212
37 CoreFoundation 0x03685ccb CFRunLoopRunInMode + 123
38 UIKit 0x016c62a7 -[UIApplication _run] + 576
39 UIKit 0x016c7a9b UIApplicationMain + 1175
40 ??? 0x0d7ebbc5 0x0 + 226409413
41 ??? 0x0d7e5020 0x0 + 226381856
42 ??? 0x0d7e4390 0x0 + 226378640
43 ??? 0x0d7e44e6 0x0 + 226378982
44 Demo 0x0008dff2 mono_jit_runtime_invoke + 722
45 Demo 0x001f0b7e mono_runtime_invoke + 126
46 Demo 0x001f4d74 mono_runtime_exec_main + 420
47 Demo 0x001fa165 mono_runtime_run_main + 725
48 Demo 0x000eb4d5 mono_jit_exec + 149
49 Demo 0x002889f5 main + 2005
50 Demo 0x00086f81 start + 53
What worries me is that I have similar code working in a different application.
And to be complete: if I override the Texture property to proxy to base, it no longer crashes, but it doesn't display anything either, so I suspect the native object is in a bad state.
I also tried [Register]ing the class, and adding the default constructor overrides.
[UPDATE] I compared this project with the other one that was working. In fact, both work on device, and both fail the same way in the simulator.
[UPDATE2] Here's a sample triggering the behaviour: https://github.com/StephaneDelcroix/mt-subclassbug The Cocos2D.dll is a fresh one generated this morning from monotouch-bindings master.
Answering my own question. The bug was not in the bindings definition, nor in the tools used to generate them, but in the 2.1rc0 version of cocos2d. Upgrading to 2.1rc0a fixed it.
This then triggers a new issue, but that one could be traced back to the bindings definition, and is fixed here: https://github.com/mono/monotouch-bindings/pull/97