We have an embedded board with a ColdFire CPU which runs µC/OS-II. When the embedded program crashes, the CPU dumps (copies) the entire RAM into the on-board flash. We then have a procedure to retrieve the RAM content (which was dumped into the flash) as a simple .bin file.
When we want to debug, we use GDB (m68k-elf-gdb.exe) combined with the .elf file. For example:
$ gdb our_elf_file
(gdb) print some_var
Cannot access memory at address 0x30617890
(gdb) ptype some_var
type = unsigned int
(gdb)
This tells us the address of the variable. We then apply a simple offset to that address and read the RAM dump at the corresponding location.
For example, if we want to read some_var located at 0x30617890, and we know the dump represents the RAM content starting from 0x20000000, we read 4 bytes of the .bin file at offset (0x30617890 - 0x20000000).
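Concretely, that read looks something like this from a shell (ramdump.bin is just a placeholder name for the retrieved dump file):
$ # some_var is at 0x30617890; the dump starts at RAM address 0x20000000
$ xxd -s $((0x30617890 - 0x20000000)) -l 4 ramdump.bin
Since the ColdFire is big-endian, those four bytes can be read off directly as the unsigned int value.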
(Sometimes we also use objdump (m68k-elf-objdump.exe) for other purposes).
I am completely new to this kind of thing, so maybe my question is stupid, but is there some way to tell gdb where the RAM content is?
Do different sections of libc (such as .text, .plt, .got, .bss, .rodata, and others) get loaded at the same offset relative to the libc base address every time?
I know the loader loads libc at a random location every time I run my program.
Thank you in advance.
I guess I found the answer to my own question. I wrote a pin-tool using Intel PIN that, every time a libc section gets loaded, outputs the section's offset relative to the base address of libc. Here are the sections that get loaded at the same offsets every time, with their corresponding offsets (the part before the - is the library name, i.e. libc with its version, and what comes after it is the section name):
libc.so.6-.note.gnu.build-id 0x0000000000000270
libc.so.6-.note.ABI-tag 0x0000000000000294
libc.so.6-.gnu.hash 0x00000000000002b8
libc.so.6-.dynsym 0x0000000000003d80
libc.so.6-.dynstr 0x0000000000010ff8
libc.so.6-.gnu.version 0x00000000000169d8
libc.so.6-.gnu.version_d 0x0000000000017b68
libc.so.6-.gnu.version_r 0x0000000000017ee0
libc.so.6-.rela.dyn 0x0000000000017f10
libc.so.6-.rela.plt 0x000000000001f680
libc.so.6-.plt 0x000000000001f7c0
libc.so.6-.plt.got 0x000000000001f8a0
libc.so.6-.text 0x000000000001f8b0
libc.so.6-__libc_freeres_fn 0x0000000000172b10
libc.so.6-__libc_thread_freeres_fn 0x0000000000175030
libc.so.6-.rodata 0x0000000000175300
libc.so.6-.stapsdt.base 0x0000000000196650
libc.so.6-.interp 0x0000000000196660
libc.so.6-.eh_frame_hdr 0x000000000019667c
libc.so.6-.eh_frame 0x000000000019bb38
libc.so.6-.gcc_except_table 0x00000000001bc3cc
libc.so.6-.hash 0x00000000001bc810
libc.so.6-.tdata 0x00000000003c07c0
libc.so.6-.tbss 0x00000000003c07d0
libc.so.6-.init_array 0x00000000003c07d0
libc.so.6-__libc_subfreeres 0x00000000003c07e0
libc.so.6-__libc_atexit 0x00000000003c08d8
libc.so.6-__libc_thread_subfreeres 0x00000000003c08e0
libc.so.6-.data.rel.ro 0x00000000003c0900
libc.so.6-.dynamic 0x00000000003c3ba0
libc.so.6-.got 0x00000000003c3d80
libc.so.6-.got.plt 0x00000000003c4000
libc.so.6-.data 0x00000000003c4080
libc.so.6-.bss 0x00000000003c5720
And there are indeed sections that get loaded at different offsets every time; you can see them below. However, since I do not recognize them and they do not seem important to me, I would conclude that yes, the sections we care most about get loaded at the same offset each time the program runs (see the note after the list).
libc.so.6-.note.stapsdt
libc.so.6-.gnu.warning.sigstack
libc.so.6-.gnu.warning.sigreturn
libc.so.6-.gnu.warning.siggetmask
libc.so.6-.gnu.warning.tmpnam
libc.so.6-.gnu.warning.tmpnam_r
libc.so.6-.gnu.warning.tempnam
libc.so.6-.gnu.warning.sys_errlist
libc.so.6-.gnu.warning.sys_nerr
libc.so.6-.gnu.warning.gets
libc.so.6-.gnu.warning.getpw
libc.so.6-.gnu.warning.re_max_failures
libc.so.6-.gnu.warning.lchmod
libc.so.6-.gnu.warning.getwd
libc.so.6-.gnu.warning.sstk
libc.so.6-.gnu.warning.revoke
libc.so.6-.gnu.warning.mktemp
libc.so.6-.gnu.warning.gtty
libc.so.6-.gnu.warning.stty
libc.so.6-.gnu.warning.chflags
libc.so.6-.gnu.warning.fchflags
libc.so.6-.gnu.warning.__compat_bdflush
libc.so.6-.gnu.warning.__memset_zero_constant_len_parameter
libc.so.6-.gnu.warning.__gets_chk
libc.so.6-.gnu.warning.inet6_option_space
libc.so.6-.gnu.warning.inet6_option_init
libc.so.6-.gnu.warning.inet6_option_append
libc.so.6-.gnu.warning.inet6_option_alloc
libc.so.6-.gnu.warning.inet6_option_next
libc.so.6-.gnu.warning.inet6_option_find
libc.so.6-.gnu.warning.getmsg
libc.so.6-.gnu.warning.putmsg
libc.so.6-.gnu.warning.fattach
libc.so.6-.gnu.warning.fdetach
libc.so.6-.gnu.warning.setlogin
libc.so.6-.gnu_debuglink
libc.so.6-.shstrtab
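A note on the two lists above: these offsets are fixed in the ELF file itself, and ASLR only slides the single base address of the whole mapping, so every section that is actually mapped (SHF_ALLOC) keeps a constant offset relative to the libc base. The names in the second list (.gnu.warning.*, .shstrtab, .gnu_debuglink) are non-allocated sections that are never mapped into memory at all, which would explain why the values reported for them are unstable. You can cross-check the fixed offsets without a pin-tool; the libc path below is an assumption, adjust it for your distribution:
$ readelf -S /lib/x86_64-linux-gnu/libc.so.6 | grep -E '\.text|\.plt|\.got|\.bss'
The Address column for those sections should match the constant offsets reported above.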
I am trying to make a small kernel for the 80386 processor, mainly for learning purposes, and I want to get the full memory map of the available RAM.
I have read that it is possible, and better, to do this with the help of GRUB rather than by querying the BIOS directly.
Can anybody tell me how to do it?
In particular, to use BIOS functionality in real mode we issue BIOS interrupts and get the desired values back in registers; what is the equivalent way of doing this with the functions GRUB provides?
Here is the process I use in my kernel (note that this is 32-bit). In my bootstrap assembly file, I tell GRUB to provide me with a memory map:
.set MEMINFO, 1 << 1 # Get memory map from GRUB
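For context, this flag feeds into the Multiboot 1 header; here is a hedged sketch of the whole header (the magic number and checksum rule come from the Multiboot 1 specification, while ALIGN and the .multiboot section name are just common conventions):
.set ALIGN,    1 << 0                # align loaded modules on page boundaries
.set FLAGS,    ALIGN | MEMINFO       # MEMINFO is the flag set above
.set MAGIC,    0x1BADB002            # Multiboot 1 header magic number
.set CHECKSUM, -(MAGIC + FLAGS)      # magic + flags + checksum must sum to 0

.section .multiboot
.align 4
.long MAGIC
.long FLAGS
.long CHECKSUM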
GRUB then loads the address of the multiboot info structure into ebx for you (this structure contains the address of the memory map). I then call into C code to handle the actual iteration and processing of the memory map. I do something like this to iterate over the map:
/* Macro to get the next entry in the memory map */
#define MMAP_NEXT(m) \
    (multiboot_memory_map_t*)((uint32_t)(m) + (m)->size + sizeof(uint32_t))

void read_mmap(multiboot_info_t* mbt)
{
    multiboot_memory_map_t* mmap = (multiboot_memory_map_t*) mbt->mmap_addr;

    /* Iterate over the memory map */
    while ((uint32_t)mmap < mbt->mmap_addr + mbt->mmap_length) {
        // process the current memory map entry
        mmap = MMAP_NEXT(mmap);
    }
}
where multiboot_info_t and multiboot_memory_map_t are defined as in the GNU multiboot.h file. As Andrew Medico posted in the comments, here is a great link for getting started with this.
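As a hedged illustration of what the "process the current memory map entry" step might do (the field names come from GNU multiboot.h; summing up available RAM is just one example use):
#include <stdint.h>
#include "multiboot.h"   /* multiboot_memory_map_t, MULTIBOOT_MEMORY_AVAILABLE */

/* Running total of usable RAM reported by GRUB. addr and len are 64-bit
   fields even in a 32-bit kernel, so accumulate into a uint64_t. */
static uint64_t available_bytes = 0;

static void process_entry(const multiboot_memory_map_t* entry)
{
    if (entry->type == MULTIBOOT_MEMORY_AVAILABLE) {
        /* entry->addr is the physical start of this usable region */
        available_bytes += entry->len;
    }
}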
As stated on this site, when I want to dump memory in gdb, the start point is 0x1000 and the end is 0x2000.
For lldb, the start is 0x1000 and the end is 0x1200.
Is there a reason for this, or is it just a mistake?
The main question is: how do I dump a memory area from 0x1000 to 0x2000 in lldb?
The following works fine for me:
(lldb) memory read --outfile /tmp/mem.txt 0x6080000fe680 0x6080000fe680+1000
This dumps 1000 bytes of memory, starting at the given address, in hex format to /tmp/mem.txt. Use --binary for binary format.
You could also use --count to state how many bytes you want to dump:
(lldb) memory read --outfile /tmp/mem.txt --count 1000 0x6080000fe680
If you are in the Xcode debugging environment and have a variable named 'note1', you can also use:
(lldb) memory read --outfile /tmp/mem.bin note1 note1+100
Reads at the actual location 0x1000 fail in Xcode for me ("memory read failed"); that region must be protected in some way.
As to the difference between 0x1200 and 0x2000 in the documentation, I think it's simply a small mistake.
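So, to answer the main question directly, with the same options as above (assuming the 0x1000-0x2000 range is actually readable in your process):
(lldb) memory read --outfile /tmp/mem.bin --binary 0x1000 0x2000
That dumps 0x1000 (4096) bytes in binary format.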
I have a program that I need to patch using GDB. The issue is that there is a line of code that performs a "less than or equal" test and fails, causing the program to end with a segmentation fault. The program is already compiled and I do not have the source, so I obviously cannot change the source code. However, using GDB, I was able to locate where the <= test is done, and then I was able to locate the memory address, which you can see below.
(gdb) x/100i $pc
... removed extra lines ...
0x7ffff7acb377: jle 0x7ffff7acb3b1
....
All I need to do is change the test to a 'greater than or equal to' test and then the program should run fine. The opcode for jle is 0x7e and I need to change it to 0x7d (jge). My assignment gives instructions on how to do this as follows:
$ gdb -write -q programtomodify
(gdb) set {unsigned char} 0x8040856f = 0x7d
(gdb) quit
So I try it and get...
$ gdb -write -q player
(gdb) set {unsigned char} 0x7ffff7acb377 = 0x7d
Cannot access memory at address 0x7ffff7acb377
I have tried various other memory addresses and no matter what I try I get the same response. That is my only problem; I don't care if it's the wrong address or the wrong opcode at this point, I just want to be able to modify the memory.
I am running Linux Mint 14 via VMware Player.
Thanks.
Cannot access memory at address 0x7ffff7acb377
You are trying to write to an address where some shared library resides. You can find out which library that is with info sym 0x7ffff7acb377.
At the time when you are trying to perform the patch, that shared library has not been loaded yet, which explains the message you get.
Run the program to main. Then you should be able to write to the address. However, you'll need to have write permission on the library to make your write "stick".
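Putting that together, the session might look something like this (the address is the one from your question, and breaking on main is just a convenient way to make sure the library is loaded before you write):
$ gdb -write -q player
(gdb) break main
(gdb) run
(gdb) info sym 0x7ffff7acb377
(gdb) set {unsigned char} 0x7ffff7acb377 = 0x7d
(gdb) quit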
Short of digging through GDB source, where can I find documentation about the format used to create core files?
The ELF specification leaves the core file format open, so I guess this must be specified by GDB itself! Sadly, I did not find any help in this regard in GNU's gdb documentation.
Here is what I am trying to do: map virtual addresses to function names in the executable/libraries that comprise the running process. To do that, I would first like to figure out, from the core file, the mapping from the virtual address space to the names of the executable file/libraries, and then dig into the relevant file to get the symbolic information.
Now 'readelf -a core' tells me that nearly all the segments in the core file are of type LOAD -- I'd guess these are the .text and .bss/.data segments from all the participating files, plus a stack segment. Apart from these LOAD segments, there is one NOTE segment, but it does not seem to contain the map. So how is the information about which file a segment corresponds to stored in the core file? Are those LOAD segments formatted in a particular way to include the file information?
The core dump file uses the ELF container format but is not described in the ELF standard. AFAIK, there is no authoritative reference for it.
so how is the information about which file a segment corresponds to, stored in the core file?
A lot of extra information is contained within the ELF notes. You can use readelf -n to see them.
The CORE/NT_FILE note defines the association between memory address range and file (+ offset):
Page size: 1
Start End Page Offset
0x0000000000400000 0x000000000049d000 0x0000000000000000
/usr/bin/xchat
0x000000000069c000 0x00000000006a0000 0x000000000009c000
/usr/bin/xchat
0x00007f2490885000 0x00007f24908a1000 0x0000000000000000
/usr/share/icons/gnome/icon-theme.cache
0x00007f24908a1000 0x00007f24908bd000 0x0000000000000000
/usr/share/icons/gnome/icon-theme.cache
0x00007f24908bd000 0x00007f2490eb0000 0x0000000000000000
/usr/share/fonts/opentype/ipafont-gothic/ipag.ttf
[...]
For each thread, you should have a CORE/NT_PRSTATUS note which gives you the registers of the thread (including the stack pointer). You might be able to infer the position of the stacks from this.
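For reference, on Linux the descriptor of an NT_PRSTATUS note is a struct elf_prstatus (declared in <sys/procfs.h>), and its pr_reg member holds the general-purpose registers. A minimal sketch, assuming desc already points at the descriptor data of a note you located in the core file:
#include <stdio.h>
#include <string.h>
#include <sys/procfs.h>   /* struct elf_prstatus */

/* desc points at the descriptor data of an NT_PRSTATUS note. */
static void show_thread(const void *desc)
{
    struct elf_prstatus prs;
    memcpy(&prs, desc, sizeof prs);   /* copy out to avoid alignment issues */
    printf("thread %d: %zu registers in pr_reg\n",
           (int)prs.pr_pid, sizeof prs.pr_reg / sizeof prs.pr_reg[0]);
    /* the layout of pr_reg is architecture specific; the stack pointer
       is one of its entries */
}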
More information about the format of ELF core files:
Anatomy of an ELF core file (disclaimer: I wrote this one)
A brief look into core dumps
A core dump is the in-memory image of a process at the moment it crashed. It includes the program segments, the stack, the heap, and other data. You will still need the original program in order to make sense of the contents: the symbol tables and other data make the raw addresses and structures in the memory image meaningful.
Additional information about the process that generated the core file is stored in an ELF note section, albeit in an operating system specific manner. For example, see the core(5) manual page for NetBSD.
It is not so much gdb itself as the BFD library used by gdb, binutils, etc. that defines the format.
A simpler solution to your problem may be to parse the text in /proc/$pid/maps to determine which file a given virtual address maps to. You could then parse the corresponding file.
Kenshoto's open source VDB (debugger) uses this approach, if you can read python it is a good example.
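If you go the maps route, a minimal C sketch of the address-to-file lookup might look like this (the target address and the use of /proc/self/maps are placeholders for illustration):
#include <stdio.h>

int main(void)
{
    unsigned long target = 0x400123;           /* hypothetical address */
    char line[512], path[256];
    FILE *f = fopen("/proc/self/maps", "r");   /* or /proc/<pid>/maps */
    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        unsigned long start, end;
        path[0] = '\0';
        /* each line: start-end perms offset dev inode [path] */
        if (sscanf(line, "%lx-%lx %*4s %*lx %*s %*lu %255s",
                   &start, &end, path) >= 2
            && target >= start && target < end) {
            printf("0x%lx -> %s\n", target, path[0] ? path : "[anonymous]");
            break;
        }
    }
    fclose(f);
    return 0;
}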