Eclipse CDT & STM32: force predefined program memory - gdb

I have an uncommon but in my eyes reasonable use case:
I have to build two STM32 firmware images: a bootloader and an application, using the latest Eclipse CDT based IDE from STMicroelectronics, called "STM32CubeIDE".
My constraints are mostly low power consumption, not security, so my only requirement for the DFU (Device Firmware Upgrade) scenario is data integrity, and for this I implemented a CRC32 check over the complete FW image. The tricky part is that the firmware itself contains its actual size within a C struct at a fixed offset of 0x200 in code memory; the benefit of this design is that the complete code memory does not have to be transmitted, while the FW is still always protected by the CRC32 (see the struct sketch after the list below):
The layout of a firmware is something like this:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> + CRC32
1. The FW header contains the FW size.
2. The complete FW size, which is used by the bootloader to flash the application, is the stored FW size (see 1.) + 4 bytes for the appended CRC32.
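For illustration (the struct layout and the section name .fw_header here are just an example, not the actual implementation), such a header can be pinned to the fixed offset by a dedicated section that the linker script places at the application base + 0x200:

#include <stdint.h>

struct FwHeader {
    uint32_t fw_size;    /* total image size; patched in after the build */
    uint32_t fw_version; /* hypothetical extra field */
};

/* The linker script must place ".fw_header" at <application base> + 0x200. */
__attribute__((section(".fw_header"), used))
static const struct FwHeader fw_header = {
    0xFFFFFFFFu, /* placeholder, replaced by the post-build patch step */
    0x00010000u
};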
For my implementation I need to replace a memory area within the "FW header" with the actual FW size (which is only known after the build process).
For this I wrote a Python script which patches the binary "*.bin" file. However, it seems that Eclipse/GDB uses the ELF file for debugging, which looks much more complicated to custom-patch than the binary image; I found no easy way to replace the actual FW size and append the 4 bytes of the CRC32 there.
Therefore, I thought the easiest way would be to patch the code memory right after the firmware has been loaded by the debugger.
I successfully tested a command-line tool from ST which can manipulate arbitrary memory, even code memory (my code memory starts at 0x08000000, with the application at offset 0x4000 and the FW header at offset 0x200, hence 0x08004200):
ST-LINK_CLI.exe -c SWD -w32 0x08004200 0xAABBCCDD
(see: https://www.st.com/resource/en/user_manual/cd00262073.pdf)
My problem is that I don't know how to trigger this simple EXE call right before the debugger attaches to the MCU. I tried "Debug Configuration" -> "Startup" -> "Run Commands", but without success.
Does anybody know a way how to achieve this?

Running a program before starting a debug session can be done using Eclipse's "Launch Group", located under Debug Configurations (top menu -> Run -> Debug Configurations).
However, before doing that, you should go to project properties -> Builders and add your program invocation there: the path to the executable plus its arguments. Make sure the builder is NOT enabled for automatic builds, so that it doesn't run every time you build your project. Then go to the Launch Groups described above and create a group that contains the program you defined in the project's Builders section, followed by your regular debug session, which you should already have available in the list.

I am going to recommend a different workflow to achieve the same image format:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> + CRC32
Once in place, your build process will be:
Build the FW image to produce a raw binary, without a CRC-32 value
Calculate the CRC-32 of the raw binary image with a third-party tool
Insert the calculated CRC-32 into the linker script and rebuild the FW image
Note: For STM32CubeIDE, you will want to have your own *.elf project that includes the STM32CubeMX project as a static library. Otherwise, STM32CubeMX will overwrite your linker script each time it generates new code.
Since each project tends to have a slightly different linker script, I am going to demonstrate using a .test_crc output section. Insert the following into your *.ld linker script:
.test_crc :
{
_image_start = .;
BYTE( 0x31)
BYTE( 0x32)
BYTE( 0x33)
BYTE( 0x34)
BYTE( 0x35)
BYTE( 0x36)
BYTE( 0x37)
BYTE( 0x38)
BYTE( 0x39)
/* FW Image Header */
LONG( _image_end - _image_start ) /* Size of FW image */
LONG( _image_start ) /* Base address to load FW image */
/* Place this at the end of the FW image */
_image_end = .;
_IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32 (the standard zip checksum) */
/*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
} >FLASH
Add the following as a post-build step in STM32CubeIDE (generates the raw binary image):
arm-none-eabi-objcopy -S -O binary -j .test_crc ${ProjName}.elf ${ProjName}.bin
Now you're ready to test/evaluate the process (a sketch verifying the CRC arithmetic follows these steps):
Rebuild your project to generate a *.bin file.
Using a third-party tool, calculate a CRC-32 checksum. I use the 7-Zip command-line interface to generate the CRC-32 value for the *.bin file.
Append the calculated CRC-32: in the linker script, set the 0x0 in _IMAGE_CRC = ABSOLUTE(0x0) to match your calculated value, and uncomment the following:
LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
Rebuild your image and run the third-party CRC utility; it should now report 0xFFFFFFFF as the CRC-32 value.
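To see why the final check works, here is a small standalone sketch (using zlib's crc32(); not part of the original workflow). The nine BYTE()s in .test_crc are the classic CRC check string "123456789"; appending the one's complement of the CRC, which is exactly what LONG((-_IMAGE_CRC) - 1) emits, drives the CRC of the whole blob to the constant 0xFFFFFFFF:

/* crc_check.cpp -- build with: g++ crc_check.cpp -lz */
#include <zlib.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    /* The nine payload bytes 0x31..0x39, i.e. "123456789" */
    std::vector<unsigned char> image = {0x31,0x32,0x33,0x34,0x35,0x36,0x37,0x38,0x39};

    uint32_t crc = crc32(0L, image.data(), (unsigned)image.size());
    uint32_t appended = ~crc; /* the value LONG((-_IMAGE_CRC) - 1) would emit */

    /* Append in little-endian byte order, as LONG() does on a little-endian Cortex-M */
    for (int i = 0; i < 4; ++i)
        image.push_back((unsigned char)((appended >> (8 * i)) & 0xFF));

    uint32_t total = crc32(0L, image.data(), (unsigned)image.size());
    std::printf("image: 0x%08lX  image+complement: 0x%08lX\n",
                (unsigned long)crc, (unsigned long)total); /* 0xCBF43926 and 0xFFFFFFFF */
    return 0;
}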
When you are ready to apply this to your actual FW image, do the following:
Change the post-build step to dump the full binary: arm-none-eabi-objcopy -S -O binary ${ProjName}.elf ${ProjName}.bin
Move _image_start = .; in front of your vector table.
Move the following after your vector table:
/* FW Image Header */
LONG( _image_end - _image_start ) /* Size of FW image */
LONG( _image_start ) /* Base address to load FW image */
Move the following to end of the last sector:
/* Place this at the end of the FW image */
_image_end = .;
_IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32 (the standard zip checksum) */
/*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
You may find that you do not actually need the CRC value built into the image and can simply append it to the *.bin, then provide the *.bin to your bootloader. The *.bin will still contain the load address and size of the FW image, plus 4 bytes for the appended CRC value.

Related

Making an executable by running an executable

I wanted to write a brainfuck compiler, but when I went to write one I got stuck at this problem:
I want to create an ELF executable (using C/C++) that reads brainfuck code from a file and generates an executable, just like GCC/clang.
I can read and parse the code, but I don't know how to write out an executable that can run on the same system (say x86).
I want this behavior:
my_bf_compiler ./source.bf -o bin.out
./bin.out
EDIT: I do not want to know how to write a compiler. Read on; the compiler part was just context for where I will use this.
I want to create a binary executable (say maker.out) which, when run, creates an executable file (say foo.out). For simplicity, let's keep foo.out very simple: when executed, it returns 7. So this is what is expected:
./maker.out # Creates the foo.out executable
./foo.out; echo $? # Runs the executable and prints the return value, in this case 7
So how do I write maker.cpp?
Your initial message was about creating an executable from brainfuck code, so this is what this answer focuses on. Your current question is way too broad.
As you have linked in one of your previous posts there is already an implementation that does this here: https://github.com/skeeto/bf-x86/blob/master/bf-x86.c
It basically does 3 steps:
1) Parse the BF code into an intermediate representation (which is here: https://github.com/skeeto/bf-x86/blob/master/bf-x86.c#L55)
2) Compile this intermediate representation into machine code (which can be found here: https://github.com/skeeto/bf-x86/blob/master/bf-x86.c#L496)
3) Compose the ELF binary according to the specification. The example program does this here: https://github.com/skeeto/bf-x86/blob/master/bf-x86.c#L622
Steps 1 and 2 are up to you to implement well; for step 3, the simplest way is to write the ELF header and program header such that the file contains only the program's machine code, and to point the entry point of the program at the machine code generated in step 2.
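To make step 3 concrete, here is a minimal sketch (my own example, Linux x86-64 only, not taken from bf-x86) of a maker.cpp that emits a bare ELF executable whose machine code just returns 7:

/* maker.cpp -- build with: g++ maker.cpp -o maker.out */
#include <elf.h>
#include <cstdio>
#include <cstring>
#include <sys/stat.h>

int main()
{
    /* x86-64 code for: mov edi, 7 ; mov eax, 60 (sys_exit) ; syscall */
    const unsigned char code[] = {
        0xBF, 0x07, 0x00, 0x00, 0x00,
        0xB8, 0x3C, 0x00, 0x00, 0x00,
        0x0F, 0x05
    };

    const Elf64_Addr base = 0x400000; /* arbitrary load address */
    const Elf64_Off  code_off = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr);

    Elf64_Ehdr eh;
    std::memset(&eh, 0, sizeof eh);
    std::memcpy(eh.e_ident, ELFMAG, SELFMAG); /* magic: "\x7fELF" */
    eh.e_ident[EI_CLASS]   = ELFCLASS64;
    eh.e_ident[EI_DATA]    = ELFDATA2LSB;
    eh.e_ident[EI_VERSION] = EV_CURRENT;
    eh.e_type      = ET_EXEC;
    eh.e_machine   = EM_X86_64;
    eh.e_version   = EV_CURRENT;
    eh.e_entry     = base + code_off; /* entry point -> our machine code */
    eh.e_phoff     = sizeof(Elf64_Ehdr);
    eh.e_ehsize    = sizeof(Elf64_Ehdr);
    eh.e_phentsize = sizeof(Elf64_Phdr);
    eh.e_phnum     = 1;

    Elf64_Phdr ph;
    std::memset(&ph, 0, sizeof ph);
    ph.p_type   = PT_LOAD; /* one read+execute segment mapping the whole file */
    ph.p_flags  = PF_R | PF_X;
    ph.p_offset = 0;
    ph.p_vaddr  = base;
    ph.p_paddr  = base;
    ph.p_filesz = code_off + sizeof code;
    ph.p_memsz  = code_off + sizeof code;
    ph.p_align  = 0x1000;

    std::FILE *f = std::fopen("foo.out", "wb");
    if (!f) return 1;
    std::fwrite(&eh, sizeof eh, 1, f);
    std::fwrite(&ph, sizeof ph, 1, f);
    std::fwrite(code, sizeof code, 1, f);
    std::fclose(f);
    chmod("foo.out", 0755); /* mark the result executable */
    return 0;
}

Running ./maker.out and then ./foo.out; echo $? should print 7. A real compiler would place the machine code generated in step 2 where the stub code bytes are.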
The full specification for the ELF format can be found here: https://refspecs.linuxfoundation.org/elf/elf.pdf
Yanick's answer contains enough information about the ELF format and how to create an ELF executable.
However, it seems to me that part of your question is about how to create an executable file on disk. There are the functions chmod/fchmod, which might help you.
The following text is taken from the man-page for chmod (run man 2 chmod to see this page):
#include <sys/stat.h>
int chmod(const char *pathname, mode_t mode);
int fchmod(int fd, mode_t mode);
The new file mode is specified in mode, which is a bit mask created by ORing together zero or more of the following:
S_ISUID (04000) set-user-ID (set process effective user ID on execve(2))
S_ISGID (02000) set-group-ID (set process effective group ID on execve(2); mandatory locking, as described in fcntl(2); take a new file's group from parent directory, as described in chown(2) and mkdir(2))
S_ISVTX (01000) sticky bit (restricted deletion flag, as described in unlink(2))
S_IRUSR (00400) read by owner
S_IWUSR (00200) write by owner
S_IXUSR (00100) execute/search by owner ("search" applies for directories, and means that entries within the directory can be accessed)
S_IRGRP (00040) read by group
S_IWGRP (00020) write by group
S_IXGRP (00010) execute/search by group
S_IROTH (00004) read by others
S_IWOTH (00002) write by others
S_IXOTH (00001) execute/search by others
In your case, running chmod("foo.out", S_IRUSR | S_IXUSR) should give you (the owner) permission to read and execute foo.out. Assuming that you have written foo.out as a proper ELF file, this will make it executable.
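For completeness, a small sketch of that approach (my own illustration): create the file, write the ELF bytes, and mark it executable with fchmod before closing it.

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    int fd = open("foo.out", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) return 1;
    /* ... write the ELF image here ... */
    fchmod(fd, S_IRUSR | S_IWUSR | S_IXUSR); /* owner: read, write, execute */
    close(fd);
    return 0;
}

Alternatively, you can pass the executable mode bits directly as the third argument to open(); note that open()'s mode is filtered by the process umask, while fchmod() sets the bits exactly.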

FreeType Glyph Metrics Caching of multiple Font sizes

Situation:
I have a project that renders product information onto a given template (custom XML format), then renders and converts it into a custom binary LCD format (steps simplified).
Our customers now want auto-fitting text containers: the customer specifies a box of a given size, and all kinds of strings have to be auto-resized to fit into that container.
For that I have to calculate the width of the string (with FreeType: each char/glyph) for multiple font sizes (e.g. 100pt doesn't fit, 99pt doesn't fit, 98pt doesn't, ..., 65pt fits!).
Problem:
The problem is that FreeType takes a lot of time (~20-30 ms) for each auto-fit element, and I have only ~100 ms for my whole application (so when a customer adds 5 more auto-fit elements, it's already guaranteed to exceed the ~100 ms).
Attempts:
A self-made font-cache generator which takes a font file and calculates the width of every Unicode character for font sizes from 1pt to 100pt. It then generates C source code out of the data, like this:
//
#define COUNT_SIZES 100 // Font-Size 1-100
#define COUNT_CHARS 65536 // Full Unicode Table
int char_sizes[COUNT_SIZES][COUNT_CHARS] =
{
{1,1,2,2,3,1,1,2,2,3,1,2,2,1,2,2,3,1,2,.......// 65536
{2,2,3,3,4,2,1,3,3,4,2,3,3,2,3,3,4,2,3,.......// 65536
{2,3,4,3,5,2,2,4,4,5,2,4,4,2,4,3,5,3,3,.......// 65536
// ...
// 100 font sizes
};
Compiled into a dynamic library (.so), that is 25 MB in size and takes ~50 ms to dlopen and ~10 ms for dlsym (WAAAAAAY too much!).
The same approach with only the ASCII table (128 of 65536 characters) compiles into a 58 KB .so file and takes ~500 µs for dlopen and ~100 µs for dlsym (very nice!).
My next attempt would be to integrate the font-cache generator into my project and cache only the glyphs I need for the specific customer (a customer in Europe needs ~500 glyphs, one in Asia (e.g. traditional Chinese) needs ~2500; these are only examples, not exact numbers, maybe even more are needed).
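A minimal sketch of such an on-demand cache (my own illustration; the class and the (size, codepoint) key scheme are invented) could look like this:

// Lazily computes and memoizes glyph advance widths per (font size, codepoint).
#include <ft2build.h>
#include FT_FREETYPE_H
#include <unordered_map>
#include <cstdint>

class GlyphWidthCache {
public:
    explicit GlyphWidthCache(FT_Face face) : face_(face) {}

    // Horizontal advance in 1/64 pixel units (26.6 fixed point); 0 on failure.
    long width(int size_pt, uint32_t codepoint) {
        uint64_t key = (uint64_t)size_pt << 32 | codepoint;
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second; // cache hit: no FreeType call

        FT_Set_Char_Size(face_, 0, size_pt * 64, 72, 72); // size in 1/64 pt at 72 dpi
        long adv = 0;
        if (FT_Load_Char(face_, codepoint, FT_LOAD_DEFAULT) == 0)
            adv = face_->glyph->advance.x;
        cache_.emplace(key, adv);
        return adv;
    }

private:
    FT_Face face_;
    std::unordered_map<uint64_t, long> cache_;
};

If only advances are needed, FT_Get_Advance() (from FT_ADVANCES_H) can be cheaper than a full FT_Load_Char(), since it can skip loading the glyph outline for many font formats.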
But before I take on that hard-work journey :( I wanted to ask whether you know a better way of doing it, or a library/project that does just that.
I cannot believe that it's not possible; how else would a browser show lorem ipsum without taking seconds to load? :D
Any idea how to solve this performance issue?
Any informative link on data caching with extremely fast access to the cache (somewhat < 1 ms)?
System Info:
Unix (Ubuntu 16.04) 64bit
both x86 and ARM architectures are in use!
I found one possible way using these libraries (a small measurement sketch follows the build notes):
ICU (for unicode)
Freetype (for the Glyphs)
Harfbuzz (for layout)
Github Project:
Harfbuzz-ICU-Freetype
Loose build instructions:
Look for options in CMakeLists.txt: option(WITH_XX "DESCRIPTION" ON/OFF)
Enable CMake options with -D: cmake -DWITH_ZLIB=ON -DWITH_Harfbuzz=ON ..
mkdir build && cd build && cmake [option [option [...]]] ..
make -j $count_of_cpu_cores && sudo make install
Google for some Harfbuzz Layout tutorials / guides
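Once those are in place, measuring a string's rendered width could look like this (a sketch; it assumes the FT_Face already has its size set, e.g. via FT_Set_Char_Size):

// Returns the summed advance of the shaped string, in HarfBuzz position units
// (with hb-ft defaults this matches FreeType's 26.6 fixed-point pixels).
#include <hb.h>
#include <hb-ft.h>

long measure_width(FT_Face face, const char *utf8, int len)
{
    hb_font_t *font = hb_ft_font_create(face, nullptr);
    hb_buffer_t *buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, utf8, len, 0, len);
    hb_buffer_guess_segment_properties(buf); // guess script, language, direction
    hb_shape(font, buf, nullptr, 0);

    unsigned int n = 0;
    hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &n);
    long width = 0;
    for (unsigned int i = 0; i < n; ++i)
        width += pos[i].x_advance;

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
    return width;
}

For the auto-fit search itself, a binary search over the font size (instead of stepping 100pt, 99pt, 98pt, ...) reduces the number of measurements to roughly 7 per string for the 1-100pt range.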

LMDB increase map_size

I was working with LMDB++ (the C++ wrapper for LMDB) and I got this error:
terminate called after throwing an instance of 'lmdb::map_full_error'
what(): mdb_put: MDB_MAP_FULL: Environment mapsize limit reached
Some googling told me that the default map_size is set low in LMDB. How do I go about increasing map_size?
The default LMDB map size is 10 MiB, which is indeed too small for most uses.
To set the LMDB map size using the C++ wrapper, you ought to call lmdb::env#set_mapsize() right after creating your LMDB environment and prior to opening the environment or creating your transaction.
Here's a basic example that increases the map size to 1 GiB:
/* Create and open the LMDB environment: */
auto env = lmdb::env::create();
env.set_mapsize(1UL * 1024UL * 1024UL * 1024UL);
env.open("./example.mdb", 0, 0664);
If you are calculating a large map size as in the above example, take care to include the appropriate type suffix (UL or ULL) on your integer literals, or else you may encounter silent integer overflow and be left wondering why the map size did not increase to what you expected.
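For instance (an illustration of the pitfall, not from the original code), on a platform where int is 32 bits wide the first call below overflows while computing the argument, long before set_mapsize() ever sees the value:

env.set_mapsize(4 * 1024 * 1024 * 1024);    /* intended 4 GiB, but the product overflows 32-bit int arithmetic */
env.set_mapsize(4ULL * 1024 * 1024 * 1024); /* OK: the product is computed as unsigned long long */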
See also the documentation for LMDB's underlying C function mdb_env_set_mapsize() for the authoritative word on how the map size works.

How can I track a specific loop in binary instrumentation by using the Pin tool?

I am new to using the Intel Pin tool and want to track a certain loop in a binary file, but I found that the addresses of the instructions change on each run. How can I find a specific instruction or a specific loop even though it changes on each run?
Edit 0: I have the following addresses; which one of them is the RVA? (The first group of addresses (small addresses) is constant across runs, but the last group (big addresses) changes on each run.)
Address          loop_repetition  no._of_instructions_in_loop
4195942          1                8
4195972          1                3
.......          ...              ...
140513052566480  1                2
......           ...              ...
"the addresses of the instructions change on each run, how can I find a specific instruction or a specific loop even though it changes on each run?"
This is probably because you have ASLR enabled (it is enabled by default on Ubuntu). If you want your analyzed program to load at the same address on each run, you might want to:
1) Disable ASLR:
Disable it system-wide: sysctl -w kernel.randomize_va_space=0 as explained here.
Disable it per process: $> setarch $(uname -m) -R /bin/bash as explained here.
2) Calculate deltas (offsets) in your pintool:
For each address that you manipulate, you need to use a RVA (Relative Virtual Address) rather than a full VA (Virtual Address).
Example:
Let's say on your first run your program loads at 0x80000000 (this is the "Base Address"), and a loop starts at 0x80000210.
On the second run, the program loads at 0x90000000 ("Base Address") and the loop starts at 0x90000210.
Just calculate the offsets of the loops from the Base Address:
Loop_Address - Base_Address = offset
0x80000210 - 0x80000000 = 0x210
0x90000210 - 0x90000000 = 0x210
As both resulting offsets are the same, you know you have exactly the same instruction, independently of the base address of the program.
How to do that in your pintool (a minimal sketch follows these steps):
Given an (instruction) address, use IMG_FindByAddress to find the corresponding image (module).
From the image, use IMG_LowAddress to get the base address of the module.
Subtract the module base from the instruction: you have the RVA.
Now you can compare RVA between them and see if they are the same (they also must be in the same module).
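Putting these steps together, a minimal pintool sketch (my own example; the output format is arbitrary) could look like this:

// rva.cpp -- logs the RVA of every instruction in the main executable.
#include "pin.H"
#include <iostream>

VOID Instruction(INS ins, VOID *v)
{
    ADDRINT va = INS_Address(ins);
    IMG img = IMG_FindByAddress(va);        // find the enclosing image (module)
    if (!IMG_Valid(img) || !IMG_IsMainExecutable(img))
        return;                             // JITed/anonymous code has no image

    ADDRINT rva = va - IMG_LowAddress(img); // VA - module base = RVA
    std::cerr << IMG_Name(img) << "+0x" << std::hex << rva << std::endl;
}

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();                     // never returns
    return 0;
}

The RVA printed this way is stable across runs even with ASLR enabled, so you can identify your loop by RVA once and recognize it in every subsequent run.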
Obviously this doesn't work for JITed code, as JITed code has no backing executable module (think mmap() [Linux] or VirtualAlloc() [Windows])...
Finally, there's a good paper (quite old now, but still applicable) on loop detection with Pin, if that can help you.

What specifies the history length of the Clojure REPL?

Where is it set? I'm using the Leiningen REPL and the La Clojure REPL, and I can't find where the history length is set.
There is the idea.cycle.buffer.size option in the idea.properties file (located in the bin directory):
# This option controls the console cyclic buffer: it keeps the console output size
# no higher than the specified buffer size (KB). Older lines are deleted.
# To disable the cyclic buffer, use idea.cycle.buffer.size=disabled.
idea.cycle.buffer.size=1024
Also, in https://stackoverflow.com/a/10793230/151650, Micah mentions the set history-size 10000 setting in the .inputrc file.