What is LLVM Comdat?

What does comdat in LLVM represent? You can find the source here: Comdat.
An example at the source level (C++) would be very helpful.
If you need more info, please feel free to ask. I see it in many places in the LLVM code base, but I haven't been able to figure out what exactly it is and what it is used for.
Thanks for your help!

I'm also learning about Comdat and see the below explanation from this blog.
A Comdat section is a section in the object file into which objects are placed that may also be duplicated in other object files. Each object carries information telling the linker what to do when duplicates are detected. The options are: Any, the linker may pick any copy; ExactMatch, duplicates must match exactly, otherwise an error occurs; Largest, take the object with the largest value; NoDuplicates, there must be no duplicate; SameSize, duplicates must have the same size, otherwise an error occurs.
In LLVM, Comdat data is represented by an enumeration:
enum SelectionKind {
  Any,          ///< The linker may choose any COMDAT.
  ExactMatch,   ///< The data referenced by the COMDAT must be the same.
  Largest,      ///< The linker will choose the largest COMDAT.
  NoDuplicates, ///< No other Module may specify this COMDAT.
  SameSize,     ///< The data referenced by the COMDAT must be the same size.
};
and the class Comdat actually represents the pair (Name, SelectionKind). (In reality, things are more complicated.) All variables that for some reason cannot be deleted are placed in the set NotDiscardableComdats. Functions and global aliases are treated the same way: whatever cannot be deleted goes into NotDiscardableComdats. Then the separate optimization routines for global constructors, global functions, global variables, global aliases, and global destructors are called. The optimizations continue in a loop until no further optimization is performed. At each iteration of the loop, the NotDiscardableComdats set is cleared.
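To make this concrete, here is a small C++ sketch (my own example, not from the blog) of the kind of source construct that ends up in a COMDAT; the IR shown in the trailing comment is only the typical shape on an ELF target and may vary by compiler version.
// inline_comdat.cpp -- build with something like: clang++ -S -emit-llvm inline_comdat.cpp
// An inline function may be emitted by every translation unit that uses it,
// so on ELF targets the compiler puts each copy into a COMDAT group and the
// linker keeps only one of them ("any" selection kind).
inline int add_one(int x) { return x + 1; }

int use(int x) { return add_one(x); }

// Typical shape of the emitted IR (details vary):
//   $_Z7add_onei = comdat any
//   define linkonce_odr i32 @_Z7add_onei(i32 %x) comdat { ... }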

Related

Why does gdb show a different parameter order for a function

Looking through a core file (generated by C code) with gdb, I am unable to understand one particular thing about these 2 frames:
#2 increment_counter (nsteps=2, steps=0x7f3fbad26790) at gconv_db.c:393
#3 find_derivation (...) at gconv_db.c:426
This code is from open source glibc where find_derivation calls increment_counter as:
result = increment_counter (*handle, *nsteps);
The *handle and steps are of the same type, and the increment_counter function is defined as static.
Why does gdb show the 2 parameters in a different order?
I am pretty sure that glibc was taken as-is, without modification.
Why does gdb show the 2 parameters in a different order?
GDB doesn't know anything about the source (except possibly where on disk it was located at build time).
It is able to display the parameters (and their values) because the compiler told it (by embedding debug info into the object file) what the parameters are, in what order they appear, their types, and how to compute their values.
So why would a compiler re-order function arguments?
The function is static, so it can't be called from outside of the current translation unit. Thus the compiler is free to re-order the parameters, so long as it also re-orders the arguments at every call site.
Still, why would it do that? The general answer: optimization (the compiler found it more convenient to pass them in this order). A detailed answer would require digging into the sources of GCC (or whatever compiler was used to build this code).
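For reference, here is a minimal sketch (mine, not from this thread) of the kind of setup where this can be observed: a static function whose only caller sits in the same translation unit, built with optimization and debug info. Whether the parameters are actually re-ordered depends entirely on the compiler and flags used.
// reorder.c-style example, also compilable as C++; build with e.g. gcc -O2 -g
// 'increment_counter' is static, so the compiler owns its calling convention
// and may change the parameter order; gdb then shows whatever order the
// debug info records.
#include <stddef.h>

static int increment_counter(size_t nsteps, const int *steps) {
    return (int)nsteps + steps[0];
}

int main(void) {
    int steps[2] = {3, 4};
    return increment_counter(2, steps);
}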

AVR G++: Executing a function that is past the 128 Kb ROM boundary

AVR g++ has a pointer size of 16 bits. However, my particular chip (the ATmega2560) has 256 KB of flash. To support this, the compiler automatically generates trampoline sections in the same 128 KB segment of ROM as the currently executing code, which then contain the extended assembly code to jump into high memory and back. For trampolines to be generated, you must take the address of something that sits in high memory.
In my scenario, I have a bootloader that I have written sitting in high memory. The application code needs to be able to call a function in the bootloader. I know the address of this function and need to be able to directly address it by hard-coding the address in my code.
How can I get the compiler/linker to generate the appropriate trampoline for an arbitrary address?
The compiler and linker will only generate trampoline code when the far address is a symbolic address rather than a literal constant that is already resolved in the code. Something like the following (assuming the address you want to jump to is 0x20000)
void (*farfun)(void) = (void (*)(void)) 0x20000;
farfun ();
will definitely not work; it doesn't cause the linker to do anything because the address is already resolved.
You should be able to inject the symbol address in the linker command line like so:
extern void farfun ();
farfun ();
compiling "normally" and linking with
-Wl,--defsym,farfun=0x20000
I think it's clear that you need to make sure yourself that something sensible sits at farfun.
You will most probably also need --relax.
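Putting the pieces above together, a self-contained sketch might look like this (the build flags other than --defsym and --relax are my assumptions; extern "C" is used because this sketch is C++ and the symbol name must match the one given to --defsym):
// farcall.cpp -- build roughly like:
//   avr-g++ -mmcu=atmega2560 -Os farcall.cpp -Wl,--defsym,farfun=0x20000 -Wl,--relax
extern "C" void farfun();   // no address in the source: the linker supplies it

int main() {
    farfun();               // call through the symbolic address
    for (;;) {}
}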
EDIT
Never tried this myself, but maybe:
You could probably store the function addresses in a table in high memory and declare it like this:
extern void (*farfunctab [10])(void);
(farfunctab [0])();
and use the very same linker command to resolve the external symbol. Your table at 0x20000 (in the bootloader) then needs to look like this:
extern void func1();
extern void func2();
void (*farfunctab [10])(void) = {
    func1,
    func2, /* ... */
};
I would recommend putting func1() ... func10() in a different module from farfunctab in order to make the linker know it has to generate trampolines.
I was planning on putting a dispatch struct (that is, a struct with function pointers to all the various functions). Your solution works well, but requires knowing all of the locations of all of the functions ahead of time. Is there a way to execute a function call to a far address that isn't known at compile time?
[...] My goal was to put the struct with pointers to the functions in a fixed location. That way, it would be a single thing that needed a fixed address rather than every external function.
So you have two applications, let's call them App and Boot, where Boot provides some functionalities that App wants to use. The following problems have to be addressed:
How to get addresses from Boot into App.
How to build a jump table for Boot.
Avoid constructs that will crash when App tries to use code from Boot, like: using indirect calls or jumps, using static constructors, or using static storage in Boot.
App uses Addresses of boot.elf directly
Linking with -Wl,-R,boot.elf
A simple way would be to just link app.elf against boot.elf by means of -Wl,-R,boot.elf. Option -R instructs the linker to use symbol values from the specified file without dragging in any code. The problem is that there's no way to specify which symbols to use; for example, this might lead to a situation where App uses libgcc functions from Boot.
Defining Symbols by means of -Wl,--defsym,symbol=value
A bit more control over which symbols are being defined can be achieved by following a specific naming convention. Suppose that all symbols from Boot that have "boot" in their name should be "exported"; then you could just
> avr-nm -g boot.elf | grep ' T ' | awk '/boot/ { printf("--defsym %s=0x%s\n",$3,$1) }' > syms.opt
This prints the global symbol values, and grep keeps only the symbols in the text section. awk then transforms lines like 00020102 T boot1 into lines like
--defsym boot1=0x00020102, which are written to an option file syms.opt. The option file can then be provided to the linker by means of -Wl,@syms.opt.
The advantage of an option file is that it is easier to provide than plain options in a build environment like make: app.elf would depend (amongst others) on syms.opt, which in turn would depend on boot.elf.
Defining Symbols in a Linker Script Snippet
An alternative would be to define the symbols in a linker script augmentation, which you would provide by means of -T syms.ld during link and which would contain
"boot1"=ABSOLUTE(0x00020102);
"boot2"=...
...
INSERT AFTER .text
Defining Symbols in an Assembly Module
Yet another way to define the symbols would be by means of an assembly module which contains definitions like .global boot1 together with boot1 = 0x00020102.
All these approaches have in common that all symbols must be defined, otherwise the linker will throw an undefined-symbol error. This means boot.elf must be available, and it does not matter whether just one symbol is undefined or whether dozens of them are.
Let Boot provide a Dispatch Table
The problem with using boot.elf directly, as outlined in the previous section, is that it introduces a direct dependency. This means that if Boot is improved or refactored, you'll also have to re-compile App each time, even if the interface did not change.
A solution is to let Boot provide a dispatch table whose position and layout are known ahead of time. Only when the interface itself changes will App have to be rebuilt; merely refactoring Boot will not require re-building App.
The Assembly Module with the Jump Table
As explained in the "Crash" section below, addresses in a dispatch table (and hence indirect jumps) won't work because EIND has a wrong value. Therefore, let's assume we have a table of JMPs to the desired Boot functions, like in an assembly module boot-table.sx that reads:
;;; Linker description file boot.ld locates input section .boot.table
;;; right after .vectors, hence the address of boot_table will be
;;; text-section-start + _VECTORS_SIZE, where the latter is
;;; #define'd in <avr/io.h>.
;;; No "x" section flag so that the linker won't relax JMPs to RJMPs.
.section .boot.table,"a",@progbits
.global boot_table
.type   boot_table, @object
boot_table:
    jmp boot1
    jmp boot2
.size boot_table, .-boot_table
In this example, we are going to locate the jump table right after .vectors, so that its location is known ahead of time. The respective symbol definitions in App's syms.opt will then read
--defsym boot1=0x20000+vectors_size+0*4
--defsym boot2=0x20000+vectors_size+1*4
provided Boot is located at 0x20000. Symbol vectors_size can be defined in a C/C++ module, here by abusing avr-gcc attribute "address":
#include <avr/io.h>
__attribute__((__address__(_VECTORS_SIZE)))
char vectors_size;
Locating the Jump Table
In order to locate the input section .boot.table, we need our own linker description file, which you might already be using for Boot anyway. We start with a linker script from the avr-gcc installation at ./avr/lib/ldscripts/avr6.xn, copy it to boot.ld, and add the following 2 lines after the .vectors input lines:
...
.text :
{
*(.vectors)
KEEP(*(.vectors))
*(.boot.table)
KEEP(*(.boot.table))
/* For data that needs to reside in the lower 64k of progmem. */
*(.progmem.gcc*)
...
Auto-Generating Boot's Jump Table Module and the Symbols for App
It's highly advisable to have an interface description used by both App and Boot, say common.h. Moreover, in order to keep Boot's boot-table.sx and App's syms.opt in sync with the interface, it's a good idea to auto-generate these two files from common.h. To that end, assume that common.h reads:
#ifndef COMMON_H
#define COMMON_H
#define EX __attribute__((__used__,__externally_visible__))
EX int boot1 /* #boot_table:0 */ (int);
EX int boot2 /* #boot_table:1 */ (void);
#endif /* COMMON_H */
For the sake of simplicity, let's assume that this is C code, or that the interfaces are extern "C", so that the symbols in the source code match the assembly names and there is no need to deal with mangled names. It's easy enough to generate boot-table.sx and syms.opt from common.h using the magic comments. The magic comment follows directly after the symbol, so a regex can retrieve the token to the left of the magic comment, for example in Python:
# Scan lines of the form:  ... symbol /* #boot_table:index */ ...
import re
import sys

pat = re.compile(r".*(\b\w+\b)\s*/\* #boot_table:(\d+) \*/.*")
for line in sys.stdin.readlines():
    match = re.match(pat, line)
    if match:
        symbol = match.group(1)
        index = int(match.group(2))
        # emit one --defsym line per match, using the template below
Output template for syms.opt would be something like:
asm_line = "--defsym {symbol}=0x20000+vectors_size+4*{index}\n"
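On the App side, calling into Boot is then an ordinary direct call through the symbols defined in syms.opt. A minimal sketch under the assumptions above (Boot at 0x20000, jump table right after Boot's vectors):
// app.cpp -- hypothetical App-side usage. boot1/boot2 are declared in
// common.h and resolved via the --defsym entries in syms.opt to the JMP
// stubs in Boot's .boot.table. The include is wrapped in extern "C"
// because this sketch is C++ and the symbol names must stay unmangled.
extern "C" {
#include "common.h"
}

int main() {
    // Direct CALL instructions on avr6 devices carry a full 22-bit target,
    // so no function pointer (and hence no EIND dependency) is involved.
    int result = boot1(42);
    (void) boot2();
    (void) result;
    for (;;) {}
}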
Code that will crash
Using Boot code from App is subject to several restrictions:
Indirect Calls and Jumps
These will crash because the start addresses of App and Boot are in different 128 KiB segments of flash. When the address of a code symbol is taken, the compiler does this via gs(symbol), which instructs the linker to generate a stub and to resolve gs() to that stub in .trampolines if the target address is outside the 128 KiB segment where the trampolines are located. An explanation of gs() can be found in this answer; there is, however, more to it: the startup code will effectively initialize
EIND = __vectors >> 17;
see gcrt1.S, the AVR-LibC bits of start-up code crt<device>.o. The compiler assumes EIND never changes during execution, see EIND and more than 128KiB of Flash in the GCC documentation.
This means code in Boot assumes EIND = 1 but is called with EIND = 0, and hence EICALL and EIJMP will target the wrong address. This means common code must avoid indirect calls and jumps, and should be compiled with -fno-jump-tables so that switch/case won't generate such tables.
This also implies that the dispatch table described above won't work if it just held gs(symbol) entries, because App and Boot disagree on EIND.
Data in Static Storage
If common Boot code uses data in static storage, that data might collide with App's static storage. One way out is to avoid static storage in the respective parts of Boot and to pass the address of, say, a data buffer by means of pointer arguments to the respective functions.
One could have completely separate RAM areas; one for Boot and one for App, but that would be a waste of RAM because the applications will never run at the same time.
Static Constructors
Boot's static constructors will be bypassed if App uses code from Boot. This includes:
C++ code in Boot that explicitly or implicitly generates such constructors.
C/C++ code in Boot that relies on __attribute__((__constructor__)) or code in section .initN which is supposed to run prior to main.
Start-up code that initializes static storage, EIND etc., which also runs from .initN sections and will likewise be bypassed when App calls Boot code.

How to place a function at a particular address in C?

I want to place a function void loadableSW (void) at a specific location: 0x3FF802. In another function, residentMain(), I will jump to this location using a function pointer. How do I declare the function loadableSW to accomplish this? I have attached the skeleton of residentMain for clarity.
Update: The target hardware is a TMS320C620x DSP. Since this is an aerospace project, deterministic behaviour is a desirable design objective. Ideally, they would like to know what portion of memory contains what at any particular time. The solution, as I have just learned, is to define a section in memory in the linker file. The section shall start at 0x3FF802 (the location where the function is to be placed). Since the size of the loadableSW function is known, the size of the memory section can also be determined. Then the directive #pragma CODE_SECTION(function_name, "section_name") can place that function in the specified section.
Since pragma directives are not permissible in test scripts, I am wondering if there is any other way to do this without using any linker directives.
Besides, I am curious: is there any placement syntax for functions in C++? I know there is one for objects, but for functions?
void residentMain (void)
{
    void (*loadable_p) (void) = (void (*) (void)) 0x3FF802;
    int hardwareOK = 0;
    /* Code to check hardware integrity. hardwareOK = 1 if success */
    if (hardwareOK)
    {
        loadable_p (); /* Jump to Loadable Software */
    }
    else
    {
        dspHalt ();
    }
}
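For reference, a hedged sketch of the CODE_SECTION approach described in the update above (TI C6000 toolchain; the section name .loadable and the memory-region name LOADMEM are made up for illustration):
/* C source: place loadableSW into its own output section. */
#pragma CODE_SECTION(loadableSW, ".loadable")
void loadableSW(void)
{
    /* loadable software entry point */
}

/* Linker command file fragment (also illustrative):
 *   MEMORY   { LOADMEM : origin = 0x3FF802, length = 0x200 }
 *   SECTIONS { .loadable > LOADMEM }
 */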
I'm not sure about your OS/toolchain/IDE, but the following answer should work:
How to specify a memory location at which function will get stored?
There is just one way I know of and it is shown in the first answer.
UPDATE
How to define sections in gcc:
variables: http://mcuoneclipse.com/2012/11/01/defining-variables-at-absolute-addresses-with-gcc/
functions (section ("section-name")): http://gcc.gnu.org/onlinedocs/gcc-3.2/gcc/Function-Attributes.html#Function%20Attributes
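A minimal sketch of that GCC route, with made-up names (the section .loadable_sw and the linker-script line are assumptions, not taken from the links):
/* Give the function its own input section... */
__attribute__((section(".loadable_sw"), noinline))
void loadableSW(void)
{
    /* loadable software goes here */
}

/* ...then pin that section in the linker script:
 *   .loadable_sw 0x3FF802 : { KEEP(*(.loadable_sw)) }
 */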
How to place a function at a particular address in C?
Since pragma directives are not permissible in test scripts, I am wondering if there is any other way to do this without using any linker directives.
If your target supports PC-relative addressing and you can ensure it is pure, then you can use a memcpy() to relocate the routine.
How to run code from RAM... has some hints on this. If you cannot generate PC-relative/relocatable code, then you absolutely cannot do this without the help of the linker. That is the definition of a linker/loader: to fix up addresses.
That leads to a different concept: do not fully link your code. Instead, defer the address fix-up until loading. Then you must write a loader to place the code at run time; but from your aerospace-project comment, I think that complexity and analysis burden matter to you, so I don't believe you would accept that. You would also need double the storage, etc.

declaring an extern C function on template instantiation

I'm working on an MCU (STM32F4).
In the current system, all the interrupt handlers are declared as weak symbols in a startup file, and if we want to use one, we just define a function with the same name in C and it replaces the weak one at link time.
I'm trying to convert my system to C++. I envision a system where instantiating a certain interrupt type would define the corresponding C function in the module.
I have no clue how to achieve that, considering that extern "C" is forbidden for member functions.
Any ideas or alternatives?
My aim is to statically check some things and to use some modern C++ in the field.
Here is the current situation in C.
I have an assembly file with this in it:
g_pfnVectors:
.word _estack
.word Reset_Handler
(...)
.word SysTick_Handler
(...)
/*******************************************************************************
*
* Provide weak aliases for each Exception handler to the Default_Handler.
* As they are weak aliases, any function with the same name will override
* this definition.
*
*******************************************************************************/
(...)
.weak SysTick_Handler
.thumb_set SysTick_Handler,Default_Handler
In my C code, I have this:
int main() {
    (...)
    SysTick_Config(SystemCoreClock / cncMemory.parameters.clockFrequency);
    while (1);
}

void SysTick_Handler(void) {
    cncMemory.tick++;
}
And I envision something like:
int main() {
    MCU<mySystickHandler, ...> mcu;
    mcu.start();
}

static void mySystickHandler(void) {
    cncMemory.tick++;
}
or something along those lines (probably without the still-global function, but I am trying to separate the problems).
I know of nothing standard for that.
If you want to stay within the language, you'll have to look at extensions, as compilers have provided pragmas and attributes to control such things for a long time. In the case of gcc, asm labels seem designed for that problem. I've not used it, and my a priori impression is that it can't be used with templates, except perhaps for explicit specializations.
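For what it's worth, a minimal sketch of the asm-label extension on a plain (non-template) function; the names are illustrative and this follows the pattern from the gcc documentation rather than something tested on this MCU:
// The declaration carries the assembler name; the definition that follows
// is then emitted as the symbol SysTick_Handler, overriding the weak default.
void tick_entry() asm("SysTick_Handler");

void tick_entry() {
    // forward into whatever C++ code you like here
}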
The alternative is obviously playing with linker level tricks.
AFAIK a weak symbol doesn't cause the object file providing it to be extracted from a static library if it is the only symbol the object provides. You could arrange for the object file that provides the C-level symbol to your template to also provide another unique symbol which is needed by the instantiation you want.
Linker scripts have a lot of power, and if things haven't changed too much since the last time I worked on embedded systems (that's so long ago that they must have changed; I just don't know whether they have changed in this respect), custom linker scripts are still pretty much mandatory in that field.
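Not from this answer, but a common workaround worth sketching: keep a thin extern "C" forwarder that the vector table can see, and let it call into the template. This does not make the instantiation itself emit the C symbol, which is what the question asks for, but it keeps the handler logic in modern C++ (all names below are illustrative):
#include <cstdint>

struct CncMemory {
    std::uint32_t tick;
};

static CncMemory cncMemory{};

template <void (*Handler)()>
struct SysTickHook {
    static void isr() { Handler(); }
};

static void onTick() { ++cncMemory.tick; }

// The vector table expects a C-linkage symbol named SysTick_Handler; this
// thin forwarder overrides the weak default and jumps into the template.
extern "C" void SysTick_Handler(void) {
    SysTickHook<onTick>::isr();
}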

How to make Visual C++ 9 not emit code that is actually never called?

My native C++ COM component uses ATL. In DllRegisterServer() I call CComModule::RegisterServer():
STDAPI DllRegisterServer()
{
return _Module.RegisterServer(FALSE); // <<< notice FALSE here
}
FALSE is passed to indicate that the type library should not be registered.
ATL is available as source, so I in fact compile the implementation of CComModule::RegisterServer() myself. Somewhere down the call stack there's an if statement:
if( doRegisterTypeLibrary ) { //<< FALSE goes here
// do some stuff, then call RegisterTypeLib()
}
The compiler sees all of the above code, so it can see that the if condition is in fact always false. Yet when I inspect the linker progress messages I see that the reference to RegisterTypeLib() is still there, so the if statement is not eliminated.
Can I make Visual C++ 9 perform better static analysis and actually see that some code is never called and not emit that code?
Do you have whole program optimization active [/GL]? This seems like the sort of optimization the compiler generally can't do on its own.
Are you sure the code isn't eliminated later in the compilation/linking process? Have you checked the generated ASM?
How is the RegisterTypeLib function defined? Of course anything marked dllexport can't be eliminated by the linker, but also any function not marked static (or placed in an anonymous namespace) can be referenced by multiple translation units, so the compiler won't be able to eliminate the function.
The linker can do it, but that might be one of the last optimizations it performs (I have no clue of the order in which it applies optimizations), so the symbols might still be there in the messages you're looking at, even if they're eliminated afterwards.
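To illustrate the linkage point with a sketch of my own (not the actual ATL code): with optimization enabled, a compiler may drop a function with internal linkage whose only call is statically unreachable, whereas a function with external linkage must survive for other translation units (unless whole-program optimization proves otherwise).
namespace {
    constexpr bool kRegisterTypeLib = false;   // known false at compile time

    void registerTypeLibHelper() { /* ... */ } // internal linkage
}

void RegisterServerLike() {
    if (kRegisterTypeLib)            // statically false, so the optimizer is
        registerTypeLibHelper();     // free to discard the helper entirely
}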
Any code that is successfully inlined will only be generated if it is called. That's a simple way to do it, as long as the compiler takes the hint; inline is only a suggestion, though.
The inner call to AtlComModuleRegisterServer has external linkage, which generally prevents the optimizer from propagating the bRegTypeLib value down the call graph. Some of this can be reasoned about more easily in the disassembly.
So DllInstall(...) calls CAtlDllModuleT::RegisterServer(0). This is the start of the problem:
push 0
call ?DllRegisterServer@?$CAtlDllModuleT@VCAtlTestModule@@@ATL@@QAEJH@Z
Let's just say for argument's sake that the compiler has verified that CAtlDllModuleT::DllRegisterServer is only called once and that it's very safe to push the 0/FALSE down one more level... the external linkage prevents discarding AtlComModuleRegisterServer, inlining it has a high cost (code duplication) and doesn't allow any additional whole-program optimizations. It is probably safer to keep the signature as-is and bail out early with a regular cdecl call...
?DllRegisterServer@?$CAtlDllModuleT@VCAtlTestModule@@@ATL@@QAEJH@Z proc near
<trimmed>
push 0
push edx
push offset ATL::_AtlComModule
call _AtlComModuleRegisterServer@12
This code can be improved in size due to the two constants, but it's likely to cost about the same amount of runtime. If performance is an issue consider explicitly setting the function layout order, you might save a page fault.
It turns out the key is to enable link-time code generation all the way through the compiler settings.
It must be enabled on the General tab: "Whole program optimization" must be set to "Use link-time code generation". It must also be enabled on the C++ -> Optimization tab: "Whole program optimization" must be set to "Enable link-time code generation". It must also be enabled on the Linker -> Optimization tab: "Link Time Code Generation" must be set to "Use Link Time Code Generation". Then /OPT:REF and /OPT:ICF (again, the Linker -> Optimization tab) must both be enabled.
And this effectively removes the calls to RegisterTypeLib() - it is no longer in the list of symbols imported.