How can I provide semantic information about a ROM to a RAM project in IAR

I am trying to make an IAR project for a RAM-based image containing code that will call into ROM code.
ROM here is literally ROM, not flash. I know this can be done, since I have the ELF file that was used to generate the ROM: I have extracted the symbols from the ROM's ELF and provided them to IAR, which keeps the linker happy. The problem is that the symbol information I am providing to IAR this way is just a mapping from symbol names to addresses.
What I'd like to achieve is to provide more semantic information to IAR, so that when I am debugging the RAM image and it steps into the ROM, I retain the ability to do source-level debugging.
Kind of like retaining full semantic debugging when single-stepping through a DLL in application land. Is such a thing possible in IAR?
Not as important, but still very valuable, would be the ability to have the linker check for signature discrepancies between the ROM code and the calling RAM code.
Out of curiosity, is this possible with other toolchains such as ARM GCC, Keil, etc.?

This appears to be possible using isymexport.
This PDF includes info on the tool:
http://supp.iar.com/FilesPublic/UPDINFO/004916/arm/doc/EWARM_DevelopmentGuide.ENU.pdf
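For what it's worth, a rough sketch of how isymexport is typically invoked (the file names are placeholders, and the exact option and steering-file syntax should be checked against the linked guide for your EWARM version):

    isymexport --edit steer.txt rom_image.out rom_symbols.o

Here steer.txt is an optional steering file with directives along the lines of "show rom_api_*" or "hide internal_*" to control which symbols get exported. The generated rom_symbols.o is then added to the RAM project like any other object file, so the linker can resolve calls into the ROM at their fixed addresses.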

Related

Mapping C/C++ source code to assembly code similar to godbolt

Our Android application has a lot of code written in C/C++, and whenever a crash is reported from production users we get the call stack along with the register state at the time of the crash. When the crash isn't obvious from that, the register state and the assembly code of the crashed functions can help to some extent.
We pack stripped versions of the libraries into the application and keep the unstripped libraries with us, so that whenever a crash is reported we can recover file and line numbers with the help of the unstripped libraries (using tools such as llvm-addr2line, llvm-objdump, llvm-readelf...).
Today we have to manually run llvm-objdump on each symbol against the unstripped library to read that symbol's assembly code.
However, to improve developer productivity, we are planning to develop a tool which, given a file name as input, prints the assembly code for the whole input file.
That is, given a source code file and an unstripped library, is it possible to map and print the source code and the corresponding assembly code?
Similar to godbolt, where we can see the source code in the left pane and the corresponding assembly code in the right pane.
Thanks.
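For what it's worth, llvm-objdump itself can interleave source with the disassembly when the binary carries debug info; a sketch (libnative.so and the mangled symbol name are placeholders, and --disassemble-symbols needs a reasonably recent LLVM):

    # Whole library: disassembly annotated with source lines and file:line info
    llvm-objdump -d -S -l libnative.so > libnative_annotated.s

    # Only the function of interest
    llvm-objdump -d -S --disassemble-symbols=_ZN3app7processEv libnative.so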

Why does C++ linking use virtually no CPU?

On a native C++ project, linking right now can take a minute or two. Yet, during this time CPU drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk activity?
If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to avoid a lot of disk access and make it CPU-bound again, no?
Update: so the obvious follow-up is, can the VC++ compiler and linker talk together better to streamline things and keep OBJ files in memory, similar to how Delphi does it?
Linking is indeed primarily a disk-based activity. Borland Pascal (back in the day) would keep the entire program in memory, which is why it would link so fast.
Your OBJ files aren't kept in RAM because the compiler and linker are separate programs. If your development environment had an integrated compiler and linker (instead of running them as separate processes), it could indeed keep everything in RAM.
But you would lose the ability to separate the development environment from the compilers and/or linkers - you would have to use the same compiler/linker, and you wouldn't be able to run the compiler outside the environment.
You can try installing one of those RAM disk utilities and keeping your obj directory, or even the whole project directory, on the RAM disk. That should speed it up considerably.
Don't forget to make it permanent afterwards :-D
The Visual Studio linker is largely I/O bound, but how much so depends on a few variables.
Incremental linking (common in Debug builds) generally requires a lot less I/O.
Writing a PDB file (for symbols) can consume a lot of the time. It's a specific bottleneck that Microsoft targeted in VS 2010. The PDB writing is now done asynchronously. I haven't tried it, but I've heard it can help link times quite a bit.
If you are using link-time code generation (LTCG) (common in Release builds), you have all the usual I/O initially. Then the linker re-invokes the compiler to re-generate code for sections that can be further optimized. This portion is generally much more CPU-intensive. Offhand, I don't know whether the linker actually spins up the compiler in a separate process and waits (in which case you'll still see low CPU usage for the linker process), or whether the compilation is done in the linker process (in which case you'll see the linker go through phases of heavy I/O and then heavy CPU).
Using an SSD can help with the I/O bound portions. Simply having a second drive can help, too. For example, if your source and objects are all on one drive, and you write your PDB to a separate drive, the linker should spend less time waiting for the PDB writer. Having a second spinning drive has helped my current team's link times dramatically.
In Debug builds in Visual Studio you can use incremental linking, which usually lets you avoid a lot of the time spent on linking. Basically, instead of linking the whole EXE (or DLL) file from scratch, the linker builds upon the one you last linked, replacing only the things that changed.
This is however not recommended for Release builds, since it adds some runtime overhead and can result in an EXE file that is several times larger than usual.
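For reference, the switch behind this is /INCREMENTAL on the MSVC linker (a sketch with placeholder file names; in the IDE the setting lives under the linker's General options):

    REM Debug: patch the previously linked EXE instead of relinking from scratch
    link /INCREMENTAL /DEBUG main.obj util.obj /OUT:app.exe

    REM Release: force a full, non-incremental link
    link /INCREMENTAL:NO main.obj util.obj /OUT:app.exe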
It's hard to say what exactly is taking the linker so long without knowing how it is interacting with the OS. Thankfully, Microsoft provides Process Monitor so you can do just that.
It's helped me diagnose bugs with the Visual Studio IDE and debugger without access to source.

Windows/C++: Is it possible to find the line of code where exception was thrown having "Exception Offset"

One of our users is getting an exception on our product's startup.
She has sent us the following error message from Windows:
Problem Event Name: APPCRASH
Application Name: program.exe
Application Version: 1.0.0.1
Application Timestamp: 4ba62004
Fault Module Name: agcutils.dll
Fault Module Version: 1.0.0.1
Fault Module Timestamp: 48dbd973
Exception Code: c0000005
Exception Offset: 000038d7
OS Version: 6.0.6002.2.2.0.768.2
Locale ID: 1033
Additional Information 1: 381d
Additional Information 2: fdf78cd6110fd6ff90e9fff3d6ab377d
Additional Information 3: b2df
Additional Information 4: a3da65b92a4f9b2faa205d199b0aa9ef
Is it possible to locate the exact place in the source code where the exception occurred, given this information?
What is the common technique for C++ programmers on Windows to locate the place of an error that occurred on a user's computer?
Our project is compiled in the Release configuration; a PDB file is generated.
I hope my question is not too naive.
Yes, that's possible. Start debugging with the exact same binaries as run by your user; make sure the DLL is loaded and that you've got a matching PDB file for it. Look in Debug + Windows + Modules for the DLL's base address. Add the offset. Debug + Windows + Disassembly and enter the calculated address in the Address field (prefix it with 0x). That shows you the exact machine code instruction that caused the exception. Right-click + Go To Source Code to see the matching source code line.
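For example (with a made-up base address): if the Modules window shows agcutils.dll loaded at 0x10000000, the faulting instruction is at 0x10000000 + 0x000038D7 = 0x100038D7, and that is the value to type into the Disassembly window's Address field.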
While that shows you the statement, this isn't typically good enough to diagnose the cause. The 0xc0000005 exception is an access violation, it has many possible causes. Often you don't even get any code, the program may have jumped into oblivion due to a corrupted stack. Or the real problem is located far away, some pointer manipulation that corrupted the heap. You also typically really need a stack trace that shows you how the program ended up at the statement that bombed.
What you need is a minidump. You can easily get one from your user if she runs Vista or Win7. Start TaskMgr.exe, Processes tab, select the bombed program while it is still displaying the crash dialog. Right-click it and Create Dump File.
To make this smooth, you really want to automate this procedure. You'll find hints in my answer in this thread.
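One common way to automate it in-process is an unhandled-exception filter that calls MiniDumpWriteDump from DbgHelp; a minimal sketch (error handling omitted, the function name and dump path are illustrative):

    // Write a minidump from an unhandled-exception filter.
    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info)
    {
        // Create the dump file (path is illustrative).
        HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, nullptr,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file != INVALID_HANDLE_VALUE)
        {
            // Pass the exception context so the dump records the faulting location.
            MINIDUMP_EXCEPTION_INFORMATION mei = {};
            mei.ThreadId = GetCurrentThreadId();
            mei.ExceptionPointers = info;
            mei.ClientPointers = FALSE;

            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpNormal, &mei, nullptr, nullptr);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;
    }

    int main()
    {
        SetUnhandledExceptionFilter(WriteCrashDump);
        // ... rest of the application ...
    }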
If you have a minidump, open it in Visual Studio, set MODPATH to the appropriate folders with the original binaries and PDBs, and tell it to "run". You may also need to tell it to load symbols from the Microsoft symbol servers. It will display the call stack at the error location. If you try to look at the source code for a particular stack location, it may ask you where the source is; if so, select the appropriate source folder. MODPATH is set in the debug command-line properties for the "project" that has the name of the minidump file.
I know this thread is very old, but this was a top Google response, so I wanted to add my $.02.
Although a minidump is most helpful, as long as you have compiled your code with symbols enabled (just ship the binary without the .pdb, and keep the .pdb!), you can look up which line this was using the MSVC debugger or the Windows Debugger. MSDN article on that:
http://blogs.msdn.com/b/danielvl/archive/2010/03/03/getting-the-line-number-for-a-faulting-application-error.aspx
Source code information isn't preserved in compiled C++ code, unlike in runtime-based metadata-aware languages (such as .NET or Java). The PDB file is a symbol index which can help a debugger map compiled code backwards to source, but it has to be done during program execution, not from a crash dump. Even with a PDB, Release-compiled code is subject to a number of optimizations that can prevent the debugger from identifying the source code.
Debugging problems which only manifest on end-user machines is usually a matter of careful state logging and a lot of detail-oriented time and effort combing over the source. Depending on your relationship with the user (for example, if you're internal corporate IT development), you may be able to make a virtual machine image of the user's machine and use it for debugging, which can help speed the process tremendously by precisely replicating the installed software and standard running processes on the user's workstation.
There are several ways to find the crash location after the fact.
Use a minidump. See the answers above.
Use the existing executable in a debugger. See the answers above.
If you have PDB files (Visual Studio, Visual Basic 6), use DbgHelpBrowser to load the PDB file and query it for the crash location.
If you have TDS files (separate TDS file, or embedded in the exe, Delphi, C++ Builder 32 bit), use TDS Browser to load the TDS/DLL/EXE file and query it for the crash location.
If you have DWARF symbols (embedded in the EXE, C++ Builder 64 bit, gcc, g++), use DWARF Browser to load the DLL/EXE and query it for the crash location.
If you have MAP files, use MAP File Browser to load the MAP file and query it for the crash location.
I wrote these tools for use in house. We've made them available for free.

What's the use of .map files the linker produces?

What is the use of the .map files that the VC++ linker produces when the /MAP parameter or the "Generate map file" project setting is used? When do I need them and how do I benefit from them?
A nice article on how to use map files for finding crashes.
http://www.codeproject.com/KB/debug/mapfile.aspx
Manually doing all of this is very tedious.
I am not aware of any tool which can read a map file and help find the crash location. If anybody knows of one, please update us.
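The manual arithmetic usually looks something like this (addresses made up): take the crash address from the report, say 0x00401F2C, and find the function in the map file's "Rva+Base" column with the largest address still below it, e.g. ?Foo@@YAXXZ at 0x00401E80 followed by ?Bar@@YAXXZ at 0x00402010. The crash is then 0x00401F2C - 0x00401E80 = 0xAC bytes into Foo, assuming the module actually loaded at its preferred base address (otherwise rebase the crash address first).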
For embedded systems, map files are a lot more useful. (Although you wouldn't be using Visual C++ for that ;)
Things like knowing how close you are to running out of program/data memory, and what location a particular variable resides in, are important.
WinDBG uses .map and .pdb files to help debug crashes, when analysing .hdmp and .mdmp crash dumps.
Basically they map memory address offsets to functions and variables within the .exe (and/or loaded .dlls). Very useful in general if you need to figure out why a customer is upset. Even more useful when they prove it was not your fault.
The most useful way to debug "post-mortem" crashes is using WinDbg (Windows platform). Open it up and open the crash dump. Then set the source path to point at the code (if you have it), the symbol path to point at your .map and .pdb files, and the image path to the .exe, and type "!analyze -v" in the command line. Now you have a full stack trace with lines of code and everything. Of course you need the correct version of the source code for the version of the EXEs and DLLs you are debugging.
It's even better if you have the MS symbol server in the path, and if full page heap was turned on or ADPlus was running. With ADPlus in particular you will likely have variable values captured as well.
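In WinDbg terms that boils down to something like the following (the local paths are placeholders):

    .sympath srv*c:\symcache*http://msdl.microsoft.com/download/symbols;c:\myapp\pdb
    .srcpath c:\myapp\src
    .exepath c:\myapp\bin
    .reload
    !analyze -v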
Some favourite WinDbg resources of mine:
First stop :: http://www.microsoft.com/whdc/devtools/debugging/debugstart.mspx
Force load the symbols :: http://www.osronline.com/ShowThread.cfm?link=182377
Useful site :: http://www.dumpanalysis.org/blog/index.php/category/windbg-tips-and-tricks/page/7/
You need them rarely, but they can be handy for debugging some problems because they give information on the location of functions and data.
For example:
detailed information on all segments (code, data and others)
linking line numbers to code
You can also feed map files to debugging tools. An example of generating one is shown below.
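For reference, with the MSVC toolchain a map file is produced with the /MAP linker switch, or via the "Generate map file" project setting mentioned in the question (a sketch with placeholder file names):

    cl /Zi main.cpp util.cpp /link /MAP:app.map /OUT:app.exe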
Linker maps can be very useful in large projects when you need to track dependencies between compilation units and libraries. Typically, the linker will report a symbol which caused problems, and more often than not a simple search for that symbol name won't return any results (or will return tons of false positives for common symbols like read).
Without a linker map, the only option you have is to analyze all available source files (after the preprocessing pass, if macros were used, which is typically the case) and hope that you find the relevant spot.
Linker maps usually have a section called "reference by file/symbol" which tells you which object file was required by which other object file of your project, and which symbol exactly was referenced.
I was once working on a project which had to be ported to a system without locale support. The linker was reporting "undefined reference to _localeconv_r" errors, which would have been a pain to track down by searching through the sources. Luckily, a GCC linker map file generated with -Map=output.map revealed all problematic functions with a single search.
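A sketch of that workflow with GCC (file names are placeholders; --cref adds a symbol cross-reference table to the map):

    # Ask the GNU linker for a map file while linking
    gcc main.o locale_stub.o -o app -Wl,-Map=output.map,--cref

    # A single search then shows which objects reference the symbol
    grep -n "_localeconv_r" output.map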
The amap cross-platform GUI tool allows you to examine MAP files produced by GCC, Visual Studio and some other compilers. You can find out, for example, how much every source file and every external dependency contributes to the size of your executable.

how do I specify the source code directory in VS when looking at the call stack of a memory dump?

I am analyzing a .dmp file that was created, and I have a call stack which gives me a lot of info. But I'd like to double-click on the call stack and have it bring me to the source code.
I can right-click on the call stack and select "Symbol Settings...", where I can put the location of the PDB. But there is no option for the source code directory.
The source code directories are unfortunately hard-coded into the PDBs; however, if you know the folders required, you can use Windows' concept of symbolic links and junctions.
I use the tool Junction Link Magic.
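On a reasonably recent Windows you can also do this with the built-in mklink command (a sketch; the first path is whatever directory is baked into the PDB, the second is your local checkout):

    mklink /J C:\build\agent42\src C:\work\myproject\src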
Read this article about how to set up a Source Server (aka SrcSrv) integration at your site.
I took the time to follow these steps for our codebase, and now we are able to take a .dmp file from any build of our software in the past 6 months... get a stack trace with symbols... and view the exact source code lines in the debugger. Since the steps are integrated into our automated builds, there's very little overhead now.
I did need to write a custom indexer for ClearCase, but they have pre-existing ones for Perforce, TFS, and maybe others.
It is worth noting that the .dmp support in VS2005 is a little shaky.. it's quite a bit more stable in VS2008.
You'll also need to configure Visual Studio to grab the symbols for the MS products from here in addition to your own symbol server:
http://msdl.microsoft.com/download/symbols
That is described in a few places such as on the Debugging Tools for Windows site.
WinDbg allows you to set up source paths in the same way as PDB paths.
After loading the PDB, manually navigate to the source file that matches the current execution location. A PDB contains the path and file name of each source file that built its associated binary, and I suspect the debugger is smart enough to hook things up when it notices that the file name being displayed matches the file name associated with the current binary location.