Many of GCC's instrumentation options save their data to a file during or after runtime. For coverage instrumentation, for example, when the compiled program exits it saves the data to a file called auxname.gcda for each source file.
However, I'm running on a custom C++-based RTOS which doesn't natively have a filesystem the way Linux does.
QUESTION
How do I use these gcc-instrumentation options that output results to a file?
Do I have to provide a file-writer interface (in my case, one that writes to a RAM buffer) that gets called whenever the instrumentation code needs to "write to a file"?
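If it comes to that, here is roughly what I imagine such a writer looking like. This is only a sketch, assuming a newlib-style C library whose stdio calls bottom out in _open/_read/_write/_lseek/_close stubs (your libc may need a different or larger set, e.g. _fstat and _isatty as well), and all names such as gcov_ram_buf and gcov_emit are made up:

    /* Sketch only: redirect libgcov's "file" I/O into a RAM buffer on a
     * system with no filesystem. Exact stub names/prototypes depend on
     * the toolchain. */
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    static char   gcov_ram_buf[32 * 1024];  /* holds one .gcda stream at a time */
    static size_t gcov_ram_pos;
    static size_t gcov_ram_len;
    static char   gcov_name[128];

    /* Hand a finished .gcda image to whatever transport exists:
     * a debugger reading RAM, a UART, a RAM area parsed after reset, ... */
    extern void gcov_emit(const char *name, const char *data, size_t len);

    int _open(const char *path, int flags, int mode)
    {
        (void)flags; (void)mode;
        strncpy(gcov_name, path, sizeof gcov_name - 1);
        gcov_ram_pos = gcov_ram_len = 0;
        return 3;                            /* any fake descriptor >= 3 */
    }

    int _read(int fd, char *buf, int len)    /* no old .gcda to merge with */
    {
        (void)fd; (void)buf; (void)len;
        return 0;
    }

    int _write(int fd, const char *buf, int len)
    {
        (void)fd;
        if (gcov_ram_pos + (size_t)len > sizeof gcov_ram_buf) {
            errno = ENOSPC;
            return -1;
        }
        memcpy(gcov_ram_buf + gcov_ram_pos, buf, (size_t)len);
        gcov_ram_pos += (size_t)len;
        if (gcov_ram_pos > gcov_ram_len)
            gcov_ram_len = gcov_ram_pos;
        return len;
    }

    int _lseek(int fd, int offset, int whence)  /* libgcov seeks within the file */
    {
        (void)fd;
        if (whence == SEEK_SET)      gcov_ram_pos  = (size_t)offset;
        else if (whence == SEEK_CUR) gcov_ram_pos += (size_t)offset;
        else                         gcov_ram_pos  = gcov_ram_len + (size_t)offset;
        return (int)gcov_ram_pos;
    }

    int _close(int fd)
    {
        (void)fd;
        gcov_emit(gcov_name, gcov_ram_buf, gcov_ram_len);
        return 0;
    }

The idea is that each .gcda stream is captured in RAM and handed off when libgcov closes the "file". Depending on the GCC version there is also a __gcov_dump() function for triggering the write without exiting the program.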
A web search for "gcc gprof arm-cortex-m" produces: https://mcuoneclipse.com/2015/08/23/tutorial-using-gnu-profiling-gprof-with-arm-cortex-m/
It appears to use semihosting to write the profiling data to the host machine.
Semihosting is a common way for an ARM target to communicate with a debugger on the host (through JTAG/SWD). It is also supported by emulators such as QEMU.
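For reference, a semihosting call on Cortex-M is just a breakpoint instruction that the attached debugger intercepts. A minimal sketch (it hard-faults if no debugger is attached; the operation numbers come from the ARM semihosting specification) looks like this:

    /* Sketch: minimal ARM semihosting call for Cortex-M (GCC inline asm).
       The debugger traps BKPT 0xAB and performs the requested host I/O. */
    static inline int semihost_call(int op, void *arg)
    {
        register int   r0 __asm__("r0") = op;   /* operation number  */
        register void *r1 __asm__("r1") = arg;  /* parameter (block) */
        __asm__ volatile ("bkpt 0xAB" : "+r"(r0) : "r"(r1) : "memory");
        return r0;
    }

    /* SYS_WRITE0 (0x04) writes a NUL-terminated string to the host console. */
    static void semihost_puts(const char *s)
    {
        semihost_call(0x04, (void *)s);
    }

The gprof tutorial above builds on the same mechanism, using the semihosting file operations (open/write/close) so that the profiling data ends up as real files on the host.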
I'm trying to use NetBeans (7.3.1) to work on a remote project.
My configuration is as follows:
My local machine is a Windows 7 laptop. It doesn't have any development tools, in particular no compiler or debugger, but it does have the NetBeans IDE and PuTTY, for example.
Source code, Make scripts and (eventually) build results are located on remote storage shared across servers and "locals". (I might switch to single-server-only storage, as it is faster, but I don't think that matters at all.)
I'm accessing it using SSHFS Manager. SSHFS Manager takes the server name, a path on the server, a user name and an SSH private key. As a result it mounts that directory on the server as a disk on Windows. This works fine. (Although some directories, possibly links, are represented as files in Windows Explorer; I don't know if that matters...)
The NetBeans project is located on the local machine, but I don't think that matters; I could place it remotely as well. I would prefer to keep it "off source", though, so that I don't have to add any ignores to version control.
In NetBeans I followed the procedure described in the Remote Development Tutorial. It seems to have been successful: NetBeans connected to the server and found the GNU Compiler Collection.
Then I added the project using File | New Project..., there C/C++ | C/C++ Project with Existing Sources. It seems to have been successful: all files are visible and all that stuff.
The issue, however, is that our work "procedure" requires us to set up the environment first. So when I log in with PuTTY, for example, I first have to call setsee with the proper argument. That heavily influences the environment by adding lots of variables, for example:
GCC_HOME, which is set to /opt/gcc/linux64/ix86/gcc_4.3.2-7p3, as opposed to the /user/bin/g++ that NetBeans shows as the C++ Compiler in its GNU Compiler Collection, and
CPLUS_INCLUDE_PATH, which points to some path (while NetBeans doesn't see many includes, probably because it lacks that path).
So is there a way to tell NetBeans to call setsee on the remote server before doing anything else?
It turned out that setsee is more or less an internal tool. Yet "the core question" remains: how to have an arbitrary script executed on behalf of an SSH session created by the IDE, before the IDE actually uses that session.
The answer to the "How can I set environment variables for a remote rsync process?" question on Super User says it all.
To summarize briefly: in ~/.ssh/authorized_keys one has to modify the entry corresponding to the key with which the IDE will log in. There one adds a command option whose value is a script to be executed after logging in but before "returning control", as sketched below.
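For illustration only (the key, file names and paths here are made up), the entry and the wrapper script could look roughly like this; sshd puts the command the client actually requested into SSH_ORIGINAL_COMMAND, so the wrapper can set up the environment and then hand over to it:

    # ~/.ssh/authorized_keys -- one line per key, shortened here
    command="/home/me/ide-env.sh" ssh-rsa AAAAB3NzaC1yc2E... me@laptop

    # /home/me/ide-env.sh
    #!/bin/sh
    . /opt/site/setsee-env.sh                    # whatever setsee would have set up
    if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
        exec /bin/sh -c "$SSH_ORIGINAL_COMMAND"  # non-interactive: run the IDE's command
    else
        exec "$SHELL" -l                         # interactive logins still get a shell
    fi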
There is also another thing associated with that solution. If the script given in the command option outputs anything, it will break many tools (possibly the IDE as well). Such tools expect to see on the output only whatever the called program produces, so their parsing then fails on whatever the command script printed.
A simple workaround is to use tail, but the disadvantage is that you lose "progress": a lengthy operation will look hung and then output everything in one shot. In some cases it simply doesn't work at all (for example, doing git clone --progress through SSH with TortoiseGit will fail if the command script outputs anything).
I'm used to building my source in an IDE and getting good feedback from the environment. Currently, however, I'm writing source code in Notepad++, FTPing it to another machine with specific environment settings, building it there, and reading the make output to see that it all checks out. After that, I scp the built executable to the actual device to test it.
I'm curious whether there are environments that can simplify this. I suppose I could write a script that FTPs changed files and then runs a command through ssh to build them, but I'd like an environment that will parse the make output and give me a build report like most IDEs do. I'm not sure how specific this problem is, or whether a lot of embedded setups are similar.
Ideally I suppose I would have a machine with the correct build environment, but that isn't the case :/
I tend to put the file transfer, remote make invocation and whatever else is necessary into some script (having a one-click build is important anyway) and then set that as the build command in my editor. I happen to use Sublime Text 2, which works fine with the error messages I get from building C++ code via make; personally, I don't find editors not supporting this kind of workflow worth using. There are lots of editors which do.
Oh, and I'd try replacing the ftp with rsync over ssh. It's probably faster, definitely easier to automate, and safer.
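As a rough sketch (host names, paths and make targets are placeholders), the whole round trip can be a single script that the editor invokes as its build command:

    #!/bin/sh
    # Hypothetical one-click build: sync sources, build remotely, fetch the result.
    set -e
    rsync -az --delete ./src/ builduser@buildhost:project/src/
    ssh builduser@buildhost 'cd project && make -j4'
    scp builduser@buildhost:project/out/app.elf ./out/

Because make's error messages come back over the same ssh pipe, the editor can parse them exactly as if the build had run locally.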
Say I have a build machine and test machine and the source code is only on the build machine. (Linux)
I have a debug build C/C++ executable and I want to run it with gdb on the test machine.
The debugger running on the test machine still looks for the actual source files, which are not there.
Is there a way to have g++ actually include the source in the executable itself, along with the other debug information, so the files are not needed?
There is no way to have the source compiled into the binary to allow gdb debugging in this manner.
Probably the best mechanism in this case is to use gdbserver, which lets you run the application on the test machine while driving the debugger from the build machine, where the sources are.
If you can't use remote debugging, an alternative is to mount the directory containing the source on the test machine and then use set substitute-path to map the build machine's source directory to wherever it is mounted on the test machine.
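For illustration, with made-up host names and paths, the two approaches look roughly like this:

    # Remote debugging: on the test machine (no sources needed there)
    gdbserver :2345 ./myprog

    # ... and on the build machine, where the sources and debug info live
    gdb ./myprog
    (gdb) target remote test-machine:2345

    # Alternative: debugging locally on the test machine, with the build tree
    # mounted at /mnt/buildsrc
    (gdb) set substitute-path /home/builder/project /mnt/buildsrc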
No, but the good news is that it is not necessary. You should set your source path; it should accept a network path.
One of our users is getting an exception on our product's startup.
She has sent us the following error message from Windows:
Problem Event Name: APPCRASH
Application Name: program.exe
Application Version: 1.0.0.1
Application Timestamp: 4ba62004
Fault Module Name: agcutils.dll
Fault Module Version: 1.0.0.1
Fault Module Timestamp: 48dbd973
Exception Code: c0000005
Exception Offset: 000038d7
OS Version: 6.0.6002.2.2.0.768.2
Locale ID: 1033
Additional Information 1: 381d
Additional Information 2: fdf78cd6110fd6ff90e9fff3d6ab377d
Additional Information 3: b2df
Additional Information 4: a3da65b92a4f9b2faa205d199b0aa9ef
Is it possible to locate the exact place in the source code where the exception occurred, given this information?
What is the common technique for C++ programmers on Windows to locate the place of an error that has occurred on a user's computer?
Our project is compiled in the Release configuration, and a PDB file is generated.
I hope my question is not too naive.
Yes, that's possible. Start debugging with the exact same binaries as those run by your user; make sure the DLL is loaded and you've got a matching PDB file for it. Look in Debug + Windows + Modules for the DLL base address. Add the offset. Debug + Windows + Disassembly and enter the calculated address in the Address field (prefix with 0x). That shows you the exact machine code instruction that caused the exception. Right-click + Go To Source Code to see the matching source code line.
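For example, assuming the debugger shows agcutils.dll loaded at 0x10000000 (a made-up base address), the calculation is simply:

    module base address (Debug + Windows + Modules):  0x10000000
    Exception Offset (from the report above):       + 0x000038d7
    address for the Disassembly window:               0x100038d7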
While that shows you the statement, it isn't typically good enough to diagnose the cause. The 0xc0000005 exception is an access violation, and it has many possible causes. Often you don't even get any code; the program may have jumped into oblivion due to a corrupted stack. Or the real problem is located far away, some pointer manipulation that corrupted the heap. You also typically really need a stack trace that shows you how the program ended up at the statement that bombed.
What you need is a minidump. You can easily get one from your user if she runs Vista or Win7. Start TaskMgr.exe, Processes tab, select the bombed program while it is still displaying the crash dialog. Right-click it and Create Dump File.
To make this smooth, you really want to automate this procedure. You'll find hints in my answer in this thread.
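As a rough sketch of that kind of automation (not the code from that thread; the file name and error handling are deliberately simplified), the usual approach is an unhandled-exception filter that calls MiniDumpWriteDump from dbghelp.dll:

    // Sketch: write a minidump from inside the crashing process.
    // Link against dbghelp.lib; ship the .dmp back with the bug report.
    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info)
    {
        HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file != INVALID_HANDLE_VALUE)
        {
            MINIDUMP_EXCEPTION_INFORMATION mei = { 0 };
            mei.ThreadId = GetCurrentThreadId();
            mei.ExceptionPointers = info;
            mei.ClientPointers = FALSE;

            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpNormal, &mei, NULL, NULL);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate
    }

    int main()
    {
        SetUnhandledExceptionFilter(WriteCrashDump);
        // ... rest of the program ...
        return 0;
    }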
If you have a minidump, open it in Visual Studio, set MODPATH to the appropriate folders with the original binaries and PDBs, and tell it to "run". You may also need to tell it to load symbols from the Microsoft symbol servers. It will display the call stack at the error location. If you try to look at the source code for a particular stack location, it may ask you where the source is; if so, select the appropriate source folder. MODPATH is set in the debug command-line properties for the "project" that has the name of the minidump file.
I know this thread is very old, but this was a top Google response, so I wanted to add my $.02.
Although a minidump is most helpful, as long as you have compiled your code with symbols enabled (just send the file without the .pdb, and keep the .pdb!) you can look up which line this was using the MSVC Debugger or the Windows Debugger. MSDN article on that:
http://blogs.msdn.com/b/danielvl/archive/2010/03/03/getting-the-line-number-for-a-faulting-application-error.aspx
Source code information isn't preserved in compiled C++ code, unlike in runtime-based metadata-aware languages (such as .NET or Java). The PDB file is a symbol index which can help a debugger map compiled code backwards to source, but it has to be done during program execution, not from a crash dump. Even with a PDB, Release-compiled code is subject to a number of optimizations that can prevent the debugger from identifying the source code.
Debugging problems which only manifest on end-user machines is usually a matter of careful state logging and a lot of detail-oriented time and effort combing over the source. Depending on your relationship with the user (for example, if you're internal corporate IT development), you may be able to make a virtual machine image of the user's machine and use it for debugging, which can help speed the process tremendously by precisely replicating the installed software and standard running processes on the user's workstation.
There are several ways to find the crash location after the fact.
Use a minidump. See the answers above.
Use the existing executable in a debugger. See the answers above.
If you have PDB files (Visual Studio, Visual Basic 6), use DbgHelpBrowser to load the PDB file and query it for the crash location.
If you have TDS files (separate TDS file, or embedded in the exe, Delphi, C++ Builder 32 bit), use TDS Browser to load the TDS/DLL/EXE file and query it for the crash location.
If you have DWARF symbols (embedded in the EXE, C++ Builder 64 bit, gcc, g++), use DWARF Browser to load the DLL/EXE and query it for the crash location.
If you have MAP files, use MAP File Browser to load the MAP file and query it for the crash location.
I wrote these tools for use in house. We've made them available for free.