How do I determine what functions are being called in a binary? - c++

The answer to this is not "see the import address table".
I am looking to do some analysis on a few binaries that I am generating, specifically to get a better idea of which libraries and Windows API functions I am using. I have used Dependency Walker to look at this, but some of the testing I have done suggests that a lot of extra functions end up in the IAT even if they aren't actually called.
What I am looking for is a way to determine which functions are actually being called... not just what ends up in the IAT.
The best way would probably be to reverse the binary and look at all of the 'CALL's, but I don't know a good way to do that either.
What is the best way to do this?

Launch WinDbg (part of Debugging Tools for Windows).
Open the executable you want to analyse.
Run the following commands:
!logexts.loge
!logexts.logo e v (enables verbose logging)
!logexts.logo e t (enables text logging)
g
Open the LogViewer tool that comes with Debugging Tools for Windows to see the APIs.
The default log path is Desktop\LogExts.

If you are using link.exe to link your binary, pass the /MAP flag when linking.
This will generate a map file (binary.map) that lists the functions that are actually used (not all functions).

I don't know if it's the "best way", but I would agree with your suggestion that looking at all of the CALLs gives a good overview.
With the OllyDbg debugger you can load your program, go to the exe module of your process and right-click -> Search for -> All intermodular calls.
This gives you a nice sortable, searchable list of all "CALL"s that appear in your module and lead to other modules.
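If you would rather script the same idea than use a debugger, here is a rough, untested sketch of my own (not something described in the answers above): map the binary and scan its code sections for indirect calls through the IAT, then resolve the target slot back to an imported name. It assumes a 32-bit x86 image whose IAT calls are encoded as "FF 15 <absolute address>" and that the image is either loaded at its preferred base or relocated normally; a raw byte scan can also hit false positives where FF 15 happens to appear inside data or other instructions.

// Quick-and-dirty sketch (untested): list which IAT entries of a 32-bit x86
// image are actually referenced by "call dword ptr [address]" (FF 15)
// instructions. Build as a 32-bit console program and pass the binary's path.
#include <windows.h>
#include <cstdio>
#include <map>
#include <string>

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: %s <binary>\n", argv[0]); return 1; }

    // Map the image without running it or resolving its imports.
    HMODULE mod = LoadLibraryExA(argv[1], NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!mod) { std::printf("LoadLibraryEx failed: %lu\n", GetLastError()); return 1; }

    BYTE* base = reinterpret_cast<BYTE*>(mod);
    IMAGE_DOS_HEADER* dos = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    IMAGE_NT_HEADERS32* nt = reinterpret_cast<IMAGE_NT_HEADERS32*>(base + dos->e_lfanew);

    // Build a map: virtual address of each IAT slot -> "dll!function".
    std::map<DWORD_PTR, std::string> iatSlots;
    DWORD impRva = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress;
    if (impRva != 0)
    {
        for (IMAGE_IMPORT_DESCRIPTOR* imp = reinterpret_cast<IMAGE_IMPORT_DESCRIPTOR*>(base + impRva);
             imp->Name != 0; ++imp)
        {
            const char* dll = reinterpret_cast<const char*>(base + imp->Name);
            DWORD nameRva = imp->OriginalFirstThunk ? imp->OriginalFirstThunk : imp->FirstThunk;
            IMAGE_THUNK_DATA32* name = reinterpret_cast<IMAGE_THUNK_DATA32*>(base + nameRva);
            DWORD slotRva = imp->FirstThunk;
            for (; name->u1.AddressOfData != 0; ++name, slotRva += sizeof(IMAGE_THUNK_DATA32))
            {
                std::string fn = (name->u1.Ordinal & IMAGE_ORDINAL_FLAG32)
                    ? "(imported by ordinal)"
                    : reinterpret_cast<IMAGE_IMPORT_BY_NAME*>(base + name->u1.AddressOfData)->Name;
                iatSlots[reinterpret_cast<DWORD_PTR>(base) + slotRva] = std::string(dll) + "!" + fn;
            }
        }
    }

    // Scan every code section for FF 15 xx xx xx xx and match the operand
    // against the IAT slot addresses collected above.
    IMAGE_SECTION_HEADER* sec = IMAGE_FIRST_SECTION(nt);
    for (WORD s = 0; s < nt->FileHeader.NumberOfSections; ++s, ++sec)
    {
        if (!(sec->Characteristics & IMAGE_SCN_CNT_CODE)) continue;
        if (sec->Misc.VirtualSize < 6) continue;
        BYTE* p = base + sec->VirtualAddress;
        BYTE* end = p + sec->Misc.VirtualSize - 6;
        for (; p < end; ++p)
        {
            if (p[0] != 0xFF || p[1] != 0x15) continue;
            DWORD_PTR target = *reinterpret_cast<DWORD*>(p + 2);
            std::map<DWORD_PTR, std::string>::const_iterator it = iatSlots.find(target);
            if (it != iatSlots.end())
                std::printf("call at RVA %08lX -> %s\n",
                            static_cast<unsigned long>(p - base), it->second.c_str());
        }
    }

    FreeLibrary(mod);
    return 0;
}

This only catches direct "call [IAT slot]" instructions; calls made through function pointers, GetProcAddress or x64 RIP-relative encodings would need a real disassembler, which is why the debugger-based approaches above are usually the more reliable option.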

What could be the simplest way to incorporate Windows WPP Software Tracing into SCons builds?

I ask my question in such a specific way because I am afraid that a more generic form could lead to excessively theoretical discussions of how things should best be done (like a question about pre- and post-process actions in SCons).
Incorporating WPP actually requires running an additional command (or commands) before a file is compiled, and only when the build process decides the file needs compiling anyway, regardless of WPP.
I would remark that this is easily achieved with a few lines of definitions in a shared Visual Studio property page file, making it work for multiple files in multiple projects, folders, etc. in a way that is completely transparent to developers.
Thus I am wondering whether this can be done in a similarly simple way with SCons. I do not have any deep knowledge of either the SCons or MSBuild frameworks; I work with them for simple practical use, so I would truly appreciate practical and useful advice.
Here's what I'd suggest.
SCons builds command lines from Environment() variables.
For example, the compile command line for building a C++ shared object is stored in SHCXXCOM (the string displayed to the user when the command runs defaults to the expansion of SHCXXCOM, but can be changed by setting SHCXXCOMSTR).
Back to the problem at hand.
Assuming you have a limited number of build steps you want to wrap, you can do something like this:
env['SHCXXCOM'] = [ 'WPP PRE COMMAND LINE', env['SHCXXCOM'], 'WPP POST COMMAND LINE']
You'll have to figure out which variables you need to do this with, but take a look at the manpage to figure that out.
https://scons.org/doc/production/HTML/scons-man.html
p.s. I've not tried this, but in theory it should work. Let us know if not.

Download Windows symbols programmatically

I want to programmatically download symbols from the Microsoft symbol server (http://msdl.microsoft.com/download/symbols).
E.g. given the name "ntdll.dll" I want to save the .pdb into any directory.
The APIs from dbghelp.dll seem to solve this (http://msdn.microsoft.com/en-us/library/windows/desktop/ms679291%28v=vs.85%29.aspx), but I don't know how to use them correctly.
Has anyone done something like this before who can show me some example code?
Thanks!
I have never done exactly this, but I was intrigued enough to look. Your friends are the SymXxx functions in dbghelp.dll.
Start with SymSetOptions followed by SymInitialize.
Then, the function that does the heavy lifting is SymFindFileInPath. The second argument (SearchPath) is a semicolon-separated search path that may include SRV* elements.
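To make that more concrete, here is a rough, untested sketch of how those calls might fit together. It assumes dbghelp.dll and symsrv.dll from Debugging Tools for Windows sit next to your executable, that you have a local copy of the DLL whose PDB you want (ntdll.dll here), and it additionally uses SymSrvGetFileIndexInfo (not mentioned above, but also exported by dbghelp) to pull the PDB name, GUID and age out of that image before asking SymFindFileInPath to fetch the matching PDB through an SRV* path. The cache directory is an arbitrary example.

// Rough sketch (untested): download the PDB matching a local DLL from the
// Microsoft symbol server. Link with dbghelp.lib; symsrv.dll must be present.
#include <windows.h>
#include <dbghelp.h>
#include <cstdio>

int main()
{
    const char* image = "C:\\Windows\\System32\\ntdll.dll";   // file we want symbols for
    const char* searchPath =                                   // local cache * upstream server
        "SRV*C:\\symbols*http://msdl.microsoft.com/download/symbols";

    SymSetOptions(SYMOPT_DEBUG | SYMOPT_EXACT_SYMBOLS);
    HANDLE proc = GetCurrentProcess();
    if (!SymInitialize(proc, NULL, FALSE)) return 1;

    // Read the PDB name, GUID and age out of the image's debug directory.
    SYMSRV_INDEX_INFO info = { 0 };
    info.sizeofstruct = sizeof(info);
    if (!SymSrvGetFileIndexInfo(image, &info, 0))
    {
        std::printf("SymSrvGetFileIndexInfo failed: %lu\n", GetLastError());
        SymCleanup(proc);
        return 1;
    }

    // Ask the symbol server for a PDB with exactly that GUID/age; the file is
    // downloaded into the SRV* cache and its local path is returned.
    char found[MAX_PATH] = { 0 };
    if (SymFindFileInPath(proc, searchPath, info.pdbfile, &info.guid,
                          info.age, 0, SSRVOPT_GUIDPTR, found, NULL, NULL))
        std::printf("PDB stored at: %s\n", found);
    else
        std::printf("SymFindFileInPath failed: %lu\n", GetLastError());

    SymCleanup(proc);
    return 0;
}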
The utility that does exactly what you want (pretty much, nothing less and nothing more) is symchk.exe. Take a look at its import table and notice that it uses no more than 9 functions from dbghelp (and no 'networking' DLL such as winhttp or the like) - that should give you a good clue how to proceed and which functions to use.

Is it possible to regenerate symbols for an exe?

One of my co-workers shipped a hot fix build to a customer, and subsequently deleted the pdb file. The build in question is crashing (intermittently) and we have a couple of crash dumps. We have all the source code in version control, and can compile it to an equivalent .exe and get symbols for that one. However, those symbols don't match the crash dump exactly. It seems like several of the functions are off by some constant offset, but we've only looked at a handful.
I'd love to be able to do the following (I can fake parts of this manually, but it's a huge amount of work): get a stack trace for each thread in the dump and cast pointers in the dump to the appropriate type and have them show up in the Visual Studio debugger. I'm using 2005, if that matters.
Is there a tool to let us recreate a pdb given the source code, all the .obj files, and the original .exe? Or is there a setting when we compile/link to say "make it exactly like this other exe you just did" or something like that?
Quick update, based on answers so far: I have the exe file that we sent to the customer, just not the pdb that corresponds to it, if that helps. I'd just as soon not send them a new build (if possible), because it takes about a week of running to get the crash dumps, and the customer is already at the "why isn't this already fixed?" stage. (If we do send another build, I'd prefer it to be one that either fixes the problem or has additional debugging in the area of interest, not just the same code.) I know it's possible to do some of this manually with a lot of guesswork; that's what we're currently doing. But it's a pain, so I'm hoping there's a way to automate it.
You cannot recreate a PDB to match a pre-existing executable. The PDB contains a "fingerprint" that is unique to each compilation. Unless you can make the old PDB magically reappear, you should whack your co-worker on the back of the head (Gibbs-style, if you watch NCIS), recompile the whole thing, store the PDB somewhere safe, ship a new executable to your customer, and let the crash dumps come.
If your build system enables you to recreate any binary from any revision you have in your history, then you should be able to get the build ID from the customer, and regenerate that same exact build ID, along with all the binaries and so forth. That will take a while if you have a large project, of course, but it will also yield the debugging file that you need.
If you have no way to perform an exact reproduction of a build, then look at this situation, think hard about some others that might crop up, and start moving to make it possible to regenerate all successful builds and associated files in the project's history. This will make it much easier to be able to work problems like this in the future.
When you have the sources, it's quite easy to find the correspondence between them and the exe file. Just ask them to send you the exe file along with the crash log and use IDA.
What you are asking is much more difficult than that, considering also that you need it for "one use only".

Include pdbs in installer?

Is there any reason to not include pdb files in an installer? I have C++ logging functionality that walks the stack, and reports line numbers and file names. It would be great if my customers could send me logs with this information. However, they would need the pdb files. Is there any downside (other than installer package size) to deploying them?
Two possible downsides:
The PDB file might make it easier for someone to reverse-engineer your application.
As a result of the previous, someone might come to expect to be able to call undocumented functions in your DLLs.
If those don't bother you, I can't see any downside. Note though that you don't really need this. As John Seigel says, you should be able to reconstruct the stack trace from a crash dump.
You should be able to achieve "line numbers and file names" without PDB files. Try using __FUNCTION__, __LINE__, and __FILE__. Read more here:
http://msdn.microsoft.com/en-us/library/b0084kay.aspx
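For illustration, a minimal logging macro along those lines might look like the sketch below; LOG_MSG and log_message are made-up names for this example, not part of any library.

// Minimal sketch of file/line logging without PDBs. LOG_MSG and log_message
// are hypothetical names used only for this example.
#include <cstdio>

static void log_message(const char* file, int line, const char* func, const char* msg)
{
    // __FILE__ expands to the path of the translation unit, __LINE__ to the
    // current line number, and __FUNCTION__ to the enclosing function's name.
    std::printf("%s(%d): %s: %s\n", file, line, func, msg);
}

#define LOG_MSG(msg) log_message(__FILE__, __LINE__, __FUNCTION__, (msg))

int main()
{
    LOG_MSG("something went wrong");   // prints e.g. "example.cpp(16): main: something went wrong"
    return 0;
}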
Instead of shipping the PDB files, your error handling code can create minidumps. See the function MiniDumpWriteDump. Minidumps are very small and can easily be sent via e-mail.
If you get the dump file from the customer, only you need the PDB files.
IMHO, it is a very good idea to catch asserts or unexpected errors in your application, create a minidump automatically and let your application send this dump to you. If you get really fancy, you can build yourself an automated bug-tracking database in which these minidumps are stored. Then you can find out which bugs are most common and need to be fixed first. Incidentally, you will also learn a lot about the environments your application runs in: which operating system versions are most common, which virus scanners hook into your application, etc.
Obviously, this requires the consent of your users since the minidump may contain private information (however little information there is on the stack). It is not trivial to implement a working error handler that can catch, e.g., stack overflow exceptions.
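As a rough illustration of the minidump approach described above, an unhandled-exception filter can write a dump like this. This is an untested sketch: the dump path, file name and choice of MiniDumpNormal are arbitrary examples, and real code would need more care around what is safe to do inside a crashing process.

// Sketch: write a minidump from an unhandled-exception filter.
// Untested; link with dbghelp.lib. Path and file name are arbitrary examples.
#include <windows.h>
#include <dbghelp.h>

static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* exceptionInfo)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = exceptionInfo;
        mei.ClientPointers = FALSE;

        // MiniDumpNormal keeps the dump small; richer types (e.g.
        // MiniDumpWithDataSegs) trade size for more context.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate afterwards
}

int main()
{
    SetUnhandledExceptionFilter(WriteCrashDump);

    // ... application code; any unhandled SEH exception now produces crash.dmp
    return 0;
}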

C++ vim IDE. Things you'd need from it

I am going to create an extendable C++ IDE plugin for Vim. It is not a problem to make one that satisfies my own needs.
This plugin is going to work with workspaces, projects and their dependencies.
This is for Unix-like systems with GCC as the C++ compiler.
So my question is: what are the most important things you'd need from an IDE? Please take into account that this is Vim, where almost everything (almost) is possible.
Several questions:
How often do you manage different workspaces with projects inside them and the relationships between them? What are the most annoying things in this process?
Is it necessary to recreate a "project" from the Makefile?
Thanks.
Reasons to create this plugin:
With a bunch of plugins, plus self-written ones, we can simulate most of these things. That is fine when we work on one big "never-ending" project.
It is good when we already have a makefile or jam file; bad when we have to create our own, mostly by copying and pasting existing ones.
Everything ctags- and cscope-related has to know the list of actual project files, and we have to create those lists ourselves. Something like <project#get_list_of_files()> and many similar calls could make a good project API for cooperating with existing and future plugins.
Cooperation with existing makefiles can help to find out the list of actual project files and the executable name.
With a plugin system inside the plugin, there can be different project templates.
Those are some of the reasons why I will start the job. I'd like to hear yours.
There are multiple problems. Most of them are already solved by independent and generic plugins.
Regarding the definition of what is a project.
Given a set of files in the same directory, each file can be the single file of its own project -- I always have a tests/ directory where I host pet projects, or where I test the behaviour of the compiler. Conversely, the files from a set of directories can all be part of the same, very big project.
In the end, what really defines a project is a (leaf) "makefile" -- and why restrict ourselves to makefiles; what about scons, autotools, ant, (b)jam, aap? And BTW, Sun makefiles or GNU makefiles?
Moreover, I don't see any point in having Vim know the exact files in the current project. And even so, the well-known project.vim plugin already does the job. Personally I use a local_vimrc plugin (I'm maintaining one, and I've seen two others on SF). With this plugin, I just have to drop a _vimrc_local.vim file in a directory, and what is defined in it (:mappings, :functions, variables, :commands, :settings, ...) will apply to each file under that directory -- I work on a big project with a dozen subcomponents; each component lives in its own directory and has its own makefile (not even named Makefile, nor named after the directory).
Regarding C++ code understanding
Every time we want to do something complex (refactorings like rename-function, rename-variable, generate-switch-from-current-variable-which-is-an-enum, ...), we need vim to have an understanding of C++. Most of the existing plugins rely on ctags. Unfortunately, ctags comprehension of C++ is quite limited -- I have already written a few advanced things, but I'm often stopped by the poor information provided by ctags. cscope is no better. Eventually, I think we will have to integrate an advanced tool like elsa/pork/ionk/deshydrata/....
NB: That's where, now, I concentrate most of my efforts.
Regarding Doxygen
I don't know how difficult it is to jump to the doxygen documentation associated with the current token. The first difficulty is to understand what the cursor is on (I guess omnicppcomplete has already done a lot of work in this direction). The second difficulty will be to understand how doxygen generates the page name for each symbol in the code.
Opening vim at the right line of code from a doxygen page should be simple with a greasemonkey plugin.
Regarding the debugger
There is the pyclewn project for those that run vim under linux, and with gdb as debugger. Unfortunately, it does not support other debuggers like dbx.
Responses to other requirements:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
My BuildToolsWrapper plugin has a g:BTW_run_parameters option (easily overridden with project/local_vimrc solutions). Adding a mapping that asks for the arguments to use is really simple (see :h inputdialog()).
work with source control system
There already exist several plugins addressing this issue. This has nothing to do with C++, and it must not be addressed by a C++ suite.
debugger
source code navigation tools (now I am using http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
compile a lib/project/single source file from the IDE
navigation by files in the project
work with a source control system
easy access to file change history
rename file/variable/method functions
easy access to C++ help
easy change of project settings (Makefiles, jam, etc.)
fast autocompletion for paths/variables/methods/parameters
smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
highlighting of indentation that violates the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
reformatting a selected block according to the convention
Things I'd like in an IDE that the ones I use don't provide:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
A "Tools" menu that is configurable on a per-project basis
Ability to rejig the keyboard mappings for every possible command.
Ability to produce lists of project configurations in text form
Intelligent floating (not docked) windows for debugger etc. that pop up only when I need them, stay on top and then disappear when no longer needed.
Built-in code metrics analysis so I get a list of the most complex functions in the project and can click on them to jump to the code
Built-in support for Doxygen or similar so I can click in a Doxygen document and go directly to the code. It should also navigate in reverse, from code to Doxygen.
No doubt someone will now say Eclipse can do this or that, but it's too slow and bloated for me.
Adding to Neil's answer:
integration with gdb as in emacs. I know of clewn, but I don't like that I have to restart vim to restart the debugger. With clewn, vim is integrated into the debugger, but not the other way around.
Not sure if you are developing on Windows, but if you are I suggest you check out Viemu. It is a pretty good VIM extension for Visual Studio. I really like Visual Studio as an IDE (although I still think VC6 is hard to beat), so a Vim extension for VS was perfect for me. Features that I would prefer worked better in a Vim IDE are:
The macro recording is a bit error-prone, especially with indentation. I find I can easily and often record macros in Vim while I am editing code (e.g. taking an enum definition from a header and cranking out a corresponding switch statement), but I found that Viemu is a bit flaky in that department.
The Vim code completion picks up words in the current buffer, whereas Viemu hooks into the VS code completion machinery. This means that if I have just created a method name and I want to ctrl ] to auto-complete, Vim will pick it up, but Viemu won't.
For me, it's just down to the necessities
nice integration with ctags, so you can do jump to definition
intelligent completion, that also give you the function prototype
easy way to switch between code and headers
interactive debugging with breakpoints, but maybe
maybe folding
extra bonus points for refactoring tools like rename or extract method
I'd say stay away from defining projects - just treat the entire file branch as part of the "project" and let users have a settings file to override that default
99% of the difference in speed I see between IDE and vim users is code lookup and navigation. You need to be able to grep your source tree for a phrase (or intelligently look for the right symbol using ctags), show all the hits, and switch to that file in like two or three keystrokes.
All the other crap like repository navigation or interactive debugging is nice, but there are other ways to solve those problems. I'd say drop the interactive debugging even. Just focus on what makes IDEs good editors - have a "big picture" view of your project, instead of single file.
In fact, are there any plugins for vim that already achieve this?