A higher-level tool for writing a pass in LLVM - C++

Normally, if you want to modify LLVM IR, you need to write a pass. However, writing a pass yourself is sometimes overkill if a higher-level tool could do the job for you.
For example, someone might wish to log every load and store in the program. For that purpose, they would need to inject code that does the logging. Now, if there were a higher-level tool, it could provide callbacks for us to write what we want. In this case, for example, it could provide OnLoad and OnStore functions which we fill in to tell the tool what to do on each load and store. Does such a tool exist?
So basically I want something similar to what Dynamic Binary Instrumentation tools provide, but that works with LLVM, for compile-time code injection.
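To clarify what I mean by overkill, here is roughly what even a minimal hand-written pass looks like: a sketch of a legacy FunctionPass that inserts calls to hypothetical OnLoad/OnStore hooks before every load and store (untested, and the exact API details vary between LLVM versions):

    #include "llvm/IR/Function.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Pass.h"
    using namespace llvm;

    namespace {
    struct LogMemAccess : public FunctionPass {
      static char ID;
      LogMemAccess() : FunctionPass(ID) {}

      bool runOnFunction(Function &F) override {
        Module *M = F.getParent();
        LLVMContext &Ctx = M->getContext();
        // Hypothetical runtime hooks: void OnLoad(i8*), void OnStore(i8*).
        FunctionCallee OnLoad = M->getOrInsertFunction(
            "OnLoad", Type::getVoidTy(Ctx), Type::getInt8PtrTy(Ctx));
        FunctionCallee OnStore = M->getOrInsertFunction(
            "OnStore", Type::getVoidTy(Ctx), Type::getInt8PtrTy(Ctx));
        bool Changed = false;
        for (BasicBlock &BB : F)
          for (Instruction &I : BB) {
            IRBuilder<> B(&I);  // insert the call right before the access
            if (auto *LI = dyn_cast<LoadInst>(&I)) {
              Value *Addr = B.CreateBitCast(LI->getPointerOperand(),
                                            Type::getInt8PtrTy(Ctx));
              B.CreateCall(OnLoad, {Addr});
              Changed = true;
            } else if (auto *SI = dyn_cast<StoreInst>(&I)) {
              Value *Addr = B.CreateBitCast(SI->getPointerOperand(),
                                            Type::getInt8PtrTy(Ctx));
              B.CreateCall(OnStore, {Addr});
              Changed = true;
            }
          }
        return Changed;
      }
    };
    }

    char LogMemAccess::ID = 0;
    static RegisterPass<LogMemAccess> X("logmem", "Log every load and store");

Even this small pass needs the boilerplate, the LLVM build setup, and an opt invocation; that is the overhead I would like a higher-level tool to hide.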

I think you should consider using PIN instead of LLVM for such things: http://www.pintool.org/
PIN enables you to insert instrumentation/analysis code at several granularity levels: instruction, basic block, function, trace, and even the load/unload of shared libraries. It may be a lot more practical since you won't need to compile the application, so you can analyze programs which aren't open source, for example.
There are versions of PIN for Windows and Linux.
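For comparison with the OnLoad/OnStore callbacks asked about, a Pin tool for load/store logging looks roughly like the sketch below (closely modelled on Pin's bundled pinatrace example; check the documentation of your Pin kit for the exact API):

    #include <stdio.h>
    #include "pin.H"

    static FILE *trace;

    // Analysis routines: called at run time for every memory read/write.
    static VOID RecordRead(VOID *ip, VOID *addr)  { fprintf(trace, "%p: R %p\n", ip, addr); }
    static VOID RecordWrite(VOID *ip, VOID *addr) { fprintf(trace, "%p: W %p\n", ip, addr); }

    // Instrumentation routine: called once per instruction when it is first seen.
    static VOID Instruction(INS ins, VOID *) {
        UINT32 memOperands = INS_MemoryOperandCount(ins);
        for (UINT32 op = 0; op < memOperands; op++) {
            if (INS_MemoryOperandIsRead(ins, op))
                INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordRead,
                                         IARG_INST_PTR, IARG_MEMORYOP_EA, op, IARG_END);
            if (INS_MemoryOperandIsWritten(ins, op))
                INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordWrite,
                                         IARG_INST_PTR, IARG_MEMORYOP_EA, op, IARG_END);
        }
    }

    static VOID Fini(INT32, VOID *) { fclose(trace); }

    int main(int argc, char *argv[]) {
        if (PIN_Init(argc, argv)) return 1;
        trace = fopen("memtrace.out", "w");
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_AddFiniFunction(Fini, 0);
        PIN_StartProgram();   // never returns
        return 0;
    }

You build it against the Pin kit and run it as pin -t yourtool.so -- ./your_program; no recompilation of the target is needed.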
PS: Another tool that seems useful: http://eces.colorado.edu/~blomsted/llvmpin/llvmpin.html

Related

Static C++ API coverage tool

Given a set of public headers, and various test code that makes use of these headers, I need to generate a list of used/unused API calls.
I am working with a platform that cannot easily have traditional code coverage at runtime, but my requirements are hopefully a bit simpler.
I only need this to happen statically, and it seems as if this should be easily accomplished (most IDEs show all available function calls). I haven't found an appropriate tool for this, though.
Can anyone recommend one? Or point me to the specific term for what I am looking for?
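To make the requirement concrete, here is a naive sketch of the kind of check I have in mind, assuming the public API names have already been extracted from the headers into a plain list (the file names and the crude token matching are just placeholders):

    #include <fstream>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>
    #include <vector>

    int main(int argc, char **argv) {
        if (argc < 3) {
            std::cerr << "usage: " << argv[0] << " api_names.txt test1.cpp [test2.cpp ...]\n";
            return 1;
        }
        // One public API function name per line, pre-extracted from the headers.
        std::set<std::string> api;
        std::ifstream names(argv[1]);
        for (std::string n; std::getline(names, n); )
            if (!n.empty()) api.insert(n);

        // Concatenate all test sources so we can check which names appear at all.
        std::string all;
        for (int i = 2; i < argc; ++i) {
            std::ifstream src(argv[i]);
            std::stringstream ss;
            ss << src.rdbuf();
            all += ss.str();
        }

        std::vector<std::string> unused;
        for (const std::string &name : api)
            if (all.find(name + "(") == std::string::npos)   // crude: name followed by '('
                unused.push_back(name);

        std::cout << unused.size() << " of " << api.size() << " API calls appear unused:\n";
        for (const std::string &name : unused) std::cout << "  " << name << "\n";
        return 0;
    }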
Thank you

How to Prevent I/O Access in C++ or Native Compiled Code

I know this may be impossible but I really hope there's a way to pull it off. Please tell me if there's any way.
I want to write a sandbox application in C++ and allow other developers to write native plugins that can be loaded right into the application on the fly. I'd probably want to do this via DLLs on Windows, but I also want to support Linux and hopefully Mac.
My issue is that I want to be able to prevent the plugins from doing I/O access on their own. I want to require them to use my wrapped routines so that I can ensure no plugin contains malicious code that starts harming the user's files on disk or doing undesirable things on the network.
My best guess on how to pull off something like this would be to include a compiler with the application and require the source code for the plugins to be distributed and compiled right on the end-user's platform. Then I'd need a code scanner that could search the plugins' uncompiled code for signatures that would show up in I/O operations on the hard disk, the network, or other storage media.
My understanding is that the standard libraries like fstream wrap platform-specific functions, so I would think that simply scanning all the code that will be compiled for platform-specific functions would let me accomplish the task. Because ultimately, native C code can't do any I/O unless it talks to the OS using one of the OS's provided methods, right?
If my line of thinking is correct on this, does anyone have a book or resource recommendation on where I could find the nuts and bolts of this stuff for Windows, Linux, and Mac?
If my line of thinking is incorrect and it's impossible for me to really prevent native code (compiled or uncompiled) from doing I/O operations on its own, please tell me, so I don't create an application that I think is secure but really isn't.
In an absolutely ideal world, I don't want to require the plugins to distribute uncompiled code. I'd like to allow the developers to compile and keep their code to themselves. Perhaps I could scan the binaries for signatures that pertain to I/O access?
Sandboxing code as it executes is certainly harder than merely scanning the code for specific accesses! For example, the program could synthesize assembler statements that make system calls.
The original approach on UNIXes is to chroot() the program, but I think there are problems with that approach, too. Another approach is a secured environment like SELinux, possibly combined with chroot(). The modern approach for this kind of thing seems to be to run the program in a virtual machine: upon start of the program, fire up a suitable snapshot of a VM; upon termination, just rewind to the snapshot. That merely requires that the allowed accesses are somehow channeled somewhere.
Even a VM doesn't block I/O. It can block network traffic very easily though.
If you want to make sure the plugin doesn't do I/O, you can scan its DLL for all its imported functions and run the function list against a blacklist of I/O functions.
Windows has the dumpbin utility and Linux has nm. Both can be run via a system() function call, with the output of the tools redirected to files.
Of course, you can write your own analyzer but it's much harder.
User code can't do I/O on its own; only the kernel can. If you're worried about the plugin gaining ring0/kernel privileges, then you need to scan the ASM of the DLL for I/O instructions.
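A minimal sketch of that import scan on Linux follows (it assumes nm is on the PATH, and the blacklist is illustrative only; on Windows you would parse dumpbin /imports output instead):

    #include <cstdio>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>

    int main(int argc, char **argv) {
        if (argc != 2) {
            std::cerr << "usage: " << argv[0] << " plugin.so\n";
            return 1;
        }
        // Symbols we refuse to let the plugin import (far from complete).
        const std::set<std::string> blacklist = {
            "open", "open64", "fopen", "read", "write", "unlink",
            "socket", "connect", "send", "recv"
        };

        // List the undefined (i.e. imported) dynamic symbols of the shared object.
        std::string cmd = std::string("nm -D -u ") + argv[1];
        FILE *p = popen(cmd.c_str(), "r");
        if (!p) { std::perror("popen"); return 1; }

        bool clean = true;
        char line[512];
        while (std::fgets(line, sizeof line, p)) {
            std::istringstream iss(line);
            std::string first, second;
            iss >> first >> second;                       // lines look like "U symbol"
            std::string sym = second.empty() ? first : second;
            sym = sym.substr(0, sym.find('@'));           // drop any "@GLIBC_..." suffix
            if (blacklist.count(sym)) {
                std::cout << "forbidden import: " << sym << "\n";
                clean = false;
            }
        }
        pclose(p);
        return clean ? 0 : 2;
    }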

Adding source instrumentation code - Is a source-to-source compiler the right approach? How to build one?

I am working on a project where I need to track changes to a particular set of variables in any given application code in order to model memory access patterns.
I can mainly think of two approaches; please give your thoughts on them.
My initial thought is to do it the way many profilers such as gprof do: add instrumentation code to the target application code before compilation, and analyze the log generated by this instrumentation code to get the required information.
To accomplish this, I can only think of some sort of source-to-source compiler that parses the given code and injects instrumentation code into the application (a same-language source-to-source transformation), which I can later compile and run to get the required logs (a sketch of the kind of injected code I mean appears at the end of this question).
Does this seem right, or am I over-engineering? If not, are there tools that let me build a source-to-source compiler (relatively) easily?
I read about GDB's support for Python, so I am wondering whether I can write a Python script that takes the set of variables from a config file, sets watchpoints, and logs every time there is a write to the watched variables. I tried to use this GDB feature, but on my Ubuntu machine it doesn't seem to be working for now.
http://sourceware.org/gdb/onlinedocs/gdb/Python.html#Python
Also, the language of the applications is going to be nesC (I guess nesC is converted to C in the process of compilation), and the applications are going to run on TOSSIM as native apps on my computer.
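For illustration, this is roughly the transformation I have in mind: each write to a watched variable (balance here) is followed by a call to an injected logging helper; __log_write is just a placeholder name, not part of any existing tool:

    #include <cstdio>

    // Placeholder logging helper that the source-to-source tool would inject.
    static void __log_write(const char *var, const void *addr,
                            long long new_value, const char *file, int line) {
        std::fprintf(stderr, "WRITE %s @%p = %lld (%s:%d)\n",
                     var, addr, new_value, file, line);
    }

    int main() {
        int balance = 0;                                                  // watched variable

        balance = 100;                                                    // original statement
        __log_write("balance", &balance, balance, __FILE__, __LINE__);    // injected

        balance += 25;                                                    // original statement
        __log_write("balance", &balance, balance, __FILE__, __LINE__);    // injected

        return balance;
    }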
See my paper on instrumenting code using a program transformation system (PTS) (a PTS is a very general kind of "source-to-source compiler").
It shows how to install probes in code in a pretty straightforward way, once you have a grammar for the language of interest. The underlying tool, DMS, makes it fairly easy to define the grammar too.

Instrumentation (diagnostic) library for C++

I'm thinking about adding code to my application that would gather diagnostic information for later examination. Is there any C++ library created for such a purpose? What I'm trying to do is similar to profiling, but it's not the same, because the gathered data will be used more for debugging than profiling.
EDIT:
Platform: Linux
Diagnostic information to gather: information resulting from application logic, various asserts and statistics.
You might also want to check out libcwd:
Libcwd is a thread-safe, full-featured debugging support library for C++
developers. It includes ostream-based debug output with custom debug
channels and devices, powerful memory allocation debugging support, as well
as run-time support for printing source file:line number information
and demangled type names.
List of features
Tutorial
Quick Reference
Reference Manual
Also, another interesting logging library is pantheios:
Pantheios is an Open Source C/C++ Logging API library, offering an
optimal combination of 100% type-safety, efficiency, genericity
and extensibility. It is simple to use and extend, highly-portable (platform
and compiler-independent) and, best of all, it upholds the C tradition of you
only pay for what you use.
I tend to use logging for this purpose. Log4cxx works like a charm.
If debugging is what you're doing, perhaps use a debugger. GDB scripts are pretty easy to write up and use. Maintaining them in parallel to your code might be challenging.
Edit - Appending Anecdote:
The software I maintain includes a home-grown instrumentation system. Macros are used to queue log messages, and configuration options control which classes of messages are logged and the level of detail to be logged. A thread processes the logging queue, flushing messages to file and rotating files as they become too large (which they commonly do). The system provides a lot of detail, but all too often it produces huge files our support engineers must wade through for hours to find anything useful.
Now, I've only used GDB to diagnose bugs a few times, but for those issues it had a few nice advantages over the logging system. GDB scripting allowed me to gather new instrumentation data without adding new instrumentation lines and deploying a new build of my software to the client. GDB can generate messages from third-party libraries (needed to debug into openssl at one point). GDB adds no run-time impact to the software when not in use. GDB does a pretty good job of printing the contents of objects; the code-level logging system requires new macros to be written when new objects need to have their states logged.
One of the drawbacks was that the gdb scripts I generated had no explicit relationship to the source code; the source file and the gdb script were developed independently. Ideally, changes to the source file should impact and update the gdb script. One thought is to put specially-formatted comments in code and have a scripting language make a pass on the source files to generate the debugger script file for the source file. Finally, have the makefile execute this script during the build cycle.
It's a fun exercise to think about the potential of using GDB for this purpose, but I must admit that there are probably better code-level solutions out there.
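For concreteness, a stripped-down sketch of that macro-plus-queue-plus-background-thread scheme might look like the following (no levels, rotation, or configuration; all names are made up):

    #include <condition_variable>
    #include <fstream>
    #include <mutex>
    #include <queue>
    #include <sstream>
    #include <string>
    #include <thread>

    class AsyncLog {
        std::queue<std::string> queue_;
        std::mutex m_;
        std::condition_variable cv_;
        std::ofstream out_;
        bool done_ = false;
        std::thread worker_;           // declared last: it starts running immediately

        void run() {
            std::unique_lock<std::mutex> lk(m_);
            while (!done_ || !queue_.empty()) {
                cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                while (!queue_.empty()) {
                    std::string msg = std::move(queue_.front());
                    queue_.pop();
                    lk.unlock();                 // write to disk outside the lock
                    out_ << msg << '\n';
                    lk.lock();
                }
            }
            out_.flush();
        }

    public:
        explicit AsyncLog(const char *path)
            : out_(path), worker_(&AsyncLog::run, this) {}
        ~AsyncLog() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        void push(std::string msg) {
            { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(msg)); }
            cv_.notify_one();
        }
    };

    static AsyncLog g_log("diagnostics.log");

    // The macro captures file/line context the way the home-grown system does.
    #define DIAG(msg)                                               \
        do {                                                        \
            std::ostringstream oss_;                                \
            oss_ << __FILE__ << ':' << __LINE__ << ' ' << msg;      \
            g_log.push(oss_.str());                                 \
        } while (0)

    int main() {
        DIAG("application started, build " << 42);
        return 0;
    }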
If you run your application on Linux, you can use "ulimit" to generate a core file when your application crashes (or hits assert(false), or is killed with kill -6); later, you can debug it with gdb (gdb -c core_file binary_file) and analyze the stack.
Regards.
P.S.: for profiling, use gprof.

Finding very similar program executions

I was wondering if it's possible / anyone knows of any tools out there to compare the execution of two related programs (for example, assignments in a class) to see how similar they are. For example, not comparing the names of functions, but how they use syscalls. One silly case of this would be testing whether a C string is printed in two separate programs as
printf("%s", str);
Or as
for (i = 0; i < len; i++) printf("%c", str[i]);
I havenĀ“t put much thought into this, but i would imagine that strace / ltrace (maybe even oprofile) would be a good starting point. Particularly, this is for UNIX C / C++ programs.
Thanks.
If you have access to the source code of the two programs, you may build a graph of the functions (each function is a node, and there is an edge from A to B if A calls B) and compute some graph similarity metrics. This will catch a source-code copy made by renaming and reorganizing.
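As a toy illustration of that idea, suppose "caller callee" edge lists have already been extracted from the two programs (by whatever means); a simple set-similarity metric such as Jaccard over the edges then gives a first-cut score. The file format here is just an assumption:

    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <set>
    #include <string>
    #include <utility>

    using Edge = std::pair<std::string, std::string>;

    // Read "caller callee" pairs, one edge per line.
    static std::set<Edge> read_edges(const char *path) {
        std::set<Edge> edges;
        std::ifstream in(path);
        std::string caller, callee;
        while (in >> caller >> callee)
            edges.insert({caller, callee});
        return edges;
    }

    int main(int argc, char **argv) {
        if (argc != 3) {
            std::cerr << "usage: " << argv[0] << " edges_a.txt edges_b.txt\n";
            return 1;
        }
        std::set<Edge> a = read_edges(argv[1]), b = read_edges(argv[2]);

        std::size_t common = 0;
        for (const Edge &e : a)
            if (b.count(e))
                ++common;
        std::size_t uni = a.size() + b.size() - common;   // size of the union

        double jaccard = uni ? static_cast<double>(common) / uni : 1.0;
        std::cout << "edge Jaccard similarity: " << jaccard << '\n';
        return 0;
    }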
An initial idea would be to use ltrace and strace to log the calls and then use diff on the logs. This would obviously only cover the library and system calls. If you need more fine-grained logging, oprofile might help.
If you have access to the source code, you could instrument your code by compiling it with profiling information and then parse the gcov output after the runs. A purely static source code analysis may be sufficient if your code does not take different routes depending on external data/state.
I think you can do this kind of thing using valgrind.
A finer-grained option (depending on what access you have to the program source and what exactly you want in terms of comparison) would be to use kprobes.
Kernel Dynamic Probes (Kprobes) provides a lightweight interface for kernel modules to implant probes and register corresponding probe handlers. A probe is an automated breakpoint that is implanted dynamically in executing (kernel-space) modules without the need to modify their underlying source. Probes are intended to be used as an ad hoc service aid where minimal disruption to the system is required. They are particularly advocated in production environments where the use of interactive debuggers is undesirable. Kprobes also has substantial applicability in test and development environments. During test, faults may be injected or simulated by the probing module. In development, debugging code (for example a printk) may be easily inserted without having to recompile the module under test.