For the development of my own Pass I want to write unit tests. I have lots of 'pure' helper methods, so they seem ideal candidates for unit tests, but some of them require an instance of llvm::LoopInfo as an argument.
In my (Function-)Pass I just use
void getAnalysisUsage(llvm::AnalysisUsage &AU) const override {
  AU.setPreservesCFG();
  AU.addRequired<llvm::LoopInfoWrapperPass>();
}
...
llvm::LoopInfo &loopInfo = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
to get this information object.
In my unit test I currently parse my llvm::Function void foo() (that I want to run my analysis on) from disk like this:
llvm::SMDiagnostic Err;
llvm::LLVMContext Context;
std::unique_ptr<llvm::Module> module(parseIRFile(my_bc_filename, Err, Context));
llvm::Function* foo = module->getFunction("foo");
To finalize my test I would have to fill in the following stub:
llvm::LoopInfo& loopInfo = /* run LoopInfoWrapperPass on foo and return the LoopInfo element */;
My first attempts were based on PassManager<Function> (in the header "llvm/IR/PassManager.h"), AnalysisManager<Function>, and the class LoopInfoWrapperPass, but I couldn't find any example usage online for LLVM 4.0; older examples seemed to use a previous version of PassManager, and I did not see how to make use of the LegacyPassManager. I tried to look into the sources for PassManager but could not make enough sense of the typedefs and template arguments (they are only increasing my irrational dislike of C++ as a language).
How do I fill in that stub? How do I call this Analysis Pass (and get LoopInfo) in my plain C++ code?
PS: There are more passes other than LoopInfoWrapperPass I need to use, but I'm assuming the way should be transferable to any Analysis Pass.
PPS: I'm using googletest as a unit test framework, with a CMake build configuration that makes the unit tests their own target, and I'm building my Pass out-of-tree against binary libs of LLVM 4.0.1, if any of that is somehow relevant.
I am not sure how you have your unit tests structured, but looking around in the LLVM source tree is a good idea.
One example can be found in CFGTest.cpp here.
You need to create the PassManager and the pipeline yourself. From my short experience with this, it works well for small tests, but once you need anything bigger or want to pass data in/out it's really restricting, since the LoopInfo data only have meaning within the pipeline (i.e., runOn() methods and friends).
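For what it's worth, a minimal sketch of that approach using the new PassManager might look something like this (assuming LLVM ~4.x and linking against the LLVM Passes library; analyzeWithNewPM is just an illustrative name):

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"

void analyzeWithNewPM(llvm::Function &F) {
  llvm::PassBuilder PB;
  llvm::FunctionAnalysisManager FAM;
  // Registers the standard function analyses (LoopAnalysis, DominatorTreeAnalysis, ...).
  PB.registerFunctionAnalyses(FAM);
  llvm::LoopInfo &LI = FAM.getResult<llvm::LoopAnalysis>(F);
  // ... hand LI to the helper functions under test ...
  (void)LI;
}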
Otherwise, you might want to opt (no pun intended) for the simpler, IMHO, method of creating the set of required analyses yourself (only dominators in the case of LoopInfo) without using the pass manager infrastructure. An example of this can be seen here.
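That simpler route boils down to constructing the analyses by hand, e.g. something like the following sketch (again assuming LLVM ~4.x; the function name is illustrative):

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"

void analyzeByHand(llvm::Function &F) {
  // LoopInfo's only prerequisite is a dominator tree.
  llvm::DominatorTree DT(F);
  llvm::LoopInfo LI(DT);
  // ... hand LI to the helper functions under test ...
}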
Hope this helps.
I was hoping someone could give me an example of how to use the RandomNumberGenerator class within LLVM. All of the examples I can find seem to use outdated methods.
I would like to be able to create an RNG within a pass that can be overridden with the '-rng-seed' parameter.
How can this value be accessed if it was provided as a parameter, and how do I create the value if it was not?
Also, I understand that a single RNG is not meant to be shared between threads for a single module. If I am running multiple passes on a module, can they share the same generated RNG?
The RandomNumberGenerator class has a private constructor (check its doc and the source file under llvm/lib/Support/RandomNumberGenerator.cpp), so the only way (that I know of, at least) to get a hold of an instance is via Module's createRNG method.
So, assuming that you have an llvm::FunctionPass (and are using C++11):
bool runOnFunction(llvm::Function &CurFunc) override {
  // Ask the enclosing Module for a generator seeded for this pass.
  auto rng = CurFunc.getParent()->createRNG(this);
  llvm::errs() << (*rng)() << '\n';
  return false;
}
Now you can run this on a module like this (assuming you modified the hello world pass from the documentation):
opt -load ./libLLVMHelloPass.so -hello foo.bc -o bar.bc
Rerunning this will give you the same pseudo-random number every time.
The -rng-seed option becomes available to your pass once you include the header (and link against the LLVM support library, i.e. llvm-config --libfiles support). So, changing the above execution line to something like:
opt -load ./libLLVMHelloPass.so -hello -rng-seed 42 foo.bc -o bar.bc
should give a different sequence.
Lastly, AFAIK, LLVM passes run via opt are executed sequentially in the context of a PassManager (certainly for the legacy one). I believe one should adhere to the advice about not sharing a single RNG between threads when building a custom standalone LLVM tool that uses multi-threading (in other words, one not intended to be run by opt). For relevant examples of standalone apps using the LLVM API, have a look into the unit tests source subdir (one hint is to look for .cpp files that have a main(), although they are not always set up like that).
I've got a problem when unit testing my program.
The problem is simple, but I'm not sure why this isn't working.
1 -> I build my whole program
2 -> I build my unit test
3 -> the tests run.
Everything is OK except when it comes to getting global data from the data segment. It seems as if the variables are not initialized, or simply not found. So of course all my tests fail.
My question is:
Is it totally wrong to build an executable and then run the tests on it? Or must I compile all my code plus the unit tests at the same time and then run them? Or is it just a limitation of the SenTesting framework?
I forgot to mention that this is a C++ const string. Dunno if that changes anything.
EDIT:
My assumption was wrong, but I still don't understand the magic behind it! Some C++ magic, perhaps?
char cstring[] = "***";
std::string cppString = "***";
NSString* nstring = @"***";

- (void)testSync{
    STAssertNotNil(nstring, nil); // fine
    STAssertNotNil((id)strlen(cstring), nil); // fine
    STAssertNotNil((id)cppString.size(), nil); // failed
}
EDIT 2:
Actually it is normal that the C++ object is not initialized at this point in the code. If I run nm on my executable, it appears that my C and Obj-C globals are put into the data segment. I thought my C++ string was in the same situation, but it is actually put into the BSS segment. That means it is not statically initialized. The fact is the C++ compiler does some magic behind the scenes: the C++ string is constructed by runtime startup code when the program is launched normally, and then acts as if it were in the data segment.
I didn't know that the test suite doesn't go through a main() call, so the C++ objects are never constructed. There are techniques to have the constructors run before the test suite, but that is a topic of its own. I have just replaced my C++ string with a simple char array, and it works perfectly since my values are now POD.
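For reference, one common workaround is the construct-on-first-use idiom: wrap the global in a function with a local static, so the constructor runs on first access instead of relying on static initialization having happened. A minimal sketch (names are illustrative):

#include <string>

// The std::string is constructed the first time sharedValue() is called,
// so it is valid even when the usual global initializers have not run.
const std::string &sharedValue() {
  static const std::string value = "***";
  return value;
}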
By the way, there is no devil in global variables if they are just read-only. ;)
OK, I can see a few faults here.
First of all, this code gives errors in my environment (Xcode 5, with ARC enabled), and for good reason; I don't know how you got it to compile. You are casting an integer (or long) to an object, which is normally an invalid operation and will result in many errors. So, the real question is not why the third "assert" failed, but why the second one succeeded.
As far as the second part of your question is concerned, I have to admit that I do not completely understand your question, and you may have to explain it more thoroughly.
In general, unit testing is testing specific parts of your code. Therefore, you typically don't perform the tests on an actual final executable (that is not called unit testing, I believe), nor do you have to compile "all your C++ code + your unit tests at the same time".
Since you are using Xcode, I will give you some indications.
Write your application (or at least a part of it), and find the aspects / functions / objects you want to perform unit tests on.
In separate files, write unit tests that instantiate these objects, call their methods, and compare actual against expected outputs.
Your project should have a second target that compiles only the unit test source code and the relevant main program code.
Build this target (or press Command-U) and it will report successes and failures.
So, you need to separate your source code and isolate your classes / methods to make them testable like this. This requires good architecture and design of the application on your part, and you may need to make some compromises in flexibility (that is up to you to decide). Oh, and I believe that in testable code you should avoid global variables in general, for various reasons. Global variables are helpful sometimes, but they generally make unit testing really difficult (and if misused may lead to spaghetti code, but that is a whole different story).
I hope I helped, even without fully understanding the second part of your post.
I want to apply a DFS traversal algorithm to the CFG of a function, so I need the internal representation of the CFG. I need oriented edges, and I spotted MachineBasicBlock::const_succ_iterator. Is there a way to get the CFG with oriented edges by using a FunctionPass instead of a MachineFunctionPass? The reason I want this is that I have problems using MachineFunctionPass. I have written several complex passes so far, but I cannot run a MachineFunctionPass.
I found this: "A MachineFunctionPass is a part of the LLVM code generator that executes on the machine-dependent representation of each LLVM function in the program. Code generator passes are registered and initialized specially by TargetMachine::addPassesToEmitFile and similar routines, so they cannot generally be run from the opt or bugpoint commands." So how can I run a MachineFunctionPass?
When I tried to run a simple MachineFunctionPass with opt, I got the error:
Pass 'mycfg' is not initialized.
Verify if there is a pass dependency cycle.
Required Passes:
opt: PassManager.cpp:638: void llvm::PMTopLevelManager::schedulePass(llvm::Pass*): Assertion `PI && "Expected required passes to be initialized"' failed.
So I have to initialize the pass. But in all my other passes I did not do any initialization, and I don't want to use INITIALIZE_PASS since I would have to recompile the LLVM source file that holds the pass registration... Is there a way to keep using static RegisterPass for a MachineFunctionPass? I should mention that if I change it to a FunctionPass I have no problems, so it might indeed be an opt problem.
I have started another pass for the CallGraph, where I am using CallGraph &CG = getAnalysis<CallGraph>(); successfully. Is there a similar way of getting CFGs? What I have found so far are succ_iterator/succ_begin/succ_end from CFG.h, but I think I still have to get the CFG analysis somehow.
Thank you in advance!
I think you may have some terms mixed up. The basic blocks within each function already form a CFG, and LLVM provides you the tools to traverse it. See my answer to this question, for example.
MachineFunction lives on a different level, and unless you're doing something very special, this is not the level you should operate on. It's too low-level and too target-specific. There's an overview of the levels here.
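To make that concrete, here is a minimal sketch of a DFS over the IR-level CFG (assuming the standard LLVM headers; dfsOverCFG is an illustrative name):

#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

void dfsOverCFG(llvm::Function &F) {
  // depth_first() follows the oriented edges exposed by succ_begin()/succ_end().
  for (llvm::BasicBlock *BB : llvm::depth_first(&F.getEntryBlock())) {
    BB->printAsOperand(llvm::errs(), /*PrintType=*/false);
    llvm::errs() << "\n";
  }
}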
I have about a hundred simple tests written with the Boost unit test library. Not only do I get very long compile times (on the order of half a minute), but the resulting executable gets really big: 4 MB for just a hundred simple tests. If the tests are written without Boost.Test, the executable size is a mere 120 kB.
How can I reduce the bloat? This question is just out of interest; it's not that I need the test code to have shiny performance :)
The debugging info is already stripped. I've tried all optimization options with no success.
EDIT:
Each test is basically as follows:
PlainOldDataObject a, b;
a = { ... initial_data ... };
a = some_simple_operation(a);
b = { ... expected_result ... };
BOOST_CHECK(std::memcmp(&a, &b, sizeof(PlainOldDataObject)) == 0);
I. Which usage variant do you employ? If you use the single-header variant of the unit test framework, you should switch to the offline variant (either the static or the dynamic library).
II. If you suspect that the BOOST_AUTO_TEST_CASE macro is at fault, you have several options:
Give up the single-assertion-per-test-case policy and use a number of "themed" test cases. I personally find this acceptable.
Use manual test case registration (see the sketch after this list). You can probably automate it with your own macro to avoid tedious repetition.
Split into multiple test files. You might see at least some compilation time improvement (or might not).
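For the manual registration option, a minimal sketch might look like this (assuming the offline, statically linked variant with the old-style init function; the test names are illustrative):

#include <boost/test/unit_test.hpp>
using namespace boost::unit_test;

void check_simple_operation() {
  BOOST_CHECK(2 + 2 == 4);
}

// Old-style init function used by the static-library variant; test cases
// are added by hand instead of via BOOST_AUTO_TEST_CASE.
test_suite *init_unit_test_suite(int /*argc*/, char * /*argv*/[]) {
  framework::master_test_suite().add(BOOST_TEST_CASE(&check_simple_operation));
  return 0;
}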
III. If you suspect the BOOST_CHECK statements, there is not much you can do, but I'd be rather surprised to see this much overhead from them. Maybe you should investigate further.
Try using the Loki library instead: it also has many commonly used generic components (including a static assertion macro, though that is a compile-time check, unlike BOOST_CHECK).
Loki is known to be lightweight, yet arguably even more powerful than Boost, because it uses a policy-based approach to class design. However, it doesn't have Boost's variety of tools, only the most common ones: smart pointers, a small-object allocator, meta-programming helpers, factories, and a few others. But if you don't need any of those monstrous Boost libraries like serialization, for example, you may find it satisfies your needs.
I'm trying to write a test to verify a compile error: assigning a number to a String-typed property. But since it's a compile error, the unit test code won't even compile in the first place. Does anyone have any suggestions on how to do this?
I'm thinking maybe I can assign the number at runtime and check whether a certain exception gets thrown? But I'm not sure exactly how to do that.
Thanks in advance!
If I understand you correctly, you have some piece of code that may not compile and you want to write a unit test that fails if the code really doesn't compile. If that is the case, then you should not write any unit tests: you should write unit tests only for your own code, not for code that somebody else wrote.
You didn't specify the programming language, so I will use some pseudo-code. Say you are writing a function to add two numbers:
function add(a, b) {
  return a + b;
}
Very simple. You should unit test this, e.g. by writing tests like the ones below:
function testAdd() {
  assertEquals(4, add(2, 2));
  assertEquals(46, add(12, 34));
}
However, you should not write a test that checks whether the + operator works correctly. That is the job of whoever wrote the library that implements how the + operator really works.
So, if that's the case, don't write any unit tests. Compiling your code is the job of your compiler, and the compiler should report compilation errors properly. You should not test whether the compiler does its job correctly; testing that is the job of the people who write the compiler.
You did not specify the language.
Ruby, Python -- dynamic & strong type system. This means that types are deduced at runtime (dynamic), but implicit conversions between unrelated types are prohibited (strong).
JS, Perl -- dynamic & weak type system.
C++ -- static & strong type system.
Let's assume we are talking about C++. Moreover, here is a more realistic example: imagine that you implement a static assert for a project which does not use a C++11 compiler.
template <bool>
struct my_static_assert;
template <>
struct my_static_assert<true> {};
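Used roughly like this: only the true specialization is complete, so instantiating it with false fails to compile.

my_static_assert<true> ok;      // compiles
// my_static_assert<false> bad; // error: variable has incomplete type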
If you want to check that such a mechanism works correctly, your unit test function should do the following steps:
1. Create a source file to compile.
2. Spawn the compiler as an external process and pass the test translation unit to it.
3. Wait for the compiler process to complete.
4. Retrieve the return code from the compiler process.
5. Check the return code from step 4.
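A minimal sketch of these steps (POSIX-style; the file name and compiler invocation are illustrative, and the check could just as well be wrapped in a googletest assertion):

#include <cassert>
#include <cstdlib>
#include <fstream>
#include <string>

// Write the snippet to a file, run the compiler as an external process,
// and report whether it exited successfully.
bool compiles(const std::string &source) {
  std::ofstream("must_not_compile.cpp") << source;
  return std::system("c++ -c must_not_compile.cpp -o /dev/null") == 0;
}

int main() {
  // Expect a compile error: my_static_assert<false> is an incomplete type.
  assert(!compiles("template <bool> struct my_static_assert;\n"
                   "template <> struct my_static_assert<true> {};\n"
                   "my_static_assert<false> x;\n"));
  return 0;
}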
I checked the googletest guide, but it seems they don't support such a concept: https://github.com/google/googletest/blob/master/docs/advanced.md