I am writing a pass that needs information about loops. Therefore I am overriding getAnalysisUsage(AnalysisUsage&) to let the pass manager know that my pass depends on LoopInfoWrapperPass. However, when I try to get the result of that analysis, LLVM asserts that the analysis wasn't required by my pass. Here's a simple pass that I'm having trouble with:
#include <llvm/Pass.h>
#include <llvm/Support/raw_ostream.h>
#include <llvm/Analysis/LoopInfo.h>

using namespace llvm;

struct Example : public ModulePass {
    static char ID;
    Example() : ModulePass(ID) {}

    bool runOnModule(Module& M) override {
        errs() << "what\n";
        LoopInfo& loops = getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
        loops.print(errs());
        return false;
    }

    virtual void getAnalysisUsage(AnalysisUsage& AU) const override {
        errs() << "here\n";
        AU.addRequired<LoopInfoWrapperPass>();
    }
};

char Example::ID = 0;
static RegisterPass<Example> X("example", "an example", false, false);
When I run this pass, the two debug statements are printed in the correct order (here then what) but when getAnalysis<LoopInfoWrapperPass>() is called, I get this assertion error:
opt: /home/matt/llvm/llvm/include/llvm/PassAnalysisSupport.h:211: AnalysisType& llvm::Pass::getAnalysisID(llvm::AnalysisID) const [with AnalysisType = llvm::LoopInfoWrapperPass; llvm::AnalysisID = const void*]: Assertion `ResultPass && "getAnalysis*() called on an analysis that was not " "'required' by pass!"' failed.
This is the same method that is given in LLVM's documentation on writing passes, so I'm not quite sure what's going wrong here. Could anyone point me in the right direction?
LoopInfoWrapperPass is derived from FunctionPass. Your Example class, however, derives from ModulePass. It works at the module level, so you'll need to tell LoopInfoWrapperPass which function you want to analyze. Basically, you might want to loop over every function f in the module and call getAnalysis<LoopInfoWrapperPass>(f).
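A rough sketch of that module-level approach (assuming you only care about functions with bodies, since declarations have nothing to analyze):

bool runOnModule(Module& M) override {
    for (Function& F : M) {
        if (F.isDeclaration())
            continue; // no body, nothing for LoopInfo to compute
        // the Function-specific getAnalysis overload runs the required pass on F
        LoopInfo& loops = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
        loops.print(errs());
    }
    return false;
}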
Alternatively, the easiest way to fix the code above is to replace ModulePass with FunctionPass and runOnModule(Module& M) with runOnFunction(Function& F). Then, getAnalysis<LoopInfoWrapperPass>() should work just fine.
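That FunctionPass variant would look roughly like this (everything else from your snippet stays the same):

struct Example : public FunctionPass {
    static char ID;
    Example() : FunctionPass(ID) {}

    bool runOnFunction(Function& F) override {
        // the required FunctionPass has already run on F at this point
        LoopInfo& loops = getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
        loops.print(errs());
        return false;
    }

    void getAnalysisUsage(AnalysisUsage& AU) const override {
        AU.addRequired<LoopInfoWrapperPass>();
    }
};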
Say I have a simple test with mocks.
#include "boost/interprocess/detail/interprocess_tester.hpp"
#include <gtest/gtest.h>
#include <gmock/gmock.h>
using namespace ::testing;
struct IInterface
{
    virtual void method(int foo) = 0;
};

struct SimpleMock : public IInterface
{
    MOCK_METHOD1(method, void(int foo));
};

struct TestFixture : public Test
{
    TestFixture()
    {
        // Don't care about other invocations not expected
        EXPECT_CALL(mock, method(_)).Times(AnyNumber());
    }

    void setupExpectation(int data)
    {
        EXPECT_CALL(mock, method(data)).Times(1);
    }

    SimpleMock mock;
};

TEST_F(TestFixture, SimpleTest)
{
    setupExpectation(2);
    mock.method(2);

    setupExpectation(5);
    mock.method(3); // will fail expectation
}
This fails with the message below, which points into the helper method. That makes it hard to debug or figure out which expectation failed, since I don't see the line where I called setupExpectation or the actual argument value.
test_HarmonicTherapyStateMachineAit.cpp:27: Failure
Actual function call count doesn't match EXPECT_CALL(mock, method(data))...
Expected: to be called once
Actual: never called - unsatisfied and active
Note that my actual use case has more complicated expect calls, which I think warrants splitting them into a separate method (and having multiple expectations in one test). However, I'm not sure how to get a more informative error message out of this.
I've read http://google.github.io/googletest/gmock_cook_book.html#controlling-how-much-information-gmock-prints. However, that provides more information than I really want; all I need is the line number of the function that calls setupExpectation.
I also tried making a macro to wrap the common expectations. That would be easy in this simple case, but my actual use case has more complicated logic that I'd rather not put into a macro.
Even something like EXPECT_CALL(...).Times(1) << "argument: " << foo; would be helpful.
Any help would be appreciated.
I had the same problem recently: gmock provides very little information. My solution was to write a custom matcher.
According to the documentation, the syntax of the EXPECT_CALL macro is:
EXPECT_CALL(mock_object, method(matchers))
A quick solution is to write a matcher that prints a message to the screen when the argument doesn't match.
e.g.:
MATCHER_P2(isEqual, expected, line, "")
{
    bool isEqual = arg == expected;
    if (!isEqual)
        printf("Expected argument: %d, actual: %d (EXPECT_CALL line: %d)\n", expected, arg, line);
    return isEqual;
}
The EXPECT_CALL then changes to
EXPECT_CALL(mock, method(isEqual(3, __LINE__))).Times(1);
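If you want to keep the helper from the question, one possibility (the extra line parameter is just an illustration, not something gmock requires) is to forward the call site's line number into the matcher:

void setupExpectation(int data, int line)
{
    EXPECT_CALL(mock, method(isEqual(data, line))).Times(1);
}

// at the call site:
setupExpectation(5, __LINE__);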
Please let me know if you have another/better solution to this.
I have a Function pass, called firstPass, which does some analysis and populates:
A a;
where
typedef std::map<std::string, B*> A;

class firstPass : public FunctionPass {
    A a;
};

typedef std::vector<C*> D;

class B {
    D d;
};

class C {
    // some class packing information about basic blocks
};
Hence I have a map of vectors, keyed by std::string.
I wrote the associated destructors for these classes. This pass works successfully on its own.
I have another Function pass, called secondPass, which needs this structure of type A to make some transformations. I used:
bool secondPass::doInitialization(Module &M) {
    errs() << "now running secondPass\n";
    a = getAnalysis<firstPass>().getA();
    return false;
}

void secondPass::getAnalysisUsage(AnalysisUsage &AU) const {
    AU.addRequired<firstPass>();
    AU.setPreservesAll();
}
The whole thing compiles fine, but I get a segmentation fault when printing this structure at the end of my first pass, and only if I also run my second pass (the B* entries are null).
To be clear:
opt -load ./libCustomLLVMPasses.so -passA < someCode.bc
prints in doFinalization() and exits successfully
opt -load ./libCustomLLVMPasses.so -passA -passB < someCode.bc
gives a segmentation fault.
How should I wrap this data structure and pass it to the second pass without issues? I tried std::unique_ptr instead of raw pointers but couldn't make it work. I'm not sure if this is the correct approach anyway, so any help will be appreciated.
EDIT:
I solved the segmentation fault: the problem was calling getAnalysis in doInitialization(). I wrote a ModulePass to combine my firstPass and secondPass; its runOnModule is shown below.
bool MPass::runOnModule(Module &M) {
    for (Function& F : M) {
        errs() << "F: " << F.getName() << "\n";
        if (!F.getName().equals("main") && !F.isDeclaration())
            getAnalysis<firstPass>(F);
    }
    StringRef main = StringRef("main");
    A& a = getAnalysis<firstPass>(*(M.getFunction(main))).getA();
    return false;
}
This also let me control the order in which the functions are processed.
Now I can get the output of a pass, but I cannot use it as input to another pass. I think this shows that passes in LLVM are meant to be self-contained.
I'm not going to comment on the quality of the data structures on their C++ merit (it's hard to judge from this minimal example).
Moreover, I wouldn't use the doInitialization method if the actual initialization is that simple, but this is a side comment too. (The docs don't say anything explicit about it, but if doInitialization is run once per Module while the runOn method is run on every Function of that module, that might be an issue.)
I suspect the main issue stems from the fact that A a in your firstPass is bound to the lifetime of the pass object, which is over once the pass is done. The simplest change would be to allocate that object on the heap (e.g. with new) and return a pointer to it when calling getAnalysis<firstPass>().getA().
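A minimal sketch of that idea, assuming getA() is the accessor your second pass calls (the names are otherwise illustrative):

class firstPass : public FunctionPass {
    A* a;                       // heap-allocated, so it outlives this pass object
public:
    static char ID;
    firstPass() : FunctionPass(ID), a(new A()) {}
    A* getA() { return a; }     // hand out the pointer, not a member bound to the pass
    // ... runOnFunction fills *a ...
};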
Please note that using this approach might require manual cleanup if you decide to use a raw pointer.
I'm implementing several Passes in LLVM in order to add a custom optimization.
These Passes are based on FunctionPass and ModulePass.
Right now, each Pass is invoked by its own opt command option, registered
via the RegisterPass template.
In the future, I'd like these Passes to be invoked by only one opt command option.
My idea is as follows:
First, the Function passes run, and finally the Module pass runs.
Each Function pass uses the analysis information of the preceding Function passes.
The final Module pass constructs a new function using the results of the preceding Function passes.
This whole sequence of Passes is invoked by a single opt command option that specifies the final Module pass.
I thought I could achieve this with the addRequired method of the AnalysisUsage class.
However, it doesn't seem to work:
In a Function pass, several Function passes may be addRequired, in order.
In a Function pass, only one Module pass may be addRequired.
In a Function pass (X), a Function pass and a Module pass cannot be addRequired simultaneously;
i.e. running opt with option X ends up in a locked state.
In a Module pass, only one Module pass may be addRequired.
In a Module pass (Y), a Function pass (Z) cannot be addRequired;
i.e. opt with option Y executes only Y, and the Function pass (Z) is ignored.
I am not familiar with the Pass manager mechanism.
Can anybody tell me how to run the Function pass before the Module pass with only one opt command option?
An example run is shown below:
$ opt -stats -load ~/samples/tryPass4.so -MPass4 hello2.ll -S -o tryPass4.ll -debug-pass=Structure
Pass Arguments: -targetlibinfo -datalayout -notti -basictti -x86tti -MPass4 -verify -verify-di -print-module
Target Library Information ↑
Data Layout -FPass4 doesn't appear here
No target information
Target independent code generator's TTI
X86 Target Transform Info
ModulePass Manager
Module Pass
Unnamed pass: implement Pass::getPassName()
FunctionPass Manager
Module Verifier
Debug Info Verifier
Print module to stderr
Pass Arguments: -FPass4 <- here -FPass4 appears, but not executed
FunctionPass Manager
Function Pass
***** Module Name : hello2.ll <- output from the Module pass
The source code for the above is as follows:
using namespace llvm;

namespace {
class tryFPass4 : public FunctionPass {
public:
    static char ID;
    tryFPass4() : FunctionPass(ID) {}
    ~tryFPass4() {}
    virtual bool runOnFunction(llvm::Function &F);
    virtual void getAnalysisUsage(llvm::AnalysisUsage &AU) const;
};

class tryMPass4 : public ModulePass {
public:
    static char ID;
    tryMPass4() : ModulePass(ID) {}
    ~tryMPass4() {}
    virtual bool runOnModule(llvm::Module &M);
    virtual void getAnalysisUsage(llvm::AnalysisUsage &AU) const;
};
}

bool tryFPass4::runOnFunction(Function &F) {
    bool change = false;
    ....
    return change;
}

bool tryMPass4::runOnModule(Module &M) {
    bool change = false;
    ....
    return change;
}

void tryFPass4::getAnalysisUsage(AnalysisUsage &AU) const {
    AU.setPreservesCFG();
}

void tryMPass4::getAnalysisUsage(AnalysisUsage &AU) const {
    AU.setPreservesCFG();
    AU.addRequired<tryFPass4>();
}

char tryFPass4::ID = 0;
static RegisterPass<tryFPass4> X("FPass4", "Function Pass", false, false);
char tryMPass4::ID = 0;
static RegisterPass<tryMPass4> Y("MPass4", "Module Pass", false, false);
I tried to simulate the problem here using LLVM 3.8.1.
I believe your Function pass gets to run here:
Module Pass
Unnamed pass: implement Pass::getPassName()
I do not know why it is marked as unnamed although getPassName is overridden.
A fine detail to watch is that, in order for the Function pass to actually execute its runOnFunction method, you need to invoke the Function&-specific overload of getAnalysis, as in:
getAnalysis<tryFPass4>(f); // where f is the current Function operating on
It seems that if the required pass operates on a smaller unit of IR than the pass that requires it, it needs to be executed explicitly. I might be mistaken, since I have not yet tried it with a BasicBlockPass required by a FunctionPass.
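Putting that together, a rough sketch of how tryMPass4::runOnModule could drive the Function pass explicitly (based on the declarations above; the details of building the new function are elided):

bool tryMPass4::runOnModule(Module &M) {
    bool change = false;
    for (Function &F : M) {
        if (!F.isDeclaration())
            getAnalysis<tryFPass4>(F);   // forces runOnFunction(F) of the required pass
    }
    // ... construct the new function from the collected results ...
    return change;
}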
To learn LLVM I made a ModulePass that runs through the functions, basic blocks, and finally instructions. At some point I want to dig into the instructions and perform analysis. While reading the documentation I came across http://llvm.org/docs/doxygen/html/classllvm_1_1InstVisitor.html and the documentation recommends using these structures to efficiently traverse IR rather than do a lot of if(auto* I = dyn_cast<>()) lines.
I tried making a variation of the documentation example, but for BranchInst:
struct BranchInstVisitor : public InstVisitor<BranchInst> {
    unsigned Count;
    BranchInstVisitor() : Count(0) {}
    void visitBranchInst(BranchInst &BI) {
        Count++;
        errs() << "BI found! " << Count << "\n";
    }
}; // End of BranchInstVisitor
Within my ModulePass, I created the visitor:
for (Module::iterator F = M.begin(), modEnd = M.end(); F != modEnd; ++F) {
    BranchInstVisitor BIV;
    BIV.visit(F);
    ...
Unfortunately, my call to visit(F) fails when I compile:
error: invalid static_cast from type ‘llvm::InstVisitor<llvm::BranchInst>* const’ to type ‘llvm::BranchInst*’ static_cast<SubClass*>(this)->visitFunction(F);
How do I correctly implement an LLVM InstVisitor? Are InstVisitors supposed to be run outside of passes? If I missed documentation, please let me know where to go.
The template parameter should be the type you're declaring, not a type of instruction, like this:
struct BranchInstVisitor : public InstVisitor<BranchInstVisitor>
Each visitor can override as many visit* methods as it wants; it's not like each visitor is tied to one type of instruction. That wouldn't be very useful.
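A sketch of the corrected visitor and how it might be driven from a ModulePass (visit also accepts a Module&, Function&, or BasicBlock& directly):

struct BranchInstVisitor : public InstVisitor<BranchInstVisitor> {
    unsigned Count;
    BranchInstVisitor() : Count(0) {}
    void visitBranchInst(BranchInst &BI) {
        Count++;
        errs() << "BI found! " << Count << "\n";
    }
};

// inside runOnModule(Module &M):
BranchInstVisitor BIV;
BIV.visit(M);   // or BIV.visit(*F) to restrict it to a single function
errs() << "Total branches: " << BIV.Count << "\n";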
I’m writing a CallGraphSCCPass which needs dominator tree information on each function. My getAnalysisUsage is fairly straightforward:
virtual void getAnalysisUsage(AnalysisUsage& au) const override
{
    au.setPreservesAll();
    au.addRequired<DominatorTreeWrapperPass>();
}
The pass is registered like this:
char MyPass::ID = 0;
static RegisterPass<MyPass> tmp("My Pass", "Do fancy analysis", true, true);
INITIALIZE_PASS_BEGIN(MyPass, "My Pass", "Do fancy analysis", true, true)
INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
INITIALIZE_PASS_END(MyPass, "My Pass", "Do fancy analysis", true, true)
When I try to add my pass to a legacy::PassManager, it dies with this error message:
Unable to schedule 'Dominator Tree Construction' required by 'My Pass'
Unable to schedule pass
UNREACHABLE executed at LegacyPassManager.cpp:1264!
I statically link LLVM to my program, and define the pass in my program, too.
Am I doing something wrong? Does it make sense to require the DominatorTreeWrapperPass from a CallGraphSCCPass?
I also sent the question on the LLVM ML, but the server appears to be down at the moment.
If it makes any difference, I'm using LLVM 3.7 trunk, up-to-date as of a few weeks ago.
CallGraphSCCPass appears to be a special case that doesn't support every analysis very well. The simplest thing to do is to convert the pass to a ModulePass, and use <llvm/ADT/SCCIterator.h> to construct call graph SCCs from runOnModule, like how CGPassManager does it.
virtual void getAnalysisUsage(AnalysisUsage& au) const override
{
    au.addRequired<CallGraphWrapperPass>();
    // rest of your analysis usage here...
}

virtual bool runOnModule(Module& m) override
{
    CallGraph& cg = getAnalysis<CallGraphWrapperPass>().getCallGraph();
    scc_iterator<CallGraph*> cgSccIter = scc_begin(&cg);
    CallGraphSCC curSCC(&cgSccIter);
    while (!cgSccIter.isAtEnd())
    {
        const std::vector<CallGraphNode*>& nodeVec = *cgSccIter;
        curSCC.initialize(nodeVec.data(), nodeVec.data() + nodeVec.size());
        runOnSCC(curSCC);
        ++cgSccIter;
    }
    return false;
}

bool runOnSCC(CallGraphSCC& scc)
{
    // your stuff here
}
Module passes have no issue requiring DominatorTreeWrapperPass or other analyses like MemoryDependenceAnalysis. However, this naive implementation might not support modifications to the call graph the way CGPassManager does.
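For completeness, the runOnSCC stub above could, for example, fetch the dominator tree per function with the Function-specific getAnalysis overload (a sketch; declarations are skipped since they have no body):

bool runOnSCC(CallGraphSCC& scc)
{
    for (CallGraphNode* node : scc)
    {
        Function* f = node->getFunction();
        if (f == nullptr || f->isDeclaration())
            continue;
        DominatorTree& domTree = getAnalysis<DominatorTreeWrapperPass>(*f).getDomTree();
        // use domTree here
    }
    return false;
}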
CallGraphSCCPass is a ModulePass and is therefore handled and invoked by the ModulePassManager. The ModulePassManager invokes the FunctionPassManager (which manages FunctionPasses) and hands control to it only after the Module passes have run (in other words, ModulePasses are queued before FunctionPasses in the PassManager's pipeline). So you cannot ask the PassManager for a FunctionPass while you are inside a ModulePass, but you can ask for a ModulePass inside a FunctionPass, since those have already run. For the same reason, you cannot ask for a LoopPass inside a FunctionPass.