I want to use the LoopInfo pass in the runOnSCC() method of a pass derived from CallGraphSCCPass. More specifically, I want to determine whether a basic block is in a loop or not inside the runOnSCC() method.
So, the code should be like:
LoopInfo &lf = getAnalysis<LoopInfo>(F);
Loop *bbLoop = lf.getLoopFor(BB);
and the getAnalysisUsage:
void AnalyzerPass::getAnalysisUsage(AnalysisUsage &Info) const {
    Info.addRequired<CallGraphWrapperPass>();
    Info.addRequired<LoopInfo>();
    Info.addRequired<ScalarEvolution>();
    Info.addPreserved<CallGraphWrapperPass>();
    Info.setPreservesCFG();
}
and the Initialization:
INITIALIZE_PASS_BEGIN(AnalyzerPass, "analyzer",
"analysis", false, false)
INITIALIZE_PASS_DEPENDENCY(CallGraphWrapperPass)
INITIALIZE_PASS_DEPENDENCY(LoopInfo)
INITIALIZE_PASS_DEPENDENCY(ScalarEvolution)
INITIALIZE_PASS_END(AnalyzerPass, "analyzer",
"analysis", false, false)
Finally, I use a PassManager:
PassManager pm;
pm.add(new LoopInfo());
pm.add(new ScalarEvolution());
pm.add(new AnalyzerPass());
pm.run(module);
When executing pm.run(module), I get the error:
Unable to schedule 'Natural Loop Info' for 'analysis'.
Unable to schedule pass.
Where am I going wrong? There is very little information available about how to use CallGraphSCCPass. Any help would be much appreciated!
I am making an application in C++, and it requires a config file that will be read and interpreted on launch. It will contain things such as:
Module1=true
Now, my original plan was to store it all in variables and simply have
if (module1) {
    DO_STUFF();
}
However, this seems wasteful, as it would constantly check a value that never changes. Any ideas?
Optimize the code only if you find a bottleneck with a profiler. Branch prediction should do its thing here: module1 never changes, so even if you check it in a loop there shouldn't be a noticeable performance loss.
If you want to experiment, you can branch once, and make a pointer point to the right function:
using func_ptr = void (*)();
func_ptr p = [](){};   // no-op by default
if (module1)
    p = DO_STUFF;      // branch once, up front
while (...)
    p();               // the hot loop just calls through the pointer
But this is just something to profile, look at the assembly...
There are also slower but more comfortable ways you could store the configuration, e.g. in an array with enumerated indexes, or in a map. If I were to get some value in a loop, I'd do:
auto module1 = modules[MODULE1];        // array and enumeration
//auto module1 = modules.at("module1"); // map and string
while (...)
{
    if (module1)
        DO_STUFF;
    ...
}
So I'd end up with what you already have.
Performance-wise, a boolean check is no problem unless you start doing it millions or billions of times. Maybe you can start merging the code that belongs to module1, but other than that you'd have to check for it like you currently do.
This really isn't an issue. If your program requires that Module1 be true, then let it check the value and continue on. It won't affect your performance unless it is being checked too many times.
One thing you could do is make an inline function if it is being checked too many times. However, you will have to make sure the function isn't too big, otherwise it will become a bigger bottleneck.
Sorry guys, didn't spot this when I looked it up:
MSDN
So I check the boolean once on launch and then I don't need to anymore as only the correct functions are launched.
Depending on how your program is set up and how the variables change the behaviour of the code you might be able to use function pointers:
std::function<void(int)> DoStuff = [](int) {}; // no-op unless Module1 enables it
if (Module1)
{
    DoStuff = Module1Stuff;
}
And then later:
while (true)
{
    DoStuff(ImportantVariable);
}
See http://en.cppreference.com/w/cpp/utility/functional/function for further reference.
Not that I think it'll help all that much but it's an alternative to try out at least.
This can be solved if you know all the use cases of the values you check. For example, if you've read your config file and module1 is true, you do one thing; if it is false, another. Let's start with an example:
#include <memory>

class ConfigFileWorker {
public:
    virtual ~ConfigFileWorker() = default;
    virtual void run() = 0;
};

class WithModule1Worker : public ConfigFileWorker {
public:
    void run() final override {
        // do stuff as if your `Module1` is true
    }
};

class WithoutModule1Worker : public ConfigFileWorker {
public:
    void run() final override {
        // do stuff as if your `Module1` is false
    }
};

int main() {
    std::unique_ptr<ConfigFileWorker> worker;
    const bool Module1 = read_config_file(file, "Module1");
    if (Module1) { // you check this only once during launch and just use `worker` all the time after
        worker.reset(new WithModule1Worker);
    } else {
        worker.reset(new WithoutModule1Worker);
    }
    // Here and after, just use the pointer with `run()` - then you will not need to check the variable all the time, you'll just perform the action.
}
So you have predefined behaviour for the two cases (true and false) and just create an object for one of them while parsing the config file on launch. This is Java-like code, but of course you may use function pointers, std::function, and other abstractions instead of a base class; however, the base-class option has more flexibility in my opinion.
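For comparison, a minimal sketch of the std::function variant just mentioned, reusing the assumed read_config_file helper and file variable from the example above (names are illustrative, not a fixed API):

#include <functional>

int main() {
    const bool Module1 = read_config_file(file, "Module1"); // same assumed helper as above

    // Bind the behaviour once, at launch, instead of re-checking the flag later.
    std::function<void()> work;
    if (Module1)
        work = [] { /* behaviour with Module1 == true */ };
    else
        work = [] { /* behaviour with Module1 == false */ };

    // From here on, just call work(); the flag is never read again.
    work();
}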
I am writing a pass that needs information about loops. Therefore I am overriding getAnalysisUsage(AnalysisUsage&) to let the pass manager know that my pass depends on LoopInfoWrapperPass. However, when I try to get the result of that analysis, LLVM asserts that the analysis wasn't required by my pass. Here's a simple pass that I'm having trouble with:
#include <llvm/Pass.h>
#include <llvm/Support/raw_ostream.h>
#include <llvm/Analysis/LoopInfo.h>

using namespace llvm;

struct Example : public ModulePass {
    static char ID;
    Example() : ModulePass(ID) {}

    bool runOnModule(Module& M) override {
        errs() << "what\n";
        LoopInfo& loops = getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
        loops.print(errs());
        return false;
    }

    virtual void getAnalysisUsage(AnalysisUsage& AU) const override {
        errs() << "here\n";
        AU.addRequired<LoopInfoWrapperPass>();
    }
};

char Example::ID = 0;
static RegisterPass<Example> X("example", "an example", false, false);
When I run this pass, the two debug statements are printed in the correct order (here then what) but when getAnalysis<LoopInfoWrapperPass>() is called, I get this assertion error:
opt: /home/matt/llvm/llvm/include/llvm/PassAnalysisSupport.h:211: AnalysisType& llvm::Pass::getAnalysisID(llvm::AnalysisID) const [with AnalysisType = llvm::LoopInfoWrapperPass; llvm::AnalysisID = const void*]: Assertion `ResultPass && "getAnalysis*() called on an analysis that was not " "'required' by pass!"' failed.
This is the same method that is given in LLVM's documentation on writing passes, so I'm not quite sure what's going wrong here. Could anyone point me in the right direction?
LoopInfoWrapperPass is derived from FunctionPass. Your Example class, however, derives from ModulePass. It works at the module level, so you'll need to tell LoopInfoWrapperPass which function you want to analyze. Basically, you might want to loop over every function F in the module and use getAnalysis<LoopInfoWrapperPass>(F).
Alternatively, the easiest way to fix the code above is to replace ModulePass with FunctionPass and runOnModule(Module& M) with runOnFunction(Function& F). Then, getAnalysis<LoopInfoWrapperPass>() should work just fine.
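If you stay with the ModulePass and loop over the functions as described above, a minimal sketch could look like this (it assumes addRequired<LoopInfoWrapperPass>() stays in getAnalysisUsage, and it skips functions without a body):

bool runOnModule(Module& M) override {
    for (Function& F : M) {
        if (F.isDeclaration())
            continue; // external declarations have no IR, so no loop info to compute
        // Ask for the per-function result explicitly by passing F.
        LoopInfo& loops = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
        loops.print(errs());
    }
    return false;
}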
I’m writing a CallGraphSCCPass which needs dominator tree information on each function. My getAnalysisUsage is fairly straightforward:
virtual void getAnalysisUsage(AnalysisUsage& au) const override
{
    au.setPreservesAll();
    au.addRequired<DominatorTreeWrapperPass>();
}
The pass is registered like this:
char MyPass::ID = 0;
static RegisterPass<MyPass> tmp("My Pass", "Do fancy analysis", true, true);
INITIALIZE_PASS_BEGIN(MyPass, "My Pass", "Do fancy analysis", true, true)
INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
INITIALIZE_PASS_END(MyPass, "My Pass", "Do fancy analysis", true, true)
When I try to add my pass to a legacy::PassManager, it dies with this error message:
Unable to schedule 'Dominator Tree Construction' required by 'My Pass'
Unable to schedule pass
UNREACHABLE executed at LegacyPassManager.cpp:1264!
I statically link LLVM to my program, and define the pass in my program, too.
Am I doing something wrong? Does it make sense to require the DominatorTreeWrapperPass from a CallGraphSCCPass?
I also sent the question to the LLVM mailing list, but the server appears to be down at the moment.
If it makes any difference, I'm using LLVM 3.7 trunk, up-to-date as of a few weeks ago.
CallGraphSCCPass appears to be a special case that doesn't support every analysis very well. The simplest thing to do is to convert the pass to a ModulePass, and use <llvm/ADT/SCCIterator.h> to construct call graph SCCs from runOnModule, like how CGPassManager does it.
virtual void getAnalysisUsage(AnalysisUsage& au) const override
{
    au.addRequired<CallGraphWrapperPass>();
    // rest of your analysis usage here...
}

virtual bool runOnModule(Module& m) override
{
    CallGraph& cg = getAnalysis<CallGraphWrapperPass>().getCallGraph();
    // Walk the call graph in bottom-up SCC order, like CGPassManager does.
    scc_iterator<CallGraph*> cgSccIter = scc_begin(&cg);
    CallGraphSCC curSCC(&cgSccIter);
    while (!cgSccIter.isAtEnd())
    {
        const std::vector<CallGraphNode*>& nodeVec = *cgSccIter;
        curSCC.initialize(nodeVec.data(), nodeVec.data() + nodeVec.size());
        runOnSCC(curSCC);
        ++cgSccIter;
    }
    return false;
}

bool runOnSCC(CallGraphSCC& scc)
{
    // your stuff here
}
Module passes have no issue requiring DominatorTreeWrapperPass, or other analyses like MemoryDependenceAnalysis. However, this naive implementation might not support modifications to the call graph the way CGPassManager does.
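For example, the runOnSCC stub above could then pull the per-function dominator tree like this (a sketch; it assumes getAnalysisUsage also adds addRequired<DominatorTreeWrapperPass>(), and it skips external nodes and declarations):

bool runOnSCC(CallGraphSCC& scc)
{
    for (CallGraphNode* node : scc)
    {
        Function* f = node->getFunction();
        if (!f || f->isDeclaration())
            continue; // external node or body-less function

        DominatorTree& dt = getAnalysis<DominatorTreeWrapperPass>(*f).getDomTree();
        // ... use dt for this function ...
    }
    return false;
}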
CallGraphSCCPass is a ModulePass and hence is handled and invoked by the ModulePassManager. The ModulePassManager invokes the FunctionPassManager (which manages FunctionPasses) and hands over to it after running the module passes (in other words, ModulePasses are queued before FunctionPasses in the PassManager's pipeline). So you cannot ask the PassManager for a FunctionPass while you are inside a ModulePass, but you can ask for a ModulePass inside a FunctionPass, since those have already run. For the same reason you cannot ask for a LoopPass inside a FunctionPass.
I have written C++ tests with GTest which basically work like this
MyData data1 = runTest(inputData);
MyData data2 = loadRegressionData();
compareMyData(data1,data2);
with
void compareMyData(MyData const& data1, MyData const& data2)
{
    ASSERT_EQ(data1.count, data2.count);
    // pseudo:
    foreach element in data1/data2:
        EXPECT_EQ(data1.items[i], data2.items[i]);
}
Now I would like to save the data1 contents to a file IFF the test fails, and I don't see an elegant solution yet.
First approach: Make compareMyData return the comparison result. This can't work with the ASSERT_EQ, which is fatal. Writing if (!EXPECT_EQ(...)) doesn't compile, so the only way I found is:
bool compareMyData(MyData const& data1, MyData const& data2)
{
    EXPECT_EQ(data1.count, data2.count);
    if (data1.count != data2.count)
        return false;
    // pseudo:
    foreach element in data1/data2:
    {
        EXPECT_EQ(data1.items[i], data2.items[i]);
        if (data1.items[i] != data2.items[i])
            return false;
    }
    return true;
}
Not very elegant :-(
Second idea: Run code when the test failed
I know I can implement ::testing::EmptyTestEventListener and get notified if a test fails, but that doesn't give me the data I want to write to the file, and it is "far away" from the place where I'd like to have it. So my question here is: is there a way to run code at the end of a test if it failed (e.g. by catching an exception)?
To ask more generally: how would you solve this?
In the advanced guide that VladLosev linked, it says:
Similarly, HasNonfatalFailure() returns true if the current test has at least one non-fatal failure, and HasFailure() returns true if the current test has at least one failure of either kind.
So calling HasNonfatalFailure might be what you want.
(I'm pretty late, but had the same question.)
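A minimal sketch of that approach, reusing the names from the question (SaveToFile is a hypothetical helper you would provide):

TEST(MyDataTest, Regression) {
    MyData data1 = runTest(inputData);
    MyData data2 = loadRegressionData();
    compareMyData(data1, data2);

    // True if any EXPECT_/ASSERT_ in this test (including inside compareMyData) has failed so far.
    if (::testing::Test::HasFailure())
        SaveToFile(data1, "regression_failure.dat"); // hypothetical helper
}

Note that a fatal ASSERT_ inside compareMyData only aborts that (void) helper function, so control still returns to the test body and HasFailure() sees the failure.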
The way around:
bool HasError = false;
FAIL() << (HasError = true);
if (HasError) {
    // do something
}
Instead of FAIL() it could be ASSERT_..., EXPECT_..., and so on. (In the output you'll see "true" appended to the failure message; that's the fee for the shortcut.)
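For instance, with a non-fatal EXPECT_ the streamed expression is only evaluated when the assertion actually fails, so the flag is only set on failure; a small sketch reusing the data1/data2 names from the question:

bool HasError = false;
EXPECT_EQ(data1.count, data2.count) << (HasError = true);
if (HasError) {
    // the comparison failed; e.g. dump data1 to a file here
}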
I haven't coded much for probably a few years, and I wanted to make a really basic thread manager in C++ for an idea I had. I have run into an issue where I get this error:
ThreadManager.cpp:49:37: error: cannot convert
'ThreadManager::updateLoop' from type 'DWORD (ThreadManager::)(LPVOID)
{aka long unsigned int (ThreadManager::)(void*)}' to type
'LPTHREAD_START_ROUTINE {aka long unsigned int
(__attribute__((__stdcall__)) *)(void*)}'
Yet, I don't know how to attempt to fix it. Here is my code; I couldn't figure out how to paste it in here with formatting (it said I needed 4 spaces on each line, but that seemed like it would take a while), so I put it on pastebin:
ThreadManager.cpp: http://pastebin.com/2bL3mTqv
ThreadManager.h: http://pastebin.com/7xETj5BK
Like I said, I haven't programmed much for a LONG time, and I am trying to get back into it with what I remember, so any help would be appreciated.
The comments have covered the basics, but here it is spelled out: you can't pass a member function of a class to a call that expects a normal function. To do what you want, I'd do the following:
// New function: a plain trampoline with the signature CreateThread expects.
DWORD WINAPI threadMain(LPVOID classPointer)
{
    ThreadManager* realClass = static_cast<ThreadManager*>(classPointer);
    realClass->updateLoop();
    return 0;
}

ThreadManager::ThreadManager(int max)
{
    // Assign maxThreads to max value
    maxThreads = max;
    // Start updateThread, and let it run updateLoop() until terminated
    updateThread = CreateThread(
        NULL,       // default security attributes
        0,          // use default stack size
        threadMain, // thread function name
        this,       // argument to thread function
        0,          // use default creation flags
        NULL);      // ignore thread identifier
    // Check the return value for success.
    // If it failed, exit the process.
    if (updateThread == NULL) {
        ExitProcess(3);
    }
}
Now I know you want an extra argument, so probably use std::tuple to pass in the "this" pointer and any extra arguments you actually want.
Now having said all of that, take the advice of others and use std::thread and such, not the win32-specific calls unless you really need to.
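A minimal sketch of the std::thread route, assuming updateThread is changed from a HANDLE to a std::thread member and updateLoop takes no arguments (adjust to your real signature):

#include <thread>

ThreadManager::ThreadManager(int max)
{
    maxThreads = max;
    // std::thread can bind a member function to `this` directly,
    // so no free trampoline function is needed.
    updateThread = std::thread(&ThreadManager::updateLoop, this);
}

ThreadManager::~ThreadManager()
{
    if (updateThread.joinable())
        updateThread.join(); // wait for the loop to finish before destruction
}

Extra arguments can simply be appended after this in the std::thread constructor, which also removes the need for the std::tuple workaround.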