How to get loopinfo in Module Pass - llvm

I want to get loopinfo in each function by iterating through functions in Module Pass. My code is as follows:
for (auto &F : M) {
  if (!F.isDeclaration()) {
    LoopInfo &LI = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
  }
}
However, I get the error below. I think my pass settings are wrong, but I'm not sure how to resolve this:
clang-12: /llvmtest/llvm/lib/IR/LegacyPassManager.cpp:1645: virtual
std::tuple<llvm::Pass*, bool>
{anonymous}::MPPassManager::getOnTheFlyPass(llvm::Pass*,
llvm::AnalysisID, llvm::Function&): Assertion `FPP && "Unable to find
on the fly pass"' failed. PLEASE submit a bug report to
https://bugs.llvm.org/ and include the crash backtrace, preprocessed
source, and associated run script.

You cannot do this with the legacy pass manager. In the legacy pass manager, every pass can only get info from same-scoped passes -- module from module, function from function, loop from loop -- plus one exception that allows function passes to get data from module passes.
With the new pass manager, you'd create a LoopAnalysisManager and add the analysis pass you want and run it. See https://llvm.org/docs/NewPassManager.html#using-analyses .
Note that most of LLVM is presently written to support both pass managers at once. If you do this, you'll need to write your pass differently from most of LLVM's passes: you can't use the types with names like "WrapperPass" that exist to support both the legacy and new pass managers.
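For illustration, here is a minimal sketch of a new-pass-manager module pass that reaches LoopInfo for each function through the FunctionAnalysisManager proxy; the pass name MyModulePass is made up for this example:

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

struct MyModulePass : llvm::PassInfoMixin<MyModulePass> {
  llvm::PreservedAnalyses run(llvm::Module &M, llvm::ModuleAnalysisManager &MAM) {
    // Reach the function-level analysis manager through the module proxy.
    auto &FAM =
        MAM.getResult<llvm::FunctionAnalysisManagerModuleProxy>(M).getManager();
    for (llvm::Function &F : M) {
      if (F.isDeclaration())
        continue;
      // LoopAnalysis is the new-pass-manager analysis that produces LoopInfo.
      llvm::LoopInfo &LI = FAM.getResult<llvm::LoopAnalysis>(F);
      (void)LI; // use LI here
    }
    return llvm::PreservedAnalyses::all();
  }
};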

Related

Get ScriptOrigin from v8::Module

It seems trivial, but I've searched far and wide.
I'm using this resource to make v8 run with ES Modules and I'm trying to implement my own search/load algorithm. Thus far, I've managed to make a simple system which loads a file from a known location, however I'd like to implement external modules. This means that the known location is actually unknown throughout the application. Take the following directory tree as an example:
~/
- index.js
    import 'module1_index'; // This is successfully resolved to /libs/module1/module1_index.js
/libs/module1/
- module1_index.js
    export * from './lib.js' // This import fails because it is looking for ./lib.js in ~/source
- lib.js
    export /* literally anything */
The above example begins by executing the index.js file from ~. When module1_index.js is executed, lib.js is looked for from ~ and consequently fails. In order to address this, the files must be looked for relative to the file being executed at the moment, however I have not found a means to do this.
First Attempt
I'm given the opportunity to look for the file in the callResolve method (main.cpp:280):
v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context, v8::Local<v8::String> specifier, v8::Local<v8::Module> referrer)
or in loadModule (main.cpp:197)
v8::MaybeLocal<v8::Module> loadModule(char code[], char name[], v8::Local<v8::Context> cx)
however, as mentioned, I have found no function by which to extract the ScriptOrigin from the module. I should mention that when files are successfully resolved, the ScriptOrigin is initialized with the exact path to the file and is reliable.
Second Attempt
I set up a stack which keeps track of the current file being executed. Every import that is made is pushed onto the stack; once the file has finished executing, it is popped. This also did not work, as there was no way to reliably determine when the file had finished executing.
It seems that the loadModule function does just that: loads. It does not execute, so I cannot pop after the module has loaded, as the imports are not yet fully resolved. The checkModule/execModule functions are only invoked on dynamic imports, making them useless for determining the completion of a static import.
I'm at a loss. I'm not familiar with v8 enough to know where to look, although I have dug through some NodeJS source code looking for an implementation, to no avail.
Any pointers are greatly appreciated.
Thanks.
Jake.
I don't know much about module resolution, but looking at V8's sources, I can see an example mapping a v8::Module to a std::string absolute_path, which sounds like what you're looking for. I'm not copying the whole code here, because the way it uses custom metadata is a bit involved; the short story is that it keeps a std::unordered_map to keep data about each module's source on the side. (I wonder if it would be possible to use Module::ScriptId() as that map's key, for simplification.)
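As an illustration of that side-table idea, here is a minimal sketch that keys the paths off Module::ScriptId(); the map and helper names are assumptions for this example, not V8 API:

#include <string>
#include <unordered_map>
#include "v8.h"

// Side table: remember the absolute path each module was loaded from.
static std::unordered_map<int, std::string> module_paths;

void rememberModulePath(v8::Local<v8::Module> module, const std::string &absolute_path) {
  module_paths[module->ScriptId()] = absolute_path;
}

// In the resolve callback, look up where the referrer came from so that
// relative specifiers can be resolved against its directory.
std::string lookupReferrerPath(v8::Local<v8::Module> referrer) {
  auto it = module_paths.find(referrer->ScriptId());
  return it != module_paths.end() ? it->second : std::string();
}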
Code search finds a bunch more example uses of InstantiateModule, mostly in tests. Tests often serve as useful examples/documentation :-)

LLVM GetAnalysis() failing with required passes

I have a custom set of passes created using LLVM to run on some bitcode.
I've managed to get it to compile, but whenever I try to run it with a pass that calls getAnalysis() on another pass type it fails with:
Assertion `ResultPass && "getAnalysis*() called on an analysis that was not " "'required' by pass!"' failed.
The custom pass that is calling getAnalysis() requires its type, specifically;
bool Operators::doInitialization() {
  ParseConfig &parseConfig = getAnalysis<ParseConfig>(); // Fails here.
}
.
.
.
void Operators::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<ParseConfig>();
  return;
}
I've spent a few days on this and am quite lost. I know the following is true:
ParseConfig is registered successfully via the RegisterPass<> template, and I have stepped through it in GDB to find that it does get registered.
Also using GDB, I have found that inside getAnalysis() the list of registered passes is always empty (which causes the assertion).
Important Note: I will eventually be using this on a Fortran project which is compiled with Flang, so the LLVM library version I'm using is the Flang fork (found here). That fork is right around LLVM 7.1, but the specific files associated with registering passes seem to be no different from the current LLVM library.
Moving the getAnalysis() call from doInitialization() to runOnFunction() will make it work.
From the LLVM documentation:
This method call getAnalysis* returns a reference to the pass desired. You may get a runtime assertion failure if you attempt to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This method can be called by your run* method implementation, or by any other local method invoked by your run* method.
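For illustration, a minimal sketch of that fix under the legacy pass manager, reusing the question's Operators and ParseConfig names (the body is otherwise made up):

bool Operators::runOnFunction(llvm::Function &F) {
  // getAnalysis() is safe here: by the time runOnFunction() runs, the pass
  // manager has already scheduled and run the required ParseConfig pass.
  ParseConfig &parseConfig = getAnalysis<ParseConfig>();
  // ... use parseConfig ...
  return false; // return true only if the function was modified
}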

Running standard optimization passes on a LLVM module

Say I have a valid LLVM module:
std::unique_ptr<llvm::Module> module;
I want to run LLVM traditional optimization passes on it:
llvm::PassBuilder passBuilder;
llvm::ModulePassManager modulePassManager = passBuilder.buildPerModuleDefaultPipeline(llvm::PassBuilder::OptimizationLevel::O3);
llvm::ModuleAnalysisManager moduleAnalysisManager;
passBuilder.registerModuleAnalyses(moduleAnalysisManager);
modulePassManager.run(*module, moduleAnalysisManager);
Unfortunately, the call crashes, and debugging shows that the moduleAnalysisManager has only the module passes, but not the function ones that are wrapped with the proxy class.
How should I set up modulePassManager to handle all (module) passes of a specific level? I don't have individual functions, so I can't run the function passes just on them.
The proper LLVM way is to create all analysisManagers and then link all of them together. Let's start by creating them:
llvm::PassBuilder passBuilder;
llvm::LoopAnalysisManager loopAnalysisManager(true); // true is just to output debug info
llvm::FunctionAnalysisManager functionAnalysisManager(true);
llvm::CGSCCAnalysisManager cGSCCAnalysisManager(true);
llvm::ModuleAnalysisManager moduleAnalysisManager(true);
Then we register each manager individually, and then we cross-register them. This means the number of managers here is fixed by design, and if LLVM (7 at this time) changes the number of managers, this will need to be adapted:
passBuilder.registerModuleAnalyses(moduleAnalysisManager);
passBuilder.registerCGSCCAnalyses(cGSCCAnalysisManager);
passBuilder.registerFunctionAnalyses(functionAnalysisManager);
passBuilder.registerLoopAnalyses(loopAnalysisManager);
// This is the important line:
passBuilder.crossRegisterProxies(
    loopAnalysisManager, functionAnalysisManager, cGSCCAnalysisManager, moduleAnalysisManager);
Once the passBuilder is set up, we can finally build the optimization pipeline for the module and run it with the moduleAnalysisManager.
llvm::ModulePassManager modulePassManager =
    passBuilder.buildPerModuleDefaultPipeline(llvm::PassBuilder::OptimizationLevel::O3);
modulePassManager.run(*module, moduleAnalysisManager);
This will run module level passes as well as all inner passes that LLVM can run on pieces of the module (function level, loop level...).

llvm irbuilder call instruction throwing exception on function inlining pass

I'm new to LLVM. I am using the clang c++ API to compile multiple stub files (in c) to IR, and then stick them together using IR builder (after linking them) to eventually run via JIT.
All this works great, unless I add a functionInlining pass to my optimizations, at which point one of these function calls made in IR builder will trigger the following exception when the pass manager is run:
Assertion failed: (New->getType() == getType() && "replaceAllUses of value with new value of different type!"), function replaceAllUsesWith, file /Users/mike/Development/llvm/llvm/lib/IR/Value.cpp, line 356.
This is how I make the call instruction (pretty straight forward):
Function *kernelFunc = mModule->getFunction((kernel->Name() + StringRef("_") + StringRef(funcName)).str());
if (kernelFunc) {
  CallInst *newInst = builder.CreateCall(kernelFunc, args);
}
Later the module is optimized:
legacy::PassManager passMan;
PassManagerBuilder Builder;
Builder.OptLevel = 3;
// Builder.Inliner = llvm::createFunctionInliningPass(); // commenting this back in triggers the exception
Builder.populateModulePassManager(passMan);
passMan.run(*mModule); // the exception occurs before this call returns
Any ideas what to look for?
Try running llvm::verifyModule on your module to see if it's correct. You might have an error and have been getting lucky beforehand, but it tripped something up in the inliner.
In general, assertions check a subset of the things that can be wrong with your module, but verify checks a lot more.
It could be a bug in LLVM, but more than likely it's a bad module; that's easy to have happen.
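A minimal sketch of that check, assuming the module variable is the question's mModule:

#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

// verifyModule() returns true if the module is broken and, given a stream,
// prints what is wrong with it. Run this before passMan.run(*mModule).
if (llvm::verifyModule(*mModule, &llvm::errs())) {
  // Inspect the reported IR problems before optimizing.
}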
So I finally set up my dev environment so I could inspect the assertion call in the debugger. It turns out the basic block being replaced had a different context than the one it was being replaced with. Going back and making sure IRBuilder was using the same context as the IR parsers solved the problem.
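For illustration, a sketch of that fix (the file name here is made up): parse the stubs and construct the IRBuilder with one shared LLVMContext so the values they produce use the same types.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"

llvm::LLVMContext context;                        // one context shared everywhere
llvm::SMDiagnostic err;
std::unique_ptr<llvm::Module> mModule =
    llvm::parseIRFile("stub.ll", err, context);   // stubs parsed into the shared context
llvm::IRBuilder<> builder(context);               // builder uses the same context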

RestKit entity mapping for UnitTests does not work

I'm trying to create my RKEntityMapping outside of my UnitTest. The problem I have is it only works if I create it inside my test. For example, this works:
RKEntityMapping *accountListMapping = [RKEntityMapping mappingForEntityForName:@"CustomerListResponse" inManagedObjectStore:_sut.managedObjectStore];
[accountListMapping addAttributeMappingsFromDictionary:@{@"count": @"pageCount",
                                                         @"page": @"currentPage",
                                                         @"pages": @"pages"}];
The following, however, does not work. The call to accountListMapping returns exactly the mapping shown above, using the same managed object store:
RKEntityMapping *accountListMapping = [_sut accountListMapping];
When the RKEntityMapping is created in _sut I get this error:
<unknown>:0: error: -[SBAccountTests testAccountListFetch] : 0x9e9cd10: failed with error:
Error Domain=org.restkit.RestKit.ErrorDomain Code=1007 "Cannot perform a mapping operation
with a nil destination object." UserInfo=0x8c64490 {NSLocalizedDescription=Cannot perform
a mapping operation with a nil destination object.}
I'm assuming the nil destination object it is referring to is destinationObject:nil.
RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
                                           sourceObject:_parsedJSON
                                      destinationObject:nil];
Make sure that the file you have created has target membership in both your main target and your test target. You can do this by:
- clicking on the .m file of your class,
- opening the utilities toolbar (the one on the right), and
- ticking both targets in the target membership section.
This is because if your class does not have target membership in your test target, the test target actually creates a copy of the class you have created, meaning it has a different binary file from the main target. That class then uses the test target's version of the RestKit binary rather than the main project's RestKit. This makes the isKindOfClass check fail when it tries to see whether the mapping you have passed is the main project's RKObjectMapping, because it is actually the RKObjectMapping from the test project's version of RestKit, so your mapping doesn't get used and you get your crash.
At least this is my understanding of how the LLVM compiler works. I'm new to iOS dev, so please feel free to correct me if I got something wrong.
This problem may also be caused by duplicated class definitions, when including RestKit components for multiple targets individually when using Cocoapods.
For more information on this have a look at this answer.
I used a category on the mapped object, for example RestKitMappings+SomeClass:
+ (RKObjectMapping *)responsemappings {
    return mappings;
}
Now this category has to be included in the test target as well, otherwise the mapping will not be passed.
When you're running a test you aren't using the entire mapping infrastructure, so RestKit isn't going to create a destination object for you. It's only going to test the mapping itself. So you need to provide all three pieces of information to the test method or it can't work.