I have the following line of code throwing an exception in the LLVM IR C++ API:
AllocaInst *allocate = builder->CreateAlloca(objectType);
When run, it throws the following exception:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x38)
* frame #0: 0x00000001000302fa birdd`llvm::BasicBlock::getModule() const + 4
frame #1: 0x0000000100003018 birdd`llvm::IRBuilderBase::CreateAlloca(llvm::Type*, llvm::Value*, llvm::Twine const&) [inlined] llvm::BasicBlock::getModule(this=<unavailable>) at BasicBlock.h:117:68 [opt]
frame #2: 0x0000000100003013 birdd`llvm::IRBuilderBase::CreateAlloca(this=0x0000000101102230, Ty=0x0000000101801000, ArraySize=0x0000000000000000, Name=0x00007ffeefbff038) at IRBuilder.h:1598 [opt]
This gave me an indication that getModule() is returning an invalid pointer. The funny thing is that the builder and the module share the same LLVMContext.
So I decided to run it through verifyModule as follows:
verifyModule(*builder->GetInsertBlock()->getModule());
The same error. But when I access the module object directly, it appears to be fine.
Here is my initialisation code:
static LLVMContext context;
std::unique_ptr<Module> module = std::make_unique<Module>("Main", context);
std::unique_ptr<IRBuilder<>> builder = std::make_unique<llvm::IRBuilder<>>(context);
I'm stuck. Any help will be appreciated!
IRBuilder will not be able to pick up the Module from the context (the same context may be used by multiple modules), and in any case the Module alone would not be enough: the builder also needs to know the point at which instructions should be inserted. So you'd either need to provide it with a BasicBlock at construction time or set one explicitly via
builder->SetInsertPoint(BB);
or even
builder->SetInsertPoint(Inst);
if you want to insert somewhere other than at the end of BB.
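For the question's setup, a minimal sketch of the missing step might look like the following (it reuses the builder, module, and objectType names from the question; the function and block names are placeholders):

// Create a function and an entry block to insert into.
FunctionType *fnTy = FunctionType::get(Type::getVoidTy(context), /*isVarArg=*/false);
Function *fn = Function::Create(fnTy, Function::ExternalLinkage, "entryFn", module.get());
BasicBlock *entry = BasicBlock::Create(context, "entry", fn);

// The builder now has a valid insertion point, so GetInsertBlock()->getModule() works.
builder->SetInsertPoint(entry);
AllocaInst *allocate = builder->CreateAlloca(objectType);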
A few side notes:
I'd suggest following LLVM's coding style of upper-cased variable names (this would make maintenance easier later on).
Builders are cheap to create, so most often they are just created as local variables in the functions that need them:
IRBuilder<> IRB(BB);
I'm trying to implement a very simple, local HTTP server for my C++ application; I'm using Xcode on macOS. I have to implement it from within a dynamically loaded library rather than the "main" thread of the program. I decided to try boost::beast, since another part of the application already uses Boost libraries. I'm trying to implement this example, but within the context of my library and not as part of its main program.
The host application for this library calls the following function to start a localhost server, but it crashes when instantiating "acceptor":
extern "C" BASICEXTERNALOBJECT_API long startLocalhost(TaggedData* argv, long argc, TaggedData * retval) {
try {
string status;
retval->type = kTypeString;
auto const address = net::ip::make_address("127.0.0.1");
unsigned short port = static_cast<unsigned short>(std::atoi("1337"));
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}}; // <-- crashes on this line
tcp::socket socket{ioc};
http_server(acceptor, socket);
ioc.run();
status = "{'status':'ok', 'message':'localhost server started!'}";
retval->data.string = getNewBuffer(status);
}
catch(std::exception const& e)
{
string status;
//err_msg = "Error: " << e.what() << std::endl;
status = "{'status':'fail', 'message':'Error starting web server'}";
retval->data.string = getNewBuffer(status);
}
return kESErrOK;
}
When stepping through the code, I see that Xcode reports an error when the line with tcp::acceptor ... is executed:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x783c0a3e3f22650c)
and execution halts at a single line of code in a function in scheduler.h:
// Get the concurrency hint that was used to initialize the scheduler.
int concurrency_hint() const
{
  return concurrency_hint_; // Xcode halts here
}
I'm debating whether I should switch to a different C++ web server, like Drogon, instead of boost::beast, but I thought I would post here to see if anybody has any insight into why the crash is happening in this case.
Update
I found a fix that works around my particular circumstances; hopefully it can help others running into this issue.
The address of the service_registry::create static factory method resolves correctly when I add ASIO_DECL in front of the method's declaration in asio/detail/service_registry.hpp.
It should look like this:
// Factory function for creating a service instance.
template <typename Service, typename Owner>
ASIO_DECL static execution_context::service* create(void* owner);
By adding ASIO_DECL in front of it, the address resolves correctly, and the scheduler and kqueue_reactor objects initialize properly, avoiding the bad access in concurrency_hint().
In my case I am trying to use non-Boost ASIO inside a VST3 audio plug-in running in Ableton Live 11 on macOS on an M1 processor. Using the VST3 plug-in there, I'm getting this same crash. Using the same plug-in in other DAW applications, such as Reaper, does not cause the crash. It also does not occur with Ableton Live 11 on Windows.
I've got it narrowed down to the following issue:
In asio/detail/impl/service_registry.hpp the following method attempts to return a function pointer address to a create/factory method.
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
  execution_context::service::key key;
  init_key<Service>(key, 0);
  factory_type factory = &service_registry::create<Service, io_context>;
  return *static_cast<Service*>(do_use_service(key, factory, &owner));
}
Specifically, this line: factory_type factory = &service_registry::create<Service, io_context>;
When debugging in Xcode, in the hosts that work, inspecting factory shows the correct address of the service_registry::create<Service, io_context> static method.
However, in Ableton Live 11 it doesn't point to anything: somehow the address of the static method does not resolve correctly. This causes a cascade of issues, ultimately leading to the invocation of the factory function pointer in asio/asio/detail/impl/service_registry.ipp in the method service_registry::do_use_service. Since it doesn't point to a proper create method, nothing is created, and the result is uninitialized objects, including the scheduler instance.
Therefore, when scheduler_.concurrency_hint() is called in kqueue_reactor.ipp, the scheduler is uninitialized and the EXC_BAD_ACCESS error results.
It's unclear to me why, under some host processes, dynamically loading the plug-in cannot resolve the static method's address while others have no problem. In my case I compiled asio.hpp for standalone ASIO into the plug-in directly; there was no linking.
The best guesses I can come up with are:
Maybe your http_server starts additional threads or even forks. That might cause io_context and friends to be accessed after startLocalhost has returned. To explain the crash location appearing at the indicated line, I'd add the heuristic that something is already off during the destruction of ioc.
The only other idea I have is that the opening/binding of the acceptor actually throws, but due to possible incompatibilities of types in the shared module vs. the main program, the exception thrown is not actually caught and causes abnormal termination. This can happen more easily if the main program also uses Boost libraries, but a different copy (build/version) of them.
In the latter case there's a simple thing you can do: split up the initialization and use the overloads that take an error_code instead, as sketched below.
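A minimal sketch of that approach, assuming the same net/tcp aliases and variables as the question:

boost::system::error_code ec;
tcp::acceptor acceptor{ioc}; // construct without binding

// Failures come back as error codes instead of exceptions that may not
// survive the boundary between the shared library and the host.
acceptor.open(tcp::v4(), ec);
if (!ec) acceptor.bind({address, port}, ec);
if (!ec) acceptor.listen(net::socket_base::max_listen_connections, ec);
if (ec) {
    // Report ec.message() back to the host instead of crashing.
}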
I want to get LoopInfo for each function by iterating over the functions in a module pass. My code is as follows:
for (auto &F : M) {
  if (!F.isDeclaration()) {
    LoopInfo &LI = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
  }
}
However, there is an error. I think my setup conforms to the requirements, so how should I resolve this?
clang-12: /llvmtest/llvm/lib/IR/LegacyPassManager.cpp:1645: virtual
std::tuple<llvm::Pass*, bool>
{anonymous}::MPPassManager::getOnTheFlyPass(llvm::Pass*, llvm::AnalysisID, llvm::Function&):
Assertion `FPP && "Unable to find on the fly pass"' failed.
PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace,
preprocessed source, and associated run script.
You cannot do this with the legacy pass manager. In the legacy pass manager, every pass can only get info from same-scoped passes (module from module, function from function, loop from loop), with one exception allowing function passes to get data from module passes.
With the new pass manager, you'd create a LoopAnalysisManager, add the analysis pass you want, and run it. See https://llvm.org/docs/NewPassManager.html#using-analyses.
Note that most of LLVM is presently written to support both pass managers at once. If you do this, you'll need to write your pass differently from most of LLVM's passes: you can't use the types with names like "WrapperPass" that exist to support both the legacy and new pass managers.
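A minimal sketch of one way to do this with the new pass manager (the pass name is a placeholder, and LoopInfo is obtained through the FunctionAnalysisManager proxy, as the linked documentation describes):

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/PassManager.h"

struct MyModulePass : llvm::PassInfoMixin<MyModulePass> {
  llvm::PreservedAnalyses run(llvm::Module &M, llvm::ModuleAnalysisManager &MAM) {
    // Reach the per-function analyses from a module pass via the proxy.
    auto &FAM = MAM.getResult<llvm::FunctionAnalysisManagerModuleProxy>(M).getManager();
    for (llvm::Function &F : M) {
      if (F.isDeclaration())
        continue;
      llvm::LoopInfo &LI = FAM.getResult<llvm::LoopAnalysis>(F);
      // ... use LI ...
    }
    return llvm::PreservedAnalyses::all();
  }
};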
I have a custom set of passes created using LLVM to run on some bitcode.
I've managed to get it to compile, but whenever I try to run it with a pass that calls getAnalysis() on another pass type, it fails with:
Assertion `ResultPass && "getAnalysis*() called on an analysis that was not " "'required' by pass!"' failed.
The custom pass that calls getAnalysis() requires its type, specifically:
bool Operators::doInitialization() {
  ParseConfig &parseConfig = getAnalysis<ParseConfig>(); // Fails here.
}
.
.
.
void Operators::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<ParseConfig>();
  return;
}
I've spent a few days on this and am quite lost. I know the following is true:
ParseConfig is registered successfully via the RegisterPass<> template; I have stepped through it in GDB to confirm that it does get registered.
Also using GDB, I have found that, inside getAnalysis(), the list of registered passes is always empty (which causes the assertion).
Important Note: I will eventually be using this on a Fortran project compiled with Flang, so the LLVM library version I'm using is the Flang fork (found here). That fork is right around LLVM 7.1, but the specific files associated with registering passes do not seem to differ from the current LLVM library.
Moving the getAnalysis call from doInitialization to runOnFunction makes it work.
From the LLVM documentation:
This method call getAnalysis* returns a reference to the pass desired. You may get a runtime assertion failure if you attempt to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This method can be called by your run* method implementation, or by any other local method invoked by your run* method.
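In other words, a minimal sketch of the fix for the question's pass (assuming Operators is a FunctionPass):

bool Operators::runOnFunction(Function &F) {
  // getAnalysis<> is valid here, after the pass manager has scheduled ParseConfig.
  ParseConfig &parseConfig = getAnalysis<ParseConfig>();
  // ... use parseConfig ...
  return false; // the function was not modified
}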
I'm new to LLVM. I am using the Clang C++ API to compile multiple stub files (in C) to IR and then stick them together using IRBuilder (after linking them) to eventually run via JIT.
All of this works great unless I add a function-inlining pass to my optimizations, at which point one of the function calls made in IRBuilder triggers the following exception when the pass manager is run:
Assertion failed: (New->getType() == getType() && "replaceAllUses of value with new value of different type!"), function replaceAllUsesWith, file /Users/mike/Development/llvm/llvm/lib/IR/Value.cpp, line 356.
This is how I make the call instruction (pretty straightforward):
Function *kernelFunc = mModule->getFunction((kernel->Name() + StringRef("_") + StringRef(funcName)).str());
if (kernelFunc) {
  CallInst* newInst = builder.CreateCall(kernelFunc, args);
}
Later the module is optimized:
legacy::PassManager passMan;
PassManagerBuilder Builder;
Builder.OptLevel = 3;
//Builder.Inliner = llvm::createFunctionInliningPass(); // commenting this back in triggers the exception
Builder.populateModulePassManager(passMan);
passMan.run(*mModule); // exception occurs before this call returns
Any ideas what to look for?
Try running llvm::verifyModule on your module to see if it's correct. You might have an error and have been getting lucky beforehand, but it tripped something up in the inliner.
In general, assertions check a subset of the things that can be wrong with your module, but verify checks a lot more.
It could be a bug in LLVM, but more than likely it's a bad module; that's easy to have happen.
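A minimal sketch of that check, run before the pass manager (mModule is the question's module):

#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

// verifyModule returns true if the module is broken and prints the details to errs().
if (llvm::verifyModule(*mModule, &llvm::errs())) {
  // Fix the reported problems before running the inliner.
}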
So I finally set up my dev environment so I could inspect the assertion call in the debugger. It turns out the basic block being replaced had a different context than the one it was being replaced with. Going back and making sure IRBuilder was using the same context as the IR parsers solved the problem.
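A minimal sketch of that arrangement (the file name is illustrative): parse the stubs and construct new IR against one and the same LLVMContext.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"

llvm::LLVMContext context;
llvm::SMDiagnostic err;

// The parser and the builder must share one context.
std::unique_ptr<llvm::Module> mModule = llvm::parseIRFile("stub.bc", err, context);
llvm::IRBuilder<> builder(context);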
I want to use Google's JavaScript engine V8 in a project, and I attempted to write a wrapper class for the engine. Parts of the code are copied from samples/shell.cc in the V8 distribution.
However, it just aborts with a segmentation fault, and I can't figure out why, although the problem is happening around v8::internal::Top::global_context() (due to an invalid context, which appears to be NULL). The code itself looks fine to me, but maybe I did something incredibly stupid :-).
The segmentation fault in my code happens in v8::Script::Compile.
Code in Question (Updated): https://gist.github.com/4c28227185a14bb6288c
Thanks to Luis G. Costantini R.'s answer, there is no longer a problem in Set (it doesn't abort anymore); however, exposed names are still not available and result in a ReferenceError...
Try changing v8::Context::Scope context_scope(context); from the constructor (line 134) to internal_executeString (before script = v8::Script::Compile(source, name);). That's because the destructor of the class v8::Context::Scope exits the context.
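A minimal sketch of what that move might look like (the method signature and the context member are assumptions based on the shell.cc-style code the question describes):

void internal_executeString(v8::Handle<v8::String> source, v8::Handle<v8::Value> name)
{
  v8::HandleScope handle_scope;
  v8::Context::Scope context_scope(context); // stay inside the context for both Compile and Run

  v8::Handle<v8::Script> script = v8::Script::Compile(source, name);
  if (!script.IsEmpty())
    script->Run();
}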
I changed the method addFunction:
void addFunction(const std::string& fname, v8::InvocationCallback func)
{
  v8::HandleScope handle_scope;
  std::cout << "before ::Set()" << std::endl;
  v8::Context::Scope context_scope(context);
  context->Global()->Set(v8::String::New(fname.c_str()),
                         v8::FunctionTemplate::New(func)->GetFunction());
  std::cout << "after ::Set()" << std::endl;
}
The function must be added to the global object of the context used to execute the script. There is an excellent tutorial (in two parts) on V8:
http://www.homepluspower.info/2010/06/v8-javascript-engine-tutorial-part-1.html
and
http://www.homepluspower.info/2010/06/v8-javascript-engine-tutorial-part-2.html
If you try to create an instance of a JavaScript Function (FunctionTemplate::GetFunction()) or a JavaScript Object (ObjectTemplate::NewInstance()) before entering the context (via Context::Scope), you get the segmentation fault. The reason: there is no JavaScript context available, and both Function and Object only exist within a JavaScript execution context. As per the V8 documentation:
Function:
A JavaScript function object (ECMA-262, 15.3).
Object:
A JavaScript object (ECMA-262, 4.3.3).
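More concretely, a minimal sketch against the pre-isolate V8 API used here (myCallback is a placeholder InvocationCallback):

v8::HandleScope handle_scope;
v8::Persistent<v8::Context> context = v8::Context::New();
{
  v8::Context::Scope context_scope(context); // a JavaScript execution context is now entered

  // Safe: both calls run inside the active context.
  v8::Handle<v8::Object> obj = v8::ObjectTemplate::New()->NewInstance();
  v8::Handle<v8::Function> fn = v8::FunctionTemplate::New(myCallback)->GetFunction();
}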
The stack backtrace is almost useless unless I download all the source and try to build it myself, so... :)
Change js.executeString("1+1", true, false); to js.executeString("1+1", true, true); and see what the exception handler tells you?
Looks like you just got stung by this bug, that is, if you haven't already taken note of it. Perhaps submit another report, since the referenced one looks old. Or dig a little deeper and investigate the stack frame at every function call until the segmentation fault occurs; you could find either a workaround or the fix for this bug :)
I had a similar segmentation fault and the problem turned out to be the following. I was creating a new thread and attempting to create an object template and object in that thread. Unfortunately it seems that if you create a thread, you need to make sure that you enter a v8::Context again in order to do such things.
I got it working by passing a Handle to the v8::Context to the newly created thread and entering it in the new thread using a scope.
I wrote this here as it is the only useful thing that comes up when I do a Google search for the segmentation fault.