I just got started with LLVM. I am reading the code for stack protection, which is located in lib/CodeGen/StackProtector.cpp. In this file, the InsertStackProtectors function inserts a call to llvm.stackprotector into the code:
// entry:
// StackGuardSlot = alloca i8*
// StackGuard = load __stack_chk_guard
// call void @llvm.stackprotect.create(StackGuard, StackGuardSlot)
// ...(Skip some lines)
CallInst::Create(Intrinsic::getDeclaration(M, Intrinsic::stackprotector),
                 Args, "", InsPt);
This llvm.stackprotector (http://llvm.org/docs/LangRef.html#llvm-stackprotector-intrinsic) seems to be an intrinsic function of LLVM, so I tried to find the source code of this function. However, I cannot find it...
I did find a one-line definition of this function in include/llvm/IR/Intrinsics.td, but it does not say how it is implemented.
So my questions are:
Where can I find the code for this llvm.stackprotector function?
What is the purpose of these *.td files?
Thank you very much!
The .td files are LLVM's use of code generation (TableGen) to reduce the amount of boilerplate code. In this particular case, ./include/llvm/IR/Intrinsics.gen is generated in the build directory and contains code describing the intrinsics specified in the .td file.
As for stackprotector, there's a bunch of code in the backend for handling it. See, for instance, lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp: in SelectionDAGBuilder::visitIntrinsicCall it generates the actual DAG nodes that implement this intrinsic.
Related
I am trying to collect some information from my LLVM optimization pass at runtime. In other words, I want to know the physical address of a specific IR instruction after compilation. So my idea is to convert the LLVM metadata into LLVM DWARF data that can be used at runtime. Instead of attaching the filename and line numbers, I want to attach my own information. My question falls into two parts:
Here is code that gets the filename and line number of an instruction:
if (DILocation *Loc = I->getDebugLoc()) { // Here I is an LLVM instruction
  unsigned Line = Loc->getLine();
  StringRef File = Loc->getFilename();
  StringRef Dir = Loc->getDirectory();
  bool ImplicitCode = Loc->isImplicitCode();
}
But how can I set these fields? I could not find a relevant function.
How can I see the updated debug information (filename and line numbers) at runtime? I compiled with -g, but I still do not see the debug information.
Thanks
The function you need is setDebugLoc(), and the info is only included in the output if you include enough of it. The module verifier will tell you what you're missing. These two lines might also be what's tripping you up:
module->addModuleFlag(Module::Warning, "Dwarf Version", dwarf::DWARF_VERSION);
module->addModuleFlag(Module::Warning, "Debug Info Version", DEBUG_METADATA_VERSION);
I am trying to see how to avoid LLVM JIT compilation every time and use a cached copy instead. I see that LLVM has ObjectCache support for code generation from a module, but to get a module from a file or code string, it needs to be compiled and go through different optimization passes. What is the best way to go about it?
1. Cache the final image object to some file; on later runs, look that file up first, try to parse it, and try to create the ExecutionEngine with the image so that one can execute it (get a pointer to the function and call it).
2. Save the intermediate output of code compilation and optimization, i.e. write the module to some file (e.g., using dump), read it back (parse the IR), and then use the ObjectCache support for code generation from this module.
Option (2) involves two steps and seems likely worse than (1), but is (1) the right way to go about it?
Given that you have an instance of ObjectFile, you can write it to disk:
std::string cacheName("some_name.o");
std::error_code EC;
raw_fd_ostream outfile(cacheName, EC, sys::fs::F_None);
outfile.write(object.getBinary()->getMemoryBufferRef().getBufferStart(),
              object.getBinary()->getMemoryBufferRef().getBufferSize());
outfile.close();
Then you can read it back from disk:
std::string cacheName("some_name.o");
ErrorOr<std::unique_ptr<MemoryBuffer>> buffer =
    MemoryBuffer::getFile(cacheName.c_str());
if (!buffer) {
  // handle error
}
Expected<std::unique_ptr<ObjectFile>> objectOrError =
    ObjectFile::createObjectFile(buffer.get()->getMemBufferRef());
if (!objectOrError) {
  // handle error
}
std::unique_ptr<ObjectFile> objectFile(std::move(objectOrError.get()));
auto owningObject = OwningBinary<ObjectFile>(std::move(objectFile),
                                             std::move(buffer.get()));
auto object = owningObject.getBinary();
You can take this code and plug it into your custom ObjectCache, then feed the object cache to the JIT engine.
I hope it helps.
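For reference, a minimal sketch of what such a custom cache could look like; the on-disk naming scheme (cache_<module id>.o) is an assumption for this example, and error handling is kept to a bare minimum:

#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

// Caches compiled objects on disk, keyed by the module identifier.
class SimpleDiskCache : public llvm::ObjectCache {
public:
  // Called by the JIT after a module has been compiled to an object image.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    std::error_code EC;
    llvm::raw_fd_ostream OS(fileFor(M), EC, llvm::sys::fs::F_None);
    if (!EC)
      OS << Obj.getBuffer();
  }

  // Called by the JIT before compiling; returning a buffer skips compilation.
  std::unique_ptr<llvm::MemoryBuffer> getObject(const llvm::Module *M) override {
    auto Buf = llvm::MemoryBuffer::getFile(fileFor(M));
    if (!Buf)
      return nullptr; // not cached yet, compile as usual
    return std::move(*Buf);
  }

private:
  std::string fileFor(const llvm::Module *M) {
    return "cache_" + M->getModuleIdentifier() + ".o";
  }
};

You would then register the cache on the engine (e.g. ExecutionEngine::setObjectCache(&myCache) for MCJIT) before triggering any compilation.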
I'm trying to write an LLVM function pass to do some instrumentation.
Therefore, I need to get:
the filename in which the function is declared
the line numbers (begin and end) at which the function is declared in the source file.
I already found and tried getMetadata("dbg"), but I do not want to use the compiler flag -g.
Is there another way to get this information?
Well... the debug metadata is only emitted when debug info generation is enabled. You may want to reduce the amount of debug information generated with -gline-tables-only.
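If some level of debug info is acceptable (for example with -gline-tables-only), the declaration file and starting line of a function can be read from its attached DISubprogram. A small sketch, assuming F is an llvm::Function&; note there is no direct "end line", which you would have to derive yourself (e.g., from the largest line among the function's instruction locations):

#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Function.h"

if (DISubprogram *SP = F.getSubprogram()) {
  StringRef File = SP->getFilename();  // file the function is declared in
  StringRef Dir  = SP->getDirectory();
  unsigned Line  = SP->getLine();      // line where the function starts
}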
I've been modifying an example C++ program from the Caffe deep learning library, and I noticed this code on line 234, which doesn't appear to be referenced again:
::google::InitGoogleLogging(argv[0]);
The argument provided is a prototxt file which defines the parameters of the deep learning model I'm calling. The thing that is confusing me is where the results from this line go. I know they end up being used in the program, because if I make a mistake in the prototxt file the program will crash. However, I'm struggling to see how the data is passed to the class performing the classification tasks.
First of all, argv[0] is not the first argument you pass to your executable, but rather the executable name. So you are passing the executable name, not the prototxt file, to ::google::InitGoogleLogging.
The 'glog' module (Google logging) uses this name to decorate the log entries it outputs.
Second, Caffe uses Google logging (aka 'glog') as its logging module, and hence this module must be initialized once when running Caffe. This is why you have this
::google::InitGoogleLogging(argv[0]);
in your code.
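To make the distinction concrete, here is a minimal sketch; the assumption that the prototxt path arrives as argv[1] is just an illustration of how such an example tool is typically invoked, and the prototxt itself is consumed by the Caffe classes that build the network, not by glog:

#include <glog/logging.h>

int main(int argc, char **argv) {
  // argv[0] is the executable name; glog only uses it to label log output.
  ::google::InitGoogleLogging(argv[0]);

  // The model definition is a separate argument (position assumed here).
  const char *model_prototxt = argc > 1 ? argv[1] : "deploy.prototxt";
  LOG(INFO) << "Loading model from " << model_prototxt;

  // The prototxt path is then handed to the Caffe classes that construct the
  // network (e.g. a caffe::Net built from this file); that is where a mistake
  // in the prototxt makes the program abort, not in the logging setup.
  return 0;
}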
I am trying to use a C++ application with FreeRTOS.
I came to know about this post: https://sourceforge.net/p/freertos/discussion/382005/thread/5d5201c0 but I am not sure how and where to add this TaskCPP.h file.
Right now I have a very simple main.cpp file, something like this:
int main(void)
{
    // Set priority bits to preempt priority
    NVIC_PriorityGroupConfig(NVIC_PriorityGroup_4);

    for( ;; );

    return 0;
}
And this gives me an error:
/usr/bin/../lib/gcc/arm-none-eabi/4.7.4/../../../../arm-none-eabi/bin/ld: error: STM32F4_FreeRTOS.axf uses VFP register arguments, /usr/bin/../lib/gcc/arm-none-eabi/4.7.4/libgcc.a(unwind-arm.o) does not
/usr/bin/../lib/gcc/arm-none-eabi/4.7.4/../../../../arm-none-eabi/bin/ld: failed to merge target specific data of file /usr/bin/../lib/gcc/arm-none-eabi/4.7.4/libgcc.a(unwind-arm.o)
I am not sure what is wrong with my settings.
That error is related to your tool chain. Your target triple indicates a fairly generic tool chain, but FreeRTOS seems to use more specific ARM features. You may want to read this question: ARM compilation error, VFP registered used by executable, not object file
As a workaround: call your compiler with -print-multi-lib and check whether the libraries required by FreeRTOS are available. If they are, you'll have to enable them; the "uses VFP register arguments" message typically means a hard-float/soft-float ABI mismatch, so the compile and link steps need consistent float-ABI and FPU settings. If they are not, you'll have to use another tool chain.
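Regarding where TaskCPP.h fits in: that header is essentially a thin C++ wrapper around xTaskCreate, so it just needs to be somewhere on your include path and included from main.cpp. A rough sketch of the idea (names here are illustrative, not the actual TaskCPP.h API), assuming the standard FreeRTOS headers:

#include "FreeRTOS.h"
#include "task.h"

// Derive from this and implement run(), which should hold the task's loop.
class TaskBase {
public:
    TaskBase(const char *name, uint16_t stackDepth, UBaseType_t priority) {
        // Pass "this" so the static trampoline can dispatch to the virtual run().
        xTaskCreate(&TaskBase::trampoline, name, stackDepth, this, priority, &handle_);
    }

protected:
    virtual void run() = 0;

private:
    static void trampoline(void *param) {
        static_cast<TaskBase *>(param)->run();
        vTaskDelete(nullptr); // a task function must never simply return
    }
    TaskHandle_t handle_ = nullptr;
};

In main you would then construct your derived task objects and call vTaskStartScheduler() instead of spinning in the bare for(;;) loop.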