What actually is the LLVM Context? Is it the environment, such as bit size, in which the code runs? What are the members of the LLVMContext class? I went through http://llvm.org/docs/doxygen/html/classllvm_1_1LLVMContext.html but couldn't understand much.
From the link you included:
This is an important class for using LLVM in a threaded context. It
(opaquely) owns and manages the core "global" data of LLVM's core
infrastructure, including the type and constant uniquing tables.
Since it says "opaquely", you're not supposed to know what it contains, what it does, or what it's used for. Just think of it as a reference to the core LLVM "engine" that you should pass to the various methods that require an LLVMContext.
Edit: just to clarify: no, it doesn't contain things such as bit size; those are defined in TargetData.
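To make the usage concrete, here is a minimal sketch (against the C++ API; details vary between LLVM versions) of creating a context and passing it to the objects that need it:

#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/IRBuilder.h"

int main()
{
    // One context per thread: types and constants created through it
    // are uniqued inside this context.
    llvm::LLVMContext context;
    llvm::Module module("demo", context);
    llvm::IRBuilder<> builder(context);

    // Types are handed out by the context, not constructed directly.
    llvm::Type *i32 = builder.getInt32Ty();
    (void)i32;
    return 0;
}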
Well, this might be a very weird question, but my curiosity has struck pretty hard on this. So here it goes...
NOTE: Let's take the language C into consideration here.
As programmers we usually define a user-defined data type (say, a struct) in the source code with an appropriate name.
Suppose I have a program in which I have a structure defined as:
struct Animal {
    char *name;
    int lifeSpan;
};
And also I have started the execution of this program.
Now, my question here is:
What if I want to define a new structure called "Plant", just like the "Animal" above, without writing its definition in the source code itself (which is obviously not possible as things stand), but rather from a user input string (or a file input) at runtime?
Let's say my program takes its input from a text file named file1.txt whose content is:
struct Plant {
    char *name;
    int lifeSpan;
};
What I want now is to have a new structure named "Plant" in my program, which is already executing. The program should read the file content, create the structure described there, and attach it to itself on the fly.
I have checked out a solution for C++ in the discussion Declaring a data type dynamically in C++, but it doesn't seem to have a very convincing answer.
The solution I am looking for is at the compiler-linker-loader level rather than in the language itself. I would be very pleased and thankful if anyone is willing to share their ideas on this.
What you're asking is basically "can we implement C as a scripting language?", since that is the only way new code could be executed after compilation.
I'm aware that people have been writing (mostly in the comments) that it's possible in other languages but not in C, since C is a compiled language (hence data types must be defined at compile time).
However, to the best of my knowledge it's actually possible (and might not be as hard as one would imagine).
There are many possible approaches (machine code emulation (VM), JIT compilation, etc.).
One approach would use a C compiler to compile the C "script" as an external dynamic library (.dll on Windows, .so on Linux, etc.) and then load the compiled library and execute the code (this is pretty much the JIT compilation approach, for lazy people).
EDIT:
As mentioned in the comments, by using this approach, the new type is loaded as part of an external library.
The original code won't know about this new type, only the new code (or library) will be "aware" of this new type and able to properly use it.
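As an illustration, here is a hedged sketch of that lazy-JIT approach on Linux (the file paths and the cc command line are placeholders; error handling is omitted, and the host must be linked with -ldl):

#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>

// The "script" the user supplied at runtime: a new type plus code using it.
static const char *user_code =
    "struct Plant { char *name; int lifeSpan; };\n"
    "int plant_size(void) { return (int)sizeof(struct Plant); }\n";

int main()
{
    // 1. Write the runtime-supplied source out to disk.
    FILE *f = fopen("/tmp/plant.c", "w");
    fputs(user_code, f);
    fclose(f);

    // 2. Compile it into a shared object (the "lazy JIT" step).
    system("cc -shared -fPIC -o /tmp/plant.so /tmp/plant.c");

    // 3. Load the library and call into the freshly compiled code.
    void *lib = dlopen("/tmp/plant.so", RTLD_NOW);
    int (*plant_size)(void) = (int (*)(void))dlsym(lib, "plant_size");
    printf("sizeof(struct Plant) = %d\n", plant_size());
    dlclose(lib);
    return 0;
}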
On the other hand, I'm not sure why you're insisting on the need to use static types and a compiler-linker-loader level solution.
The language itself (the C language) can manage this task dynamically (during execution time).
Consider Ruby MRI, for example. The Ruby language supports dynamic types that can be defined during runtime...
...However, this is implemented in C and it's possible to use the code from within C to define new modules and classes. These aren't static types that can be tested during compilation (type creation and identification is performed during runtime).
This is a perfect example showing that C (as a language) can dynamically define "types".
However, this is also a poor example, because Ruby's approach is slow. A custom approach could be far faster, since it would avoid the huge overhead of functionality you might not need (such as inheritance).
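For reference, a minimal and hedged sketch of what that looks like through MRI's C API when embedding the interpreter; the setup around it is schematic, but rb_define_class really is how a new class comes into existence at runtime:

#include <ruby.h>

int main(void)
{
    ruby_init();  // boot the embedded interpreter

    // A "type" created during execution, not at compile time.
    VALUE plant = rb_define_class("Plant", rb_cObject);
    (void)plant;  // methods/attributes could now be attached at runtime

    return ruby_cleanup(0);
}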
General Scenario
Using dlopen()/dlsym(), I dynamically load a shared-object addon from my main thread.
I then follow one of these two ways.
Way A
Pass a struct of pointers to symbols to the addon so it can call the host's functions and access other variables, knowing their data types, of course.
Way B
Let the addon reference symbols by their extern "C" identifiers and have the runtime look them up normally.
Question
Is there any difference between these two methods regarding ABI stability? For example, would one of these methods give a better chance of compatibility between an addon and the host program if they were compiled in different environments?
One advantage of "Way A" is that it gives you the chance to pass different pointers to different plugins. So you could for example make a "v1" struct of pointers, and then later a "v2" that newer plugins could request.
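A minimal sketch of that versioned "Way A" idea (all names here are hypothetical): the host fills a struct of function pointers and hands it to the addon's single entry point, so the addon never references host symbols directly:

// host_api.h -- shared between the host and its addons.
struct HostApiV1 {
    int  version;                      // lets the addon check what it got
    void (*log)(const char *msg);      // host services exposed by pointer
    int  (*get_config)(const char *key);
};

// In the addon: one extern "C" entry point the host resolves via dlsym.
extern "C" int addon_init(const HostApiV1 *api)
{
    if (api->version < 1)
        return -1;                     // host too old for this addon
    api->log("addon loaded");          // call back into the host
    return 0;
}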
If everything works well, the two methods are equivalent apart from some performance loss. But runtime lookup resolves named symbols in the global scope, which can be influenced by flags such as RTLD_GLOBAL passed to dlopen. That can lead to different behaviour in different contexts even when using the same addon.
So I think Way A is better.
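For contrast, a hedged sketch of "Way B" (two translation units; names are hypothetical): the addon just declares the host's symbol and relies on the dynamic linker to resolve it, which only works if the symbol is actually in the global scope (e.g. the host was linked with -rdynamic, or a library defining it was dlopened with RTLD_GLOBAL):

// way_b_addon.cpp -- compiled into the addon's shared object.
extern "C" void host_log(const char *msg);  // defined by the host executable

extern "C" void addon_run(void)
{
    host_log("resolved through the global symbol scope");
}

// In the host (host.cpp), the definition must be visible to the dynamic
// linker for the resolution above to succeed:
extern "C" void host_log(const char *msg) { /* ... */ }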
I have a mesh processing algorithm that calls vtkPointsProjectedHull from multiple threads via high-level map-reduce (Qt's version).
If you look at the source code of vtkPointsProjectedHull, you can see that it calls a free-standing C function, and for that it uses a static global variable at line 27:
static double firstPt[3];
(You might be able to imagine how long it took to find this bug after I made the code multi-threaded...).
The layout of the class and the free-standing C functions makes it hard to move the static definition into a class member. (I'm sure it is doable, but not straightforward.)
The solution in Visual C++ is quite easy: I made a vtkPointsProjectedHullFixed.cxx, the single change being that the static variable is thread-local:
__declspec(thread) static double firstPt[3];
Now I am porting this code to OS X Clang, and thread-local storage is explicitly disabled there.
Do I have to rewrite the whole vtkPointsProjectedHullFixed class to use a class variable? Or do you know a better way?
A guess, as I cannot confirm at present, but you might find that:
_Thread_local static double firstPt[3];
will work. Apple Clang does support C11 thread-local storage, and double is a C type; what it doesn't support is C++ thread_local, which has to handle C++ types with constructors/destructors.
HTH
Edit: Confirmed _Thread_local works as expected with C code. Should work with C++ code, but only if the variable type is a C one (as double is).
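Following on from that, one (untested) way to keep a single vtkPointsProjectedHullFixed.cxx portable is to select the spelling per toolchain with a macro:

// Pick a thread-local spelling per compiler; all three behave the same
// for a plain C type such as double[3].
#if defined(_MSC_VER)
  #define TLS_STATIC __declspec(thread) static
#elif defined(__APPLE__)
  #define TLS_STATIC _Thread_local static  // C11 keyword, fine for C types
#else
  #define TLS_STATIC thread_local static   // standard C++11
#endif

TLS_STATIC double firstPt[3];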
I want to apply a DFS traversal algorithm to the CFG of a function, so I need the internal representation of the CFG. I need oriented edges, and I spotted MachineBasicBlock::const_succ_iterator. Is there a way to get the CFG with oriented edges by using a FunctionPass instead of a MachineFunctionPass? The reason I want this is that I have problems using MachineFunctionPass: I have written several complex passes so far, but I cannot run a MachineFunctionPass.
I found this: "A MachineFunctionPass is a part of the LLVM code generator that executes on the machine-dependent representation of each LLVM function in the program. Code generator passes are registered and initialized specially by TargetMachine::addPassesToEmitFile and similar routines, so they cannot generally be run from the opt or bugpoint commands."... So how can I run a MachineFunctionPass?
When I tried to run a simple MachineFunctionPass with opt, I got the error:
Pass 'mycfg' is not initialized.
Verify if there is a pass dependency cycle.
Required Passes:
opt: PassManager.cpp:638: void llvm::PMTopLevelManager::schedulePass(llvm::Pass*): Assertion `PI && "Expected required passes to be initialized"' failed.
So I have to initialize the pass. But in all my other passes I did not do any initialization, and I don't want to use INITIALIZE_PASS, since that means recompiling the LLVM file that keeps the pass registration... Is there a way to keep using static RegisterPass for a MachineFunctionPass? I mention that if I change it to a FunctionPass I have no problems, so it might indeed be an opt problem.
I have started another pass for the CallGraph. I am using CallGraph &CG = getAnalysis<CallGraph>(); successfully. Is there a similar way of getting CFGs? What I have found so far are succ_iterator/succ_begin/succ_end from CFG.h, but I think I still have to get the CFG analysis somehow.
Thank you in advance!
I think you may have some terms mixed up. The basic blocks within each function are already arranged in a CFG, and LLVM provides you the tools to traverse it. See my answer to this question, for example.
MachineFunction lives on a different level, and unless you're doing something very special, that is not the level you should operate on. It's too low-level and too target-specific. There's an overview of the levels here.
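To make that concrete, here is a sketch of an iterative DFS over a Function's basic blocks from a regular FunctionPass, using the oriented edges that CFG.h exposes (exact iterator spellings vary a bit between LLVM versions):

#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
#include <set>
#include <stack>

// Visit every basic block reachable from the entry block, depth-first.
void dfsOverCFG(llvm::Function &F) {
  std::set<llvm::BasicBlock *> Visited;
  std::stack<llvm::BasicBlock *> Work;
  Work.push(&F.getEntryBlock());
  while (!Work.empty()) {
    llvm::BasicBlock *BB = Work.top();
    Work.pop();
    if (!Visited.insert(BB).second)
      continue;           // already seen
    // ... process BB here ...
    for (llvm::BasicBlock *Succ : successors(BB))
      Work.push(Succ);    // follow the oriented edge BB -> Succ
  }
}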
I'm working on a number crunching app using the CUDA framework. I have some static data that should be accessible to all threads, so I've put it in constant memory like this:
__device__ __constant__ CaseParams deviceCaseParams;
I use the call cudaMemcpyToSymbol to transfer these params from the host to the device:
void copyMetaData(CaseParams* caseParams)
{
    cudaMemcpyToSymbol("deviceCaseParams", caseParams, sizeof(CaseParams));
}
which works.
Anyway, it seems (by trial and error, and also from reading posts on the net) that for some sick reason the declaration of deviceCaseParams and the copy operation on it (the call to cudaMemcpyToSymbol) must be in the same file. At the moment I have these two in a .cu file, but I really want to have the parameter struct in a .cuh file so that any implementation could see it if it wants to. That means I also have to have the copyMetaData function in a header file, but this messes up linking (symbol already defined), since both .cpp and .cu files include this header (and thus both the MS C++ compiler and nvcc compile it).
Does anyone have any advice on design here?
Update: See the comments
With an up-to-date CUDA (e.g. 3.2) you should be able to do the memcpy from within a different translation unit if you're looking up the symbol at runtime (i.e. by passing a string as the first arg to cudaMemcpyToSymbol as you are in your example).
Also, with Fermi-class devices you can just malloc the memory (cudaMalloc), copy to the device memory, and then pass the argument as a const pointer. The compiler will recognise if you are accessing the data uniformly across the warps and if so will use the constant cache. See the CUDA Programming Guide for more info. Note: you would need to compile with -arch=sm_20.
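A hedged sketch of that const-pointer approach (the CaseParams fields are invented for illustration; compile with nvcc -arch=sm_20, and no error checking is shown):

#include <cuda_runtime.h>

struct CaseParams { int n; double scale; };  // placeholder fields

__global__ void crunch(const CaseParams *params, double *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < params->n)
        out[i] = i * params->scale;  // uniform reads can hit the constant cache
}

void launch(const CaseParams &hostParams, double *devOut)
{
    CaseParams *devParams;
    cudaMalloc((void **)&devParams, sizeof(CaseParams));
    cudaMemcpy(devParams, &hostParams, sizeof(CaseParams),
               cudaMemcpyHostToDevice);
    crunch<<<(hostParams.n + 255) / 256, 256>>>(devParams, devOut);
    cudaFree(devParams);
}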
If you're using pre-Fermi CUDA, you will have found out by now that this problem doesn't just apply to constant memory, it applies to anything you want on the CUDA side of things. The only two ways I have found around this are to either:
Write everything CUDA in a single file (.cu), or
If you need to break out code into separate files, restrict yourself to headers which your single .cu file then includes.
If you need to share code between CUDA and C/C++, or have some common code you share between projects, option 2 is the only choice. It seems very unnatural at first, but it solves the problem. You still get to structure your code, just not in a typically C-like way. The main overhead is that every time you do a build you compile everything. The plus side (which I think is possibly why it works this way) is that the CUDA compiler has access to all the source code in one hit, which is good for optimisation.
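As a concrete (hypothetical) illustration of option 2, the project ends up with exactly one .cu translation unit that pulls everything else in:

// app.cu -- the single file nvcc compiles; all other CUDA code lives in
// headers it includes (the filenames here are illustrative).
#include "case_params.cuh"    // struct CaseParams, also visible to .cpp files
#include "kernels.cuh"        // __global__ kernels, defined in the header
#include "copy_metadata.cuh"  // the cudaMemcpyToSymbol wrapper from above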