Dictionary of arrays of lambda-like functions pre-C++11 - c++

I am trying to write a Verilator test that is a bit complicated. I am building a CPU in Verilog and I would like to make sure that control signals are correct at certain stages. As such I am trying to write some code that works as follows:
checkAssertion(phase=2, instruction=3766636548)
I was hoping to implement this as
std::map<int, std::function<bool()>[5]>
This way, for every instruction I aim to test, I could look up the instruction and check that the state is valid at a specific phase.
I also tried making something that looks like this:
std::vector<std::function<bool()>> functors;
functors.push_back([&] { return read_cpu_var("phase") == 0; });
functors.push_back([&] { return (read_cpu_var("phase") == 1) && (read_cpu_var("register_read_1") == 2); });
functors.push_back([&] { return read_cpu_var("phase") == 2; });
functors.push_back([&] { return read_cpu_var("phase") == 3; });
functors.push_back([&] { return read_cpu_var("phase") == 4; });
However, it seems that this will not work, as I can't get Verilator to compile the C++ as C++11. If you are curious, this is the build and run command:
verilator -Wall --trace -cc --vpi cpu.v --exe -std=c++11 ../TestBenches/cpu_verify.cpp && make -j -C obj_dir -f Vcpu.mk Vcpu && obj_dir/Vcpu
So without C++11 I am without lambdas and without std::function. I figured those would be an elegant way to lay out the assertions for each of the phases. How would I do this elegantly before C++11?
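In case it helps, here is a minimal pre-C++11 sketch of one way to get the same lookup without lambdas or std::function, using plain function pointers. The check bodies mirror the lambdas above, and checkPhase0..checkPhase4, PhaseChecks and makeCheckTable are illustrative names, not existing code; read_cpu_var is assumed to be the same helper used in the lambdas.

#include <map>

typedef bool (*PhaseCheck)();

// Hypothetical per-phase checks for one instruction.
static bool checkPhase0() { return read_cpu_var("phase") == 0; }
static bool checkPhase1() { return read_cpu_var("phase") == 1 && read_cpu_var("register_read_1") == 2; }
static bool checkPhase2() { return read_cpu_var("phase") == 2; }
static bool checkPhase3() { return read_cpu_var("phase") == 3; }
static bool checkPhase4() { return read_cpu_var("phase") == 4; }

// One entry per instruction: five function pointers, one per phase.
struct PhaseChecks { PhaseCheck checks[5]; };

static std::map<unsigned long, PhaseChecks> makeCheckTable() {
    std::map<unsigned long, PhaseChecks> table;
    PhaseChecks c = { { checkPhase0, checkPhase1, checkPhase2, checkPhase3, checkPhase4 } };
    table[3766636548UL] = c;   // instruction value from the example call above
    return table;
}

static bool checkAssertion(int phase, unsigned long instruction) {
    static std::map<unsigned long, PhaseChecks> table = makeCheckTable();
    std::map<unsigned long, PhaseChecks>::const_iterator it = table.find(instruction);
    return it != table.end() && it->second.checks[phase]();
}

If a check needs per-instruction state, a small functor struct with an operator() can replace the raw function pointer. Wrapping the five pointers in a struct also sidesteps the fact that a raw array cannot be stored directly as a std::map value.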

Related

Abort a procedure with flag, return, exception, goto or break

I've got some code which parses a file and bails out if invalid conditions are met.
The code is in C++ and goes like this:
bool ok = true;
if (task1() == false)
    ok = false;
if (ok && (task2() == false))
    ok = false;
if (ok && (task3() == false))
    ok = false;
cleanup();
return ok;
Now I'm looking into cleaner alternatives to get the same result.
As far as I can see, the options are:
using a flag and many conditions as in the code above
There are many redundant tests for the same information. The effect on the runtime will be negligible and probably entirely removed by the compiler, but it still makes the code more complicated.
you could wrap the tasks in a method and return from it
This looks much cleaner, but you spread your code over multiple functions. Depending on your context there might be a long list of parameters. Furthermore, many consider it bad style to spread returns all over a method.
you could use exceptions
This will give some quite descriptive code, but it is also heavyweight when you just want to skip some calls. Furthermore, it might not be exactly an exceptional case.
you could break from a do ... while(0) or another loop or switch statement (see the sketch after this list)
Well, it is not really meant for such a task. Other than that, you get a lightweight and compact implementation with a descriptive keyword.
using a goto statement
That seems to combine most advantages. Still, I am unsure. Everywhere, people state that breaking out of multiple loops is the only remaining sensible use for this keyword.
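For reference, a minimal sketch of the last two options applied to the task1/task2/task3 example above; cleanup() and the tasks are the same placeholders as in the question, and the function names parse_with_break and parse_with_goto are only illustrative.

// do { ... } while (0) variant: break skips the remaining tasks.
bool parse_with_break() {
    bool ok = false;
    do {
        if (!task1()) break;
        if (!task2()) break;
        if (!task3()) break;
        ok = true;
    } while (0);
    cleanup();
    return ok;
}

// goto variant: jump straight to the cleanup label.
bool parse_with_goto() {
    bool ok = false;
    if (!task1()) goto done;
    if (!task2()) goto done;
    if (!task3()) goto done;
    ok = true;
done:
    cleanup();
    return ok;
}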
I didn't find a discussion about code like this. Which implementations are generally suggested? Is this case mentioned in any C++ coding guidelines? Are there other practical options?
Edit: My goal does not seem to be clear. I'm looking for the best way to break out of a procedure, not for a way to call three methods. The described problem is more of an example. I'm interested in arguments for and against the different syntaxes for doing this.
In the code of the object, each method is a placeholder for a couple of lines of code which are similar but differ from each other. There are maybe 50 such code blocks. An old code block looked like the following (I know that there are more things to optimize than just the objective of this question):
if (ok)
{
    keyString = "speed";
    tmpVal = configuration->getValue(section, keyString, -1);
    if (tmpVal != -1)
    {
        m_speed = tmpVal;
        if (m_speed < m_minSpeed)
            m_minSpeed = m_speed;
        m_needSpeed = true;
    }
    else if (m_needSpeed)
    {
        ok = false;
        keyErr = keyString;
    }
}
Assuming that all of these functions return a bool, it looks to me like the shown code is logically identical to:
bool ok = task1() && task2() && task3();
cleanup();
return ok;
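Applied to the configuration block from the edit, the same short-circuit idea still works if each of the ~50 blocks is wrapped in a small helper returning bool. Config, readSpeed, readAcceleration and load are hypothetical names for illustration only; the member variables are the ones from the snippet above.

// Hypothetical member function wrapping one of the ~50 configuration blocks.
bool Config::readSpeed()
{
    keyString = "speed";
    int tmpVal = configuration->getValue(section, keyString, -1);
    if (tmpVal != -1)
    {
        m_speed = tmpVal;
        if (m_speed < m_minSpeed)
            m_minSpeed = m_speed;
        m_needSpeed = true;
        return true;
    }
    if (m_needSpeed)
    {
        keyErr = keyString;
        return false;   // required key missing: abort
    }
    return true;        // key absent but not required
}

// The parsing routine then short-circuits over all blocks.
bool Config::load()
{
    bool ok = readSpeed() && readAcceleration(); // ...and so on for the other blocks
    cleanup();
    return ok;
}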

Workflow of LLVM and clang

I am just a beginner in LLVM; the https://www.cs.cornell.edu/~asampson/blog/llvm.html webpage, along with Stack Overflow and my fellow researcher, has helped me a lot.
I would first like to illustrate what I am trying to work on (the problem) and then I will describe the approach that I have taken to work on the problem.
Then, I need your advice and guidance if I am missing anything.
Work Problem
My input is a C program and the output is its SSA form in prefix representation, printed to an output file.
For example, if the C code segment is:
x=4;
x++;
z=x+7;
The output SSA form in prefix representation is:
( = x0 4)
( = x1 (+ x0 1) )
( = z (+ x1 7) )
Please ignore the actual IR instructions for now; just assume that I am able to read the IR and convert it to this form, with some extra statements (which I am not presenting here for readability).
My ignorant Approach of using LLVM (Please find the complete program below)
using namespace llvm;

namespace {
struct TestPass : public ModulePass {
    IRssa::ptr ir_ssa = IRssa::ptr(new IRssa());
    static char ID;
    typedef std::list<std::pair<std::string, std::list<Instruction *> > > funcDump;
    TestPass() : ModulePass(ID) { }
    std::map<std::string, funcDump> workingList;

    bool runOnModule(Module &M) {
        std::string funcName, bkName;
        for (Function &F : M) { // Found a new Function
            if (isa<Function>(F) && !(F.getName().startswith("llvm."))) {
                funcName = F.getName();
                std::pair<std::string, std::list<Instruction *> > funcBlockList;
                std::list<std::pair<std::string, std::list<Instruction *> > > wholeFuncBlocks;
                for (BasicBlock &B : F) { // Blocks of the Function
                    if (isa<BasicBlock>(B)) {
                        bkName = B.getName();
                    }
                    std::list<Instruction *> listInst;
                    for (auto &I : B) {
                        Instruction *ins;
                        ins = &I;
                        listInst.push_back(ins);
                    }
                    funcBlockList.first = bkName;
                    funcBlockList.second = listInst;
                    wholeFuncBlocks.push_back(funcBlockList);
                }
                workingList[funcName] = wholeFuncBlocks; // Mapping of the functions
            }
        }
        ir_ssa->setFunctionDump(workingList);
        funcDump funcData;
        funcData = workingList["start_program"]; // Starting from the start_program function
        convertFunctionToSSA(funcData, ir_ssa);
        std::ofstream outFile;
        outFile.open("Out.ssa");
        printSSA_toFile(outFile, ir_ssa);
        return false;
    }
};
}

char TestPass::ID = 0;
static RegisterPass<TestPass> X("testPass", "Testing A Pass");

static void registerTestPass(const PassManagerBuilder &, legacy::PassManagerBase &PM) {
    PM.add(new TestPass());
}
static RegisterStandardPasses RegisterMyPass(PassManagerBuilder::EP_ModuleOptimizerEarly, registerTestPass);
static RegisterStandardPasses RegisterMyPass0(PassManagerBuilder::EP_EnabledOnOptLevel0, registerTestPass);
// Automatically enable the pass (http://adriansampson.net/blog/clangpass.html)
//Automatically enable the pass (http://adriansampson.net/blog/clangpass.html)
Description:
As shown above, I am calling runOnModule() and collecting all the IR instructions of all the blocks of each function in the program into a workingList data structure (a std::map in this case). After all the functions in the given program have been read, I do my actual task of processing the IR instructions one at a time, function by function and block by block, in the user-defined function convertFunctionToSSA(funcData, ir_ssa), which takes the whole function IR as an argument and returns the result of processing it in the argument ir_ssa. I then print the resulting value from ir_ssa to the output file outFile.
Now How do I Run (I type the following)
clang -O1 -g -Xclang -emit-llvm -c someProgram.c -o test.bc
opt -O1 -instnamer -mem2reg -simplifycfg -loops -lcssa -loop-simplify -loop-rotate -loop-unroll -unroll-count=15 -unroll-allow-partial -load src/libTestPass.so -testPass test.bc -o test
My Expectation
I assume (as per my understanding) that the above two commands do the following.
First clang takes the program someProgram.c and generates IR as an output file "test.bc".
The next command, opt, takes the file "test.bc", applies all the above passes one by one up to the last one, "-unroll-allow-partial", and also loads my library libTestPass.so (this .so file is generated by compiling the above ModulePass program). Finally, it runs the pass "-testPass", which I think is where I do my processing (converting to SSA prefix representation).
Your Advice and Comments
I am not sure if LLVM is actually running in the sequence I am assuming (see My Expectation above). Kindly comment if I am missing anything or if my assumption is not correct. Also, please feel free to ask for more details if necessary.
Current Problem Faced
I am able to successfully convert most C programs, but on a specific program I am stuck with an error. Debugging this error led me to think that I am missing something, or that my assumption about how LLVM works with regard to the calling order of clang and opt is not correct.
Your help is highly appreciated.

LLVM Optimization Using C++ API

I'm trying to figure out how to perform all optimizations on an LLVM Module (e.g., all -O3 optimizations). I've tried the following but I'm not sure that all possible optimizations are being applied (e.g., inlining).
//take string "llvm" (LLVM IR) and return "output_llvm" (optimized LLVM IR)
static string optimize(string llvm) {
    LLVMContext &ctx = getGlobalContext();
    SMDiagnostic err;
    Module *ir = ParseIR(MemoryBuffer::getMemBuffer(llvm), err, ctx);

    PassManager *pm = new PassManager();
    PassManagerBuilder builder;
    builder.OptLevel = 3;
    builder.populateModulePassManager(*pm);
    pm->run(*ir);
    delete pm;

    string output_llvm;
    raw_string_ostream buff(output_llvm);
    ir->print(buff, NULL);
    return output_llvm;
}
Is there anything else I can do to improve the performance of the output LLVM IR?
EDIT: I have tried to add all of the optimizations from the AddOptimizationPasses() function in opt.cpp, as shown below:
PassManager *pm = new PassManager();
int optLevel = 3;
int sizeLevel = 0;
PassManagerBuilder builder;
builder.OptLevel = optLevel;
builder.SizeLevel = sizeLevel;
builder.Inliner = createFunctionInliningPass(optLevel, sizeLevel);
builder.DisableUnitAtATime = false;
builder.DisableUnrollLoops = false;
builder.LoopVectorize = true;
builder.SLPVectorize = true;
builder.populateModulePassManager(*pm);
pm->run(*module);
Also, I create a FunctionPassManager before I create the PassManager and add several passes like so:
FunctionPassManager *fpm = new FunctionPassManager(module);
// add several passes
fpm->doInitialization();
for (Function &f : *ir)
    fpm->run(f);
fpm->doFinalization();
However, the performance is the same as running on the command line with -O1 whereas I can get much better performance on the command line using -O3. Any suggestions?
Follow the logic in the function AddOptimizationPasses in opt.cpp. This is the source of truth.
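For what it's worth, here is a condensed sketch of what that logic amounts to with the legacy pass managers, assuming the same module pointer as in the question: the PassManagerBuilder populates both a per-function and a module-level pass manager, which is roughly what opt does in AddOptimizationPasses.

PassManagerBuilder builder;
builder.OptLevel = 3;
builder.SizeLevel = 0;
builder.Inliner = createFunctionInliningPass(3, 0);
builder.LoopVectorize = true;
builder.SLPVectorize = true;

// Per-function passes, run first as opt does.
FunctionPassManager fpm(module);
builder.populateFunctionPassManager(fpm);

// Module-level passes, including the inliner configured above.
PassManager mpm;
builder.populateModulePassManager(mpm);

fpm.doInitialization();
for (Function &f : *module)
    fpm.run(f);
fpm.doFinalization();
mpm.run(*module);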
While looking into LLVM optimization I found this information on pass ordering and I think it's potentially telling for why someone might encounter this situation.
Depending on your language and the optimizations you're expecting, you may need to specifically tune your optimization passes to your use cases. In particular, the ordering of those passes may be important. For example, if your better -O3 result came from optimizing completely un-optimized code, or code that was already partially optimized by your program, it may just be that you need to re-order or duplicate some passes in order to get the expected final result.
Given the specific wording here and the fact that Eli's answer was accepted I'm not 100% sure if this is what the OP was seeing but this knowledge may be helpful for others with similar questions who find this answer like I did.

Does Go optimize out unreachable if-statements?

Go has a very unfortunate lack of built-in assertions. I want to implement them this way:
const ASSERT = true

func SomeFunction() {
    if ASSERT && !some_condition_that_should_always_be_true() {
        panic("Error message or object.")
    }
}
My question is will the if-statement be optimized out if I define const ASSERT = false?
As noted by the people in the comments to your question, it's implementation-specific.
gc does remove it. You can build your program with -gcflags '-S' and see that the ASSERT part is not in the binary.
E.g. compile the following code with -gcflags '-S', and you'll see that the code on lines 8 and 9 is included, but change Assert to be false, and they won't be there in the asm listing.
package main

const Assert = true

var cond = true

func main() {
    if Assert && !cond {
        panic("failed")
    }
}
EDIT:
As for gccgo, it removes this code at -O1 and above. You can see it by compiling the same code with
go build -compiler gccgo -gccgoflags '-O1' main.go
and then doing
objdump -S main
to see the annotated assembly.

Access pre-compiled functions within a class C++/11

Sorry if the title is misleading, I'm currently looking for solutions to the following:
I'm developing a library for other people to use. They have to follow a strict design concept and a set way of structuring any additional features within the library. They all use Linux and Vim, and as such are allowed to use terminal commands (i.e. to be able to compile etc.), and we all use clang as the compiler.
My question is this: let's suppose I write a function called "checkCode":
template<typename T>
void checkCode(T&& codeSnippet)
{
//// code
}
I want to make this function run so that whenever they type "checkCode" in a terminal, this function is called. I know that with clang they have similar functionality; however, that is understandable as you're using the whole of clang. So:
1) Is it possible to just compile a class, and then access each of the functions through the .dylib / .so file?
2) Might it be a better idea to take a copy of the clang source, add this functionality, and roll it out to those using and contributing to the library? This would be like an additional add-on to clang.
Thanks
You could use one executable and symbolic links to it, like BusyBox does:
#include <string>

using std::string;

// Library functions that should be callable by name from the shell.
void function_1();
void function_2();

int main(int argc, char **argv)
{
    // Dispatch on the name the program was invoked with (argv[0]).
    string programName = argv[0];
    size_t lastSlash = programName.find_last_of('/');
    if (lastSlash != string::npos)
        programName = programName.substr(lastSlash + 1);
    if (programName == "function_1")
    {
        function_1();
        return 0;
    }
    if (programName == "function_2")
    {
        function_2();
        return 0;
    }
    // ...
    // normal main code
    return 0;
}
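The symlinks themselves can then be created alongside the binary, e.g. ln -s mytool function_1 (where mytool is just a placeholder name for the compiled executable), so that running ./function_1 ends up calling function_1() through the dispatch above.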