I am developing a GCC plugin to process the AST in SSA form.
I register a callback that runs every time after the SSA form of a function has been built.
Here is my code:
char* get_name_node(tree node) {
  // returns a string representing the node's name
}
unsigned int execute_plugin_pass() {
  printf("%s\n", get_name_node(cfun->decl));
  return 0;
}
struct opt_pass plugin_pass =
{
  GIMPLE_PASS,            /* type */
  "plugin_pass",          /* name */
  0,                      /* gate */
  execute_plugin_pass,    /* execute */
  NULL,                   /* sub */
  NULL,                   /* next */
  0,                      /* static_pass_number */
  TV_PLUGIN_RUN,          /* tv_id */
  PROP_gimple_any,        /* properties_required */
  0,                      /* properties_provided */
  0,                      /* properties_destroyed */
  0,                      /* todo_flags_start */
  0                       /* todo_flags_finish */
};
extern "C" int
plugin_init(plugin_name_args* info, plugin_gcc_version* ver)
{
struct register_pass_info pass_info;
pass_info.reference_pass_name = where;  /* name of the reference pass (not shown here) */
pass_info.pass = &plugin_pass;
pass_info.ref_pass_instance_number = 0;
pass_info.pos_op = PASS_POS_INSERT_AFTER;
register_callback("plugin", PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
return 0;
}
But the code above does not run for methods that are defined inside the class declaration.
For example, with this code:
class A {
void method1();
void method2() {
// run some code here
}
};
void A::method1() {
// run some code here
}
My plugin only runs for method1, but does not run for method2.
At first I thought this was because method2() is treated as an inline function, so I added the -fno-inline option when running the plugin, but that doesn't help.
Can anyone help me?
Probably, the issue is where you insert your pass among the executed passes. Remember that the set and order of executed GCC passes vary with the optimization level (so they are different with -O0, -O1, -O2, ...). I don't understand where you are inserting it.
I also sometimes have the same issue of choosing where I should insert my pass. I often take a trial-and-error approach. Look into the file gcc/passes.c of the GCC source tree.
If you need the SSA form of Gimple, consider inserting after the "ssa" pass or maybe after "phiopt".
If you don't need SSA, consider inserting after "cfg".
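For example, if you want the SSA form, the registration from your question could be filled in like this (a sketch only, reusing the plugin_pass structure you already declared and using "ssa" as the reference pass name):

  struct register_pass_info pass_info;
  pass_info.pass = &plugin_pass;              /* the pass declared above */
  pass_info.reference_pass_name = "ssa";      /* insert relative to the "ssa" pass */
  pass_info.ref_pass_instance_number = 1;     /* first instance (0 would mean every instance) */
  pass_info.pos_op = PASS_POS_INSERT_AFTER;   /* run right after it */
  register_callback(info->base_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);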
Out of curiosity, could you explain what your GCC plugin is doing?
Did you consider using GCC MELT (a high-level domain-specific language for coding GCC extensions) for your work?
I have been working on a simple hobbyist OS, and I have decided to add some C++ support. Here is the simple program I wrote. When I compile it, I get this message:
cp.o: In function `caller':
test.cpp:(.text+0x3a): undefined reference to `__stack_chk_fail'
Here is the program:
class CPP {
public:
int a;
void test(void);
};
void CPP::test(void) {
// Code here
}
int caller() {
CPP caller;
caller.test();
return CPP.a;
}
Try it like this (note the corrected return caller.a and the added <iostream> include for std::cout).
#include <iostream>

class CPP {
public:
int a;
void test(void);
};
void CPP::test(void) {
CPP::a = 4;
}
int caller() {
CPP caller;
caller.test();
return caller.a;
}
int main(){
int called = caller();
std::cout << called << std::endl;
return 0;
}
It seems to me that the linker you are using can't find the library containing the security function that aborts the program upon detecting stack smashing. (It may be that the compiler doesn't include the function declaration for some reason? I am not familiar with who actually defines this specific function.) Try compiling with -fno-stack-protector or equivalent.
Which compiler are you using? A workaround might be defining the function yourself as something like exit(1); or similar. That would produce the intended effect and work around the problem for now.
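If you go the route of defining it yourself (for example in a freestanding kernel that has no libc/libssp to provide the symbols), a minimal sketch could look like the following. The symbol names are the standard ones gcc emits, but some targets use a thread-local slot instead of __stack_chk_guard, so treat this as an assumption to verify for your target; the "panic" behavior is up to your OS.

#include <stdint.h>

/* If this file is compiled as C++, wrap both definitions in extern "C". */
uintptr_t __stack_chk_guard = 0x595e9fbd94fda766;  /* any hard-to-guess value */

__attribute__((noreturn))
void __stack_chk_fail(void)
{
    /* A real kernel would log the error and halt; here we just hang. */
    for (;;) { }
}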
I created a test program to show how this actually plays out. Test program:
int main(){
int a[50]; // To have the compiler manage the stack
return 0;
}
With only -O0 as the flag ghidra decompiles this to:
undefined8 main(void){
long in_FS_OFFSET;
if (*(long *)(in_FS_OFFSET + 0x28) != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return 0;
}
With -fno-stack-protector:
undefined8 main(void){
return 0;
}
The array was thrown out by ghidra in the decompilation, but we can see that the stack protection is missing when the flag is used. There are also some mangled parts in ghidra's output (e.g. int becoming undefined8), but this is standard in decompilation.
Consequences of using the flag
Compiling without stack protection is not good per se, but it shouldn't affect you much. If you write certain code (which the compiler warns you about) you can create a program with an overflowable buffer, which should not be that big of an issue in my opinion.
Alternative
Alternatively, have a look at this. They are talking about embedded systems, but the topic seems appropriate.
Why is the code there
Look up stack smashing for the details, but I will try to explain to the best of my knowledge. When the program enters a function (main in this case), the location of the next instruction to execute is stored on the stack.
If you write an OS you probably know what the stack is, but for completeness: the stack is just some memory onto which you can push and off which you can pop data; you always pop the last pushed thing off the stack. C++ and other languages also use the stack to store local variables. The stack sits at the end of memory, and when you push something the new item ends up further forward rather than back; it fills up 'backwards'.
You can declare a buffer as a local variable, e.g. char buffer[20]. If you fill the buffer without checking its length you might overfill it and overwrite things on the stack other than the buffer. The return address of the next instruction is on the stack as well. So if we have a program like this:
int test(){
int a;
char buffer[20];
int c;
// someCode;
}
Then the stack will look something like this at someCode:
[ Unused space, c, buffer[0], buffer[1] ..., buffer[19], a, Return Address, variables of calling function ]
Now if I fill the buffer without checking its length, I can overwrite a (which is a problem, as I can modify how the program runs) or even the return address (which is a major flaw, as I might be able to execute malicious shellcode by injecting it into the buffer). To avoid this, compilers insert a 'stack cookie' between a and the return address. If that value has changed, the program should terminate before returning, and that is what __stack_chk_fail() is for. It seems that it is defined in some library as well, so you might not be able to use it here, even though technically it is the compiler that inserts the call.
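To make that concrete, here is a tiny illustration (my own example, not your code) of the kind of function the cookie protects. Compiled with the stack protector enabled, a sufficiently long input overwrites the cookie and __stack_chk_fail() is called when the function returns:

#include <string.h>

void vulnerable(const char *input)
{
    char buffer[20];
    /* No length check: anything longer than the buffer spills onto the
       rest of the stack frame, including the stack cookie. */
    strcpy(buffer, input);
}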
Is it possible to write some f() template function that takes a type T and a pointer to member function of signature void(T::*pmf)() as (template and/or function) arguments and returns a const char* that points to the member function's __func__ variable (or to the mangled function name)?
EDIT: I am asked to explain my use-case. I am trying to write a unit-test library (I know there is a Boost Test library for this purpose). And my aim is not to use any macros at all:
struct my_test_case : public unit_test::test {
void some_test()
{
assert_test(false, "test failed.");
}
};
My test suite runner will call my_test_case::some_test() and if its assertion fails, I want it to log:
ASSERTION FAILED (&my_test_case::some_test()): test failed.
I can use <typeinfo> to get the name of the class but the pointer-to-member-function is just an offset, which gives no clue to the user about the test function being called.
It seems like what you are trying to achieve is to get the name of the calling function in assert_test(). With gcc you can use backtrace to do that. Here is a naive example:
#include <iostream>
#include <string>
#include <cstdlib>
#include <execinfo.h>
#include <cxxabi.h>
namespace unit_test
{
struct test {};
}
std::string get_my_caller()
{
std::string caller("???");
void *bt[3]; // backtrace
char **bts; // backtrace symbols
size_t size = sizeof(bt)/sizeof(*bt);
int ret = -4;
/* get backtrace symbols */
size = backtrace(bt, size);
bts = backtrace_symbols(bt, size);
if (size >= 3) {
caller = bts[2];
/* demangle function name*/
char *name;
size_t pos = caller.find('(') + 1;
size_t len = caller.find('+') - pos;
name = abi::__cxa_demangle(caller.substr(pos, len).c_str(), NULL, NULL, &ret);
if (ret == 0)
caller = name;
free(name);
}
free(bts);
return caller;
}
void assert_test(bool expression, const std::string& message)
{
if (!expression)
std::cout << "ASSERTION FAILED " << get_my_caller() << ": " << message << std::endl;
}
struct my_test_case : public unit_test::test
{
void some_test()
{
assert_test(false, "test failed.");
}
};
int main()
{
my_test_case tc;
tc.some_test();
return 0;
}
Compiled with:
g++ -std=c++11 -rdynamic main.cpp -o main
Output:
ASSERTION FAILED my_test_case::some_test(): test failed.
Note: This is a gcc (linux, ...) solution, which might be difficult to port to other platforms!
TL;DR: It is not possible to do this in a reasonably portable way, other than using macros. Using debug symbols is really a hard solution, which will introduce a maintenance and architecture problem in the future, and a bad solution.
The name of a function, in any form, is not guaranteed to be stored in the binary [or anywhere else for that matter]. Static free functions certainly don't have to expose their name to the rest of the world, and there is no real need for virtual member functions to have their names exposed either (except when the vtable is formed in A.c and the member function is in B.c).
It is also entirely permissible for the linker to remove ALL names of functions and variables. Names MAY be used by shared libraries to find functions not present in the binary, but the "ordinal" way can avoid that too, if the system is using that method.
I can't see any other solution than making assert_test a macro - and this is actually a GOOD use-case for macros. [Well, you could of course pass __func__ as an argument, but that's certainly NOT better than using macros in this limited case.]
Something like:
#define assert_test(x, y) do_assert_test(x, y, __func__)
and then implement do_assert_test to do what your original assert_test would do [less the impossible bit of figuring out the name of the function].
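For completeness, a minimal sketch of that macro approach might look like this (do_assert_test and the output format are my assumptions, not part of your code; note that __func__ yields the unqualified name, while gcc's __PRETTY_FUNCTION__ would give the qualified one):

#include <iostream>
#include <string>

void do_assert_test(bool expression, const std::string& message, const char* func)
{
    if (!expression)
        std::cout << "ASSERTION FAILED (" << func << "): " << message << std::endl;
}

#define assert_test(x, y) do_assert_test((x), (y), __func__)

struct my_test_case {
    void some_test()
    {
        assert_test(false, "test failed.");  // logs "some_test" as the caller
    }
};

int main()
{
    my_test_case tc;
    tc.some_test();
    return 0;
}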
If it's unit tests, and you can be sure that you will always run them with debug symbols, you could solve it in a very non-portable way by building with debug symbols and then using the debug interface to find the name of the function you are currently in. The reason I say it's non-portable is that the debug API for a given OS is not standard - Windows does it one way, Linux another, and I'm not sure how it works on MacOS - and to make matters worse, my quick search on the subject seems to indicate that reading debug symbols doesn't have an API as such: there is a debug API that allows you to inspect the current process and figure out where you are, what the registers contain, etc., but not to find out the name of the function. So that's definitely a harder solution than "convince whoever needs to be convinced that this is a valid use of a macro".
Using policy-based design, an EncapsulatedAlgorithm:
template< typename Policy>
class EncapsulatedAlgorithm : public Policy
{
double x = 0;
public:
using Policy::subCalculate;
void calculate()
{
Policy::subCalculate(x);
}
protected:
~EncapsulatedAlgorithm() = default;
};
may have a policy Policy that performs a sub-calculation. The sub-calculation is not necessary for the algorithm: it can be used in some cases to speed up algorithm convergence. So, to model that, let's say there are three policies.
One that just "logs" something:
struct log
{
static void subCalculate(double& x)
{
std::cout << "Doing the calculation" << std::endl;
}
};
one that calculates:
struct calculate
{
static void subCalculate(double& x)
{
x = x * x;
}
};
and one to bring them all and in the darkness bind them :D - that does absolutely nothing:
struct doNothing
{
static void subCalculate(double& x)
{
// Do nothing.
}
};
Here is the example program:
typedef EncapsulatedAlgorithm<doNothing> nothingDone;
typedef EncapsulatedAlgorithm<calculate> calculationDone;
typedef EncapsulatedAlgorithm<loggedCalculation> calculationLogged;
int main(int argc, const char *argv[])
{
nothingDone n;
n.calculate();
calculationDone c;
c.calculate();
calculationLogged l;
l.calculate();
return 0;
}
And here is the live example. I tried examining the assembly code produced by gcc with the optimization turned on:
g++ -S -O3 -std=c++11 main.cpp
but I do not know enough about Assembly to interpret the result with certainty - the resulting file was tiny and I was unable to recognize the function calls, because the code of the static functions of all policies was inlined.
What I could see is that, when no optimization is set, within the main function there is a call and a subsequent leave related to doNothing::subCalculate:
call _ZN9doNothing12subCalculateERd
leave
Here are my questions:
Where do I start to learn in order to be able to read what g++ -S spews out?
Is the empty function optimized away or not and where in main.s are those lines?
Is this design O.K.? Usually, implementing a function that does nothing is a bad thing, as the interface is saying something completely different (subCalculate instead of doNothing), but in the case of policies, the policy name clearly states that the function will not do anything. Otherwise I need to do type traits stuff like enable_if, etc, just to exclude a single function call.
I went to http://assembly.ynh.io/, which shows assembly output. I entered:
template< typename Policy>
struct EncapsulatedAlgorithm : public Policy
{
void calculate(double& x)
{
Policy::subCalculate(x);
}
};
struct doNothing
{
static void subCalculate(double& x)
{
}
};
void func(double& x) {
EncapsulatedAlgorithm<doNothing> a;
a.calculate(x);
}
and got these results:
.Ltext0:
.globl _Z4funcRd
_Z4funcRd:
.LFB2:
.cfi_startproc #void func(double& x) {
.LVL0:
0000 F3 rep #not sure what this is
0001 C3 ret #}
.cfi_endproc
.LFE2:
.Letext0:
Well, I only see two opcodes in the assembly there. rep (no idea what that is) and end function. It appears that the G++ compiler can easily optimize out the function bodies.
Where do I start to learn in order to be able to read what g++ -S spews out?
This site's not for recommending reading material. Google "x86 assembly language".
Is the empty function optimized away or not and where in main.s are those lines?
It will have been when the optimiser was enabled, so there won't be any lines in the generated .S. You've already found the call in the unoptimised output....
In fact, even the policy that's meant to do a multiplication may be removed, as the compiler should be able to work out that you're not using the resulting value. Add code to print the value of x, and seed x from some value that can't be known at compile time (it's often convenient to use argc in a little experimental program like this); then you'll be forcing the compiler to at least leave in the functionally significant code.
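As a sketch of that experiment (the cut-down EncapsulatedAlgorithm here is my own simplification, not your exact class), something like this keeps the calculation observable:

#include <iostream>

struct calculate
{
    static void subCalculate(double& x) { x = x * x; }
};

template <typename Policy>
struct EncapsulatedAlgorithm : public Policy
{
    void calculate(double& x) { Policy::subCalculate(x); }
};

int main(int argc, const char* argv[])
{
    double x = static_cast<double>(argc);   // not known at compile time
    EncapsulatedAlgorithm<calculate> algo;
    algo.calculate(x);
    std::cout << x << '\n';                 // force the result to be used
    return 0;
}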
Is this design O.K.?
That depends on a lot of things (like whether you want to use templates given the implementation needs to be exposed in the header file, whether you want to deal with having distinct types for every instantiation...), but you're implementing the design correctly.
Usually, implementing a function that does nothing is a bad thing, as the interface is saying something completely different (subCalculate instead of doNothing), but in the case of policies, the policy name clearly states that the function will not do anything. Otherwise I need to do type traits stuff like enable_if, etc, just to exclude a single function call.
You may want to carefully consider your function names... do_any_necessary_calculations(), ensure_exclusivity() instead of lock_mutex(), after_each_value() instead of print_breaks, etc.
I am having a dynamic_cast fail with g++ (Red Hat 5.5, gcc version 3.4.6) that works fine with the Visual Studio 2003, 2005, and 2010 compilers on Windows. To explain what I am seeing I will try to quickly break the problem down. We have a process that loads numerous "plugins" (dynamically linked libraries) from a directory. The process is supposed to compare different rules and data types and return an answer. To keep the process modular we made it understand a "BaseDataType" but not the actual specific types (so we could keep the process generic). The flow of the program goes something like this:
All our "SpecifcObject" types inherit from "BaseDataType" like this
class SpecificObject : public virtual BaseDataType {
... Class Items ...
}
This is what the code from the process looks like:
// Receive raw data
void receive_data(void *buff, int size,DataTypeEnum type)
{
// Get the plugin associated with this data
ProcessPlugin *plugin = m_plugins[type];
// Since we need to cast the data properly into its actual type and not its
// base data type we need the plugin to cast it for us (so we can keep the
// process generic)
BaseDataType *data = plugin->getDataObject(buff);
if(data)
{
// Cast worked just fine
.... Other things happen (but object isn't modified) ....
// Now compare our rule
RuleObject obj = getRule();
ResultObject *result = plugin->CompareData(obj,data);
if(result)
... Success Case ...
else
... Error Case ...
}
}
Now this is (generically) what a plugin looks like:
BaseDataType* ProcessPluginOne::getDataObject(unsigned char *buff)
{
// SpecificObject inherits from BaseDataType using a "virtual" inheritance
SpecificObject *obj = reinterpret_cast<SpecificObject*>(buff);
if(obj)
return (BaseDataType*)obj;
else
return NULL;
}
ResultObject* ProcessPluginOne::CompareData(RuleObject rule, BaseDataType *data)
{
ResultObject *result = NULL;
// This method checks out fine
if(data->GetSomeBaseMethod())
{
// This cast below FAILS every time in gcc but passes in Visual Studio
SpecificObject *obj = dynamic_cast<SpecificObject*>(data);
if(obj)
{
... Do Something ...
}
}
return result;
}
Again all of this works under Visual Studio but not under GCC. To debug the program I started adding some code to different sections. I finally got it to work once I did the following in the main process (see added code below):
// In process with Modification
void receive_data(void *buff, int size,DataTypeEnum type)
{
// Get the plugin associated with this data
ProcessPlugin *plugin = m_plugins[type];
// Since we need to cast the data properly into its actual type and not its
// base data type we need the plugin to cast it for us (so we can keep the
// process generic)
BaseDataType *data = plugin->getDataObject(buff);
if(data)
{
// Cast worked just fine
.... Other things happen (but object isn't modified) ....
// Now compare our rule
RuleObject obj = getRule();
/** I included the specific data types in as headers for debugging and linked in
* all the specific classes and added the following code
*/
SpecificObject *test_obj = dynamic_cast<SpecificObject*>(data);
if(test_obj)
cout << "Our data was cast correctly!" << endl;
/// THE CODE ABOVE FIXES THE BAD CAST IN MY PLUGIN EVEN THOUGH
/// THE CODE ABOVE IS ALL I DO
ResultObject *result = plugin->CompareData(obj,data);
if(result)
... Success Case ...
else
... Error Case ...
}
}
Significant Process Compile Options:
Compile: -m64 -fPIC -wno-non-template-friend -DNDEGBUG -I <Includes>
Link: -Wl -z muldefs -m64
Significant Plugin Compile Options
Compile: -Wall -wno-non-template-friend -O -O2
Link: -Wl -Bstatic -Bdynamic -z muldefs -shared -m64
Since I am not modifying the object "data" I have no idea why the rest of the program would suddenly start working. The only thing I can think of is that the virtual table is getting stripped off somewhere in the process and the "extra" dynamic cast forces the main process to keep the table (which still doesn't make a lot of sense).
I have tried taking out all optimization settings in gcc and its still the same. Any thoughts as to what is going on here?
Thanks.
The two most likely scenarios are that BaseDataType is not a polymorphic type, or that the compiler doesn't see the relationship between BaseDataType and SpecificObject at some point in the code (for example, in getDataObject the compiler might generate different code depending on its knowledge of the parent-child relationship, since you use a C-style cast from child to parent). This is super easy to check: change the C-style cast to static_cast. If it fails to compile, you're missing a critical include.
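For instance (a sketch only, based on the getDataObject you posted), the check would amount to:

BaseDataType* ProcessPluginOne::getDataObject(unsigned char *buff)
{
    SpecificObject *obj = reinterpret_cast<SpecificObject*>(buff);
    if (obj)
        return static_cast<BaseDataType*>(obj);  /* compiles only if the full
                                                    class definitions are visible
                                                    in this translation unit */
    return NULL;
}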
Let's say you have a function in C/C++ that behaves a certain way the first time it runs, and then behaves another way on all subsequent runs (see below for an example). After it runs the first time, the if statement becomes redundant and could be optimized away if speed is important. Is there any way to make this optimization?
bool val = true;
void function1() {
if (val == true) {
// do something
val = false;
}
else {
// do other stuff, val is never set to true again
}
}
gcc has a builtin function that lets you inform the implementation about branch prediction:
__builtin_expect
http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
For example in your case:
bool val = true;
void function1()
{
if (__builtin_expect(val, 0)) {
// do something
val = false;
}
else {
// do other stuff, val is never set to true again
}
}
You should only make the change if you're certain that it truly is a bottleneck. With branch-prediction, the if statement is probably instant, since it's a very predictable pattern.
That said, you can use callbacks:
#include <iostream>
using namespace std;
typedef void (*FunPtr) (void);
FunPtr method;
void subsequentRun()
{
std::cout << "subsequent call" << std::endl;
}
void firstRun()
{
std::cout << "first run" << std::endl;
method = subsequentRun;
}
int main()
{
method = firstRun;
method();
method();
method();
}
produces the output:
first run
subsequent call
subsequent call
You could use a function pointer, but then it will require an indirect call in any case:
void firstCall(void);
void otherCalls(void);

void (*yourFunction)(void) = &firstCall;

void firstCall(void) {
    /* ... first-call work ... */
    yourFunction = &otherCalls;
}

void otherCalls(void) {
    /* ... subsequent-call work ... */
}

int main(void)
{
    yourFunction();
    return 0;
}
One possible method is to compile two different versions of the function (this can be done from a single function in the source with templates), and use a function pointer or object to decide at runtime. However, the pointer overhead will likely outweigh any potential gains unless your function is really expensive.
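A rough sketch of that idea, under the assumption that an indirect call is acceptable (the names function1Impl and function1 are mine, for illustration only):

#include <iostream>

bool val = true;

// Forward declaration of the dispatch pointer so the template body can update it.
extern void (*function1)();

template <bool FirstTime>
void function1Impl()
{
    if (FirstTime) {
        std::cout << "first run\n";         // the one-time work
        val = false;
        function1 = &function1Impl<false>;  // switch to the other version
    } else {
        std::cout << "do other stuff\n";    // the steady-state work
    }
}

void (*function1)() = &function1Impl<true>;

int main()
{
    function1();
    function1();
    function1();
    return 0;
}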
You could use a static member variable instead of a global variable.
Or, if the code you run the first time changes something for all future uses (e.g., opening a file), you could use that change as the check that decides whether or not to run the code (i.e., check if the file is open). This would save you the extra variable. It might also help with error checking: if for some reason the initial change is undone by another operation (e.g., the file is on removable media that was removed improperly), your check could try to redo the change.
A compiler can only optimize what is known at compile time.
In your case, the value of val is only known at runtime, so it can't be optimized.
The if test is very quick, you shouldn't worry about optimizing it.
If you'd like to make the code a little bit cleaner you could make the variable local to the function using static:
void function() {
static bool firstRun = true;
if (firstRun) {
firstRun = false;
...
}
else {
...
}
}
On entering the function for the first time, firstRun will be true. Because it is static, it persists between calls, so each time the function is called the same firstRun instance is used, and it will be false on every subsequent call.
This could be used well with #ouah's solution.
Compilers like g++ (and I'm sure msvc) support generating profile data upon a first run, then using that data to better guess what branches are most likely to be followed, and optimizing accordingly. If you're using gcc, look at the -fprofile-generate option.
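For reference, a typical workflow with those options looks roughly like this (the file names are just placeholders):
g++ -O2 -fprofile-generate main.cpp -o main    # build an instrumented binary
./main                                         # run it to collect profile data (.gcda files)
g++ -O2 -fprofile-use main.cpp -o main         # rebuild using the collected profile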
The expected behavior is that the compiler will optimize that if statement such that the else branch is laid out first, thus avoiding the jmp on all your subsequent calls and making it pretty much as fast as if it weren't there, especially if you return somewhere in that else (thus avoiding having to jump past the if block).
One way to make this optimization is to split the function in two. Instead of:
void function1()
{
if (val == true) {
// do something
val = false;
} else {
// do other stuff
}
}
Do this:
void function1()
{
// do something
}
void function2()
{
// do other stuff
}
One thing you can do is put the logic into the constructor of an object, which is then defined static. If such a static object occurs at block scope, the constructor runs the first time execution reaches that scope. The once-only check is emitted by the compiler.
You can also put static objects at file scope, and then they are initialized before main is called.
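A small sketch of what that looks like (the class and function names are just for illustration):

#include <iostream>

struct FirstTimeInit
{
    FirstTimeInit() { std::cout << "do the first-time work here\n"; }
};

void function1()
{
    static FirstTimeInit once;   // constructor runs only on the first call
    std::cout << "do the every-time work here\n";
}

int main()
{
    function1();
    function1();
    return 0;
}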
I'm giving this answer because perhaps you're not making effective use of C++ classes.
(Regarding C/C++, there is no such language. There is C and there is C++. Are you working in C that has to also compile as C++ (sometimes called, unofficially, "Clean C"), or are you really working in C++?)
What is "Clean C" and how does it differ from standard C?
To remain compiler INDEPENDENT, you can put the body of the if() in one function and the body of the else{} in another. Almost all compilers optimize the if()/else{} layout, and since the else{} branch is the most LIKELY one here, put the occasional (first-run) code in the if() and the rest in a separate function that is called from the else.