Is it possible to create a function dynamically, during runtime in C++? - c++

C++ is a static, compiled language; templates are resolved at compile time, and so on.
But is it possible to create a function at runtime, one that is not described in the source code and has not been converted to machine language during compilation, so that a user can throw data at it that was not anticipated in the source?
I am aware this cannot happen in a straightforward way, but surely it must be possible; there are plenty of programming languages that are not compiled, that create this sort of thing dynamically, and that are implemented in either C or C++.
Maybe if factories for all primitive types are created, along with suitable data structures to organize them into more complex objects such as user types and functions, this is achievable?
Any info on the subject as well as pointers to online materials are welcome. Thanks!
EDIT: I am aware it is possible, it is more like I am interested in implementation details :)

Yes, of course: without any of the tools mentioned in the other answers, simply by using the C++ compiler.
Just follow these steps from within your C++ program (on Linux, but it should be similar on other OSes); a minimal sketch follows the list:
write a C++ program into a file (e.g. in /tmp/prog.cc), using an ofstream
compile the program via system("c++ /tmp/prog.cc -o /tmp/prog.so -shared -fPIC");
load the program dynamically, e.g. using dlopen()
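A minimal sketch of those three steps might look like this (just a sketch: no error handling, hard-coded example file names, and the host program has to be linked with -ldl):
#include <cstdlib>  // std::system()
#include <fstream>  // std::ofstream
#include <iostream>
#include <dlfcn.h>  // dlopen(), dlsym(), dlclose()

int main()
{
    // 1. write the code to be compiled at runtime
    {
        std::ofstream out("/tmp/prog.cc");
        out << "extern \"C\" double f(double x) { return x * x; }\n";
    } // file is closed and flushed here
    // 2. compile it into a shared library
    std::system("c++ /tmp/prog.cc -o /tmp/prog.so -shared -fPIC");
    // 3. load the library and look up the symbol
    void* lib = dlopen("/tmp/prog.so", RTLD_LAZY);
    auto f = reinterpret_cast<double (*)(double)>(dlsym(lib, "f"));
    std::cout << "f(3) = " << f(3.0) << std::endl; // prints f(3) = 9
    dlclose(lib);
    return 0;
}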

You can also put raw machine-code bytes into a buffer and call them by casting the buffer's address to the desired function pointer type, e.g.
unsigned char func[3] = { 0x90, 0x0f, 0x01 };
reinterpret_cast<void (*)()>(func)();
Note that this only works if the buffer lives in executable memory (see the mmap-based answer further down); ordinary stack or data memory is usually marked non-executable, so as written this will crash on most systems.

Yes, JIT compilers do it all the time. They allocate a piece of memory that has been given special execution rights by the OS, then fill it with code and cast the pointer to a function pointer and execute it. Pretty simple.
EDIT: Here's an example on how to do it in Linux: http://burnttoys.blogspot.de/2011/04/how-to-allocate-executable-memory-on.html

Below is an example of C++ runtime compilation based on the method mentioned before (write the code to an output file, compile it via system(), load it via dlopen() and dlsym()). See also the example in a related question. The difference here is that it dynamically compiles a class rather than a function. This is achieved by adding a C-style maker() function to the code to be compiled dynamically. References:
https://www.linuxjournal.com/article/3687
http://www.tldp.org/HOWTO/C++-dlopen/thesolution.html
The example only works under Linux (Windows has LoadLibrary and GetProcAddress functions instead), and requires the identical compiler to be available on the target machine.
baseclass.h
#ifndef BASECLASS_H
#define BASECLASS_H
class A
{
protected:
double m_input; // or use a pointer to a larger input object
public:
virtual double f(double x) const = 0;
void init(double input) { m_input=input; }
virtual ~A() {};
};
#endif /* BASECLASS_H */
main.cpp
#include "baseclass.h"
#include <cstdlib> // EXIT_FAILURE, etc
#include <string>
#include <iostream>
#include <fstream>
#include <dlfcn.h> // dynamic library loading, dlopen() etc
#include <sys/wait.h> // WEXITSTATUS()
#include <memory> // std::shared_ptr
// compile code, instantiate class and return pointer to base class
// https://www.linuxjournal.com/article/3687
// http://www.tldp.org/HOWTO/C++-dlopen/thesolution.html
// https://stackoverflow.com/questions/11016078/
// https://stackoverflow.com/questions/10564670/
std::shared_ptr<A> compile(const std::string& code)
{
// temporary cpp/library output files
std::string outpath="/tmp";
std::string headerfile="baseclass.h";
std::string cppfile=outpath+"/runtimecode.cpp";
std::string libfile=outpath+"/runtimecode.so";
std::string logfile=outpath+"/runtimecode.log";
std::ofstream out(cppfile.c_str(), std::ofstream::out);
// copy required header file to outpath
std::string cp_cmd="cp " + headerfile + " " + outpath;
system(cp_cmd.c_str());
// add necessary header to the code
std::string newcode = "#include \"" + headerfile + "\"\n\n"
+ code + "\n\n"
"extern \"C\" {\n"
"A* maker()\n"
"{\n"
" return (A*) new B(); \n"
"}\n"
"} // extern C\n";
// output code to file
if(!out) { // a failed open only sets failbit, so check the whole stream state
std::cout << "cannot open " << cppfile << std::endl;
exit(EXIT_FAILURE);
}
out << newcode;
out.flush();
out.close();
// compile the code
std::string cmd = "g++ -Wall -Wextra " + cppfile + " -o " + libfile
+ " -O2 -shared -fPIC &> " + logfile;
int ret = system(cmd.c_str());
if(WEXITSTATUS(ret) != EXIT_SUCCESS) {
std::cout << "compilation failed, see " << logfile << std::endl;
exit(EXIT_FAILURE);
}
// load dynamic library
void* dynlib = dlopen (libfile.c_str(), RTLD_LAZY);
if(!dynlib) {
std::cerr << "error loading library:\n" << dlerror() << std::endl;
exit(EXIT_FAILURE);
}
// loading symbol from library and assign to pointer
// (to be cast to function pointer later)
void* create = dlsym(dynlib, "maker");
const char* dlsym_error=dlerror();
if(dlsym_error != NULL) {
std::cerr << "error loading symbol:\n" << dlsym_error << std::endl;
exit(EXIT_FAILURE);
}
// execute "create" function
// (casting to function pointer first)
// https://stackoverflow.com/questions/8245880/
A* a = reinterpret_cast<A*(*)()> (create)();
// cannot close dynamic lib here, because all functions of the class
// object will still refer to the library code
// dlclose(dynlib);
return std::shared_ptr<A>(a);
}
int main(int argc, char** argv)
{
double input=2.0;
double x=5.1;
// code to be compiled at run-time
// class needs to be called B and derived from A
std::string code = "class B : public A {\n"
" double f(double x) const \n"
" {\n"
" return m_input*x;\n"
" }\n"
"};";
std::cout << "compiling.." << std::endl;
std::shared_ptr<A> a = compile(code);
a->init(input);
std::cout << "f(" << x << ") = " << a->f(x) << std::endl;
return EXIT_SUCCESS;
}
output
$ g++ -Wall -std=c++11 -O2 -c main.cpp -o main.o # c++11 required for std::shared_ptr
$ g++ main.o -o main -ldl
$ ./main
compiling..
f(5.1) = 10.2

Have a look at libtcc; it is simple, fast, and reliable, and it suits your needs. I use it whenever I need to compile C functions "on the fly".
In the archive, you will find the file examples/libtcc_test.c, which can give you a good head start.
This little tutorial might also help you: http://blog.mister-muffin.de/2011/10/22/discovering-tcc/
#include <stdlib.h>
#include <stdio.h>
#include "libtcc.h"
int add(int a, int b) { return a + b; }
char my_program[] =
"int fib(int n) {\n"
" if (n <= 2) return 1;\n"
" else return fib(n-1) + fib(n-2);\n"
"}\n"
"int foobar(int n) {\n"
" printf(\"fib(%d) = %d\\n\", n, fib(n));\n"
" printf(\"add(%d, %d) = %d\\n\", n, 2 * n, add(n, 2 * n));\n"
" return 1337;\n"
"}\n";
int main(int argc, char **argv)
{
TCCState *s;
int (*foobar_func)(int);
void *mem;
s = tcc_new();
tcc_set_output_type(s, TCC_OUTPUT_MEMORY);
tcc_compile_string(s, my_program);
tcc_add_symbol(s, "add", add);
mem = malloc(tcc_relocate(s, NULL));
tcc_relocate(s, mem);
foobar_func = tcc_get_symbol(s, "foobar");
tcc_delete(s);
printf("foobar returned: %d\n", foobar_func(32));
free(mem);
return 0;
}
Ask questions in the comments if you meet any problems using the library!

In addition to simply using an embedded scripting language (Lua is great for embedding) or writing your own compiler for C++ to use at runtime, if you really want to use C++ you can just use an existing compiler.
For example, Clang is a C++ compiler built as a set of libraries that can easily be embedded in another program. It was designed to be used from programs like IDEs that need to analyze and manipulate C++ source in various ways, but using the LLVM compiler infrastructure as a backend it also has the ability to generate code at runtime and hand you a function pointer that you can call to run the generated code.
Clang
LLVM

Essentially you will need to write a C++ compiler within your program (not a trivial task), and do the same thing JIT compilers do to run the code. You were actually 90% of the way there with this paragraph:
I am aware this cannot happen in a straightforward way, but surely it
must be possible, there are plenty of programing languages that are
not compiled and create that sort of stuff dynamically that are
implemented in either C or C++.
Exactly: those programs carry the interpreter with them. You run a Python program by saying python MyProgram.py; python itself is compiled C code that has the ability to interpret and run your program on the fly. You would need to do something along those lines, but using a C++ compiler.
If you need dynamic functions that badly, use a different language :)

A typical approach for this is to combine a C++ project (or whatever it's written in) with a scripting language.
Lua is one of the top favorites, since it's well documented, small, and has bindings for a lot of languages.
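To give a feel for what that looks like, here is a minimal sketch of embedding the Lua interpreter through its C API (assuming the Lua development headers are installed; link with -llua, exact flags depend on your distribution):
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <iostream>

int main()
{
    lua_State* L = luaL_newstate(); // create an interpreter instance
    luaL_openlibs(L);               // load Lua's standard libraries
    // define a function at runtime, e.g. from a string the user typed in
    luaL_dostring(L, "function f(x) return x * x end");
    // call it from C++: push the function and its argument, then invoke
    lua_getglobal(L, "f");
    lua_pushnumber(L, 5.0);
    lua_pcall(L, 1, 1, 0);          // 1 argument, 1 result
    std::cout << "f(5) = " << lua_tonumber(L, -1) << std::endl;
    lua_close(L);
    return 0;
}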
But if you are not looking in that direction, perhaps you could think of making use of dynamic libraries?

Yes: you can write a compiler for C++, in C++, with some extra features, so that you can write your own functions and have them compiled and run automatically (or not)...

Have a look at Expression Trees in .NET - I think this is basically what you want to achieve. Create a tree of subexpressions and then evaluate them. In an object-oriented fashion, each node in the tree might know how to evaluate itself, by recursion into its subnodes, as sketched below. Your visual language would then create this tree and you can write a simple interpreter to execute it.
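A rough C++ sketch of that idea (hypothetical node classes, just to illustrate the recursive evaluation):
#include <iostream>
#include <memory>

// every subexpression knows how to evaluate itself
struct Expr {
    virtual double eval() const = 0;
    virtual ~Expr() = default;
};

struct Constant : Expr {
    double value;
    explicit Constant(double v) : value(v) {}
    double eval() const override { return value; }
};

struct Add : Expr {
    std::unique_ptr<Expr> lhs, rhs;
    Add(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
        : lhs(std::move(l)), rhs(std::move(r)) {}
    double eval() const override { return lhs->eval() + rhs->eval(); } // recursion into the subnodes
};

int main()
{
    // tree for the expression (1 + 2) + 3, built at runtime
    auto tree = std::make_unique<Add>(
        std::make_unique<Add>(std::make_unique<Constant>(1), std::make_unique<Constant>(2)),
        std::make_unique<Constant>(3));
    std::cout << tree->eval() << std::endl; // prints 6
    return 0;
}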
Also, check out Ptolemy II, as an example in Java on how such a visual programming language can be written.

You could take a look at Runtime Compiled C++ (or see RCC++ blog and videos), or perhaps try one of its alternatives.

Expanding on Jay's answer using opcodes, the below works on Linux.
Learn opcodes from your compiler:
write your own myfunc.cpp, e.g.
double f(double x) { return x*x; }
compile with
$ g++ -O2 -c myfunc.cpp
disassemble function f
$ gdb -batch -ex "file ./myfunc.o" -ex "set disassembly-flavor intel" -ex "disassemble/rs f"
Dump of assembler code for function _Z1fd:
0x0000000000000000 <+0>: f2 0f 59 c0 mulsd xmm0,xmm0
0x0000000000000004 <+4>: c3 ret
End of assembler dump.
This means that the function x*x in assembly is mulsd xmm0,xmm0; ret, and in machine code f2 0f 59 c0 c3.
Write your own function in machine code:
opcode.cpp
#include <cstdlib> // EXIT_FAILURE etc
#include <cstdio> // printf(), fopen() etc
#include <cstring> // memcpy()
#include <sys/mman.h> // mmap()
// allocate memory and fill it with machine code instructions
// returns pointer to memory location and length in bytes
void* gencode(size_t& length)
{
// machine code
unsigned char opcode[] = {
0xf2, 0x0f, 0x59, 0xc0, // mulsd xmm0,xmm0
0xc3 // ret
};
// allocate memory which allows code execution
// https://en.wikipedia.org/wiki/NX_bit
void* buf = mmap(NULL,sizeof(opcode),PROT_READ|PROT_WRITE|PROT_EXEC,
MAP_PRIVATE|MAP_ANON,-1,0);
// copy machine code to executable memory location
memcpy(buf, opcode, sizeof(opcode));
// return: pointer to memory location with executable code
length = sizeof(opcode);
return buf;
}
// print the disassemby of buf
void print_asm(const void* buf, size_t length)
{
FILE* fp = fopen("/tmp/opcode.bin", "w");
if(fp!=NULL) {
fwrite(buf, length, 1, fp);
fclose(fp);
}
system("objdump -D -M intel -b binary -mi386 /tmp/opcode.bin");
}
int main(int, char**)
{
// generate machine code and point myfunc() to it
size_t length;
void* code=gencode(length);
double (*myfunc)(double); // function pointer
myfunc = reinterpret_cast<double(*)(double)>(code);
double x=1.5;
printf("f(%f)=%f\n", x,myfunc(x));
print_asm(code,length); // for debugging
return EXIT_SUCCESS;
}
compile and run
$ g++ -O2 opcode.cpp -o opcode
$ ./opcode
f(1.500000)=2.250000
/tmp/opcode.bin: file format binary
Disassembly of section .data:
00000000 <.data>:
0: f2 0f 59 c0 mulsd xmm0,xmm0
4: c3 ret

The simplest solution available, if you're not looking for performance, is to embed a scripting language interpreter, e.g. for Lua or Python.
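For example, embedding the CPython interpreter takes only a few lines (a minimal sketch; compile and link with the flags reported by python3-config):
#include <Python.h>

int main()
{
    Py_Initialize();
    // run code supplied at runtime, e.g. typed in by the user
    PyRun_SimpleString("def f(x):\n"
                       "    return x * x\n"
                       "print('f(5) =', f(5))\n");
    Py_Finalize();
    return 0;
}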

It worked for me like this; you have to use the -fpermissive flag, because the code below assigns a void* to a function pointer, which g++ only accepts with that flag. Note that, as the other answers point out, the bytes need to live in executable memory, so on systems with a non-executable stack this will typically crash at runtime.
I am using CodeBlocks 17.12.
#include <cstddef>
using namespace std;
int main()
{
char func[] = {'\x90', '\x0f', '\x1'};
void (*func2)() = reinterpret_cast<void*>(&func);
func2();
return 0;
}

Related

system() function not called from LD_PRELOAD'ed library

I am trying to use LD_PRELOAD on linux to wrap calls to system function to add some preprocessing to the argument. Here's my system.cpp:
#define _GNU_SOURCE
#include <dlfcn.h>
#include <string>
#include <iostream>
typedef int (*orig_system_type)(const char *command);
int system(const char *command)
{
std::string new_cmd = std::string("set -f;") + command;
// next line is for debuggin only
std::cout << new_cmd << std::endl;
orig_system_type orig_system;
orig_system = (orig_system_type)dlsym(RTLD_NEXT,"system");
return orig_system(new_cmd.c_str());
}
I build it with
g++ -shared -fPIC -ldl -o libsystem.so system.cpp
which produces the .so object. I then run my program with
$ LD_PRELOAD=/path/to/libsystem.so ./myprogram
I do not get any errors - but seemingly my system function is not called. Running with LD_DEBUG=libs, I can see that my .so is being loaded; however, my system function is not being called and the one from the standard library is called instead.
What do I need to change in code/build to get it to work?
You need
extern "C" int system ...
because it is called by a C function. The C++ version has its name mangled so it is not recognizable.
You might also want to consider saving the "orig_system" pointer so that you avoid calling dlsym every time. You can do this in a constructor/init function, so you would have something like
extern "C" {
typedef int (*orig_system_type)(const char *command);
static orig_system_type orig_system;
static void myInit() __attribute__((constructor));
void myInit()
{
orig_system = (orig_system_type)dlsym(RTLD_NEXT,"system");
}
int system(const char *command)
{
std::string new_cmd = std::string("set -f;") + command;
// next line is for debuggin only
std::cout << new_cmd << std::endl;
return orig_system(new_cmd.c_str());
}
}
(this code isn't tested, but I have used this technique in the past).
An alternative would be to use GNU ld's --wrap option.
If you compile your shared lib with
-Wl,--wrap,system
then in your code you write
extern "C" {
int __real_system(const char* command);
int __wrap_system(const char* command)
{
std::string new_cmd = std::string("set -f;") + command;
// next line is for debuggin only
std::cout << new_cmd << std::endl;
return __real_system(new_cmd.c_str());
}
}
(Note that I've never used this).
The code should work perfectly fine. Assuming the driver program is something like this:
#include <cstdlib>
int main() {
system("find -name *.cpp");
}
Then env LD_PRELOAD=$PWD/libsystem.so ./a.out gives me this output:
set -f;find -name *.cpp
./system.cpp
./test.cpp
Which shows that not only is your debug statement appearing, but that glob is disabled for that command.

Qt 5.4: How can I create an app based on plugins?

I have an app in Qt 5.4, but whenever I need to add new functionality I have to recompile the whole app, which takes time. I need to know how to create or modify my app so that it can use plugins created by me.
A plugin-based architecture requires binary compatible and stable interfaces. Once you have these, a full-project recompilation should take about as much time as recompiling a single plugin.
Most likely, you have interdependencies in your code that preclude maintaining binary compatibility anyway - if you didn't, your changes would be localized enough so that a recompilation would only touch a couple of files.
What you're trying to do is come up with a solution to the wrong problem. Fix the structure of your code, and your recompilation times will drop. No need for plugins.
There are many alternatives; a popular one consists of using shared libraries implementing a well-defined API.
For example, imagine this is the API you want to open up for customization:
// pluggin_api.hpp
// This file defines the pluggin interface.
#pragma once
extern "C" { // avoid name mangling
const char* pluggin_name();
void foo(int x);
void bar(int y);
}
Then your users (or yourself) will implement different variations of this API, for example:
// pluggin_1.cpp
#include "pluggin_api.hpp"
#include <iostream>
const char* pluggin_name() {
return "Pluggin 1";
}
void foo(int x) {
std::cout << "2 * x = " << 2 * x << std::endl;
}
void bar(int y) {
std::cout << " 3 * y = " << 3 * y << std::endl;
}
and
// pluggin_2.cpp
#include "pluggin_api.hpp"
#include <iostream>
const char* pluggin_name() {
return "Pluggin 2";
}
void foo(int x) {
std::cout << "20 * x = " << 20 * x << std::endl;
}
void bar(int y) {
std::cout << " 30 * y = " << 30 * y << std::endl;
}
These .cpp files are compiled as shared libraries; under Linux it looks like this:
$ g++ -shared -fPIC -o pluggin_1.so pluggin_1.cpp
$ g++ -shared -fPIC -o pluggin_2.so pluggin_2.cpp
Finally, the main application can call the different pluggins by name:
// main.cpp
#include <iostream>
#include <dlfcn.h> // POSIX --- will work on Linux and OS X, but
// you'll need an equivalent library for Windows
void execute_pluggin(const char* name) {
// declare the signature of each function in the pluggin -- you
// could do this in the header file instead (or another auxiliary
// file)
using pluggin_name_signature = const char*(*)();
using foo_signature = void(*)(int);
using bar_signature = void(*)(int);
// open the shared library
void* handle = dlopen(name, RTLD_LOCAL | RTLD_LAZY);
// extract the functions
auto fun_pluggin_name = reinterpret_cast<pluggin_name_signature>(dlsym(handle, "pluggin_name"));
auto fun_foo = reinterpret_cast<foo_signature>(dlsym(handle, "foo"));
auto fun_bar = reinterpret_cast<bar_signature>(dlsym(handle, "bar"));
// call them
std::cout << "Calling Pluggin: " << fun_pluggin_name() << std::endl;
fun_foo(2);
fun_bar(3);
// close the shared library
dlclose(handle);
}
int main(int argc, char *argv[]) {
for(int k = 1; k < argc; ++k) {
execute_pluggin(argv[k]);
}
}
Compile and link with the dl library:
$ g++ -o main main.cpp -std=c++14 -ldl
and run (notice you need the ./ before the name; this has to do with library naming conventions and search paths):
$ ./main ./pluggin_1.so ./pluggin_2.so
Calling Pluggin: Pluggin 1
2 * x = 4
3 * y = 9
Calling Pluggin: Pluggin 2
20 * x = 40
30 * y = 90
There are so many details that I left out (most importantly error management). I recommend you read the book API Design for C++ to find alternative ideas (such as using a scripting language, using inheritance, using templates, and a mix of them).
I like the shared library method because then I can use the pluggins in other applications (for example: I can use Python's ctypes library, or Matlab's loadlibrary). I can also write the pluggin in, say, Fortran, and then wrap it in an interface compatible with the API.
Finally: notice that this has absolutely nothing to do with Qt (though Qt does provide a platform-independent shared-library loader). This is just a common manner in which people provide hooks for customization.
The Qt Documentation provides a 'How To' on creating plugins to extend a Qt based application using Qt's own mechanisms. See http://doc.qt.io/qt-5/plugins-howto.html.
It talks about a high-level API and a low-level API. You are interested in the low-level API.
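As a minimal sketch of the low-level route, QLibrary can load a plugin built as a plain shared library and resolve an extern "C" entry point (the plugin file name and the plugin_entry symbol here are hypothetical):
#include <QLibrary>
#include <QDebug>

int main()
{
    QLibrary lib("myplugin"); // resolves to myplugin.so / myplugin.dll per platform
    if (!lib.load()) {
        qWarning() << "cannot load plugin:" << lib.errorString();
        return 1;
    }
    // look up an extern "C" function exported by the plugin
    using EntryPoint = int (*)(int);
    auto entry = reinterpret_cast<EntryPoint>(lib.resolve("plugin_entry"));
    if (!entry) {
        qWarning() << "symbol not found:" << lib.errorString();
        return 1;
    }
    qDebug() << "plugin_entry(21) =" << entry(21);
    return 0;
}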

Is there any way to transform strings into compileable/runnable code?

For example, suppose we have a string like:
string x = "for(int i = 0; i < 10; i++){cout << \"Hello World!\n\";}"
What is the simplest way to complete the following function definition:
void do_code(string x); /* given x that's valid c++ code, executes that code as if it were written inside of the function body */
The standard C++ libraries do not contain a C++ parser/compiler. This means that your only choice is to either find and link a C++ compiler library or to simply output your string as a file and launch the C++ compiler with a system call.
The first thing, linking to a C++ compiler, would actually be quite doable in something like Visual Studio, for example, which does indeed have DLL libraries for compiling C++ and spitting out a new DLL that you could load at runtime.
The second thing is pretty much what any IDE does: it saves your editor buffer into a C++ file, compiles it by system-executing the compiler, and runs the output.
That said, there are many languages with built-in interpreters that would be more suitable for runtime code interpretation.
Not directly, since you're asking for C++ to be simultaneously compiled and interpreted.
But there is LLVM, which is a compiler framework and API. It would allow you to take, in this case, a string containing valid C++, invoke the LLVM infrastructure, and then use an LLVM-based just-in-time compiler as described at length here. Keep in mind that you must also support the C++ standard library, and you should have some mechanism to map variables into your interpreted C++ and get data back out.
A big but worthy undertaking; it seems like someone might have done something like this already, and maybe Cling is just that.
Use the Dynamic Linking Loader (POSIX only)
This has been tested on Linux and OS X.
#include<fstream>
#include<string>
#include<cstdlib>
#include<dlfcn.h>
void do_code( std::string x ) {
{ std::ofstream s("temp.cc");
s << "#include<iostream>\nextern \"C\" void f(){" << x << '}'; }
std::system( "g++ temp.cc -shared -fPIC -otemp.o" );
auto h = dlopen( "./temp.o", RTLD_LAZY );
reinterpret_cast< void(*)() >( dlsym( h, "f" ) )();
dlclose( h );
}
int main() {
std::string x = "for(int i = 0; i < 10; i++){std::cout << \"Hello World!\\n\";}";
do_code( x );
}
You'll need to compile with -ldl to link against libdl. Don't copy-paste this into production code, as it has no error checking.
Works for me:
system("echo \"#include <iostream> \nint main() { for(int i = 0; i < 10; i++){std::cout << i << std::endl;} }\" >temp.cc; g++ -o temp temp.cc && ./temp");

Profiling C++ Destructor Calls

I am profiling a C++ application compiled at optimization level -O3 with the Intel C++ compiler from Intel Composer XE 2013. The profiler (Instruments on OS X) states that a very large portion of time is being spent calling the destructor for a particular type of object. However, it will not provide me with information about which function allocated the object in the first place. Is there any tool that can tell me which functions allocate the largest quantity of a certain type of object?
Edit: I have also tried the -profile-functions flag for the Intel C++ compiler, with no success.
You could add two more parameters to the constructor, the file and the line number. Save that information in the object, and print it when the destructor is called. Optionally you could hide some of the ugliness in a macro for the constructor.
#include <iostream>
#include <string>
using std::string;
class Object
{
string _file;
int _line;
public:
Object( const char * file, int line ) : _file(file), _line(line) {}
~Object() { std::cerr << "dtor for object created in file: " << _file << " line: " << _line << std::endl; }
};
int main( int argc, char * argv[] )
{
Object obj( __FILE__, __LINE__ );
return 0;
}
This is how it runs
$ g++ main.cpp -o main && ./main
dtor for object created in file: main.cpp line: 16
$
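The macro hinted at above could be as simple as this (a sketch only, reusing the Object class from the example; the macro name is made up):
// hides the __FILE__/__LINE__ boilerplate at each construction site
#define TRACKED_OBJECT(name) Object name(__FILE__, __LINE__)

int main( int argc, char * argv[] )
{
    TRACKED_OBJECT(obj); // expands to: Object obj(__FILE__, __LINE__);
    return 0;
}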

Is there a way to get function name inside a C++ function?

I want to implement a function tracer, which would trace how much time a function takes to execute. I have the following class for this:
class FuncTracer
{
public:
FuncTracer(LPCTSTR strFuncName_in)
{
m_strFuncName[0] = _T('\0');
if( strFuncName_in &&
_T('\0') != strFuncName_in[0])
{
_tcscpy(m_strFuncName,strFuncName_in);
TCHAR strLog[MAX_PATH];
_stprintf(strLog,_T("Entering Func:- <%s>"),m_strFuncName);
LOG(strLog)
m_dwEnterTime = GetTickCount();
}
}
~FuncTracer()
{
TCHAR strLog[MAX_PATH];
_stprintf(strLog,_T("Leaving Func:- <%s>, Time inside the func <%d> ms"),m_strFuncName, GetTickCount()-m_dwEnterTime);
LOG(strLog)
}
private:
TCHAR m_strFuncName[MAX_PATH];
DWORD m_dwEnterTime;
};
void TestClass::TestFunction()
{
// I want to avoid writing the function name maually..
// Is there any macro (__LINE__)or some other way to
// get the function name inside a function ??
FuncTracer(_T("TestClass::TestFunction"));
/*
* Rest of the function code.
*/
}
I want to know if there is any way to get the name of the function from inside that function. Basically I want the users of my class to simply create an object of it, without having to pass the function name themselves.
C99 has __func__ (and C++11 adopted it as well), but anything beyond that will be compiler specific. On the plus side, some of the compiler-specific versions provide additional type information, which is particularly nice when you're tracing inside a templatized function/class, as the short example after the list below shows.
MSVC: __FUNCTION__, __FUNCDNAME__, __FUNCSIG__
GCC: __func__, __FUNCTION__, __PRETTY_FUNCTION__
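A quick illustration of the extra type information mentioned above (GCC/Clang; the exact output text varies by compiler):
#include <cstdio>

template <typename T>
void trace_me(T)
{
    std::printf("__func__            = %s\n", __func__);
    std::printf("__PRETTY_FUNCTION__ = %s\n", __PRETTY_FUNCTION__);
}

int main()
{
    trace_me(42);   // __PRETTY_FUNCTION__ reports "... [with T = int]"
    trace_me(3.14); // ... [with T = double]
    return 0;
}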
The Boost library defines the macro BOOST_CURRENT_FUNCTION for most C++ compilers in the header boost/current_function.hpp. If the compiler is too old to support any of the underlying macros, the result is "(unknown)".
VC++ has
__FUNCTION__ for undecorated names
and
__FUNCDNAME__ for decorated names
And you can write a macro that will itself create the object and pass the name-yielding macro to the constructor. Something like
#define ALLOC_LOGGER FuncTracer ____tracer( __FUNCTION__ );
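With that macro in place, the TestFunction from the question shrinks to:
void TestClass::TestFunction()
{
    ALLOC_LOGGER // expands to: FuncTracer ____tracer( __FUNCTION__ );
    /*
     * Rest of the function code.
     */
}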
C++20 std::source_location::function_name
source_location.cpp
#include <iostream>
#include <string_view>
#include <source_location>
void log(std::string_view message,
const std::source_location& location = std::source_location::current()
) {
std::cout << "info:"
<< location.file_name() << ":"
<< location.line() << ":"
<< location.function_name() << " "
<< message << '\n';
}
int f(int i) {
log("Hello world!"); // Line 16
return i + 1;
}
int f(double i) {
log("Hello world!"); // Line 21
return i + 1.0;
}
int main() {
f(1);
f(1.0);
}
Compile and run:
g++ -ggdb3 -O0 -std=c++20 -Wall -Wextra -pedantic -o source_location.out source_location.cpp
./source_location.out
Output:
info:source_location.cpp:16:int f(int) Hello world!
info:source_location.cpp:21:int f(double) Hello world!
Note how the default argument captures the caller's location, so we see where log() was called from (inside each f) rather than a location inside log itself.
I have covered the relevant standards in a bit more detail at: What's the difference between __PRETTY_FUNCTION__, __FUNCTION__, __func__?
Tested on Ubuntu 22.04, GCC 11.3.
I was going to say I didn't know of any such thing but then I saw the other answers...
It might interest you to know that an execution profiler (like gprof) does exactly what you're asking about - it tracks the amount of time spent executing each function. A profiler basically works by recording the instruction pointer (IP), the address of the currently executing instruction, every 10ms or so. After the program is done running, you invoke a postprocessor that examines the list of IPs and the program, and converts those addresses into function names. So I'd suggest just using the instruction pointer, rather than the function name, both because it's easier to code and because it's more efficient to work with a single number than with a string.
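A sketch of that last idea, recording an address instead of a string (GCC/Clang builtins; the address can be resolved to a function name later, e.g. with addr2line):
#include <cstdio>

struct AddrTracer
{
    void* caller;
    // noinline so that __builtin_return_address(0) points into the traced function
    __attribute__((noinline)) AddrTracer() : caller(__builtin_return_address(0)) {}
    ~AddrTracer() { std::printf("leaving function containing address %p\n", caller); }
};

void traced_function()
{
    AddrTracer t; // records an instruction address inside traced_function
}

int main()
{
    traced_function();
    return 0;
}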