I'm compiling some C++ code at runtime, which I'm then calling in a sort of plugin system (see also my other question). What I do is create the source code, write it to a file, compile that file, and write the compiler's output to another file. However, this process feels kind of ugly, so I was hoping for some input.
#include <cstdlib>   // std::system
#include <fstream>   // std::ofstream

//Open a file
std::ofstream fout("SOURCECODEPATH");
//Write actual function to file
fout << "extern \"C\" void testFunc(float testArray[]) {\n"
        "    testArray[0] = 1.0;\n"
        "    testArray[1] = 2.0;\n"
        "    testArray[2] = 3.0;\n"
        "}" << std::endl;
//Close (and flush) the file so the compiler sees the complete source
fout.close();
//Compile the file; "> PROCESSOUTPUTPATH 2>&1" captures both stdout and stderr
//(the bash-only "&>" is not understood by the /bin/sh that system() invokes)
std::system("c++ -shared -fPIC -std=c++14 SOURCECODEPATH -o COMPILEDLIBRARYPATH > PROCESSOUTPUTPATH 2>&1");
//Read PROCESSOUTPUTPATH (not implemented)
Currently it's creating 3 files: SOURCECODEPATH, COMPILEDLIBRARYPATH, and PROCESSOUTPUTPATH. However, I would much rather not have SOURCECODEPATH and PROCESSOUTPUTPATH written to disk, but have them handled internally: pipe the source code to the compiler process and read back its output (preferably split into stderr and stdout). What would be the easiest way to do this?
Please reconsider what you're doing. C++ and Python are very different languages in very many ways, not least in their build and execution models. It seems very unlikely that runtime compilation is the real solution to your underlying problem (which you have not shared with us). Simply put, C++ was not designed to support this, Python was.
Technically, there are a few solutions for runtime compilation of C++, but they require much more management and effort than eval in Python. However, they are pretty specialised and, again, unlikely to be a good solution to your underlying problem.
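That said, if you do need to avoid the intermediate files: on POSIX systems you can feed the source to the compiler over a pipe (GCC and Clang accept "-x c++ -" to read the source from stdin) and read the diagnostics back through a second pipe. A minimal sketch, assuming Linux/macOS, with error handling omitted and your COMPILEDLIBRARYPATH placeholder kept as-is; stderr is merged into stdout here, and splitting the two would need one more pipe:

#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Compile `source` from memory; returns the compiler's merged stdout+stderr.
std::string compileFromMemory(const std::string& source, int& status) {
    int inPipe[2], outPipe[2];
    pipe(inPipe);                        // parent -> child stdin
    pipe(outPipe);                       // child stdout/stderr -> parent

    pid_t pid = fork();
    if (pid == 0) {                      // child: become the compiler
        dup2(inPipe[0], STDIN_FILENO);
        dup2(outPipe[1], STDOUT_FILENO);
        dup2(outPipe[1], STDERR_FILENO); // merge stderr into the same pipe
        close(inPipe[0]);  close(inPipe[1]);
        close(outPipe[0]); close(outPipe[1]);
        execlp("c++", "c++", "-shared", "-fPIC", "-std=c++14",
               "-x", "c++", "-", "-o", "COMPILEDLIBRARYPATH",
               static_cast<char*>(nullptr));
        _exit(127);                      // only reached if exec failed
    }

    close(inPipe[0]);                    // parent keeps the write end only
    close(outPipe[1]);
    // For very large sources you would want to interleave writing and
    // reading (or go non-blocking) to avoid a pipe deadlock.
    write(inPipe[1], source.data(), source.size());
    close(inPipe[1]);                    // EOF tells the compiler we're done

    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(outPipe[0], buf, sizeof buf)) > 0)
        output.append(buf, n);
    close(outPipe[0]);
    waitpid(pid, &status, 0);
    return output;
}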
Related
I am writing a scientific computation code in C++. There are outputs that I want to write to the console and outputs that I write into a file. However, when debugging after implementing a new feature, it is useful to print out much more information than usual. So far I have just been sending the extra information to std::cout/std::clog and commenting those lines out when not needed.
What I want is something like std::clog, which would go into a file when needed, or do nothing at all when not needed. It is OK if I need to recompile the code to switch between the two regimes. It is important that nothing happens when not needed, because for a really large calculation the log file would be enormous (or the console full of rubbish) and all the writing would slow the calculation down.
I am looking for the smallest possible implementation, ideally using only standard libraries for portability.
The obvious solution is to have a global variable, redirect clog to a file and then use an if statement.
#include <fstream>
#include <iostream>

bool DEBUG = true;
std::ofstream out("logfile");
std::clog.rdbuf(out.rdbuf());   // redirect clog to the file
...
if (DEBUG) std::clog << "my message" << std::endl;
...
Is there a more elegant way of doing this?
Edit:
I would like to avoid using non-standard libraries and preprocessor macros (the program is spread across many files, and macros are a bad programming habit in general). One way I could imagine this working, though I don't know how to do it, is to create a globally accessible object that accepts messages using << and saves them to a file. Then I could just comment out the line inside that class which writes to the file. However, I don't know how much of a performance impact passing messages to such a do-nothing object might have.
You may use any external logging library for C/C++, or create your own small implementation with only the utilities you need.
A traditional logging mechanism is built on macros (or inline functions) and looks like:
// Preprocessor directives cannot appear inside a macro body,
// so select between two macro definitions instead:
#ifdef DEBUG
#define LOG_MESSAGE(msg) /* your debug logging */
#else
#define LOG_MESSAGE(msg) /* your release logging, may be left empty */
#endif // DEBUG
It's also useful to add different logging levels: Error, Warning, Info, etc.
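If you would rather avoid macros entirely, as the question's edit asks, the "globally accessible object" idea works well: a tiny stream-like class whose operator<< compiles away to nothing when logging is disabled, so disabled logging has no runtime cost. A minimal sketch using only the standard library (the names are illustrative):

#include <fstream>
#include <iostream>

// When Enabled is false, every insertion is an empty inline function
// that the optimizer removes entirely.
template <bool Enabled>
struct DebugLog {
    template <typename T>
    DebugLog& operator<<(const T& value) {
        if (Enabled) std::clog << value;   // constant-folded away when false
        return *this;
    }
    // accept std::endl and other stream manipulators
    DebugLog& operator<<(std::ostream& (*manip)(std::ostream&)) {
        if (Enabled) std::clog << manip;
        return *this;
    }
};

// Stateless, so a per-translation-unit copy in a header is harmless.
static DebugLog<true> dlog;   // flip to DebugLog<false> to silence

// usage, after redirecting clog as in the question:
//   dlog << "my message" << std::endl;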
Is it possible to (and, if so, how does one) determine the shared libraries that an application is using at runtime? Basically, can I programmatically obtain the output of ldd? A C/C++ solution that does not just shell out to ldd on the command line is preferred.
Consider the following:
I have a driver application that calls doAction() from a shared library libfoo. I compile the application once and then set LD_LIBRARY_PATH to an appropriate directory containing a libfoo with the doAction() symbol defined. This way, I can have multiple implementations of doAction() in different libfoos but only ever compile an application once.
A real world example would be a professor having a class of students implement doAction(). Instead of compiling a test harness against each student's implementation of doAction(), the students submit a shared library and the professor can simply change LD_LIBRARY_PATH to evaluate each student.
My goal in obtaining the library currently being used is to perform an md5sum on it at runtime to ensure I'm calling the correct library. In the contrived example, all students would submit the md5sum of their library, and the professor could match the running executable + shared library (database lookup, log to file, ...) to the student, to prevent an accident in setting LD_LIBRARY_PATH from affecting another student's grade (forgot to change LD_LIBRARY_PATH to David's directory and ran again with Bill's libfoo).
Since it looks like you're using something UNIX-y, just use dlopen instead of dynamically linking your driver app against the missing symbol.
Full sequence is:
iterate over all submitted .so library filenames somehow (maybe you have one directory with studentname.so or something)
load each library
get the entry point function
call it
unload library (optional, I guess)
like so:
#include <dlfcn.h>
#include <iostream>
using std::cout; using std::endl;

// RTLD_LOCAL alone is not enough: dlopen also needs RTLD_LAZY or RTLD_NOW
void *lib = dlopen(filename, RTLD_NOW | RTLD_LOCAL);
if (lib == NULL)
    cout << "failed to load " << filename << ": " << dlerror() << endl;
else {
    void *libfun = dlsym(lib, "doAction");
    if (libfun == NULL)
        cout << "student failed by not providing doAction() in " << filename << endl;
    else {
        // cast the untyped symbol to the expected function-pointer type
        void (*doAction)(void) = (void (*)(void)) libfun;
        cout << "calling " << filename << ":doAction()" << endl;
        doAction();
        // is there some way to tell if it succeeded?
        cout << "unloading " << filename << endl;
    }
    dlclose(lib);
}
Notes:
if the interface is the same in each case (i.e., void (*)()), you could make this configurable by directory name and symbol name, and it'd work for more than one test
in fact, if the interface is NOT what you expect, the function-pointer cast will do horrible things, so be careful with this
finally, if the student used C++, their function name symbol will be mangled. Tell them to declare the entry-point as extern "C" void doAction() to avoid that.
the RTLD_LOCAL flag should stop anything in one student's library interfering with another (if you don't unload), but there are other flags it may be sensible to add
specifically, RTLD_NOW will cause dlopen to fail if the student lib has an unresolved external reference it can't figure out (so you can handle it gracefully, by failing them): otherwise your program may just crash when you call doAction.
Although I think the above is better than the solution you're directly asking for help with, I did also find a reference to dl_iterate_phdr while double-checking the docs. If you're on Linux specifically, and if the dl_phdr_info.dlpi_name is actually the filename ... you might be able to get it that way.
I still think it's much uglier, though.
If you're using Linux, you can use the dl_iterate_phdr function:
The dl_iterate_phdr() function allows an application to inquire at run time to find out which shared objects it has loaded.
http://linux.die.net/man/3/dl_iterate_phdr
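A minimal sketch of how it is used (Linux-specific; g++ defines the _GNU_SOURCE that <link.h> wants, otherwise add -D_GNU_SOURCE):

#include <link.h>
#include <iostream>

// Called once per loaded object; dlpi_name is the pathname
// (empty for the main executable). Returning non-zero stops the walk.
static int printObject(struct dl_phdr_info *info, size_t, void *) {
    std::cout << "loaded: " << info->dlpi_name << '\n';
    return 0;
}

int main() {
    dl_iterate_phdr(printObject, nullptr);
}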
At runtime, it is not an application, it is a process.
If the process has pid 1234, you can get its memory map by reading /proc/1234/maps (or /proc/1234/smaps which is more detailed). That map lists in particular mmap-ed files (notably shared libraries). From inside the application, read /proc/self/maps
Try
grep so /proc/self/maps
to have an idea of what I mean.
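Reading the same file from C++ takes only a few lines; a minimal sketch (Linux-specific):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // every mmap-ed .so shows up as a line in the process's memory map
    std::ifstream maps("/proc/self/maps");
    std::string line;
    while (std::getline(maps, line))
        if (line.find(".so") != std::string::npos)
            std::cout << line << '\n';
}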
By the way, if you have an address, the dladdr function gives information about the nearest symbol and shared object...
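For the md5sum use case this is handy: handed the doAction pointer you got from dlsym, dladdr reports which shared object it lives in (a glibc extension; link with -ldl). A sketch:

#include <dlfcn.h>
#include <iostream>

// Given a pointer into a loaded library (e.g. the doAction symbol),
// dladdr fills in the pathname of the shared object it belongs to.
void reportOrigin(void *symbol) {
    Dl_info info;
    if (dladdr(symbol, &info) && info.dli_fname)
        std::cout << "symbol comes from " << info.dli_fname << '\n';
}

// usage: reportOrigin(libfun);  // md5sum info.dli_fname afterwards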
addenda
And as Rob Mayoff answered, dl_iterate_phdr is probably the best solution on Linux.
If this is Linux (I doubt there's a generic POSIX way to do this but I could be wrong), you may be interested in the contents of /proc/(pid)/maps. This gives the mapped memory ranges for your process and you could search for which of the ranges your md5sum() function's address falls in.
If you're on Linux/Unix, you could use strace, e.g. strace -o strace.log -f students_binary. strace traces all system calls, including those that open a library. You could then parse strace.log for every file that gets opened and perform the md5sum on each of those files.
I have looked for some info on this and haven't found anything very helpful.
Background
What I have is GNU Common Lisp installed. I can create a Lisp file and compile it to a .o object file using the command:
gcl -compile <my lisp filename>
Once I have that .o file I use the command (using MinGW):
g++ -o myProgram.exe temp.o temp.cpp
My Lisp file works in GCL. My temp.cpp file is:
#include <iostream>

//function from (defun fib (x) ...) in lisp file
extern int fib(int);

int main()
{
    int val;
    std::cout << "Print Fibonacci numbers" << std::endl;
    std::cout << ">";
    std::cin >> val;
    while (val != -1)
    {
        std::cout << fib(val) << std::endl << std::endl;
        std::cout << ">";
        std::cin >> val;
    }
    return 0;
}
The errors I get when compiling are these:
temp.cpp:(.text+0x180): undefined reference to `fib(int)'
temp.o:temp.c:(.text+0xb): undefined reference to `vs_base'
temp.o:temp.c:(.text+0x17): undefined reference to `vs_limit'
temp.o:temp.c:(.text+0x1d): undefined reference to `vs_top'
temp.o:temp.c:(.text+0x2d): undefined reference to `small_fixnum_table'
...
The error list is a lot longer, and the undefined references look like the whole set of functions defined by GCL.
My Question
So, finally my question. Is what I am trying to do possible? Do I somehow need to include the entire GCL library with a program if I plan on linking it with a C++ program?
First of all, I'm not sure it is possible to call GCL-compiled functions from C++ at all. Compare the definitions of your CL and C++ functions:
(defun fib (x) ...)
and
int fib(int)
The second function is strictly typed, while the first one takes and returns arbitrary objects. So what function should g++ search for in your temp.o file? Even if you declare types in the CL function, will the compiled function have the same calling convention as a C++ function? Even languages as similar as C++ and Delphi cannot be linked together without special directives, because they pass function arguments on the stack in a different order. Diving deeper, you can see that C++ and CL have completely different memory-management strategies, so it is far from obvious how they could be used together.
One way to overcome such differences is to use bridges: any resource that can be accessed from both languages, e.g. sockets, pipes and so on. For example, if you have a module my-lisp-module, you can create a simple socket interface for its public functions and call them from whatever language you like.
Though using bridges is extremely flexible, it is not very convenient. Another way to embed Common Lisp into a C++ program is to... use Embedded Common Lisp (ECL). It was designed specifically for purposes like yours; you can find the rules of embedding on its manual page.
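For a flavour of what embedding looks like, here is a minimal sketch following the ECL manual (the exact result-conversion helpers vary between ECL versions, so the result is printed from the Lisp side here):

// Build roughly with: g++ main.cpp $(ecl-config --cflags --libs)
#include <ecl/ecl.h>

int main(int argc, char **argv) {
    cl_boot(argc, argv);                          // start the embedded Lisp
    cl_eval(c_string_to_object(
        "(defun fib (x) (if (< x 2) x (+ (fib (- x 1)) (fib (- x 2)))))"));
    cl_object result = cl_eval(c_string_to_object("(fib 10)"));
    cl_print(1, result);                          // prints 55
    cl_shutdown();
    return 0;
}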
Finally, you can use a Common Lisp implementation for a platform that already supports integration with C++ code. If you work only on Windows, it should be easy to integrate your app with one of the CL implementations for the CLR. If you are about to move to Linux, implementations for the JVM are also available.
I made the transition from C++ to Objective-C a while ago, and am now finding NSLog() tiresome. Instead, still in Objective-C, I would like to be able to write something like
stdout << "The answer is " << 42 << "\n";
(I know that NSLog prints to stderr; I could put up with writing stderr << "Hello world";)
Basically, I just want to be able to use the C++ pipe syntax in Objective-C.
I don't care about speed (within reason) or if the only method uses preprocessor macros or other hack-ish things.
You really should get used to format strings as in NSLog. The C++-style syntax may be easy to write, but it is a nightmare to maintain. Think about internationalization: a format string can easily be loaded at runtime, and Cocoa provides the NSLocalizedString function for that. With C++'s stream operators, you would probably have to write different code for every language.
What you want is stream operations.
There isn't a really 'good' way to do this in Cocoa. I have a library that I never really fleshed out that would allow you to do something 'near' this, but it still wouldn't get you a lot of the benefits.
http://github.com/jweinberg/Objective-Curry/blob/master/OCFileStream.m
Starting from there you would be able to write a class that did
[[[stdOutStream write:@"10"] write:[bleh description]] write:@"more stuff"];
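Alternatively, if you are willing to compile the relevant files as Objective-C++ (.mm), you can get the genuine C++ stream syntax by overloading operator<< for Objective-C objects. A small sketch, assuming a .mm translation unit:

// In a .mm file, C++ and Objective-C mix freely.
#include <iostream>
#import <Foundation/Foundation.h>

// Print any Objective-C object through its -description.
inline std::ostream& operator<<(std::ostream& os, id obj) {
    return os << [[obj description] UTF8String];
}

// usage:
//   std::cerr << "The answer is " << @42 << "\n";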
Ideally, I would like to be able to add (very repetitive) C/C++ code to my actual code at compile time, with that code coming from, say, the stdout of a Python script, the same way one does with macros.
For example, let's say I want to have functions that depend on the public attributes of a given class, being able to just write the following in my C++ code would be a blessing:
generate_boring_functions(FooBarClass,"FooBarClass.cpp")
Is that feasible using conventional means? Or must I hack with Makefiles and temporary source files?
Thanks.
You most likely do need to tweak the Makefile a bit. It would be easy to write a (Python) script that runs over each of your source files as an additional preprocessing step, replacing instances of generate_boring_functions (or any other script-macro) with the correct code, potentially just by invoking generate_boring_functions.py with the right arguments, and bypassing the need for temporary files by sending the transformed source to the compiler over standard input.
Damn, now I want to make something like this.
Edit: A rule like this, stuck in a makefile, could be used to handle the extra build step. This is untested and added only for some shot at completeness.
%.o : %.cpp
	python macros.py $< | g++ -x c++ -c - -o $@
If a makefile isn't conventional enough for you, you could get by with cleverly-written macros.
class FooBarClass
{
    DEFINE_BORING_METHODS( FooBarClass )

    /* interesting functions begin here */
};
I very frequently see this done to implement the boilerplate parts of COM classes.
But if you want something that's neither make nor macro, then I don't know what you could possibly mean.
A makefile (or equivalent) is a "conventional" means!
I've never used this particular technology, but it sounds as though you're looking for something like Ned Batchelder's Cog tool.
Python scripts are embedded into a C++ source file such that when run through the cog tool additional C++ code is generated for the C++ compiler to consume. So your build process would consist of an extra step to have cog produce the actual C++ source file before the C++ compiler is invoked.
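Roughly, the markers look like this; running the cog tool on the file fills in the region between them (a sketch, with the generator body being ordinary Python):

/*[[[cog
import cog
for t in ['int', 'float', 'double']:
    cog.outl('void boringFunction(%s value);' % t)
]]]*/
//[[[end]]]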
You could try the Boost Preprocessor Library. It is just an extension of the regular preprocessor, but if you're creative, you can achieve nearly anything with it.
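For example, stamping out a family of declarations with BOOST_PP_REPEAT might look like the following sketch (the macro and member names are illustrative):

#include <boost/preprocessor/repetition/repeat.hpp>

// BOOST_PP_REPEAT(n, macro, data) expands macro(z, 0, data) ... macro(z, n-1, data)
#define DECLARE_GETTER(z, n, prefix) int prefix##n() const;

struct FooBarClass {
    BOOST_PP_REPEAT(3, DECLARE_GETTER, getField)
    // expands to: int getField0() const; int getField1() const; int getField2() const;
};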
Did you have a look at PythoidC? It can be used to generate C code.
I have encountered this exact same problem multiple times.
I use it exactly in the way you describe, i.e. to run boringFunction(filename.cpp, "filename.cpp") for a set of files.
I use it to generate code that "registers" the code contained in a specific set of files into a std::map, so that user-written functions can be added to the library without dynamically recompiling the whole library and without relying on the (likely novice programmer) user to write syntactically correct C++ code to, e.g., implement class functions.
I have solved it in two ways (which are basically equivalent)
1) A purely C++ "bootstrapping" method, in which, during compilation, make first compiles a simple C++ program that generates the necessary files, and then calls a second makefile that compiles the actual code generated in the temporary files.
2) A shell-based method that uses bash to accomplish the same thing (i.e. use simple shell commands to iterate through the files and output new files to a temporary location, then call make on the output).
The functions can either be output to one file each, or can be output to one monolithic file for the second compilation.
Then, the functions can either be loaded dynamically (i.e. they are compiled as a shared library), or I can recompile all the rest of the code with the generated functions included.
The only hard parts were (a) figuring out a way to register the function names uniquely (e.g. the preprocessor's __COUNTER__ only works if everything goes into a single monolithic file), and (b) figuring out how to reliably run the generation step before the main makefile runs.
The advantage of the pure-C++ method (versus e.g. bash) is that it could work on systems that do not ship the same bash shell by default (e.g. Windows or macOS), in which case a more complex CMake setup becomes necessary.
I have included the hard parts of the makefile for posterity:
The first makefile called is:
# Dummy to compile filters first
$(MAKECMDGOALS): SCRIPTCOMPILE
	make -f Makefile2 $(MAKECMDGOALS)

SCRIPTCOMPILE:
	@sh scripts/filter_compiler_single.sh filter_stubs

.PHONY: SCRIPTCOMPILE
Where scripts/filter_compiler_single.sh is e.g.:
BUILD_DIR="build/COMPILED_FILTERS";
rm -r $BUILD_DIR
mkdir -p $BUILD_DIR

ARGSET="( localmapdict& inputmaps, localmapdict& outputmaps, void*& userdata, scratchmats& scratch, const std::map<std::string,std::string>& params, const uint64_t& curr_time , const std::string& nickname, const std::string& desc )"

compfname=$BUILD_DIR"/COMPILED_FILTERS.cpp"

echo "//// START OF GENERATED FILE (this file will be overwritten!) ////" > $compfname #REV: first overwrites
echo "#include <salmap_rv/include/salmap_rv_filter_includes.hpp>" >> $compfname
echo "using namespace salmap_rv;" >> $compfname

flist=$(find $1 -maxdepth 1 -type f) #REV: add constraint to only find .cpp files?
for f in $flist;
do
    compfnamebase=$(basename $f) #REV: includes .cpp
    alg=${compfnamebase%.cpp}
    echo $f " >> " $compfname
    echo "void ""$alg""$ARGSET""{" >> $compfname
    echo "DEBUGPRINTF(stdout, \"Inside algo funct "$alg"\");" >> $compfname; #REV: debug...
    cat $f >> $compfname
    echo "}""REGISTER_SAL_FILT_FUNC(""$alg"")" >> $compfname
done
echo "//// END OF GENERATED FILE ////" >> $compfname
The second makefile, Makefile2, contains the normal compilation instructions.
It is not beautiful, and I would love to find a better way to do it, but as it is, extracting even just the base filename from every file during compilation is difficult, even using templates or constexpr (e.g. some macro function that takes __FILE__). And that would rely on the user remembering to add the specific macro call to their function filter stub, which just adds extra unnecessary work and invites spelling errors etc.