I have a non-standard C compiler; for this example, let's call it comp.
How can I use it with Waf?
I see that in all the examples:
def options(ctx):
    ctx.load('compiler_c')

def configure(ctx):
    ctx.load('compiler_c')
I want to load my own compiler, comp, so that any build or task will be associated with it. How can I do that?
Thanks!
The best option is to define your own c_compiler tool; see, for example, icc in waflib/Tools or c_bgxlc in waflib/extras. Modules named c_* in extras are loaded automatically by load('compiler_c').
Now, I have implemented a factory class to dynamically create objects from an identification string; please see the following code:
void IOFactory::registerIO()
{
    Register("NDAM9020", []() -> IOBase * {
        return new NDAM9020();
    });
    Register("BK5120", []() -> IOBase * {
        return new BK5120();
    });
}

std::unique_ptr<IOBase> IOFactory::createIO(std::string ioDeviceName)
{
    std::unique_ptr<IOBase> io = createObject(ioDeviceName);
    return io;
}
We can then create an IO object from its registered name:
IOFactory ioFactory;
auto io = ioFactory.createIO("BK5120");
The problem with this approach is that whenever we add another IO component, we have to add another registration call to registerIO and compile the whole project again. So I was wondering whether I could dynamically register classes from a configuration file (see below) at runtime.
io_factory.conf
------------------
NDAM9020:NDAM9020
BK5120:BK5120
------------------
The first field is the identification name and the second is the class name.
I have tried with macros, but a macro parameter can't be a string. So I was wondering whether there is some other way. Thanks in advance.
Update:
I didn't expect so many comments and answers. Thank you all, and sorry for replying late.
Our current OS is Ubuntu 16.04, the compiler is the stock gcc/g++ 5.4.0, and we use CMake to manage the build.
I should also mention that registration does not have to happen at runtime; a compile-time solution is fine as well. What I want is just to avoid recompiling the whole project when I register another class.
So I was wondering whether I could dynamically register classes from a configuration file (see below) at runtime.
No. As of C++20, C++ has no reflection features allowing it. But you could do it at compile time by generating a simple C++ implementation file from your configuration file.
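For instance, a small generator run as a pre-build step (from CMake, say) could translate io_factory.conf into the registerIO body from the question. This is only a sketch: the output file name, the IOFactory.h header and the build wiring are assumptions, and you still recompile that one generated file, just not the whole project.
// gen_register.cc - sketch of a configuration-to-code generator
#include <fstream>
#include <string>

int main()
{
    std::ifstream conf("io_factory.conf");
    std::ofstream out("registerIO.gen.cc");
    out << "#include \"IOFactory.h\"\n\n"
           "void IOFactory::registerIO()\n{\n";

    std::string line;
    while (std::getline(conf, line)) {
        if (line.empty() || line[0] == '-')          // skip the ---- separators
            continue;
        auto colon = line.find(':');
        if (colon == std::string::npos)              // skip lines without id:class
            continue;
        std::string id  = line.substr(0, colon);     // identification name
        std::string cls = line.substr(colon + 1);    // class name
        out << "    Register(\"" << id << "\", []() -> IOBase * { return new "
            << cls << "(); });\n";
    }
    out << "}\n";
}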
How to dynamically register a class in a factory class at runtime with C++
Read much more about C++: at least a good C++ programming book and a good C++ reference website, and later n3337, the C++11 standard. Also read the documentation of your C++ compiler (perhaps GCC or Clang) and, if you have one, of your operating system. If plugins are possible in your OS, you can register a factory function at runtime (by referring to that function after the plugin providing it has been loaded). For example, the Mozilla Firefox browser, recent GCC compilers (e.g. GCC 10 with plugins enabled), and the fish shell all do this.
So I was wondering whether I could dynamically register classes from a configuration file (see below) at runtime.
Most C++ programs are running under an operating system, such as Linux. Some operating systems provide a plugin mechanism. For Linux, see dlopen(3), dlsym(3), dlclose(3), dladdr(3) and the C++ dlopen mini-howto. For Windows, dive into its documentation.
So, with a recent C++ implementation and some recent operating systems, you can register a factory class at runtime (using plugins), and you can find libraries (e.g. Qt or POCO) to help you.
However, in pure standard C++, the set of translation units is statically known and plugins do not exist. So the set of functions, lambda-expressions, or classes in a given program is finite and does not change with time.
In pure C++, the set of valid function pointers, or the set of valid possible values for a given std::function variable, is finite. Anything else is undefined behavior. In practice, many real-life C++ programs accept plugins through their operating systems or through JIT-compiling libraries.
You could of course consider using JIT-compiling libraries such as asmjit or libgccjit or LLVM. They are implementation specific, so your code won't be portable.
On Linux, a lot of Qt or GTKmm applications (e.g. KDE, and most web browsers, e.g. Konqueror, Chrome, or Firefox) are coded in C++ and do load plugins with factory functions. Check with strace(1) and ltrace(1).
Microsoft's Trident browser engine is rumored to be coded in C++ and probably accepts plugins.
I have tried with macros, but a macro parameter can't be a string.
A macro parameter can be stringized. And you could play x-macros tricks.
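For instance, here is a minimal, self-contained X-macro sketch of the registration in the question (the IOBase, Register and class names mirror the question; the registry map and the IO_CLASS_LIST macro are illustrative):
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Minimal stand-ins for the types in the question.
struct IOBase { virtual ~IOBase() = default; virtual const char *name() const = 0; };
struct NDAM9020 : IOBase { const char *name() const override { return "NDAM9020"; } };
struct BK5120   : IOBase { const char *name() const override { return "BK5120"; } };

std::map<std::string, std::function<IOBase *()>> registry;
void Register(const std::string &key, std::function<IOBase *()> maker) { registry[key] = std::move(maker); }

// X-macro list: adding a class means adding one line here and nowhere else.
#define IO_CLASS_LIST \
    IO_CLASS(NDAM9020) \
    IO_CLASS(BK5120)

void registerIO()
{
    // #x stringizes the class name, so the key "BK5120" and the
    // expression new BK5120() are generated from the same token.
#define IO_CLASS(x) Register(#x, []() -> IOBase * { return new x(); });
    IO_CLASS_LIST
#undef IO_CLASS
}

int main()
{
    registerIO();
    std::unique_ptr<IOBase> io(registry.at("BK5120")());
    std::cout << io->name() << "\n";   // prints BK5120
}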
What I want is just to avoid recompiling the whole project when I register another class.
On Ubuntu, I recommend accepting plugins in your program or library:
Use dlopen(3) with an absolute file path; the plugin would typically be passed as a program option (like RefPerSys or GCC does) and dlopen-ed at program or library initialization time. Practically speaking, you can have lots of plugins (tens of thousands; see manydl.c and check with pmap(1) or proc(5)). The dlsym(3)-ed C++ functions in your plugins should be declared extern "C" to disable name mangling.
A single-file C++ plugin (in yourplugin.cc) can be compiled with g++ -Wall -O -g -fPIC -shared yourplugin.cc -o yourplugin.so, and later you would dlopen "./yourplugin.so" or an absolute path (or configure your $LD_LIBRARY_PATH suitably, see ld.so(8), and pass "yourplugin.so" to dlopen). Also be aware of rpath.
Consider also (after upgrading your GCC to GCC 9 at least, perhaps by compiling it from its source code) using libgccjit (it is faster than generating temporary C++ code in some file and compiling that file into a temporary plugin).
For ease of debugging your loaded plugins, you might be interested in Ian Taylor's libbacktrace.
Notice that your program's global symbols (declared as extern "C") can be accessed by name by passing a nullptr file path to dlopen(3), then using dlsym(3) on the obtained handle. You want to pass -rdynamic -ldl when linking your program (or your shared library).
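A minimal sketch of the loading side, assuming the plugin built with the g++ command above defines the question's BK5120 class and exports an entry point declared as extern "C" IOBase *create_io() (the entry-point name is my choice; in a real project the IOBase definition would come from a header shared with the plugin):
// loader.cc - build with: g++ -Wall -O -g loader.cc -o loader -rdynamic -ldl
#include <dlfcn.h>
#include <iostream>
#include <memory>

// Stand-in for the question's base class; normally shared with the plugin via a header.
struct IOBase { virtual ~IOBase() = default; };

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "./yourplugin.so";
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle) { std::cerr << dlerror() << "\n"; return 1; }

    // The plugin is assumed to define:  extern "C" IOBase *create_io();
    using Maker = IOBase *(*)();
    auto make = reinterpret_cast<Maker>(dlsym(handle, "create_io"));
    if (!make) { std::cerr << dlerror() << "\n"; dlclose(handle); return 1; }

    std::unique_ptr<IOBase> io(make());
    // ... use the device through the IOBase interface ...
    io.reset();        // destroy the object before unloading the code that defines it
    dlclose(handle);
}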
What I want is just to avoid recompiling the whole project when I register another class.
You might register classes in a different translation unit (a short one, presumably). You could take inspiration from RefPerSys' multiple #include-s of its generated/rps-name.hh file. Then you would simply recompile a single *.cc file and relink your entire program or library. Notice that Qt plays similar tricks in its moc, and I recommend taking inspiration from it.
Also read J. Pitrat's book Artificial Beings: the Conscience of a Conscious Machine, which explains why a metaprogramming approach is useful. Study the source code of GCC (or of RefPerSys), and use or take inspiration from SWIG, ANTLR, and GNU bison (they all generate C++ code) when relevant.
You seem to have asked for more dynamism than you actually need. You want to avoid the factory itself having to be aware of all of the classes registered in it.
Well, that's doable without going all the way runtime code generation!
There are several implementations of such a factory, but I am obviously biased in favor of my own: einpoklum's Factory class (gist.github.com).
A simple example of use:
#include "Factory.h"
// we now have:
//
// template<typename Key, typename BaseClass, typename... ConstructionArgs>
// class Factory;
//
#include <string>
struct Foo { Foo(int x) { } };
struct Bar : Foo { Bar(int x) : Foo(x) { } };

int main()
{
    util::Factory<std::string, Foo, int> factory;
    factory.registerClass<Bar>("key_for_bar");
    auto* my_bar_ptr = factory.produce("key_for_bar");
}
Notes:
The std::string is used as a key; you could have a factory with numeric values as keys instead, if you like.
All registered classes must be subclasses of the BaseClass value chosen for the factory. I believe you can change the factory to avoid that, but then you'll always be getting void *s from it.
You can wrap this in a singleton template to get a single, global, static-initialization-safe factory you can use from anywhere.
Now, if you load some plugin dynamically (see @BasileStarynkevitch's answer), you just need that plugin to expose an initialization function which makes registerClass() calls on the factory, and to call this initialization function right after loading the plugin. Or, if you have a static-initialization-safe singleton factory, you can make the registration calls in a static initializer in your plugin shared library - but be careful with that, I'm not an expert on shared library loading.
Definitely YES!
There's an old post from 2006 that solved my life for many years. The implementation revolves around a centralized registry with a decentralized registration mechanism that is extended using a REGISTER_X macro; check it out:
https://web.archive.org/web/20100618122920/http://meat.net/2006/03/cpp-runtime-class-registration/
I have to admit that @einpoklum's factory looks awesome as well. I created a header-only gist containing the code and a sample:
https://gist.github.com/h3r/5aa48ba37c374f03af25b9e5e0346a86
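The idea in the linked post boils down to self-registering static objects: a macro expands to a static variable whose initializer inserts a factory lambda into a central registry before main() runs. A minimal sketch (all names here are illustrative, not taken from the linked post):
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Widget {
    virtual ~Widget() = default;
    virtual void hello() const = 0;
};

// Function-local static avoids the static-initialization-order problem.
std::map<std::string, std::function<std::unique_ptr<Widget>()>> &registry()
{
    static std::map<std::string, std::function<std::unique_ptr<Widget>()>> r;
    return r;
}

// REGISTER_WIDGET(X) expands to a static flag whose initializer runs before
// main() and inserts X's factory lambda under the key "X". (Beware: objects
// in a static library may be dropped by the linker if nothing references them.)
#define REGISTER_WIDGET(CLASS)                                                     \
    static const bool CLASS##_registered = [] {                                    \
        registry()[#CLASS] = [] { return std::unique_ptr<Widget>(new CLASS()); };  \
        return true;                                                               \
    }();

struct Gadget : Widget {
    void hello() const override { std::cout << "Gadget says hello\n"; }
};
REGISTER_WIDGET(Gadget)   // the only per-class registration line

int main()
{
    registry().at("Gadget")()->hello();   // prints "Gadget says hello"
}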
Say I have a function
def pyfunc():
    print("ayy lmao")
    return 4
and I want to call it from C++:
int j = (int)python.pyfunc();
How exactly would I do that?
You might want to have a look at this: https://docs.python.org/2/extending/extending.html
In order to call a Python function from C++, you have to embed Python
in your C++ application. To do this, you have to:
Load the Python DLL. How you do this is system dependent:
LoadLibrary under Windows, dlopen under Unix. If the Python DLL is
in the usual path you use for DLLs (%path% under Windows,
LD_LIBRARY_PATH under Unix), this will happen automatically if you try
calling any function in the Python C interface. Manual loading will
give you more control with regards to version, etc.
Once the library has been loaded, you have to call the function
Py_Initialize() to initialize it. You may want to call
Py_SetProgramName() or Py_SetPythonHome() first to establish the
environment.
Your function is in a module, so you'll have to load that:
PyImport_ImportModule. If the module isn't in the standard path,
you'll have to add its location to sys.path: use
PyImport_ImportModule to get the module "sys", then
PyObject_GetAttrString to get the attribute "path". The path
attribute is a list, so you can use any of the list functions to add
whatever is needed to it.
Your function is an attribute of the module, so you use
PyObject_GetAttrString on the module to get an instance of the
function. Once you've got that, you pack the arguments into a tuple or
a dictionary (for keyword arguments), and use PyObject_Call to call
it.
All of the functions, and everything that is necessary, is documented
(extremely well, in fact) in https://docs.python.org/2/c-api/. You'll
be particularly interested in the sections on "Embedding Python" and
"Importing Modules", along with the more general utilities ("Object
Protocol", etc.). You'll also need to understand the general principles
with regards to how the Python/C API works—things like reference
counting and borrowed vs. owned references; you'll probably want to read
all of the sections in the Introduction first.
And of course, despite the overall quality of the documentation, it's
not perfect. A couple of times, I've had to plunge into the Python
sources to figure out what was going on. (Typically, when I'm getting
an error back from Python, to find out what it's actually complaining
about.)
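Putting those steps together, here is a minimal sketch (Python 3 C API; the module name mymodule, the file name and the build flags are assumptions, and error handling is kept to the bare minimum):
// call_pyfunc.cc - build e.g.:
//   g++ call_pyfunc.cc $(python3-config --cflags) $(python3-config --embed --ldflags)
#include <Python.h>
#include <iostream>

int main()
{
    Py_Initialize();

    // Make the current directory importable so that mymodule.py (containing pyfunc) is found.
    PyRun_SimpleString("import sys; sys.path.insert(0, '.')");

    int result = -1;
    PyObject *module = PyImport_ImportModule("mymodule");
    if (module) {
        PyObject *func = PyObject_GetAttrString(module, "pyfunc");
        if (func && PyCallable_Check(func)) {
            PyObject *value = PyObject_CallObject(func, nullptr);   // pyfunc()
            if (value) {
                result = static_cast<int>(PyLong_AsLong(value));    // int(return value)
                Py_DECREF(value);
            }
        }
        Py_XDECREF(func);
        Py_DECREF(module);
    }
    if (PyErr_Occurred())
        PyErr_Print();
    Py_Finalize();

    std::cout << "pyfunc() returned " << result << "\n";   // expects 4
    return 0;
}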
I currently use an old C library for getting program options and would like to replace it with some proper C++ (mainly to become independent of that library, which is a real burden). I was looking into using boost.program_options, but I am not sure it can support everything I want. Some things I want are:
allow the following command-line syntax: myprogram option=value (in particular, I don't really want the --option value syntax)
use a default value if no value is provided (obviously this can be done in my program, but support in the options library would be nice)
allow default options (which are always present even if I don't give them) and an automatic help output consisting of all the options and their descriptions
allow mathematical parsing, i.e. (command line) myprogram option1=Pi option2=3/5 option3=sqrt(2) to give 3.1415..., 0.6, and 1.414... in my program
allow single values to be expanded. Let option_3Dpoint correspond to an std::array<double,3>, I want both myprogram option_3Dpoint=0,0,0 and myprogram option_3Dpoint=0 (expanding to 0,0,0) to work
Which of these can be supported by boost.program_options? Are there any alternatives?
boost.program_options is a very good library. You can use it to parse config files as well. Answers:
1. Don't know, but there seems to be no built-in support for the bare option=value syntax.
2. Yes.
3. Yes (see the sketch below for points 2 and 3).
4. No, unless you write your own expression-evaluation handler or use some other Boost libraries to do this.
5. Yes, but you will need to write your own handler which builds a 3D point object from a string like 0,0,0.
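Here is a minimal boost.program_options sketch with default values and automatic help (option names follow the question; note that the library's native command-line style is --option=value, so the bare option=value syntax, the math parsing and the expansion would all need the custom handlers mentioned above):
// myprogram.cc - run e.g.: ./myprogram --option1=2.5 --option_3Dpoint=1,2,3
#include <boost/program_options.hpp>
#include <iostream>
#include <string>

namespace po = boost::program_options;

int main(int argc, char *argv[])
{
    po::options_description desc("Allowed options");
    desc.add_options()
        ("help", "print this help message")
        ("option1", po::value<double>()->default_value(1.0), "a numeric option")
        ("option_3Dpoint", po::value<std::string>()->default_value("0,0,0"),
         "a 3D point given as x,y,z");

    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    if (vm.count("help")) {
        std::cout << desc << "\n";   // automatic help listing all options and descriptions
        return 0;
    }

    std::cout << "option1 = " << vm["option1"].as<double>() << "\n";
    // a custom handler would split this string into a std::array<double, 3>
    std::cout << "option_3Dpoint = " << vm["option_3Dpoint"].as<std::string>() << "\n";
}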
Does anyone know whether clang has an option to return a symbol's definition or declaration (or references)?
I mean: there is an option for the clang executable called -code-completion-at=path_to_file:line:column
With it, clang will look into the code and return completion strings (i.e., if there is, for example, a variable of std::string type, it returns all the methods and attributes you can call on std::string).
Now what I want is for clang to return the file and coordinates where the definition of the symbol starts. So for that std::string variable, I want it to return the coordinates where I wrote std::string variable; in the code.
I want to use it in vim instead of the obsolete cscope/ctags functionality (the tags system using ctags/cscope in vim doesn't know context; it's not really usable in bigger projects).
I know there is a clang-based plugin (http://blog.wuwon.id.au/2011/10/vim-plugin-for-navigating-c-with.html), but it doesn't work correctly (actually, it doesn't work for me at all).
Is it even possible? It shouldn't be that hard: if clang can return the completion, it probably already knows where it read the definition of the variable from...
Clang provides such functionality through the libclang shared library, and there is a simple example of how to use it: if you build clang from source, take a look at the c-index-test executable. Its source is located in tools/c-index-test.
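For a taste of what c-index-test wraps, here is a minimal libclang sketch that parses a file and prints where the symbol at a given line and column is defined (include/library paths depend on your LLVM installation; the fallback to the referenced declaration is my choice):
// find_def.cc - build e.g.: g++ find_def.cc -lclang
#include <clang-c/Index.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv)
{
    if (argc < 4) {
        std::fprintf(stderr, "usage: %s file line column\n", argv[0]);
        return 1;
    }
    const char *path = argv[1];
    unsigned line = std::strtoul(argv[2], nullptr, 10);
    unsigned col  = std::strtoul(argv[3], nullptr, 10);

    CXIndex index = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(index, path, nullptr, 0,
                                                      nullptr, 0, CXTranslationUnit_None);
    if (!tu) { std::fprintf(stderr, "failed to parse %s\n", path); return 1; }

    // Cursor for the symbol under file:line:column.
    CXSourceLocation loc = clang_getLocation(tu, clang_getFile(tu, path), line, col);
    CXCursor cursor = clang_getCursor(tu, loc);

    // Definition if it is in this translation unit, otherwise the referenced declaration.
    CXCursor def = clang_getCursorDefinition(cursor);
    if (clang_Cursor_isNull(def))
        def = clang_getCursorReferenced(cursor);
    if (clang_Cursor_isNull(def)) {
        std::fprintf(stderr, "no definition or declaration found\n");
        return 1;
    }

    CXFile defFile; unsigned defLine, defCol;
    clang_getSpellingLocation(clang_getCursorLocation(def), &defFile, &defLine, &defCol, nullptr);
    CXString name = clang_getFileName(defFile);
    std::printf("%s:%u:%u\n", clang_getCString(name), defLine, defCol);

    clang_disposeString(name);
    clang_disposeTranslationUnit(tu);
    clang_disposeIndex(index);
}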
I don't want users to see all the exported functions of my DLL through its dependencies; is there a way to do that? I compile my DLL with C++ and MS Visual Studio.
Another option may be to create a single exported function which returns an array of addresses of the functions which you would like to hide; once you have these addresses, you can call them directly:
// foo, foo1, foo2 are the functions whose names you want to keep out of the export table
static void* Funcs[] = { (void*)&foo, (void*)&foo1, (void*)&foo2, 0 };

__declspec(dllexport) void** GetFuncs(void)
{
    return Funcs;
}
In your executable you can do the following:

void** Funcs = GetFuncs();
// cast the slot back to its real signature before calling it
// (the (int, int, int) signature here is just an example)
auto fn = reinterpret_cast<void (*)(int, int, int)>(Funcs[0]);
fn(1, 2, 3);
Use a *.def file with the NONAME attribute to prevent the names from being exported: see Exporting Functions from a DLL by Ordinal Rather Than by Name ... there's an example here.
This is really awkward, but if you don't want others to even see the ordinals, you can wrap your functions with COM. A COM DLL only exposes the common COM functions, so yours will be hidden. Then, there are techniques to use the DLL without registering it first, so no information about the COM class you'll be using can be found in the system. It's definitely a weird reason to use COM, but the request is quite uncommon as well...
IMO using NONAME is useless for this purpose: it does not hide dependencies, which would still be shown (by ordinal), and even basic users would still be able to get to them via GetProcAddress.
I think you'll have to use more sophisticated approach - e.g. solutions proposed by eran.
Please don't try to hide your functions inside a COM object thinking they will be hidden; see the article Enumerate COM object (IDispatch) methods using ATL? for how someone can probe a COM DLL for function names.
Additionally, it can be desirable to hide the names of exported functions when your DLL is only for your own use via other code modules and it does something which only your own calling code should have access to. This category may include algorithmic trade secrets.
Another trick is to export decoy functions that crash or set an internal state letting the code know it has been compromised. In a compromised state the code can deliberately produce wrong results or random crashes, or even send mail back to an account with information about the snooper.
A really simple way is to wrap the DLL in a packer like UPX. What you see exported then is just the stuff UPX uses to unpack the file into memory.
No, the whole point of having exports is so that they are VISIBLE.
That's the short answer. The long answer involves .def files: you can tell the linker to export your C functions by ordinal (index) rather than by name using a [definition file](http://msdn.microsoft.com/en-us/library/d91k01sh(VS.80).aspx).