I have a long-standing C library used by C and C++ programs.
It has a few compilation units (source files, in other words) that are entirely C++, but this has never been a problem. My understanding is that linkers (on Linux, Windows, etc. at least) work at the object-file level, so an object file in a library that is never referenced has no effect on the link: it isn't pulled into the binary, and so on. The C users of the library never refer to the C++ symbols, and the library doesn't internally, so the resulting linked app is C-only. So while it has always worked perfectly, I don't know whether that's because the C++ never makes it past the linking stage, or because, more fundamentally, this kind of mixing would always work even if I did mix languages.
For the first time I'm thinking of adding some C++ code to the existing C API's implementation.
For purposes of discussion, let us say I have a C function that does something and logs it via stdout; since stdout is buffered separately from cout, the output can become confusingly interleaved. So let us say this module has an option that can be set to log to cout instead of stdout. (This is a more general question, not merely about getting cout and stdout to cooperate.) The C++ code might or might not run, but the dependency on it will definitely be there.
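For concreteness, here is a rough sketch of the kind of change I mean; the function names are made up purely for illustration:

    /* mylib.h - the existing C header stays plain C */
    #ifdef __cplusplus
    extern "C" {
    #endif
    void mylib_set_log_to_cout(int enable);
    void mylib_do_something(void);
    #ifdef __cplusplus
    }
    #endif

    // mylib_log.cpp - a new C++ implementation file inside the C library
    #include <cstdio>
    #include <iostream>
    #include "mylib.h"

    static int g_use_cout = 0;

    extern "C" void mylib_set_log_to_cout(int enable) { g_use_cout = enable; }

    extern "C" void mylib_do_something(void)
    {
        /* ... the actual work ... */
        if (g_use_cout)
            std::cout << "did something\n";            // this pulls in the C++ runtime
        else
            std::fprintf(stdout, "did something\n");   // the old behaviour
    }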
In what way would this impact users of this library? It is widely used, so I cannot check with the entire user base, and since it is used in mission-critical apps it would be unacceptable to make a change that causes links to start failing, at least not without a release note explaining the problem and the solution.
Just as an example of a possible problem, I know compilers have "hidden" libraries of support functions that C and C++ programs need. There are obviously also the standard C and C++ libraries, which you normally don't have to link to explicitly. My concern is that the toolchain might not know it needs to do these things.
Related
I am using C++ (in Xcode and Code::Blocks); I don't know much.
I want to make something compilable during runtime.
For example:

    char prog[] = "cout << \"hello world\";";

It should compile the contents of prog.
I read a bit about quines, but it didn't help me.
It's sort of possible, but not portably, and not simply. Basically, you have to write the code out to a file, then compile it to a dll (invoking the compiler with system), and then load the dll. The first is simple, the last isn't too difficult (but will require implementation-specific code), but the middle step can be challenging: obviously, it only works if the compiler is installed on the system, but you have to find where it is installed, verify that it is the same version (or at least a version which generates binary-compatible code), invoke it with the same options that were used when your code was compiled, and process any errors.
C++ wasn't designed for this. (Compiled languages generally aren't.)
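As a very rough, Linux-only sketch of those three steps (the compiler command, file names, and flags are assumptions, and error handling is kept to a minimum):

    // runtime_compile.cpp - build with something like: g++ runtime_compile.cpp -ldl
    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <dlfcn.h>

    int main()
    {
        // 1. Write the generated code out to a file.
        {
            std::ofstream src("snippet.cpp");
            src << "#include <iostream>\n"
                   "extern \"C\" void snippet() { std::cout << \"hello world\\n\"; }\n";
        }

        // 2. Invoke the installed compiler to build a shared library.
        //    Assumes g++ is on the PATH and is binary-compatible with this program.
        if (std::system("g++ -shared -fPIC snippet.cpp -o snippet.so") != 0)
            return 1;

        // 3. Load the library and call the freshly compiled function.
        void* lib = dlopen("./snippet.so", RTLD_NOW);
        if (!lib) { std::cerr << dlerror() << '\n'; return 1; }
        auto fn = reinterpret_cast<void (*)()>(dlsym(lib, "snippet"));
        if (fn) fn();
        dlclose(lib);
    }

On Windows the same idea works with LoadLibrary/GetProcAddress instead of dlopen/dlsym.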
The short answer is "no, you can't do that". C and C++ were never designed to do this.
That's pretty much also the long answer to the actual question, but I'll expand a bit on a few ideas.
The code, as produced by the compiler, is certainly not trivial to add things to at runtime. There are a few techniques that can be used to "add more code" to a program:
Add a dynamic shared library (DLL), which contains code that has been compiled separately from the existing code. You could of course also have code in your program that writes out some source code, compiles it with the compiler, links it into a dynamic library, and loads it into your program.
You could build your own little code generator that emits machine code into a chunk of memory. Note that you will probably need a "special" memory allocation function, as "normal" memory allocations are typically not allowed to be executed - you need to allocate "with execute permission" - VirtualAlloc on Windows has such a flag, and mmap on Linux/Unix flavours does too. And of course, you pretty much have to "be a compiler" to achieve this (there is a small sketch of the memory part after this answer).
You could naturally also invent your own interpreted language, which would allow your program to load in for example a text-file with commands/instructions to be executed, or contain text inside the program for execution with this language.
But like I said to start with, this is not what C and C++ (and most other compiled languages) were meant for, so it's not going to be as simple as "stick some C++ code in a string, and make it run".
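To illustrate just the "executable memory" part of the second technique above, here is a minimal Linux/x86-64-only sketch; the byte sequence is hand-assembled, and hardened systems may refuse writable-and-executable mappings:

    // exec_mem.cpp - Linux/x86-64 only; "being your own compiler" in miniature
    #include <cstring>
    #include <iostream>
    #include <sys/mman.h>

    int main()
    {
        // mov eax, 42 ; ret   (hand-written x86-64 machine code)
        const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        // Ask for memory that may be written to *and* executed.
        void* mem = mmap(nullptr, sizeof code,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;

        std::memcpy(mem, code, sizeof code);

        auto fn = reinterpret_cast<int (*)()>(mem);
        std::cout << fn() << '\n';          // prints 42

        munmap(mem, sizeof code);
    }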
It depends why you want to do this.
If it's for efficiency reasons - you only know at run time what a function has to do, but it has to be very efficient - then what was already suggested (writing the code to a file, compiling it to a dll/so, and loading it dynamically) is your best option.
BUT if the reason you want this is to allow for user-defined behaviour - say, a general function you read from a database (the behaviour of a unit in a game? the value of a field in a plot?) - or, more generally, you just want to change or augment behaviour at runtime with little concern for efficiency, I recommend using an external scripting language like Lua, which interacts easily with your compiled C++ code.
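For a taste of how small the embedding boilerplate is, here is a minimal sketch (it assumes the Lua development headers and library are installed and linked, e.g. with -llua):

    // embed_lua.cpp - build with something like: g++ embed_lua.cpp -llua
    #include <iostream>
    #include <lua.hpp>   // C++-friendly header shipped with Lua

    int main()
    {
        lua_State* L = luaL_newstate();   // a fresh interpreter
        luaL_openlibs(L);                 // load the standard Lua libraries

        // The "behaviour" could just as well be read from a file or a database.
        if (luaL_dostring(L, "print('from a script: 2 + 3 = ' .. 2 + 3)") != 0)
            std::cerr << lua_tostring(L, -1) << '\n';   // report script errors

        lua_close(L);
    }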
The C and C++ languages compile to binary machine code, unlike Java and C#, which generate instructions for a 'virtual machine', or interpreted scripting languages such as JavaScript. The compilation of C++ is performed by a separate executable, the compiler, which is not incorporated into the resulting executable.
So the language does not have any built in "eval" capability to translate further code once compilation is finished.
It's not uncommon for new C/C++ programmers to think they need to do this, but they typically don't. Perhaps you could expand further on what you're actually looking to do.
But if you do actually need to be able to do this, your options are:
Write code to compile a new executable with the new code and then run the resulting program.
Write a simple parser and "virtual machine" of your own,
Look at incorporating an embedded scripting/interpreted language such as Lua,
Try and wrap your head around integrating CINT,
See also: Scripting language for C++
Lately, I've been studying the D language. I've always been kind of confused about the runtime.
From the information I can gather about it (which isn't a whole lot), I understand that it's sort of a, well, runtime that helps with some of D's features, like garbage collection; it runs along with your own programs. But since D is compiled to machine code, does it really need features such as garbage collection if our program doesn't need them?
What really confuses me is statements such as:
"You can write an operating system in D."
I know that you can't really do that, because there's more to an operating system than any compiled language can give without using some assembly. But if you had a kernel that called D code, would the D runtime prevent D from running in such a bare-bones environment? Or is the D runtime simpler than that? Can it be thought of as simply an "automatic" inclusion of source files/libraries that, when compiled with your application, make no more of a difference than writing that code yourself?
Maybe I'm just looking at it all wrong. But I'm sure some information on the subject could do a lot of people good.
Yes, indeed, you can implement the functions of DRuntime that the compiler expects right in your main module (or wherever), compile without a runtime, and it'll Just Work (tm).
If you just build your code without a runtime, the compiler will emit errors when it's missing a symbol that it expects to be implemented by the runtime. You can then go and look at how DRuntime implements it to see what it does, and then implement it in whatever way you prefer. This is what XOmB, the kernel written in D (language version 1, though, but same deal), does: http://xomb.net/index.php?title=Main_Page
A lot of DRuntime isn't actually used by many applications, but it's the most convenient way to include the runtime components of D into applications, so that's why it's done as a static library (hopefully a shared library in the future).
It's pretty much the same as C and C++, I expect. The language itself compiles to native code and just runs. But there is some code that is always needed to set everything up to run your program, for example processing command-line parameters.
And some more complex language facilities are better implemented by calling some standard code rather than generating the code everywhere it is used. For example, throwing an exception needs to find the relevant handler function. No doubt the compiler could insert the code to do that everywhere it was needed, but it's much more sensible to write the code once in a library and call it. Plus there are many pre-written functions in the standard library.
All of this taken together is the runtime.
If you write C you can use it to write an operating system, because you can write the startup code yourself, you can write all the code for handling memory allocation yourself, and you can write all the code for standard functions like strcat yourself instead of using the ones provided by the runtime. But you wouldn't want to do that for any application program.
What are the disadvantages of implementing a C library in C++? The library is going to be used to build Windows applications for regular PCs using Visual Studio 2008 or newer. It is not clear why the specs state that it should be a C library. I am guessing that what they want is a plain C API, not a pure C lib. But my boss disagrees.
Anyway, what I want to do is declare all functions extern "C" and use C++ in the implementation files. I did some testing and everything worked just fine, even when the application was compiled as C (by changing the project option in Visual Studio).
I've seen people do that for, say, exposing STL collections to C programs. If you are sure that the library will only be used in environments with sane C/C++ compilers (say, VS and gcc only), I think this is a pretty safe thing to do from the technical perspective.
Now, it sounds like you have some sort of outside requirement at play here, but obviously we can't comment on that. It might be worth double-checking with the source of the requirement.
UPDATE: oh, I should mention that it will affect the DLLs that your library requires: the C++ runtime DLL will need to be loaded in addition to the CRT.
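As an illustration of that pattern (the extern "C" facade over a C++ implementation, e.g. for the STL-collection case mentioned above), a rough sketch with invented names:

    /* mylist.h - what the C clients see */
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct mylist mylist;        /* opaque to C */
    mylist* mylist_create(void);
    void    mylist_push(mylist* l, int value);
    int     mylist_size(const mylist* l);
    void    mylist_destroy(mylist* l);
    #ifdef __cplusplus
    }
    #endif

    // mylist.cpp - the implementation is free to use C++
    #include <vector>
    #include "mylist.h"

    struct mylist { std::vector<int> data; };

    extern "C" mylist* mylist_create(void)               { return new mylist; }
    extern "C" void    mylist_push(mylist* l, int value) { l->data.push_back(value); }
    extern "C" int     mylist_size(const mylist* l)      { return (int)l->data.size(); }
    extern "C" void    mylist_destroy(mylist* l)         { delete l; }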
extern "C" is used all the time to bridge between C and C++. For instance, the new operator in turn calls malloc() from the standard C library; this is one good example of a C facility being given a C++ look. new makes it much easier to allocate memory, and on top of that C++ allows a lot of functionality, like operator overloading, that is not available in C. My guess is that they want to add more functionality and make neater interfaces.
If you are asking about disadvantages, they are mostly compiler-specific problems: the ABI generated for a C++ program differs from that of C, and if the compilers involved cannot interoperate, you are stuck with it.
I am not sure if this is what you are looking for; if not, try this link, it might be of some assistance.
http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=180
If they are going to use it from a C program, i.e. the main() function is compiled by a C compiler, then you have to be very careful with your C++ library. The problem is that the C program will not execute any constructors for static variables. So you have to avoid using any static variables with constructors. This is easy for your library itself, but you have to check every call to a C++ library function to see whether it relies on the existence of a statically initialized variable (e.g. std::cout, std::cin, etc.).
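If the library does need such objects, one common workaround (just a sketch; whether it suffices depends on the toolchain) is the construct-on-first-use idiom, where the object is built lazily the first time it is needed rather than at program startup:

    // inside a C++ implementation file of the library
    #include <string>

    // Instead of a namespace-scope static:  static std::string g_prefix = "mylib: ";
    // use a function-local static, constructed on the first call:
    static std::string& log_prefix()
    {
        static std::string prefix = "mylib: ";   // built lazily, not at startup
        return prefix;
    }

    extern "C" const char* mylib_log_prefix(void)
    {
        return log_prefix().c_str();
    }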
I have no experience with LLVM or Clang yet. From what I have read, Clang is said to be easily embeddable (Wikipedia: Clang); however, I did not find any tutorials about how to achieve this. So is it possible to provide the user of a C++ application with scripting powers by JIT-compiling and executing user-defined code at runtime? Would it be possible to call the application's own classes and methods and share objects?
edit: I'd prefer a C-like syntax for the scripting language (or even C++ itself)
I don't know of any tutorial, but there is an example C interpreter in the Clang source that might be helpful. You can find it here: http://llvm.org/viewvc/llvm-project/cfe/trunk/examples/clang-interpreter/
You probably won't have much of a choice of syntax for your scripting language if you go this route. Clang only parses C, C++, and Objective C. If you want any variations, you may have your work cut out for you.
I think this is exactly what you described:
http://root.cern.ch/drupal/content/cling
You can use clang as a library to implement JIT compilation as stated by other answers.
Then, you have to load up the compiled module (say, an .so library).
In order to accomplish this, you can use standard dlopen (unix) or LoadLibrary (windows) to load it, then use dlsym (unix) to dynamically reference compiled functions, say a "script" main()-like function whose name is known. Note that for C++ you would have to use mangled symbols.
A portable alternative is e.g. GNU's libltdl.
As an alternative, the "script" may run automatically at load time by implementing module init functions or putting some static code (the constructor of a C++ globally defined object would be called immediately).
The loaded module can directly call anything in the main application. Of course symbols are known at compilation time by using the proper main app's header files.
If you want to easily add C++ "plugins" to your program, and know the component interface a priori (say your main application knows the name and interface of a loaded class from its .h before the module is loaded in memory), after you dynamically load the library the class is available to be used as if it was statically linked. Just be sure you do not try to instantiate a class' object before you dlopen() its module.
Using static code also allows you to implement nice automatic plugin-registration mechanisms.
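As a sketch of such a registration mechanism (all names are invented; in a real program the Registrar object would live in the dynamically loaded module and the registry in the main application):

    // plugin_registry.cpp - illustrative, single-file version of the idea
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    using PluginAction = std::function<void()>;

    // The registry owned by the main application (construct-on-first-use).
    static std::map<std::string, PluginAction>& registry()
    {
        static std::map<std::string, PluginAction> r;
        return r;
    }

    // A global object whose constructor registers the plugin; when this code
    // sits in a shared library, the constructor runs as the module is loaded.
    struct Registrar
    {
        Registrar(const std::string& name, PluginAction action)
        {
            registry()[name] = std::move(action);
        }
    };

    static Registrar reg("hello_plugin", [] { std::cout << "hello plugin ran\n"; });

    int main()
    {
        // In the real application this would happen after dlopen()/LoadLibrary().
        registry().at("hello_plugin")();
    }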
I don't know about Clang but you might want to look at Ch:
http://www.softintegration.com/
This is described as an embeddable or stand-alone c/c++ interpreter. There is a Dr. Dobbs article with examples of embedding it here:
http://www.drdobbs.com/architecture-and-design/212201774
I haven't done more than play with it but it seems to be a stable and mature product. It's commercial, closed-source, but the "standard" version is described as free for both personal and commercial use. However, looking at the license it seems that "commercial" may only include internal company use, not embedding in a product that is then sold or distributed. (I'm not a lawyer, so clearly one should check with SoftIntegration to be certain of the license terms.)
I am not sure that embedding a C or C++ compiler like Clang is a good idea in your case, because the "script" - that is, the (C or C++) code fed in at runtime - can be arbitrary, and so able to crash the entire application. You usually don't want faulty user input to be able to crash your application.
Be sure to read What Every C Programmer Should Know About Undefined Behavior, because it is relevant and applies to C++ as well (including any "C++ script" used by your application). Notice that, unfortunately, a lot of UB doesn't crash the process (for example, a buffer overflow could corrupt some completely unrelated data).
If you want to embed an interpreter, choose something designed for that purpose, like Guile or Lua, and be careful that errors in the script don't crash the entire application. See this answer for a more detailed discussion of interpreter embedding.
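To give a flavour of that approach, and of how the script can call back into the host application (which is what the question asks about), here is a minimal Lua sketch; the function names are invented and the Lua headers/library are assumed to be available:

    // host_callback.cpp - build with something like: g++ host_callback.cpp -llua
    #include <iostream>
    #include <lua.hpp>

    // A host-application function exposed to the script.
    static int host_log(lua_State* L)
    {
        std::cout << "[host] " << luaL_checkstring(L, 1) << '\n';
        return 0;                                // number of Lua return values
    }

    int main()
    {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "host_log", host_log);   // make it callable from scripts

        // A faulty script produces an error value here instead of crashing the app.
        if (luaL_dostring(L, "host_log('called from the script')") != 0)
            std::cerr << lua_tostring(L, -1) << '\n';

        lua_close(L);
    }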
I'm just starting to explore C++, so forgive the newbiness of this question. I also beg your indulgence on how open ended this question is. I think it could be broken down, but I think that this information belongs in the same place.
(FYI -- I am working predominantly with the QT SDK and mingw32-make right now and I seem to have configured them correctly for my machine.)
I knew that there was a lot in the language which is compiler-driven -- I've heard about pre-compiler directives, but it seems like someone could write books on the different C++ compilers and their respective parameters. In addition, there are commands which apparently precede make (like qmake, for example -- is this something only in Qt?).
I would like to know if there is any place which gives me an overview of what compilers are out there, and what their different options are. I'd also like to know how each of them views Makefiles (it seems that there is a difference in syntax between them?).
If there is no website regarding, "Everything you need to know about C++ compilers but were afraid to ask," what would be the best way to go about learning the answers to these questions?
Concerning the "numerous options of the various compilers"
A piece of good news: you needn't worry about the details of most of these options. You will, in due time, delve into this, but only for the very compiler you use, and maybe only for the options that pertain to a particular set of features. But as a novice, generally trust the default options or the ones supplied with the make files.
The broad categories of these features (and I may be missing a few) are listed below; an example compiler invocation follows the list:
pre-processor defines (now, you may need a few of these)
code generation (target CPU, FPU usage...)
optimization (hints for the compiler to favor speed over size and such)
inclusion of debug info (which is extra data left in the object/binary and which enables the debugger to know where each line of code starts, what the variables names are etc.)
directives for the linker
output type (exe, library, memory maps...)
C/C++ language compliance and warnings (compatibility with previous version of the compiler, compliance to current and past C Standards, warning about common possible bug-indicative patterns...)
compile-time verbosity and help
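Purely as an illustration, a typical gcc/g++ command line touches several of these categories at once (the flags shown are common examples, not a recommendation):

    # preprocessor define, debug info, optimization, warnings and language compliance
    g++ -DLOG_LEVEL=2 -g -O2 -Wall -Wextra -std=c++17 -c main.cpp -o main.o

    # linker directives: combine objects, pull in the math library, name the output
    g++ main.o util.o -lm -o myapp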
Concerning an inventory of compilers with their options and features
I know of no such list, but I'm sure one probably exists on the web. However, I suggest that, as a novice, you worry little about these "details", use whatever free compiler you can find (gcc is certainly a great choice), and build experience with the language and the build process. C professionals may argue, with good reason and at length, about the merits of various compilers and their associated runtimes etc., but for generic purposes -and then some- the free stuff is all that is needed.
Concerning the build process
The most trivial applications, such as those made of a single unit of compilation (read: a single C/C++ source file), can be built with a simple batch file where the various compiler and linker options are hardcoded and the name of the file is specified on the command line.
For all other cases, it is very important to codify the build process so that it can be done
a) automatically and
b) reliably, i.e. with repeatability.
The "recipe" associated with this build process is often encapsulated in a make file or as the complexity grows, possibly several make files, possibly "bundled together in a script/bat file.
This (make file syntax) you need to get familiar with, even if you use alternatives to make/nmake, such as Apache Ant; the reason is that many (most?) source code packages include a make file.
In a nutshell, make files are text files that let you define targets and the associated commands to build each target. Each target is associated with its dependencies, which allows the make logic to decide which targets are out of date and should be rebuilt, and, before rebuilding them, which of their dependencies should themselves be rebuilt first. That way, when you modify, say, an include file (and if the make file is properly configured), any C file that uses this header will be recompiled, and any binary which links with the corresponding obj file will be rebuilt as well. make also includes options to force all targets to be rebuilt, and this is sometimes handy to be sure that you truly have a current build (for example, in case some dependencies of a given object are not declared in the make file).
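A minimal make file expressing exactly that idea might look like the following (file names are just an example; note that the command lines under each target must be indented with a tab):

    # the final program depends on two object files; relink if either changes
    myapp: main.o util.o
    	g++ main.o util.o -o myapp

    # each object depends on its source file and on the shared header,
    # so editing util.h triggers recompilation of both objects
    main.o: main.cpp util.h
    	g++ -c main.cpp -o main.o

    util.o: util.cpp util.h
    	g++ -c util.cpp -o util.o

    # 'make clean' removes the generated files; 'make -B' forces a full rebuild
    clean:
    	rm -f myapp main.o util.o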
On the Pre-processor:
The pre-processor is the first step toward compiling, although it is technically not part of the compilation. The purposes of this step are:
to remove any comments and extraneous whitespace
to substitute any macro reference with the relevant C/C++ syntax. Some macros, for example, are used to define constant values, such as, say, an email address used in the program; during pre-processing, any reference to this constant (by convention such constants are named with ALL_CAPS_AND_UNDERSCORES) is replaced by the actual C string literal containing the email address.
to exclude all conditional-compilation branches that are not relevant (#ifdef and the like)
What's important to know about the pre-processor is that its directives are NOT part of the C language proper; they serve several important functions, such as the conditional compilation mentioned earlier (used, for example, to have multiple versions of the program, say for different operating systems, or indeed for different compilers).
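A tiny example of the macro-substitution and conditional-compilation roles (the macro name and value are made up):

    #include <cstdio>

    /* macro substitution: every SUPPORT_EMAIL below becomes this string literal */
    #define SUPPORT_EMAIL "support@example.com"

    int main()
    {
    #ifdef _WIN32
        /* conditional compilation: only one of these branches survives pre-processing */
        std::printf("Windows build, contact %s\n", SUPPORT_EMAIL);
    #else
        std::printf("Non-Windows build, contact %s\n", SUPPORT_EMAIL);
    #endif
        return 0;
    }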
Taking it from there...
After this manifesto of mine... I encourage you to read but little more, and to dive into programming and building binaries. It is a very good idea to try to get a broad picture of the framework etc., but this can be overdone, a bit akin to the exchange student who stays in his/her room reading the Webster dictionary to be "prepared" for meeting native speakers, rather than just "doing it!"
Ideally you shouldn't need to care which C++ compiler you are using. Conformance to the standard has got much better in recent years (even from Microsoft).
Compiler flags obviously differ, but the same features are generally available; it's just a differently named option to, e.g., set the warning level on GCC and MS cl.
The build system is independent of the compiler; you can use any make with any compiler.
That is a lot of questions in one.
C++ compilers are a lot like hammers: They come in all sizes and shapes, with different abilities and features, intended for different types of users, and at different price points; ultimately they all are for doing the same basic task as the others.
Some are intended for highly specialized applications, like high-performance graphics, and have numerous extensions and libraries to assist the engineer with those types of problems. Others are meant for general purpose use, and aren't necessarily always the greatest for extreme work.
The technique for using each type of hammer varies from model to model—and version to version—but they all have a lot in common. The macro preprocessor is a standard part of C and C++ compilers.
A brief comparison of many C++ compilers is here. Also check out the list of C compilers, since many programs don't use any C++ features and can be compiled by an ordinary C compiler.
C++ compilers don't "view" makefiles. The rules of a makefile may invoke a C++ compiler, but also may "compile" assembly language modules (assembling), process other languages, build libraries, link modules, and/or post-process object modules. Makefiles often contain rules for cleaning up intermediate files, establishing debug environments, obtaining source code, etc., etc. Compilation is one link in a long chain of steps to develop software.
Also, many development environments abstract the makefile into a "project file" which is used by an integrated development environment (IDE) in an attempt to simplify or automate many programming tasks. See a comparison here.
As for learning: choose a specific problem to solve and dive in. The target platform (Linux/Windows/etc.) and problem space will narrow the choices pretty well. Which you choose is often linked to other considerations, such as working for a particular company, or being part of a team. C++ has something like 95% commonality among all its flavors. Learn any one of them well, and learning the next is a piece of cake.