I have a C++ codebase which uses templates a lot and therefore is header-only for the most part. The only time that something gets compiled is for the test cases. The build system used is GNU Autotools.
During work on the codebase I noticed that headers are rarely stand-alone: they only compile if they are included in the right order, after the ones they (implicitly) depend on. When I add a header myself, I try to make it stand-alone and include the bits it needs, only to find that those bits were not stand-alone either.
Compiling the unit tests gives me errors, but the test is not as strong as I would like. I think I want to call g++ -c on each header file. If they are formed correctly, they should all compile, right?
How could I tell GNU Autotools in a maintainable way that it should compile each header into an object file in order to see whether dependencies are correctly specified via #include?
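For illustration, here is the kind of check translation unit that idea boils down to - one tiny generated .cpp per header, where the header name my_widget.hpp is just a hypothetical placeholder:

// check_my_widget_hpp.cpp - hypothetical, generated check for one header.
// Including the header first, with nothing before it, proves that it pulls in
// everything it needs by itself.
#include "my_widget.hpp"
// Including it a second time also exercises the include guard.
#include "my_widget.hpp"
// Nothing else is needed; compiling this file (g++ -c, or -fsyntax-only to skip
// writing the object file) fails exactly when the header is not self-contained.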
Related
I have shared code given to me that compiles on one Linux system but not on a newer system. The error is that uint32_t does not name a type. I realize that this is often fixed by including <cstdint> or <stdint.h>. The source code has neither of these includes, and I am looking for an option that doesn't require modifying it, due to internal business practices that I can't control. Since it compiles as is on one machine, they don't want changes to the source code.
I am not sure if it matters, but the older system uses gcc 4.1 while the newer one uses gcc 4.4. I could install different versions of gcc if needed, or add/install library/include files on the newer machine; I have full control of what is on that machine.
What are my options for trying to compile this code on my machine without modifying the source? I can provide other details if needed.
I am not sure if it matters but the older system uses gcc 4.1 while the newer one uses gcc 4.4
GCC's standard library headers stopped pulling in <stdint.h> indirectly some time ago. You now have to include it (or something that includes it) yourself...
I realize that this is often fixed by including <cstdint> or <stdint.h>. The source code has neither of these includes, and I am looking for an option that doesn't require modifying it, due to internal business practices that I can't control...
I hope I am not splitting hairs... If you can't modify the source files, are you allowed to modify the build system or configuration files, or the environment? If so, you can use a forced include to insert the file. See Include header files using command line option?
You can modify the Makefile to force-include stdint.h. If the build system honors CFLAGS or CXXFLAGS, then you can force-include it via those flags. Your last choice is probably to do something like export CC="gcc -include stdint.h".
The reason I am splitting hairs is OpenSSL and FIPS. The OpenSSL source files for the FIPS Object Module are sequestered and cannot be modified. We have to fall back to modifying supporting scripts and the environment to get some things working as expected.
If you really don't want to amend the file, you could wrap it. Suppose it's called src.c; create a new file src1.c:
#include <stdint.h>
#include "src.c"
And then compile src1.c.
PS: The problem may arise because compilers include other headers from within their own header files. This can mean that symbols 'officially' defined in one header are quietly made available when you include a different header that isn't documented as including it.
It's an error to write a program relying on a symbol for which the appropriate header hasn't been included - but it's easy to do and difficult to spot.
A changing compiler or version sometimes reveals these quiet issues.
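A minimal sketch of that effect, assuming an older libstdc++ whose <iostream> happens to drag in <stdint.h> indirectly (whether this compiles depends entirely on the library version):

#include <iostream>   // may or may not pull in <stdint.h> behind the scenes

int main() {
    uint32_t x = 42;              // only works thanks to the accidental transitive include;
    std::cout << x << std::endl;  // a stricter library version rejects this with
    return 0;                     // "uint32_t does not name a type"
}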
Unfortunately, you can't force your code to work on a newer compiler without modifying something.
If you are allowed to modify the build script and add source files to the project, you might be able to add another source file to the project which, in turn, includes your affected file and headers it really needs. Remove the affected source files from the build, add the new ones, and rebuild.
If your shared source uses macro magic (e.g. an #include SOME_MACRO, where SOME_MACRO can be defined on the command line), you might be able to get away with modifying the build options to define that macro for every compilation of each file. Apart from relying on modifying the build process, this also relies on a legal but unusual use of macros in your project.
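A sketch of what such a hook looks like, assuming the shared source really does contain one (SOME_MACRO is the placeholder name used above; the file name shared.cpp is hypothetical):

/* shared.cpp - illustration only; this hook would have to exist in the
 * sequestered source already, it is not something you add. */
#ifdef SOME_MACRO
#include SOME_MACRO            /* expands to whatever the build defines */
#endif

uint32_t counter = 0;          /* needs <stdint.h> one way or another */

/* Build without touching the file, e.g.:
 *   g++ -DSOME_MACRO='<stdint.h>' -c shared.cpp */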
It may be possible to modify the standard headers in your compiler/library installation - assuming you have sufficient (administrative) access to do so. The trouble is that the problem will almost certainly re-emerge whenever an update or patch to the compiler/library is installed. Over time, this approach locks the code into relying on an older and older compiler/library that has been superseded - no ability to benefit from compiler bug fixes, evolution of standards, etc. It also severely limits your ability to share the code, and the ability of others to use it - anyone who receives the code needs to modify their compiler/library installation.
The basic fact, however, is that your shared code relies on a particular implementation (compiler/library) that exhibits non-standard behaviour. It has already failed with an update of that implementation - which removed those non-standard accidents - and it is likely to fail with other implementations too (porting to different compilers in future, etc). The real technical solution is to modify the source and #include the needed headers correctly. The real business solution is to make a business case justifying such modifications, citing the inefficiency - which will grow over time - in the cost and effort needed to maintain the shared code whenever it is ported or whenever a compiler is updated.
Look at the second-to-last line of code above your error: you'll find that everything above it terminates with a comma; only use a semicolon on the last entry.
How can I use a library such as GMP in C++ in such a way that the file can be compiled normally, without the person compiling it having to install GMP themselves? This is for personal use (so no license issues etc.); I just want my client to be able to compile my C++ code which needs the GMP library. I tried using the g++ -E option, and it kind of works, but the problem is that at the top of the output many files are included which are themselves part of the library (and not available without the library). Also, the linker needs the library to be available even if I do that successfully.
How do I copy the entire library per se, maintaining a single file of code, such that it compiles and doesn't cause problems? I'm quite sure it is doable, because copying some functions works like a charm, but I can't copy-paste 3000 lines of code manually, so how do I do it?
If I understand you correctly, I guess what you want is to have your source and the entire GMP library in one file? And you want to do that in an automated way?
I think there is no standard way to do this. You could write a script in your favorite language (bash, python, etc.) which traverses the GMP code tree, appending the files (first the header files, then the cpp files) to your file while dropping all local #include lines - and hope that there are not too many macros etc. which rely on the folder structure being intact.
However, you could (and probably should) just supply the library and an adequate Makefile with your source code. Then the client wouldn't need to install the GMP lib, but could unpack the archive and run make. If the constraint is to do this in one file, maybe it is wiser to change that constraint...
I have a rather complex SCons script that compiles a big C++ project.
This gcc manual page says:
The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.
So it's better to give all my files to a single g++ invocation and let it drive the compilation however it pleases.
But SCons does not do this. It calls g++ separately for every single C++ file in the project and then links them using ld.
Is there a way to make SCons do this?
The main reason to have a build system with the ability to express dependencies is to support some kind of conditional/incremental build. Otherwise you might as well just use a script with the one command you need.
That being said, the benefit of having gcc/g++ optimize as the manual describes is substantial, in particular if you have C++ templates that you use often. Good for run-time performance, bad for recompile performance.
I suggest you try and make your own builder doing what you need. Here is another question with an inspirational answer: SCons custom builder - build with multiple files and output one file
Currently the answer is no.
Logic similar to this was developed for MSVC only.
You can see this in the man page (http://scons.org/doc/production/HTML/scons-man.html) as follows:
MSVC_BATCH: When set to any true value, specifies that SCons should batch compilation of object files when calling the Microsoft Visual C/C++ compiler. All compilations of source files from the same source directory that generate target files in a same output directory and were configured in SCons using the same construction environment will be built in a single call to the compiler. Only source files that have changed since their object files were built will be passed to each compiler invocation (via the $CHANGED_SOURCES construction variable). Any compilations where the object (target) file base name (minus the .obj) does not match the source file base name will be compiled separately.
As always patches are welcome to add this in a more general fashion.
In general this should be left up to the program developer. Trying to compile everything together as an amalgamation may introduce unintended behaviour into the program, if it even compiles in the first place. Your best bet, if you want this kind of optimisation without editing the source yourself, is to use a compiler with interprocedural optimisation, like icc -ipo.
One example where an amalgamation of two .c files would not compile is when they each define an identically named static symbol with different functionality.
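A minimal sketch of that situation, with hypothetical file and function names:

/* a.c - compiles fine on its own */
static int helper(void) { return 1; }   /* internal to a.c */
int entry_a(void) { return helper(); }

/* b.c - also compiles fine on its own */
static int helper(void) { return 2; }   /* same name, different behaviour */
int entry_b(void) { return helper(); }

/* Concatenated into one amalgamated file, the second definition of helper()
 * is a redefinition error; and if one copy were silently dropped instead,
 * entry_a() and entry_b() would quietly call the wrong helper(). */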
How can I check whether gcc precompiled headers are supported with autoconf? Is there a macro like AC_CHECK_GCH? The project I'm working on has a lot of templates and includes; I have tried writing a .h with the most commonly used includes and compiling it manually. It would be nice to integrate this with the rest of autotools.
It's not clear what you are hoping to accomplish. Are you hoping to distribute a precompiled header in your tarball? Doing so would be almost completely useless. (I actually think it would be completely useless, but I say "almost" because I might be missing something.)
The system headers on the target box (the machine on which your project is being built) are almost certainly different than the ones on your development box. If they are the same, then there's no need to be using autoconf.
If the user of your package happens to be using gcc and wants to use precompiled headers, then they will put the .gch files in the appropriate location and gcc will use them. You don't need to do anything in your package.
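For what it's worth, the user-side workflow amounts to the following; common.hpp is a hypothetical name for the kind of header of frequently used includes the question describes:

// common.hpp - collects the expensive, frequently used standard includes
#include <vector>
#include <string>
#include <map>
#include <algorithm>

// The user precompiles it against their own toolchain and flags:
//   g++ -x c++-header common.hpp -o common.hpp.gch
// From then on, any translation unit that includes common.hpp picks up
// common.hpp.gch automatically, provided it sits next to the header and was
// built with compatible options.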
SQLite and googletest come with a very easy-to-use, single-file version which makes it a breeze to use them in other projects, as you just need to add a single source file. Both of them use home-brewed tools to create the combined source file, so I wonder if there is a more generic tool for this? It should take a list of implementation/header files and spit out a combined header/source, and fix up the local includes. I'm fine if it doesn't handle conditional includes or includes with different #defines before them, like Boost.Tuple/MPL uses. A typical target library would be something like ICU.
Try this:
https://github.com/vinniefalco/Amalgamate/
Maybe it can help...
If your includes are correctly defined (that is, there are guards in all header files, and each header/code unit contains all the includes it requires), then you can do it 'half-manually': locate the system header includes and comment them out, then create a header that just includes everything in any order and preprocess it (in gcc that would be gcc -E), and then do the same with the code units.
This manual approach can be cumbersome, but if you only need to do it once it will be fine. Then again, even if merging the header files might make sense, I prefer not to do it. I would actually leave the files separate, and if you feel you need to simplify access, provide bundling headers that just include the others. This is the approach that some Boost libraries take, where you can include the details of what you want or a single header that includes everything else. The code can be compiled/linked into a static lib and used as if it were a single element.
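A sketch of what such a bundling header looks like; the mylib names are hypothetical:

// mylib/all.hpp - convenience header: include this one file to get everything,
// or keep including the individual pieces you actually need.
#ifndef MYLIB_ALL_HPP
#define MYLIB_ALL_HPP

#include "mylib/core.hpp"
#include "mylib/io.hpp"
#include "mylib/util.hpp"

#endif // MYLIB_ALL_HPP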
This might be sort of interesting in ICU, which has, in some cases, incompatible defines/includes and mixtures of C and C++ and a number of generated files. Maybe let us know how it goes?
(disclosure: icu developer)