I have a header file called coolStuff.h that contains a function awesomeSauce(arg1) that I would like to use in my cpp source file.
Directory Structure:
RworkingDirectory
  sourceCpp
    theCppFile.cpp
  cppHeaders
    coolStuff.h
The Code:
#include <Rcpp.h>
#include <cppHeaders/coolStuff.h>
using namespace Rcpp;
// [[Rcpp::export]]
double someFunctionCpp(double someInput){
  double someOutput = awesomeSauce(someInput);
  return someOutput;
}
I get the error:
theCppFile.cpp:2:31: error: cppHeaders/coolStuff.h: No such file or directory
I have moved the file and directory all over the place and can't seem to get this to work. I see examples all over the place of using 3rd party headers that say just do this:
#include <boost/array.hpp>
(That's from Hadley/devtools)
https://github.com/hadley/devtools/wiki/Rcpp
So what gives? I have been searching all morning and can't find an answer to what seems to me like a simple thing.
UPDATE 01.11.12
Ok, now that I have figured out how to build packages that use Rcpp in RStudio, let me rephrase the question. I have a stand-alone header file coolStuff.h that contains a function I want to use in my cpp code.
1) Where should I place coolStuff.h in the package directory structure so the function it contains can be used by theCppFile.cpp?
2) How do I call coolStuff.h in the cpp files? Thanks again for your help. I learned a lot from the last conversation.
Note: I read the vignette "Writing a package that uses Rcpp" and it does not explain how to do this.
The Answer:
Ok let me summarize the answer to my question since it is scattered across this page. If I get a detail wrong feel free to edit this or let me know and I will edit it:
So you found a .h or .cpp file that contains a function or some other bit of code you want to use in a .cpp file you are writing to use with Rcpp.
Let's keep calling this found code coolStuff.h and call the function you want to use awesomeSauce(). Let's call the file you are writing theCppFile.cpp.
(I should note here that the code in .h files and in .cpp files is all C++ code, and the difference between them is there for the C++ programmer to keep things organized in the proper way. I will leave a discussion of the difference out here, but a simple search here on SO will lead you to discussions of the difference. For you, the R programmer needing to use a bit o' code you found, there is no real difference.)
IN SHORT: You can use a file like coolStuff.h, provided it depends on no other libraries, either by cut-and-pasting it into theCppFile.cpp, or, if you create a package, by placing the file in the src/ directory alongside theCppFile.cpp and using #include "coolStuff.h" at the top of the file you are writing. The latter is more flexible and allows you to use functions from coolStuff.h in other .cpp files.
DETAILS:
1) coolStuff.h must not depend on other libraries. That means it cannot have any #include statements at the top. If it does, what I detail below probably will not work, and the use of found code that depends on other libraries is beyond the scope of this answer.
2) If you want to compile the file with sourceCpp(), you need to cut and paste coolStuff.h into theCppFile.cpp. I am told there are exceptions, but sourceCpp() is designed to compile one .cpp file, so that's the best route to take.
(NOTE: I make no guarantees that a simple cut and paste will work out of the box. You may have to rename variables, or more likely switch the data types being used to be consistent with those you are using in theCppFile.cpp. But so far, cut-and-paste has worked with minimal fuss for me with 6 different simple .h files)
3) If you only need to use code from coolStuff.h in theCppFile.cpp and nowhere else, then you should cut and paste it into theCppFile.cpp.
(Again I make no guarantees see the note above about cut-and-paste)
4) If you want to use code contained in coolStuff.h in theCppFile.cpp AND other .cpp files, you need to look into building a package. This is not hard, but can be a bit tricky, because the information out there about building packages with Rcpp ranges from the exhaustive, thorough documentation you want with any R package (but which is over a newbie's head) to newbie-friendly introductions (that may leave out a detail you happen to need).
Here is what I suggest:
A) First get a version of theCppFile.cpp with the code from coolStuff.h cut-and-paste into theCppFile.cpp that compiles with sourceCpp() and works as you expect it to. This is not a must, but if you are new to Rcpp OR packages, it is nice to make sure your code works in this simple situation before you move to the more complicated case below.
B) Now build your package using Rcpp.package.skeleton() or use the Build functionality in RStudio (HIGHLY recommended). You can find details about using Rcpp.package.skeleton() in hadley/devtools or the Rcpp Attributes vignette. The full documentation for writing packages with Rcpp is in the "Writing a package that uses Rcpp" vignette; however, it assumes you know your way around C++ fairly well and does not use the new "Attributes" way of doing Rcpp.
Don't forget to "Build & Reload" if using RStudio or compileAttributes() if you are not in RStudio.
C) Now you should see in your R/ directory a file called RcppExports.R. Open it and check it out. In RcppExports.R you should see the R wrapper functions for all the .cpp files you have in your src/ directory. Pretty sweet.
D) Try out the R function that corresponds to the function you wrote in theCppFile.cpp. Does it work? If so move on.
E) With your package built you can move coolStuff.h into the src folder with theCppFile.cpp.
F) Now you can remove the cut-and-pasted code from theCppFile.cpp and, at the top of theCppFile.cpp (and any other .cpp file you want to use code from coolStuff.h in), put #include "coolStuff.h" just after #include <Rcpp.h>. Note that there are no angle brackets around coolStuff.h; rather, it is in double quotes. This is the C++ convention for including local files provided by the user, rather than a library file like Rcpp or the STL.
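For example (a minimal sketch, mirroring the code from the original question), the top of theCppFile.cpp inside the package would then look like this:

#include <Rcpp.h>
#include "coolStuff.h"   // local header living in src/, hence the quotes
using namespace Rcpp;

// [[Rcpp::export]]
double someFunctionCpp(double someInput){
  double someOutput = awesomeSauce(someInput);
  return someOutput;
}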
G) Now you have to rebuild the package. In RStudio this is just "Build & Reload" in the Build menu. If you are not using RStudio you should run compileAttributes()
H) Now try the R function again just as you did in step D), hopefully it works.
The problem is that sourceCpp is expressly designed to build only a single standalone source file. If you want sourceCpp to have dependencies then they need to either be:
In the system include directories (e.g. /usr/local/include or /usr/include); or
In an R package which you list in an Rcpp::depends attribute (see the sketch at the end of this answer)
As Dirk said, if you want to build more than one source file then you should consider using an R package rather than sourceCpp.
Note that if you are working on a package and perform a sourceCpp on a file within the src directory of the package it will build it as if it's in the package (i.e. you can include files from the src directory or inst/include directory).
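As a minimal sketch of the second option, a dependency on the BH package (which ships the Boost headers) is declared directly in the source file, and sourceCpp() picks it up; the small function below is only illustrative:

// [[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <boost/lexical_cast.hpp>

// [[Rcpp::export]]
int parseInt(std::string s) {
  return boost::lexical_cast<int>(s);
}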
I was able to link any library (MPFR in this case) by setting two environment variables before calling sourceCpp:
Sys.setenv("PKG_CXXFLAGS"="-I/usr/include")
Sys.setenv("PKG_LIBS"="-L/usr/lib/x86_64-linux-gnu/ -lm -lmpc -lgmp -lmpfr")
The first variable contains the path to the library headers. The second one contains the path to the library binaries and the names of the libraries to link against; in this case other dependent libraries are also required. For more details, check the g++ compilation and linking flags. This information can usually be obtained using pkg-config:
pkg-config --cflags --libs mylib
For a better understanding, I recommend using sourceCpp with verbose output in order to print the g++ compilation and linking commands:
sourceCpp("mysource.cpp", verbose=TRUE, rebuild=TRUE)
I was able to link a boost library using the following global command in R before calling sourceCpp
Sys.setenv("PKG_CXXFLAGS"="-I \path-to-boost\")
Basically mirroring this post but with a different compiler option: http://gallery.rcpp.org/articles/first-steps-with-C++11/
Couple of things:
"Third party header libraries" as in your subject makes no sense.
Third-party headers can work via templated code where headers are all you need, ie there is only an include step and the compiler resolves things.
Once you need libraries and actual linking of object code, you may not be able to use the powerful and useful sourceCpp unless you give it meta-information via plugins (or environment variables).
So in that case, write a package.
Easy and simple things are just that with Rcpp and the new attributes, or the older inline and cxxfunction. For more complex use, such as external libraries, you need to consult the documentation. We added several vignettes to Rcpp for that.
Angle brackets <> are for system includes, such as the standard libs.
For files local to your own project, use quotes: "".
Also, if you are placing headers in a different directory, the header path should be specified relative to the source file including it.
So for your example this ought to work:
#include "../cppHeaders/coolStuff.h"
You can configure the search paths such that the file could be found without doing that, but it's generally only worth doing that for stuff you want to include across multiple projects, or otherwise would expect someone to 'install'.
You can add the path to the header to the PKG_CXXFLAGS variable in the ~/.R/Makevars file, as shown below. The following is an example of adding the header files of xtensor installed with Anaconda on macOS.
$ cat ~/.R/Makevars
CC=/usr/local/bin/gcc-7
CXX=/usr/local/bin/g++-7
CPLUS_INCLUDE_PATH=/opt/local/include:$CPLUS_INCLUDE_PATH
PKG_CXXFLAGS=-I/Users/kuroyanagi/.pyenv/versions/miniconda3-4.3.30/include
LD_LIBRARY_PATH=/opt/local/lib:$LD_LIBRARY_PATH
CXXFLAGS= -g0 -O3 -Wall
MAKE=make -j4
This worked for me in Windows:
Sys.setenv("PKG_CXXFLAGS"='-I"C:/boost/boost_1_66_0"')
Edit: Actually you don't need this if you use the BH (Boost Headers) package (thanks to Ralf Stubner):
// [[Rcpp::depends(BH)]]
Related
Introduction
I am trying to use Toulbar2 as a C++ library in my CMake project, however I am having much trouble linking it to my main executable.
I found many similar questions on this topic, both here and on other similar websites, but none of them helped me with my specific issue. I tried literally everything and did not manage to make it work, so I was hoping that some of you may be able to help me with that.
I am running Ubuntu 18.04 and CMake version 3.23, and in my project I am using standard C++11. I am a proficient programmer, but I am just a beginner/intermediate user of both C++ and CMake.
What I've already tried to do
I cannot list all my attempts, so I will only mention those I think are my best ones, to give you an idea of what I may be doing wrong.
1) In my first attempt, I tried to use the same approach I used for any non-standard library I imported, i.e. using find_package() in CMakeLists.txt and then linking the found LIBRARIES and including the found INCLUDE_DIRS. However, I soon realised that Toulbar2 provides neither a Find<package>.cmake nor a <name>Config.cmake file, so this approach could not work.
2) My second attempt is the one that, in my opinion, brought me closest to the solution I hoped for. You can easily compile Toulbar2 as a dynamic library using the command cmake -DLIBTB2=ON .. in a build directory you previously created. After compiling with make, you have your .so file in build/lib/Linux. After installation, you can have CMake find this library by itself using the command find_library. So, my CMakeLists.txt ended up looking like this:
[...]
find_library(TB2_LIBRARIES tb2)
if(TB2_LIBRARIES)
  set(all_depends ${all_depends} ${TB2_LIBRARIES})
else(TB2_LIBRARIES)
  add_compile_definitions("-DNO_TB2")
  message("Compiling without Toulbar2, if you want to use it, please install it first")
endif(TB2_LIBRARIES)
[...]
target_link_libraries(main ${all_depends})
[...]
This code works to some extent, meaning that CMake correctly finds the library and runs the linking command; however, if I try to #include <toulbar2lib.hpp> the header is not found. So I figured I should have told CMake where to find that header, and I ended up adding
include_directories(/path/to/header/file's/directory)
However, I still have another problem. The header is found, but a lot of names used in the header are not found at compilation time. The reason is that in Toulbar2 some variables/types are defined conditionally using preprocessor directives like #ifdef or #ifndef, and in turn the macros used in these conditions are defined through CMake at compile time. If you are interested in an example, I can mention the Cost type that is used in the mentioned header file. I see that there's a piece missing in the puzzle here, but I cannot figure out which one. Since I pre-compiled the library, those definitions should exist when I include the header file, because I am correctly linking the corresponding library that contains those definitions.
3) My third attempt is less elegant than the other two I mentioned, but I was desperately trying to find a solution. I copied the whole cloned toulbar2 folder inside my project and tried to add it as a subdirectory, meaning that my main CMakeLists.txt contains the line:
add_subdirectory(toulbar2)
Toulbar2 provides a CMakeLists.txt too, so there should be no problem in doing this. Then I include the src directory of toulbar2, which contains the header file I need, and I should be okay. Right? Wrong. I got the same problem that I had before with (2), i.e. some conditionally defined variables/types were not actually defined when I tried to compile my project, even though the toulbar2 subproject compiled correctly (no errors).
I just wanted to mention that any answer is welcome; however, if you could help me figure out an elegant solution (see 1 or 2) for this problem it would be much better, as this code is intended to be published sooner or later. Thank you in advance for your help.
Solution 2) looks fine. You just need to add the following compilation flags when compiling your project with toulbar2lib.hpp: -DNDEBUG -DBOOST -DLONGDOUBLE_PROB -DLONGLONG_COST. See the toulbar2 README.md on GitHub for how to compile without cmake and which flags to use (except WCSPFORMATONLY, which should not be used in this context).
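In other words (a minimal sketch, not taken from the toulbar2 documentation), the translation unit that includes the header has to be compiled with the same macros the library was built with, otherwise conditionally defined names like Cost will not match:

// main.cpp -- compile with something like (paths are placeholders):
//   g++ -std=c++11 -DNDEBUG -DBOOST -DLONGDOUBLE_PROB -DLONGLONG_COST \
//       -I/path/to/toulbar2/src main.cpp -L/path/to/toulbar2/build/lib/Linux -ltb2
#include <toulbar2lib.hpp>

int main() {
    // ... use the Toulbar2 API here ...
    return 0;
}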
In the following files:
app/main.cpp
app/config.hpp
app/show.hpp
app/file.hpp
lib/append.hpp
lib/clear.hpp
lib/reopen.hpp
lib/crypt.hpp
I have a problem. Imagine that a few of my files use lib/crypt.hpp. For example, in app/file.hpp I would write:
#include "../lib/crypt.hpp"
If I move this file to the lib folder, it does not work anymore. So I need something like:
#include "LIB/crypt.hpp"
So, this LIB/ would mean (empty) inside the lib directory, while it would mean "../lib/" inside the app directory.
Then I would not have to worry about moving file.hpp from app to lib. I would just need to fix its callers, but not what it calls.
This is similar to a concept from web programming frameworks. Is it possible in C/C++?
According to what you wrote, you're searching for a way to move your sources around without worrying about hard-coded paths to your headers.
You didn't specify the compiler you're using, so I'm not focusing on a specific one; anyway, most C++ compilers have an option to specify one or more header search paths (on gcc you would just use something like -Ilib_parent_dir).
Once you've added your lib's parent path to the search list, you can move your sources around (as long as you don't move lib's headers) and have them include the right headers with something like #include <lib/crypt.hpp> (and keep include path priority in mind).
This should be a cleaner and simpler way to achieve what you asked rather than using a macro.
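A minimal sketch of what that looks like, assuming both app/ and lib/ live under a common project root and you compile with something like g++ -I/path/to/project-root ...:

// app/file.hpp
#include <lib/crypt.hpp>   // resolved against the -I search path, not relative to this file

// lib/append.hpp
#include <lib/crypt.hpp>   // the exact same directive works here, so moving files is painless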
Is there an automated way to take a large amount of C++ header files and combine them in a single one?
This operation must, of course, concatenate the files in the right order so that no types, etc. are defined before they are used in upcoming classes and functions.
Basically, I'm looking for something that allows me to distribute my library in two files (libfoo.h, libfoo.a), instead of the current bunch of include files + the binary library.
As your comment says:
.. I want to make it easier for library users, so they can just do one single #include and have it all.
Then you could just spend some time including all your headers in a "wrapper" header, in the right order. 50 headers is not that many. Just do something like:
// libfoo.h
#include "header1.h"
#include "header2.h"
// ..
#include "headerN.h"
This will not take that much time if you do it manually.
Also, adding new headers later is a matter of seconds: just add them to this "wrapper" header.
In my opinion, this is the most simple, clean and working solution.
A little bit late, but here it is. I just recently stumbled into this same problem myself and coded this solution: https://github.com/rpvelloso/oneheader
How does it work?
Your project's folder is scanned for C/C++ headers and a list of headers found is created;
For every header in the list, it analyzes its #include directives and assembles a dependency graph in the following way:
If the included header is not located inside the project's folder, then it is ignored (e.g., if it is a system header);
If the included header is located inside the project's folder, then an edge is created in the dependency graph, linking the included header to the current header being analyzed;
The dependency graph is topologically sorted to determine the correct order in which to concatenate the headers into a single file. If a cycle is found in the graph (i.e., it is not a DAG), the process is interrupted;
Limitations:
It currently only detects single-line #include directives;
It does not handle headers with the same name in different paths;
It only gives you the correct order in which to combine all the headers; you still need to concatenate them (maybe you want to remove or modify some of them prior to merging).
Compiling:
g++ -Wall -ggdb -std=c++1y -lstdc++fs oneheader.cpp -o oneheader[.exe]
Usage:
./oneheader[.exe] project_folder/ > file_sequence.txt
(Adapting an answer to my dupe question:)
There are several other libraries which aim for a single-header form of distribution, but are developed using multiple files; and they too need such a mechanism. For some (most?) it is opaque and not part of the distributed code. Luckily, there is at least one exception: Lyra, a command-line argument parsing library; it uses a Python-based include file fuser/joiner script, which you can find here.
The script is not well documented, but the way you use it is with three command-line arguments:
--src-include - The include file to convert, i.e. to merge its include directives into its body. In your case it's libfoo.h which includes the other files.
--dst-include - The output file to write - the result of the merging.
--src-include-dir - The directory relative to which include files are specified (i.e. an "include search path" of one directory; the script doesn't support the complex mechanism of multiple include paths and search priorities which the C++ compiler offers)
The script acts recursively, so if file1.h includes another file under the --src-include-dir, that should be merged in as well.
Now, I could nitpick at the code of that script, but - hey, it works and it's FOSS - distributed with the Boost license.
If your library is so big that you cannot build and maintain a single wrapping header file like Kiril suggested, this may mean that it is not architected well enough.
So if your library is really huge (above a million lines of source code), you might consider automating that, with tools like
GCC's make-dependency-generator preprocessor options such as -M, -MD, -MF, etc., with another hand-made script sorting the output
expensive commercial static analysis tools like Coverity
customizing a compiler through plugins or (for GCC 4.6) MELT extensions
But I don't understand why you want an automated way of doing this. If the library is of reasonable size, you should understand it and be able to write and maintain a wrapping header by hand. Automating that task will take you some efforts (probably weeks, not minutes) so is worthwhile only for very large libraries.
If you have a master include file that includes all others available, you could simply hack a C preprocessor re-implementation in Perl. Process only ""-style includes and recursively paste the contents of these files. Should be a twenty-liner.
If not, you have to write one up yourself or try at random. Automatic dependency tracking in C++ is hard. Like in "let's see if this template instantiation causes an implicit instantiation of the argument class" hard. The only automated way I see is to shuffle your include files into a random order, see if the whole bunch compiles, and re-shuffle them until it compiles. Which will take n! time, you might be better off writing that include file by hand.
While the first variant is easy enough to hack, I doubt the sensibility of this hack, because you want to distribute on a package level (source tarball, deb package, Windows installer) instead of a file level.
You really need a build script to generate the amalgamated header as you work, and a preprocessor flag to disable its use (which could be reserved for your own builds).
To simplify this script/program, it helps to have your header structures and include hygiene in top form.
Your program/script will need to know your discovery paths (hint: minimise the count of search paths to one if possible).
Run the script or program (which you create) to replace include directives with header file contents.
Assuming your headers are all guarded as is typical, you can keep track of what files you have already physically included and perform no action if there is another request to include them. If a header is not found, leave it as-is (as an include directive) -- this is required for system/third party headers -- unless you use a separate header for external includes (which is not at all a bad idea).
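The pass described above could look roughly like the following sketch (my own illustration, not an existing tool), under these assumptions: a single search directory, only quoted includes are expanded, already-seen files are skipped, and unknown includes are passed through unchanged. Compile with -std=c++17 (and -lstdc++fs on older GCC).

#include <filesystem>
#include <fstream>
#include <iostream>
#include <regex>
#include <set>
#include <string>

namespace fs = std::filesystem;

// Recursively expand quoted includes found under searchDir; includes that
// cannot be found (system / third-party headers) are emitted unchanged.
void expand(const fs::path& file, const fs::path& searchDir,
            std::set<fs::path>& alreadyIncluded, std::ostream& out) {
    std::ifstream in(file);
    std::string line;
    std::regex incRe("^\\s*#\\s*include\\s*\"([^\"]+)\"");
    std::smatch m;
    while (std::getline(in, line)) {
        if (std::regex_search(line, m, incRe)) {
            fs::path target = searchDir / m[1].str();
            if (fs::exists(target)) {
                fs::path canon = fs::canonical(target);
                if (alreadyIncluded.insert(canon).second)   // expand each header once
                    expand(canon, searchDir, alreadyIncluded, out);
                continue;                                   // drop the directive itself
            }
        }
        out << line << '\n';                                // everything else passes through
    }
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: amalgamate <top-header> <search-dir>\n";
        return 1;
    }
    std::set<fs::path> seen;
    expand(argv[1], argv[2], seen, std::cout);
}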
It's good to have a build phase/translation that includes header alone and produces zero warnings or errors (warnings as errors).
Alternatively, you can create a special distribution repository so they never need to do more than pull from it occasionally.
What you want to do sounds "javascriptish" to me :-) . But if you insist, there is always "cat" (or the equivalent in Windows):
$ cat file1.h file2.h file3.h > my_big_file.h
Or if you are using gcc, create a file my_decent_lib_header.h with the following contents:
#include "file1.h"
#include "file2.h"
#include "file3.h"
and then use
$ gcc -C -E my_decent_lib_header.h -o my_big_file.h
and this way you even get file/line directives that will refer to the original files (although that can be disabled, if you wish).
As for how automatic is this for your file order, well, it is not at all; you have to decide the order yourself. In fact, I would be surprised to hear that a tool that orders header dependencies correctly in all cases for C/C++ can be built.
Usually you don't want to include every bit of information from all your headers in the special header that enables the potential user to actually use your library. The non-trivial removal of type definitions, further includes, or defines that the user of your interface does not need to know about cannot be done automatically, as far as I know.
Short answer to your main question:
No.
My suggestions:
Manually make a new header that contains all relevant information (nothing more, nothing less) for the user of your library interface. Add nice documentation comments for each component it contains.
Use forward declarations where possible instead of full-fledged included definitions (see the sketch after these suggestions). Put the actual includes in your implementation files. The fewer include statements you have in your headers, the better.
Don't build a deeply nested hierarchy of includes. This makes it extremely hard to keep an overview of the contents of everything you include. The user of your library will look into the header to learn how to use it, and will probably not be able to distinguish relevant code from irrelevant at first sight. You want to maximize the ratio of relevant code to total code in the main header of your library.
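A small sketch of the forward-declaration point, with hypothetical files widget.h, widget.cpp and engine.h:

// widget.h -- a forward declaration is enough when only pointers/references are used
class Engine;                      // instead of #include "engine.h"

class Widget {
public:
    void attach(Engine& e);        // fine with just the forward declaration
private:
    Engine* engine_ = nullptr;
};

// widget.cpp -- the full definition is only needed in the implementation file
#include "widget.h"
#include "engine.h"

void Widget::attach(Engine& e) { engine_ = &e; }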
EDIT
If you really do have a toolkit library, and the order of inclusion really does not matter, and you have a bunch of independent headers, that you want to enumerate just for convenience into a single header, then you can use a simple script. Like the following Python (untested):
import glob

with open("convenience_header.h", 'w') as f:
    for header in glob.glob("*.h"):
        f.write("#include \"%s\"\n" % header)
I have access to a large C++ project, full of files and with a very complicated makefile courtesy of automake & friends
Here is an idea of the directory structure.
otherproject/
    folder1/
        some_headers.h
        some_files.cpp
    ...
    folderN/
        more_headers.h
        more_files.cpp
    build/
        lots_of things here
        objs/
            lots_of_stuff.o
        an_executable_I_dont_need.exe
my_stuff/
    my_program.cpp
I want to use a class from the big project, declared in say, "some_header.h"
/* my_program.cpp */
#include "some_header.h"
int main()
{
    ThatClass x;
    x.frobnicate();
}
I managed to compile my file by painstakingly passing lots of "-I" options to gcc so that it could find all the header files
g++ my_program.cpp -c -o myprog.o -I../other/folder1 ... -I../other/folderN
When it comes to linking, I have to manually include all of his ".o"s, which is probably overkill
g++ -o my_executable myprog.o ../other/build/objs/*.o
However, not only do I have to do things like manually removing his "main.o" from the list, but this isn't even enough since I forgot to also link against all the libraries that he happened to use.
otherproject/build/objs/StreamBuffer.h:50: undefined reference to `gzread'
At this point I am starting to feel I am probably doing something very wrong. How should I proceed? What is the usual, and what is the best, approach to this kind of issue?
I need this to work on Linux in case something platform-specific needs to be done.
Generally the project's .o files should come grouped together into a library (on Linux, .a file if it's a static library, or .so if it's a dynamic library), and you link to the library using the -L option to specify the location and the -l option to specify the library name.
For example, if the library file is at /path/to/big_project/libbig_project.a, you would add the options -L /path/to/big_project -l big_project to your gcc command line.
If the project doesn't have a library file that you can link to (e.g. it's not a library but an executable program and you just want some of the code used by the executable program), you might want to try asking the project's author to create such a library file (if he/she is familiar with "automake and friends" it shouldn't be too much trouble for him), or try doing so yourself.
EDIT Another suggestion: you said the project comes with a makefile. Try making it with the makefile, and see what its compiler command line looks like. Does it have many includes and individual object files as well?
Treating an application which was not developed as a library as if it was a library isn't likely to work. As an offhand example, omitting the main might wind up cutting out initialization code that the class you want depends upon.
The responsible thing to do here is to read the code, understand it, and turn the functionality you want into a proper library. Build the "exe you don't need" with debug symbols and set breakpoints in the constructors and methods of the class. Step into them so you get a grasp on the functionality and what parts of the program are relevant and irrelevant to your needs.
Hopefully the code is under some kind of version control system that supports branching (such as Git). If not, make your own repository that does. Edit the files until you've organized them into a library and code that uses the library. Make sure it works properly within the context of the original program. Then turn around and use this library in your own program.
If you've done a good job, you might be able to convince the original authors to accept the separation back into their original codebase. If not, at least version control has your back so you can manage integration of future changes.
I have a c++ project with multiple source files and multiple header files. I want to submit my project for a programming contest which requires a single source file. Is there an automated way of collapsing all the files into single .cpp file?
For example if I had a.cpp, a.h, b.cpp, b.h etc., I want to get a main.cpp which will compile and run successfully. If I did this manually, could I simply merge the header files and append the source files to each other? Are there gotchas with externs, include dependencies and forward declarations?
I also needed this for a coding contest. Codingame to be precise. So I wrote a quick JavaScript script to do the trick. You can find it here:
https://www.npmjs.com/package/codingame-cpp-merge
I used it in one live contest and one offline game and it never produced bad results. Feel free to suggest changes or make pull requests for it on GitHub!
In general, you cannot do this. Whilst you can happily paste the contents of header files to the locations of the corresponding #includes, you cannot, in general, simply concatenate source files. For starters, you may end up with naming clashes between things with file scope. And given that you will have copy-pasted header files (with class definitions, etc.) into each source file, you'll end up with classes defined multiple times.
There are much better solutions. As has been mentioned, why not simply zip up your entire project directory (after you've cleaned out auto-generated object files, etc.)? And if you really must have a single source file, then just write a single source file!
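A tiny illustration of the file-scope clash mentioned above, with hypothetical files a.cpp and b.cpp: each compiles fine on its own, but their concatenation does not.

// a.cpp
static int counter = 0;        // file-scope, fine in isolation
namespace { void log_it() {} } // ditto

// b.cpp
static int counter = 0;        // also fine as a separate translation unit...
namespace { void log_it() {} }

// a.cpp + b.cpp pasted into one file: error, 'counter' and 'log_it' are redefined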
Well, this is possible. I have seen many projects combine their source files into a single .h and .c/.cpp, such as SQLite.
But the code must meet some constraints; for example, you should not have static global variables in your source files.
There may not be a generic tool for combining sources; you may have to write one based on your code.
Here is an example:
the gaclib source pack tool
The CIL utility is able to do this:
$TIGRESS_HOME/cilly --merge -c x1.c -o x1.o
$TIGRESS_HOME/cilly --merge -c x2.c -o x2.o
$TIGRESS_HOME/cilly --merge -c x3.c -o x3.o
$TIGRESS_HOME/cilly --merge --keepmerged x1.o x2.o x3.o -o merged --mergedout=merged.c
Usage example taken from here. Read the documentation about CIL and its shortcomings here. A binary distribution for Mac OS X and Linux is provided with Tigress.
I think a project with separate header and source files is much nicer than one with only a single main file. Not only is it easier to read and work with, but it also shows that you do a good job of separating your program's modules.
For your situation, I suggest this format, and I think you will have to do the work by hand:
// STL headers
// --- prototype
// monster.h
// prince.h
// --- implementation
int main() {
    // your main function
    return 0;
}
I just found an npm package that works perfectly for me, for exactly that purpose: cpp-merge
I provide the link:
https://www.npmjs.com/package/cpp-merge
To install:
npm install -g cpp-merge
To use:
cpp-merge file.cpp
That will create a merged file from file.cpp, containing all the files it includes with directives like #include "library.cpp".
Also, it is worth mentioning that the output goes to standard output. To direct it to a file, you can do:
cpp-merge --output output.cpp
Check out the documentation for more details!
I don't know of a tool that combines .cpp files together, but I would just zip all of the files up and send them over as a compressed archive.
If you choose to send an individual file rather than a compressed archive, such as a tarball or a zip file, there are probably a few things you should consider.
First, concatenate the files together as Thomas Matthews already mentioned. With a few changes, you can typically compile the one file. Remove the #include statements that refer to files which no longer exist separately, i.e. the headers that have now been pasted in.
You will also have to concatenate these files in their respective dependency order. That is, if a.cpp needs a class declared in b.hpp, then you will most likely need to concatenate in the order Thomas Matthews listed.
That being said, I think the best way to share code is via a public repository, such as GitHub.com or compressed archive.