Merge C++ files into a single source file

I have a C++ project with multiple source files and multiple header files. I want to submit my project for a programming contest which requires a single source file. Is there an automated way of collapsing all the files into a single .cpp file?
For example if I had a.cpp, a.h, b.cpp, b.h etc., I want to get a main.cpp which will compile and run successfully. If I did this manually, could I simply merge the header files and append the source files to each other? Are there gotchas with externs, include dependencies and forward declarations?

I also needed this for a coding contest, Codingame to be precise. So I wrote a quick JavaScript script to do the trick. You can find it here:
https://www.npmjs.com/package/codingame-cpp-merge
I used it in one live contest and one offline game and it never produced bad results. Feel free to suggest changes or make pull requests for it on GitHub!

In general, you cannot do this. Whilst you can happily paste the contents of header files to the locations of the corresponding #includes, you cannot, in general, simply concatenate source files. For starters, you may end up with naming clashes between things with file scope. And given that you will have copy-pasted header files (with class definitions, etc.) into each source file, you'll end up with classes defined multiple times.
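To make those pitfalls concrete, here is a small sketch (the file names and variables are invented for illustration): two source files that each compile fine on their own but break when naively concatenated.
// a.cpp - compiles fine on its own
#include "point.h"        // suppose this defines struct Point { int x, y; };
static int counter = 0;   // file-scope name, private to a.cpp
int next_a() { return ++counter; }

// b.cpp - also compiles fine on its own
#include "point.h"        // same header again
static int counter = 0;   // same file-scope name, private to b.cpp
int next_b() { return ++counter; }

// merged.cpp - naive concatenation with the header contents pasted in twice:
//   struct Point is defined twice     -> redefinition error (include guards would fix this)
//   static int counter appears twice  -> redefinition error (renaming or namespaces needed)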
There are much better solutions. As has been mentioned, why not simply zip up your entire project directory (after you've cleaned out auto-generated object files, etc.)? And if you really must have a single source file, then just write a single source file!

Well, this is possible. I have seen many projects combine their source files into a single .h and .c/.cpp, such as SQLite.
But the code must respect some limits; for example, you should not have static global variables in your source files.
There may not be a generic tool for combining sources; you may have to write one based on your code.
Here is an example:
the GacLib source pack tool

The CIL utility is able to do this:
$TIGRESS_HOME/cilly --merge -c x1.c -o x1.o
$TIGRESS_HOME/cilly --merge -c x2.c -o x2.o
$TIGRESS_HOME/cilly --merge -c x3.c -o x3.o
$TIGRESS_HOME/cilly --merge --keepmerged x1.o x2.o x3.o -o merged --mergedout=merged.c
Usage example taken from here. Read the documentation about CIL and its shortcomings here. A binary distribution for Mac OS X and Linux is provided with Tigress.

I think a project with separate header and source files is much nicer than one with only a single main file. Not only is it easier to read and work with, it also shows that you do a good job of separating the program's modules.
For your situation, though, I suggest the following layout, which you will have to assemble by hand:
// STL headers
// --- prototypes
// monster.h
// prince.h
// --- implementation
int main() {
    // your main function
    return 0;
}

I just found an npm package that works perfectly for me, for exactly that purpose: cpp-merge
I provide the link:
https://www.npmjs.com/package/cpp-merge
To install:
npm install -g cpp-merge
To use:
cpp-merge file.cpp
That will create a single merged file containing file.cpp together with the files it pulls in via directives such as #include "library.cpp" within file.cpp.
It's also worth mentioning that the result goes to standard output. To direct it to a file instead, you can do:
cpp-merge file.cpp --output output.cpp
Check out the documentation for more details!

I don't know of a tool that combines .cpp files together, but I would just zip all of the files up together and send them over as a single compressed archive.

If you choose to send an individual file rather than a compressed archive, such as a tarball or a zip file, there are probably a few things you should consider.
First, concatenate the files together as Thomas Matthews already mentioned. With a few changes, you can typically compile the one file. Remove the #include statements that now refer to non-existent files, i.e. the headers whose contents have already been pasted in.
You will also have to concatenate these files in their respective dependency order. That is, if a.cpp needs a class declared in b.hpp, then you will most likely need to concatenate in the order Thomas Matthews listed.
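As a rough illustration of that ordering (using the file names from the question; this is only a sketch, not a mechanical recipe), a hand-merged main.cpp typically ends up laid out like this:
// main.cpp - hand-merged single file
// 1. system headers used anywhere in the project
#include <iostream>
#include <vector>
// 2. project headers, pasted in dependency order
//    (contents of b.h first if a.h depends on it, then contents of a.h)
// 3. source files, appended once all declarations are visible
//    (contents of b.cpp and a.cpp, with their own #include "a.h" / "b.h" lines removed)
// 4. the file that defines main(), last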
That being said, I think the best way to share code is via a public repository, such as GitHub.com, or a compressed archive.

Related

Using 3rd party header files with Rcpp

I have a header file called coolStuff.h that contains a function awesomeSauce(arg1) that I would like to use in my cpp source file.
Directory Structure:
RworkingDirectory
  sourceCpp
    theCppFile.cpp
  cppHeaders
    coolStuff.h
The Code:
#include <Rcpp.h>
#include <cppHeaders/coolStuff.h>
using namespace Rcpp;
// [[Rcpp::export]]
double someFunctionCpp(double someInput){
    double someOutput = awesomeSauce(someInput);
    return someOutput;
}
I get the error:
theCppFile.cpp:2:31: error: cppHeaders/coolStuff.h: No such file or directory
I have moved the file and directory all over the place and can't seem to get this to work. I see examples all over the place of using 3rd party headers that say just do this:
#include <boost/array.hpp>
(That's from Hadley/devtools)
https://github.com/hadley/devtools/wiki/Rcpp
So what gives? I have been searching all morning and can't find an answer to what seems to me like a simple thing.
UPDATE 01.11.12
Ok now that I have figured out how to build packages that use Rcpp in Rstudio let me rephrase the question. I have a stand alone header file coolStuff.h that contains a function I want to use in my cpp code.
1) Where should I place coolStuff.h in the package directory structure so the function it contains can be used by theCppFile.cpp?
2) How do I call coolStuff.h in the cpp files? Thanks again for your help. I learned a lot from the last conversation.
Note: I read the vignette "Writing a package that uses Rcpp" and it does not explain how to do this.
The Answer:
Ok let me summarize the answer to my question since it is scattered across this page. If I get a detail wrong feel free to edit this or let me know and I will edit it:
So you found a .h or .cpp file that contains a function or some other bit of code you want to use in a .cpp file you are writing to use with Rcpp.
Let's keep calling this found code coolStuff.h and call the function you want to use awesomeSauce(). Let's call the file you are writing theCppFile.cpp.
(I should note here that the code in .h files and in .cpp files is all C++ code, and the difference between them is there for the C++ programmer to keep things organized in the proper way. I will leave a discussion of the difference out here, but a simple search here on SO will lead you to discussions of the difference. For you, the R programmer needing to use a bit o' code you found, there is no real difference.)
IN SHORT: You can use a file like coolStuff.h, provided it calls no other libraries, either by cutting and pasting it into theCppFile.cpp, or, if you create a package, by placing the file in the src directory with the theCppFile.cpp file and using #include "coolStuff.h" at the top of the file you are writing. The latter is more flexible and allows you to use functions in coolStuff.h in other .cpp files.
DETAILS:
1) coolStuff.h must not call other libraries. So that means it cannot have any include statements at the top. If it does, what I detail below probably will not work, and the use of found code that calls other libraries is beyond the scope of this answer.
2) If you want to compile the file with sourceCpp() you need to cut and paste coolStuff.h into theCppFile.cpp. I am told there are exceptions, but sourceCpp() is designed to compile one .cpp file, so that's the best route to take.
(NOTE: I make no guarantees that a simple cut and paste will work out of the box. You may have to rename variables, or more likely switch the data types being used to be consistent with those you are using in theCppFile.cpp. But so far, cut-and-paste has worked with minimal fuss for me with 6 different simple .h files)
3) If you only need to use code from coolStuff.h in theCppFile.cpp and nowhere else, then you should cut and paste it into theCppFile.cpp.
(Again I make no guarantees see the note above about cut-and-paste)
4) If you want to use code contained in coolStuff.h in theCppFile.cpp AND other .cpp files, you need to look into building a package. This is not hard, but can be a bit tricky, because the information out there about building packages with Rcpp ranges from the exhaustive, thorough documentation you want with any R package (but that is above your head as a newbie) to the newbie-sensitive introductions (that may leave out a detail you happen to need).
Here is what I suggest:
A) First get a version of theCppFile.cpp with the code from coolStuff.h cut-and-paste into theCppFile.cpp that compiles with sourceCpp() and works as you expect it to. This is not a must, but if you are new to Rcpp OR packages, it is nice to make sure your code works in this simple situation before you move to the more complicated case below.
B) Now build your package using Rcpp.package.skeleton() or use the Build functionality in RStudio (HIGHLY recommended). You can find details about using Rcpp.package.skeleton() in hadley/devtools or the Rcpp Attributes vignette. The full documentation for writing packages with Rcpp is in the vignette Writing a package that uses Rcpp; however, it assumes you know your way around C++ fairly well and does not use the new "Attributes" way of doing Rcpp.
Don't forget to "Build & Reload" if using RStudio or compileAttributes() if you are not in RStudio.
C) Now you should see in your R directory a file called RcppExports.R. Open it and check it out. In RcppExports.R you should see the R wrapper functions for all the .cpp files you have in your src directory. Pretty sweet.
D) Try out the R function that corresponds to the function you wrote in theCppFile.cpp. Does it work? If so move on.
E) With your package built you can move coolStuff.h into the src folder with theCppFile.cpp.
F) Now you can remove the cut-and-paste code from theCppFile.cpp and, at the top of theCppFile.cpp (and any other .cpp file you want to use code from coolStuff.h in), put #include "coolStuff.h" just after #include <Rcpp.h> (see the short sketch after these steps). Note that there are no angle brackets around coolStuff.h; rather, there are quotes "". This is a C++ convention for including local files provided by the user, rather than a library file like Rcpp or the STL.
G) Now you have to rebuild the package. In RStudio this is just "Build & Reload" in the Build menu. If you are not using RStudio you should run compileAttributes()
H) Now try the R function again just as you did in step D), hopefully it works.
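For step F, the top of theCppFile.cpp would look roughly like this (a minimal sketch; awesomeSauce() is just the placeholder function used throughout this question):
// theCppFile.cpp (inside the package's src directory)
#include <Rcpp.h>
#include "coolStuff.h"   // quotes, not <>, because this is a local header in src
using namespace Rcpp;

// [[Rcpp::export]]
double someFunctionCpp(double someInput) {
    double someOutput = awesomeSauce(someInput);
    return someOutput;
}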
The problem is that sourceCpp is expressly designed to build only a single standalone source file. If you want sourceCpp to have dependencies then they need to either be:
In the system include directories (i.e. /usr/include or /usr/local/include); or
In an R package which you list in an Rcpp::depends attribute
As Dirk said, if you want to build more than one source file then you should consider using an R package rather than sourceCpp.
Note that if you are working on a package and perform a sourceCpp on a file within the src directory of the package it will build it as if it's in the package (i.e. you can include files from the src directory or inst/include directory).
I was able to link any library (MPFR in this case) by setting two environment variables before calling sourceCpp:
Sys.setenv("PKG_CXXFLAGS"="-I/usr/include")
Sys.setenv("PKG_LIBS"="-L/usr/lib/x86_64-linux-gnu/ -lm -lmpc -lgmp -lmpfr")
The first variable contains the path of the library headers. The second one includes the path of the library binary and its file name. In this case other dependent libraries are also required. For more details check g++ compilation and link flags. This information can usually be obtained using pkg-config:
pkg-config --cflags --libs mylib
For a better understanding, I recommend using sourceCpp with verbose output in order to print the g++ compilation and linking commands:
sourceCpp("mysource.cpp", verbose=TRUE, rebuild=TRUE)
I was able to link a boost library using the following global command in R before calling sourceCpp
Sys.setenv("PKG_CXXFLAGS"="-I \path-to-boost\")
Basically mirroring this post but with a different compiler option: http://gallery.rcpp.org/articles/first-steps-with-C++11/
Couple of things:
"Third party header libraries" as in your subject makes no sense.
Third-party headers can work via templated code where headers are all you need, i.e. there is only an include step and the compiler resolves things.
Once you need libraries and actual linking of object code, you may not be able to use the powerful and useful sourceCpp unless you give it meta-information via plugins (or environment variables).
So in that case, write a package.
Easy and simple things are just that with Rcpp and the new attributes, or the older inline and cxxfunction. For more complex use (and external libraries are more complex) you need to consult the documentation. We added several vignettes to Rcpp for that.
Angle brackets <> are for system includes, such as the standard libs.
For files local to your own project, use quotes: "".
Also, if you are placing headers in a different directory, the header path should be specified local to the source file including it.
So for your example this ought to work:
#include "../cppHeaders/coolStuff.h"
You can configure the search paths such that the file could be found without doing that, but it's generally only worth doing that for stuff you want to include across multiple projects, or otherwise would expect someone to 'install'.
We can add it by writing the path to the header in the PKG_CXXFLAGS variable of the ~/.R/Makevars file, as shown below. The following is an example of adding the header files of xtensor installed with Anaconda on macOS.
$ cat ~/.R/Makevars
CC=/usr/local/bin/gcc-7
CXX=/usr/local/bin/g++-7
CPLUS_INCLUDE_PATH=/opt/local/include:$CPLUS_INCLUDE_PATH
PKG_CXXFLAGS=-I/Users/kuroyanagi/.pyenv/versions/miniconda3-4.3.30/include
LD_LIBRARY_PATH=/opt/local/lib:$LD_LIBRARY_PATH
CXXFLAGS= -g0 -O3 -Wall
MAKE=make -j4
This worked for me in Windows:
Sys.setenv("PKG_CXXFLAGS"='-I"C:/boost/boost_1_66_0"')
Edit: Actually you don't need this if you use Boost Headers (thanks to Ralf Stubner):
// [[Rcpp::depends(BH)]]

Difference between C++ files

I just started a graphical C++ course and I have a problem getting an overview of how it all fits together.
We got some starting code: two files, one of type "C++ Source" and another of type "C/C++ Header".
It's supposed to be a graphical program which fills the screen with color.
Also, we are using some custom libraries such as SDL and GLM; in the same folder as those two files there is a folder named glm and loads of subfolders, which I won't get into.
I have downloaded MinGW, CMake and Visual Studio 11 beta for C++.
I've tried making a normal Win32 program and also a forms application for the graphical part, but there's always something wrong when compiling.
My question: how are you supposed to handle C++ files? I just got used to Java, where it's so easy to just open the .java file and paste it into your IDE; dealing with C++ makes me really confused.
Hmm... Where to begin...
Some things that happen behind the scenes in other languages are much more visible in C++. The process of obtaining a binary (say, an executable) from C++ involves first compiling the source code (there are sub-steps to this, but the compiler handles them) to obtain object files; then the object files are linked by the linker to generate a binary.
In theory, you could simply #include all the cpp files in a project, and compile them all together and "link" (although there's nothing to link) but that would take a very long time, and more importantly, in complex projects that could deplete the memory available to your compiler.
So, we split our projects into compilation units, and by convention a .cpp file represents a single compilation unit. A compilation unit is the part of your project that gets compiled to generate one object file. Even though compilation units are compiled separately, some code has to be common among them, so that the piece of code in each of them can use the functionalities implemented by the others. .h files conventionally serve this purpose. Things are basically declared (sort of announced) in them, so that each compilation unit knows what to expect when it's a part of a linking process to generate a binary.
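A minimal sketch of that split (the names are invented for the example): one header shared by two compilation units, each compiled to its own object file and then linked.
// mymath.h - declarations, shared by every compilation unit that needs them
#ifndef MYMATH_H
#define MYMATH_H
int twice(int x);   // declaration only: "this function exists somewhere"
#endif

// mymath.cpp - one compilation unit: contains the definition
#include "mymath.h"
int twice(int x) { return 2 * x; }

// main.cpp - another compilation unit: uses the declaration
#include <iostream>
#include "mymath.h"
int main() {
    std::cout << twice(21) << '\n';   // the linker connects this call to mymath.o
    return 0;
}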
There's also the issue with libraries. You can find mainly two kinds of things in libraries:
Already implemented functionality, shipped to you in the form of binary files containing CPU instructions that can almost be run (but they have to be inserted in the right place). This form is accompanied by .h files to let your .cpp files know what to expect in the library.
The second type is functionality implemented directly in the .h files. Yes, this is possible in special cases: there are cases where the implementation has to (a weak "has to") accompany the declaration (inlined functions, templated types, etc.).
The first type comes in two flavors: a "static library" (.lib on Windows, .a on Linux) that enters your executable and becomes a part of it during linking, and a "dynamic library" that is exposed to your binary (so it knows about it) but doesn't become a part of it. So your executable will be looking for that dynamic library (.dll files on Windows and .so files on Linux, for example) while it runs.
So, in order for your .cpp files to be able to receive services from libraries, they have to #include their .h files, to know about what there is in them. Later on, during linking, you have to show the linker where (what path in the file system) to find the binary components of those libraries. Finally, if the library is dynamic, the .dll's (or .so's etc.) must be accessible during run time (keep them in the same folder for instance).
While compiling your compilation units you have to tell the compiler where to find the .h files. Otherwise, all it will see is #include <something.h> and it won't know where to find that file. With gcc, you tell the compiler with the -I option. Note that you just give the folder. Also of importance: if the include directive looks like #include <somefolder/somefile.h>, you shouldn't include somefolder in the path. So the invocation looks like:
g++ mycompilationunit.cpp -IPATH/TO/THE/INCLUDED/FILES -IPATH/TO/OTHER/INCLUDED/FILES -c
The -c option tells the compiler that it shouldn't attempt to make an executable just from this compilation unit, so it creates a .o file, to be linked with others later. Since we don't tell it the output file name, it spits out mycompilationunit.o.
Now we want to generate our binary (you probably want an executable, but you could also want to create a library of yours). So we have to tell the linker everything that goes into the binary. All the object files and all the static and dynamic libraries. So, we say: (Note g++ here also acts as the linker)
g++ objectfile1.o objectfile2.o objectfile3.o -LPATH/TO/LIBRARY/BINARIES -llibrary1 -llibrary2 -o myexecutable
Here, the -L option is self-explanatory in the example. The -l option tells the linker which binaries to look for. It will accept both static and dynamic libraries if it finds them on the path, and if it finds both, it'll choose one. Note that what goes after -l is not the full binary name. For instance, on Linux library names take the form liblibrary.so, but they're referred to as -llibrary in the linker command. Finally, -o tells the compiler what name to give to your executable. You need some other options to, for example, create a dynamic library, but you probably don't need to know about them now.
What is the difference between a .cpp file and a .h file?
Look at this answer. Also, a quick Google search explains a bit too.
Pretty much, .h (header) files contain declarations and .cpp (source) files contain definitions. It is possible to combine both into one .cpp file, but as projects get bigger and bigger that becomes annoying and almost unreasonable.
Hope that helps.
In C++ there is a notion of a function declaration (the function signature) and a function definition (the actual code).
A header file (*.h) contains the declarations of functions and classes. A source file (*.cpp, *.c++, *.C) contains the definitions.
A header file can be included in a source file using #include directive.
When you define a class in C++, you typically only include the declarations of the member functions (methods in Java lingo), and you put the class definition into a header file. The member function definitions containing the body of each function are typically put outside the class definition and into the source file.
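For example (a made-up class, just to show the layout): the class definition with member-function declarations goes in the header, and the member-function bodies go in the source file.
// Counter.h - class definition; member functions are only declared here
class Counter {
public:
    void increment();   // declaration (like a method signature in Java)
    int value() const;
private:
    int count_ = 0;
};

// Counter.cpp - member function definitions (the bodies)
#include "Counter.h"
void Counter::increment() { ++count_; }
int Counter::value() const { return count_; }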
Generally the best thing to do here is to get a book on C++ or C, and to look at some sample code.
Header files (.h) are supposed to contain declarations of classes, methods, and variables; the source file (.cpp) contains the code (the definitions). So in your .cpp file you need to include the header file as #include "header-file-name.h".
Then use g++ to compile the .cpp file. Make sure that the path to .h file is correct.
If you are using Code::Blocks or Visual Studio, then just compiling the project and running it will do everything for you. You can also add .h or .cpp files from there. You need not worry about anything.
Hope this helps.

C++ class implementation for a beginner to classes

Please submit only one zip file. Your zip file should contain the following files:
ItemListClass.h
ItemListMethods.cpp
ItemListTests.h
ItemListTests.cpp
makefile
numbers.txt
How is it possible to turn in 2 cpp files? Does that mean I am gonna have 2 different projects and combine them into one? I understand classes, but I am unsure how they could all be used in a zip file.
Note: I am not looking for code; I am looking for understanding. Maybe someone has done something similar? this is a grocery list with integers instead of items.
C++ supports a concept called "separate compilation".
Basically, each .cpp file can be compiled independently from all the others, and the final program is made by "linking" all of the compiled files together.
ItemListClass.h
ItemListMethods.cpp
ItemListTests.h
ItemListTests.cpp
makefile
numbers.txt
This implies that ItemListClass.h provides the caller-visible interface for your ItemList, that the out-of-line implementations for the member functions of ItemList go in ItemListMethods.cpp, and that a test program (presumably with a main() function in ItemListTests.cpp) will exercise the ItemList functionality. I can see no particular reason to think that ItemListTests.h is useful... whatever ItemListTests could credibly contain is unlikely to be of use to any code other than ItemListTests.cpp, and if it were then it should really be moved into a "TestSupport.h" header or similar. But, the implication is that ItemListMethods.cpp should include ItemListClass.h, and ItemListTests.cpp should include ItemListTests.h. numbers.txt is presumably input data that your ItemListTests.cpp will read through to populate an ItemList object during testing. The makefile should do something vaguely like:
ItemListTests: ItemListMethods.o ItemListTests.h ItemListTests.cpp
<tab>g++ -g -o ItemListTests ItemListMethods.o ItemListTests.cpp
ItemListMethods.o: ItemListClass.h ItemListMethods.cpp
<tab>g++ -g -c ItemListMethods.cpp
You can then type "make" in the same directory to build an executable ItemListTests.
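To make the roles of those files concrete, here is a rough sketch of how the pieces might be split (only an illustration; your actual class and tests will differ):
// ItemListClass.h - the caller-visible interface
#ifndef ITEMLISTCLASS_H
#define ITEMLISTCLASS_H
#include <vector>
class ItemList {
public:
    void add(int item);
    int total() const;
private:
    std::vector<int> items_;
};
#endif

// ItemListMethods.cpp - out-of-line member function definitions
#include "ItemListClass.h"
void ItemList::add(int item) { items_.push_back(item); }
int ItemList::total() const {
    int sum = 0;
    for (int i : items_) sum += i;
    return sum;
}

// ItemListTests.cpp - the test driver with main(), reading numbers.txt
#include <fstream>
#include <iostream>
#include "ItemListClass.h"
int main() {
    ItemList list;
    std::ifstream in("numbers.txt");
    for (int n; in >> n;) list.add(n);
    std::cout << "total: " << list.total() << '\n';
    return 0;
}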
Each .cpp file implements a set of functions. The entire program is the union of those functions… the compiler (specifically the "linker", which is the last stage of the compiler) gathers the functions together and packages them into your executable.
A project usually contains numerous .cpp files. C++ has few rules about how the program is divided over them, but usually each contains one class or group of functions.
Header files exist so that each .cpp file can be aware of the functions defined in the others.
You should read about makefiles.
Basically, your makefile builds the project you're submitting, which is made up of the four code files.
A project in almost all C++ IDEs/compilers is allowed to have multiple source files. Typically you couldn't compile them from the zip file unless they are extracted. The reason they ask for a zip is because it is very easy to send/submit. It would have to be extracted after your instructor/examiner receives it.

Combining C++ header files

Is there an automated way to take a large amount of C++ header files and combine them in a single one?
This operation must, of course, concatenate the files in the right order, so that every type, etc. is defined before it is used by the classes and functions that follow.
Basically, I'm looking for something that allows me to distribute my library in two files (libfoo.h, libfoo.a), instead of the current bunch of include files + the binary library.
As your comment says:
.. I want to make it easier for library users, so they can just do one single #include and have it all.
Then you could just spend some time including all your headers in a "wrapper" header, in the right order. 50 headers is not that many. Just do something like:
// libfoo.h
#include "header1.h"
#include "header2.h"
// ..
#include "headerN.h"
This will not take that much time, if you do this manually.
Also, adding new headers later - a matter of seconds, to add them in this "wrapper header".
In my opinion, this is the most simple, clean and working solution.
A little bit late, but here it is. I just recently stumbled into this same problem myself and coded this solution: https://github.com/rpvelloso/oneheader
How does it work?
Your project's folder is scanned for C/C++ headers and a list of headers found is created;
For every header in the list it analyzes its #include directives and assembles a dependency graph in the following way:
If the included header is not located inside the project's folder then it is ignored (e.g., if it is a system header);
If the included header is located inside the project's folder then an edge is created in the dependency graph, linking the included header to the current header being analyzed;
The dependency graph is topologically sorted to determine the correct order to concatenate the headers into a single file. If a cycle is found in the graph, the process is interrupted (i.e., if it is not a DAG);
Limitations:
It currently only detects single-line #include directives (e.g., #include "header.h");
It does not handle headers with the same name in different paths;
It only gives you a correct order in which to combine all the headers; you still need to concatenate them yourself (maybe you want to remove or modify some of them prior to merging).
Compiling:
g++ -Wall -ggdb -std=c++1y -lstdc++fs oneheader.cpp -o oneheader[.exe]
Usage:
./oneheader[.exe] project_folder/ > file_sequence.txt
(Adapting an answer to my dupe question:)
There are several other libraries which aim for a single-header form of distribution, but are developed using multiple files; and they too need such a mechanism. For some (most?) it is opaque and not part of the distributed code. Luckily, there is at least one exception: Lyra, a command-line argument parsing library; it uses a Python-based include file fuser/joiner script, which you can find here.
The script is not well-documented, but the way you use it is with 3 command-line arguments:
--src-include - The include file to convert, i.e. to merge its include directives into its body. In your case it's libfoo.h which includes the other files.
--dst-include - The output file to write - the result of the merging.
--src-include-dir - The directory relative to which include files are specified (i.e. an "include search path" of one directory; the script doesn't support the complex mechanism of multiple include paths and search priorities which the C++ compiler offers)
The script acts recursively, so if file1.h includes another file under the --src-include-dir, that should be merged in as well.
Now, I could nitpick at the code of that script, but - hey, it works and it's FOSS - distributed with the Boost license.
If your library is so big that you cannot build and maintain a single wrapping header file like Kiril suggested, this may mean that it is not architectured well enough.
So if your library is really huge (above a million lines of source code), you might consider automating that, with tools like
GCC's make-dependency-generator preprocessor options such as -M, -MD, -MF, etc., with another hand-made script sorting the output
expensive commercial static analysis tools like Coverity
customizing a compiler through plugins or (for GCC 4.6) MELT extensions
But I don't understand why you want an automated way of doing this. If the library is of reasonable size, you should understand it and be able to write and maintain a wrapping header by hand. Automating that task will take you some effort (probably weeks, not minutes), so it is worthwhile only for very large libraries.
If you have a master include file that includes all others available, you could simply hack a C preprocessor re-implementation in Perl. Process only ""-style includes and recursively paste the contents of these files. Should be a twenty-liner.
If not, you have to write one up yourself or try at random. Automatic dependency tracking in C++ is hard. Like in "let's see if this template instantiation causes an implicit instantiation of the argument class" hard. The only automated way I see is to shuffle your include files into a random order, see if the whole bunch compiles, and re-shuffle them until it compiles. Which will take n! time, you might be better off writing that include file by hand.
While the first variant is easy enough to hack, I doubt the sensibility of this hack, because you want to distribute on a package level (source tarball, deb package, Windows installer) instead of a file level.
You really need a build script to generate this as you work, and a preprocessor flag to disable use of the amalgamated header (which you could use for your own builds).
To simplify this script/program, it helps to have your header structures and include hygiene in top form.
Your program/script will need to know your discovery paths (hint: minimise the count of search paths to one if possible).
Run the script or program (which you create) to replace include directives with header file contents.
Assuming your headers are all guarded as is typical, you can keep track of what files you have already physically included and perform no action if there is another request to include them. If a header is not found, leave it as-is (as an include directive) -- this is required for system/third party headers -- unless you use a separate header for external includes (which is not at all a bad idea).
It's good to have a build phase/translation that includes header alone and produces zero warnings or errors (warnings as errors).
Alternatively, you can create a special distribution repository so they never need to do more than pull from it occasionally.
What you want to do sounds "javascriptish" to me :-) . But if you insist, there is always "cat" (or the equivalent in Windows):
$ cat file1.h file2.h file3.h > my_big_file.h
Or if you are using gcc, create a file my_decent_lib_header.h with the following contents:
#include "file1.h"
#include "file2.h"
#include "file3.h"
and then use
$ gcc -C -E my_decent_lib_header.h -o my_big_file.h
and this way you even get file/line directives that will refer to the original files (although that can be disabled, if you wish).
As for how automatic is this for your file order, well, it is not at all; you have to decide the order yourself. In fact, I would be surprised to hear that a tool that orders header dependencies correctly in all cases for C/C++ can be built.
Usually you don't want to include every bit of information from all your headers in the special header that enables the potential user to actually use your library. The non-trivial removal of type definitions, further includes or defines that are not necessary for the user of your interface to know cannot be done automatically, as far as I know.
Short answer to your main question:
No.
My suggestions:
manually make a new header, that contains all relevant information (nothing more, nothing less) for the user of your library interface. Add nice documentation comments for each component it contains.
use forward declarations where possible, instead of full-fledged included definitions (see the small sketch after these suggestions). Put the actual includes in your implementation files. The fewer include statements you have in your headers, the better.
don't build a deeply nested hierarchy of includes. This makes it extremely hard to keep an overview of the contents of everything you include. The user of your library will look into the header to learn how to use it, and he will probably not be able to distinguish relevant code from irrelevant at first sight. You want to maximize the ratio of relevant code to total code in the main header of your library.
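A small sketch of the forward-declaration suggestion above (the types are invented for the example):
// Car.h - a forward declaration is enough, because only a pointer is used here
class Engine;   // forward declaration instead of #include "Engine.h"
class Car {
public:
    explicit Car(Engine* engine);
private:
    Engine* engine_;   // pointers and references only need the type's name
};

// Car.cpp - the implementation file pulls in the full definition it actually needs
#include "Car.h"
#include "Engine.h"
Car::Car(Engine* engine) : engine_(engine) {}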
EDIT
If you really do have a toolkit library, and the order of inclusion really does not matter, and you have a bunch of independent headers, that you want to enumerate just for convenience into a single header, then you can use a simple script. Like the following Python (untested):
import glob
with open("convenience_header.h", 'w') as f:
for header in glob.glob("*.h"):
f.write("#include \"%s\"\n" % header)

Compiling a program with multiple files

I just started learning C++ with Dev C++ as my IDE. One of the tutorials I'm using has a page in it about compiling a program made up of multiple files. It's simple stuff at this point: I have one file with a function in it, and the other file has all the other required code to call the function and output the results. The problem is that the tutorial doesn't tell me how to join these files so I can compile the program and have it work. There seem to be multiple ways of doing this and I'd like to know them all, but I'm mainly looking for the simplest one right now.
I should also mention that I'm new at this so please try and keep your explanations simple and understandable.
In general, you would add both .cpp files to your project under the same target. The IDE will automatically add both files to the build and link them together.
That said, Dev-C++ is very, very old and unmaintained. It has not seen updates in several years. I strongly urge you to use a different IDE. There are many to choose from, including a fork of Dev-C++ called wxDev-C++. I'd actually recommend Code::Blocks or Visual Studio Express, which are both much more modern and have better support for debugging and many other features.
I am not sure about Dev-C++, but the concepts remain the same. So, here is how you can try to get both files to work together:
Each C++ file is a compilation unit - meaning, the compiler will convert one .cpp / .cxx file to one .obj / .o file (on Windows and Linux (or any Unix)) respectively
The .obj files, called the object files, contain the machine code (I am skipping a few internal details here) for the classes and functions present in that particular file
If you want to access the functions present in a different compilation unit, you need to link those two object files
Linking is a term that is used to, well, link two object files
There is a separate process (other than the compiler) which does the linking of the object files
So, in your case, you need to use the Dev-C++ compiler and create separate object files
Then using the linker you link both the object files to create the final executable
If there are functions that exist in the .cpp files that you want to reference, you use the header files. The header files contain the function/class declarations. The .cpp files will have the implementations. So, in one of your .cpp file, (say) A.cpp, you include the header B.hpp and use the functions in the B.hpp file. The inclusion of headers will tell the compiler that the function declarations exist elsewhere and that the linker will take care of stringing all these references together to create the final executable.
Hope this helps, else, please don't hesitate to mention the files you are using and I can suggest how to link both the .cpp files together.
You must include the other files by using the #include preprocessor directive at the top of the file where you have the main() function.
For example:
#include "filename.h"
...
/* rest of code containing main function goes here */
...
#include "path/filename.c"
main
{
...
...
...
}