Please submit only one zip file. Your zip file should contain the following files:
ItemListClass.h
ItemListMethods.cpp
ItemListTests.h
ItemListTests.cpp
makefile
numbers.txt
How is it possible to turn in two .cpp files? Does that mean I am going to have two different projects and combine them into one? I understand classes, but I am unsure how they can all be used together in one zip file.
Note: I am not looking for code; I am looking for understanding. Maybe someone has done something similar? This is essentially a grocery list with integers instead of items.
C++ supports a concept called "separate compilation".
Basically, each .cpp file can be compiled independently from all the others, and the final program is made by "linking" all of the compiled files together.
ItemListClass.h
ItemListMethods.cpp
ItemListTests.h
ItemListTests.cpp
makefile
numbers.txt
This implies that ItemListClass.h provides the caller-visible interface for your ItemList, that the out-of-line implementations of the member functions of ItemList go in ItemListMethods.cpp, and that a test program (presumably with a main() function in ItemListTests.cpp) will exercise the ItemList functionality. I can see no particular reason to think that ItemListTests.h is useful: whatever it could credibly contain is unlikely to be of use to any code other than ItemListTests.cpp, and if it were, then it should really be moved into a "TestSupport.h" header or similar. But the implication is that ItemListMethods.cpp should include ItemListClass.h, and ItemListTests.cpp should include ItemListTests.h. numbers.txt is presumably input data that your ItemListTests.cpp will read to populate an ItemList object during testing. The makefile should do something vaguely like:
ItemListTests: ItemListMethods.o ItemListTests.h ItemListTests.cpp
<tab>g++ -g -o ItemListTests ItemListMethods.o ItemListTests.cpp

ItemListMethods.o: ItemListClass.h ItemListMethods.cpp
<tab>g++ -g -c ItemListMethods.cpp

You can then type "make" in the same directory to build an executable ItemListTests.
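To make that concrete, here is a minimal sketch of how the three code files might fit together. The member names (addItem, printAll) are hypothetical placeholders, not anything your assignment requires:

// ItemListClass.h -- the interface, included by both .cpp files
#ifndef ITEMLISTCLASS_H
#define ITEMLISTCLASS_H
#include <vector>

class ItemList {
public:
    void addItem(int value);    // hypothetical member
    void printAll() const;      // hypothetical member
private:
    std::vector<int> items;
};

#endif

// ItemListMethods.cpp -- out-of-line member function definitions
#include <iostream>
#include "ItemListClass.h"

void ItemList::addItem(int value) { items.push_back(value); }

void ItemList::printAll() const {
    for (int i : items) std::cout << i << '\n';
}

// ItemListTests.cpp -- the test driver containing main()
#include "ItemListClass.h"

int main() {
    ItemList list;
    list.addItem(42);
    list.printAll();
    return 0;
}

Each .cpp file is compiled to its own object file, and the linker combines the objects into a single executable, which is why two .cpp files in one zip is perfectly normal.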
Each .cpp file implements a set of functions. The entire program is the union of those functions. The linker (which runs as the last stage of the build, after the compiler) gathers the compiled functions together and packages them into your executable.
A project usually contains numerous .cpp files. C++ has few rules about how the program is divided over them, but usually each contains one class or group of functions.
Header files exist so that each .cpp file can be aware of the functions defined in the others.
You should read about makefiles.
Basically, your makefile describes how the project you're submitting is built from those four code files.
A project in almost all C++ IDEs/compilers is allowed to have multiple source files. You can't compile them straight from the zip file; they have to be extracted first. The reason they ask for a zip is that it is very easy to send and submit; your instructor/examiner will extract it after receiving it.
Super-simple, totally boring setup: I have a directory full of .hpp and .cpp files. Some of these .cpp files need to be built into executables; naturally, these .cpp files #include some of the .hpp files in the same directory, which may then include others, etc. etc. Most of those .hpp files have corresponding .cpp files, which is to say: if some_application.cpp #includes foo.hpp, either directly or transitively, then chances are there's also a foo.cpp file that needs to be compiled and linked into the some_application executable.
Super-simple, but I'm still clueless about what the "best" way to build it is, either in SCons or CMake (neither of which I have any expertise in yet, other than staring at documentation for the last day or so and becoming sad). I fear that the sort of solution I want may actually be impossible (or at least grossly overcomplicated) to pull off in most build systems, but if so, it'd be nice to know that so I can just give up and be less picky. Naturally, I'm hoping I'm wrong, which wouldn't be surprising given how ignorant I am about build systems (in general, and about CMake and SCons in particular).
CMake and SCons can, of course, both automatically detect that some_application.cpp needs to be recompiled whenever any of the header files it depends on (either directly or transitively) changes, since they can "parse" C++ files well enough to pick out those dependencies. OK, great: we don't have to list each .cpp-#includes-.hpp dependency by hand. But: we still need to decide what subset of object files need to get sent to the linker when it's time to actually generate each executable.
As I understand it, the two most straightforward alternatives to dealing with that part of the problem are:
A. Explicitly and laboriously enumerating the "anything using this object file needs to use these other object files too" dependencies by hand, even though those dependencies are exactly mirrored by the corresponding-.cpp-transitively-includes-the-corresponding-.hpp dependencies that the build system already went to the trouble of figuring out for us. Why? Because computers.
B. Dumping all the object files in this directory into a single "library", and then having all executables depend on and link in that one library. This is much simpler, and what I understand most people would do, but it's also kinda sloppy. Most of the executables don't actually need everything in that library, and wouldn't actually need to be rebuilt if only the contents of one or two .cpp files changed. Isn't this setting up exactly the kind of unnecessary computation a supposed "build system" should be avoiding? (I suppose maybe they wouldn't need to be rebuilt if the library were dynamically linked, but suffice it to say I dislike dynamically linked libraries for other reasons.)
Can either CMake or SCons do better than this in any remotely straightforward fashion? I see a bunch of limited ways to twiddle the automatically generated dependency graph, but no general-purpose way to do so interactively ("OK, build system, what do you think the dependencies are? Ah. Well, based on that, add the following dependencies and think again: ..."). I'm not too surprised about that. I haven't yet found a special-purpose mechanism in either build system for dealing with the super-common case where link-time dependencies should mirror corresponding compile-time #include dependencies, though. Did I miss something in my (admittedly somewhat cursory) reading of the documentation, or does everyone just go with option (B) and quietly hate themselves and/or their build systems?
Your statement in point A), "anything using this object file needs to use these other object files too", is something that will indeed need to be done by hand. Compilers don't automatically find the object files needed by a binary; you have to explicitly list them at link time. If I understand your question correctly, you don't want to have to explicitly list the objects needed by a binary, but want the build tool to find them automatically. I doubt there is any build tool that does this: SCons and CMake definitely don't.
If you have an application some_application.cpp that includes foo.hpp (or other headers used by these cpp files), and subsequently needs to link the foo.cpp object, then in SCons, you will need to do something like this:
env = Environment()
env.Program(target = 'some_application',
            source = ['some_application.cpp', 'foo.cpp'])
This will only rebuild when 'some_application.cpp', 'foo.hpp', or 'foo.cpp' has changed. Assuming g++, this will effectively translate to something like the following, independently of SCons or CMake.
g++ -c foo.cpp -o foo.o
g++ some_application.cpp foo.o -o some_application
You mention you have "a directory full of .hpp and .cpp files"; I would suggest you organize those files into libraries. Not all in one library, but logically organized into smaller, cohesive libraries. Each application/binary would then link only the libraries it needs, minimizing recompilations caused by unused objects.
I had more or less the same problem as you have and I solved it as follows:
import SCons.Scanner
import os

def header_to_source(header_file):
    """Specify the location of the source file corresponding to a given
    header file."""
    return header_file.replace('include/', 'src/').replace('.hpp', '.cpp')

def source_files(main_file, env):
    """Return the list of source files the given main_file depends on.
    With the function header_to_source one must specify where to look
    for the source file corresponding to a given header. The resulting
    list is filtered for existing files and contains main_file as its
    first element."""
    ## get the dependencies
    node = File(main_file)
    scanner = SCons.Scanner.C.CScanner()
    path = SCons.Scanner.FindPathDirs("CPPPATH")(env)
    deps = node.get_implicit_deps(env, scanner, path)
    ## collect corresponding source files
    root_path = env.Dir('#').get_abspath()
    res = [main_file]
    for dep in deps:
        source_path = header_to_source(
            os.path.relpath(dep.get_abspath(), root_path))
        if os.path.exists(os.path.join(root_path, source_path)):
            res.append(source_path)
    return res
The header_to_source method is the one you need to modify so that it returns the source file corresponding to a given header file. Then the method source_files gives you all the source files you need to build the given main_file (including main_file itself as the first element). Nonexistent files are automatically removed. So the following should be sufficient to define the target for an executable:
env.Program(source_files('main.cpp', env))
I am not sure whether this works in all possible setups, but at least for me it works.
How do compilers know when it is not necessary to recompile certain parts of code especially in larger projects?
For example, let's say in C++ we have two C++ files and two header files. The header files depend on one another. (They use the classes specified in each other's files.)
Does a compiler always need to parse both header files, (and maybe C++ files for method implementation,) to obtain the class information in order to generate either of the two C++ files?
I always thought that when you run the compiler at the command prompt, it closes immediately after outputting the object files - so it would be impossible to cache the Abstract Syntax Trees or intermediate code. Do most C++ compilers know when a certain file doesn't need to output to an object file, and is therefore skipped?
All of the compilers I know compile every source file they're told to. Always. And they generate a new version of the object file for every source file they compile.
Only compiling what is necessary is a job generally left to the build system (make or other). Knowing which objects need to be regenerated depends on what each source file includes, directly or indirectly; most compilers have options to output this information in some format (gcc's -M family of options, for example), either on the fly or as a separate invocation, and the build systems (the usable ones, at least) use this information to determine dependencies.
As said above, compilers will compile every file they are asked to compile. It is up to tools like make to decide what needs to be compiled.
In make one sets up rules. Each rule has a target and a list of dependencies, followed by the commands to run if those dependencies are not met. For example:
target.o : target.c
gcc -c -o target.o target.c
On most file systems, each file has a timestamp. If target.o has a newer timestamp than target.c (the rule dependency), then make does not run the gcc command below it. This is because one first edits a source file and then compiles it into an object file.
If, however, the dependent source file is newer than the target, then we know the source file was edited after the compile took place, and another compile is in order. make will therefore execute the build command for the rule.
It gets a lot more complex when rules are dependent on other rules but the same principle applies.
I don't know why many compilers don't implement this (don't ask me why), but I'm quite sure it would be fairly easy. You save in the intermediate (obj) file the name and the hash of the source file and of every dependent file being compiled, together with the compilation options in use, the hash of the compiler (or its internal version), and the compilation result (ok/error). The next time the user tries to recompile the file, the compiler checks whether the intermediate file already exists and whether all the hashes, the compilation options, and the compiler are the same. If everything matches, it emits the saved error message (if any) and exits without doing anything.
The intermediate files would be a little bigger (probably some KB each).
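As a toy sketch of that idea (this is not how any real compiler works; the file names are made up, and std::hash is used purely for illustration since it is not a robust content hash):

#include <fstream>
#include <functional>
#include <iostream>
#include <sstream>
#include <string>

// Read a whole file into a string.
static std::string file_contents(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

// Return true if 'source' changed since the hash stored in 'stamp'
// was written; record the new hash when it did.
bool needs_recompile(const std::string& source, const std::string& stamp) {
    std::size_t current = std::hash<std::string>{}(file_contents(source));
    std::size_t previous = 0;
    std::ifstream old(stamp);
    if (old >> previous && previous == current)
        return false;                 // nothing changed: skip the work
    std::ofstream(stamp) << current;  // remember the new hash
    return true;                      // changed (or first build)
}

int main() {
    std::cout << (needs_recompile("target.c", "target.hash")
                      ? "recompile\n" : "up to date\n");
}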
I just started a graphical C++ course and I have trouble getting an overview of how it all works.
We got some starting code: two files, one of type "C++ Source" and another of type "C/C++ Header".
It's supposed to be a graphical program which fills the screen with color.
Also, we are using some custom libraries such as SDL and GLM; in the same folder as those two files there is a folder named glm with loads of subfolders, which I won't get into.
I have downloaded MinGW, CMake and Visual Studio 11 beta for C++.
I've tried making a normal Win32 program and also a forms application for the graphical part, but something always goes wrong when compiling.
My question: how are you supposed to handle C++ files? I just got used to Java, where it's so easy to open the .java file and paste it into your IDE; dealing with C++ makes me really confused.
Hmm... Where to begin...
Some things that happen behind the scenes in other languages are much more visible in C++. The process of obtaining a binary (say, an executable) from C++ involves first compiling the source code (there are sub-steps to this, but the compiler handles them) to obtain object files; the object files are then linked by the linker to generate a binary.
In theory, you could simply #include all the cpp files in a project, compile them all together and "link" (although there's nothing to link), but that would take a very long time and, more importantly, in complex projects it could deplete the memory available to your compiler.
So, we split our projects into compilation units, and by convention a .cpp file represents a single compilation unit. A compilation unit is the part of your project that gets compiled to generate one object file. Even though compilation units are compiled separately, some code has to be common among them, so that the piece of code in each of them can use the functionalities implemented by the others. .h files conventionally serve this purpose. Things are basically declared (sort of announced) in them, so that each compilation unit knows what to expect when it's a part of a linking process to generate a binary.
There's also the issue with libraries. You can find mainly two kinds of things in libraries:
Already implemented functionality, shipped to you in the form of binary files containing CPU instructions that can almost be run (but they have to be inserted in the right place first). This form is accompanied by .h files that let your .cpp files know what to expect from the library.
The second type is functionality implemented directly in the .h files. Yes, this is possible in special cases. There are cases where the implementation has to (a weak "has to") accompany the declaration (inline functions, templated types, etc.).
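As a small illustration of that second type, here is a sketch of a header-only file; the names are made up. A template's definition must be visible to every compilation unit that uses it, so it cannot be hidden away in a separate .cpp file:

// mymath.h -- hypothetical header-only functionality
#ifndef MYMATH_H
#define MYMATH_H

// The template definition must accompany the declaration, because
// the compiler instantiates it in each unit that uses it.
template <typename T>
T square(T x) {
    return x * x;
}

// 'inline' permits this definition to appear in every unit that
// includes the header without violating the one-definition rule.
inline int twice(int x) {
    return 2 * x;
}

#endif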
The first type comes in two flavors: a "static library" (.lib on Windows, .a on Linux) that enters your executable and becomes a part of it during linking, and a "dynamic library" that is exposed to your binary (so it knows about it) but doesn't become a part of it. So your executable will be looking for that dynamic library (.dll files on Windows and .so files on Linux, for example) while it runs.
So, in order for your .cpp files to be able to receive services from libraries, they have to #include the libraries' .h files to know what is in them. Later on, during linking, you have to show the linker where (what path in the file system) to find the binary components of those libraries. Finally, if the library is dynamic, the .dlls (or .sos, etc.) must be accessible at run time (keep them in the same folder, for instance).
While compiling your compilation units you have to tell the compiler where to find the .h files. Otherwise, all it will see will be #include <something.h> and it won't know where to find that file. With gcc, you tell the compiler with the -I option. Note that you just give it the folder. Also of importance: if the include directive looks like #include <somefolder/somefile.h>, you shouldn't include somefolder in the path. So the invocation looks like:
g++ mycompilationunit.cpp -IPATH/TO/THE/INCLUDED/FILES -IPATH/TO/OTHER/INCLUDED/FILES -c
The -c option tells the compiler that it shouldn't attempt to make an executable just from this compilation unit, so it creates a .o file, to be linked with others later. Since we don't tell it the output file name, it spits out mycompilationunit.o.
Now we want to generate our binary (you probably want an executable, but you could also want to create a library of your own). So we have to tell the linker everything that goes into the binary: all the object files and all the static and dynamic libraries. So we say (note that g++ here also acts as the linker):
g++ objectfile1.o objectfile2.o objectfile3.o -LPATH/TO/LIBRARY/BINARIES -llibrary1 -llibrary2 -o myexecutable
Here, the -L option is self-explanatory in the example. The -l option tells the linker which binaries to look for. It will accept both static and dynamic libraries if it finds them on the path, and if it finds both, it'll choose one. Note that what goes after -l is not the full binary name. For instance, on Linux library names take the form liblibrary.so, but they're referred to as -llibrary on the linker command line. Finally, -o tells the compiler what name to give your executable. You need some other options to, for example, create a dynamic library of your own, but you probably don't need to know about them now.
What is the difference between a .cpp file and a .h file?
Pretty much: .h (header) files contain declarations and .cpp (source) files contain definitions. It is possible to combine both into one .cpp file, but as projects get bigger and bigger that becomes annoying and almost unreasonable.
Hope that helps.
In C++ there is a notion of a function declaration (the function signature) and a function definition (the actual code).
A header file (*.h) contains the declarations of functions and classes. A source file (*.cpp, *.c++, *.C) contains the definitions.
A header file can be included in a source file using the #include directive.
When you define a class in C++, you typically only include the declarations of the member functions (methods in Java lingo), and you put the class definition into a header file. The member function definitions containing the body of each function are typically put outside the class definition and into the source file.
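For instance, here is a minimal sketch with made-up names:

// Point.h -- class definition with member function declarations
#ifndef POINT_H
#define POINT_H

class Point {
public:
    Point(int x, int y);
    int getX() const;        // declaration only: no body here
private:
    int x_, y_;
};

#endif

// Point.cpp -- member function definitions (the bodies)
#include "Point.h"

Point::Point(int x, int y) : x_(x), y_(y) {}

int Point::getX() const {
    return x_;
}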
Generally the best thing to do here is to get a book on C++ or C, and to look at some sample code.
Header files (.h) are supposed to contain declarations of classes, methods, and variables; the source file (.cpp) contains the code. So in your .cpp file you need to include the header file with #include "header-file-name.h".
Then use g++ to compile the .cpp file. Make sure that the path to .h file is correct.
If you are using Code::Blocks or Visual Studio, then just compiling the project and running it will do everything for you. You can also add .h or .cpp files from there. You need not worry about anything.
Hope this helps.
I just started learning C++ with Dev-C++ as my IDE. One of the tutorials I'm using has a page about compiling a program made up of multiple files. It's simple stuff at this point: I have one file with a function in it, and the other file has all the other code required to call the function and output the results. The problem is that the tutorial doesn't tell me how to join these files so I can compile the program and have it work. There seem to be multiple ways of doing this and I'd like to learn them all, but I'm mainly looking for the simplest one right now.
I should also mention that I'm new at this so please try and keep your explanations simple and understandable.
In general, you would add both .cpp files to your project under the same target. The IDE will automatically add both files to the build and link them together.
That said, Dev-C++ is very, very old and unmaintained. It has not seen updates in several years. I strongly urge you to use a different IDE. There are many to choose from, including a fork of Dev-C++ called wxDev-C++. I'd actually recommend Code::Blocks or Visual Studio Express, which are both much more modern and have better support for debugging and many other features.
I am not sure about Dev-C++, but the concepts remain the same. So, here is how you can try to get both files to work together:
Each C++ file is a compilation unit, meaning the compiler will convert one .cpp/.cxx file into one .obj/.o file (on Windows and Linux (or any Unix), respectively)
The .obj files, called object files, contain the machine code (I am skipping a few internal details here) for the classes and functions present in that particular file
If you want to access the functions present in a different compilation unit, you need to link those two object files
Linking is a term that is used to, well, link two object files
There is a separate process (other than the compiler) which does the linking of the object files
So, in your case, you need to use the Dev-C++ compiler to create separate object files
Then using the linker you link both the object files to create the final executable
If there are functions in other .cpp files that you want to reference, you use header files. The header files contain the function/class declarations; the .cpp files have the implementations. So, in one of your .cpp files, say A.cpp, you include the header B.hpp and use the functions declared in it. Including the header tells the compiler that those function definitions exist elsewhere, and the linker takes care of stringing all the references together to create the final executable.
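For example, here is a minimal sketch of that A.cpp/B.hpp arrangement; every name below is a placeholder:

// B.hpp -- declares the function so other files can call it
#ifndef B_HPP
#define B_HPP
int triple(int x);
#endif

// B.cpp -- defines the function
#include "B.hpp"
int triple(int x) { return 3 * x; }

// A.cpp -- uses the function through the header
#include <iostream>
#include "B.hpp"

int main() {
    // The compiler only needs the declaration from B.hpp here;
    // the linker later resolves the call using B's object file.
    std::cout << triple(14) << '\n';
    return 0;
}

Compiling each file to an object file and then linking the two objects produces the executable.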
Hope this helps, else, please don't hesitate to mention the files you are using and I can suggest how to link both the .cpp files together.
You must include the other files by using the #include preprocessor directive at the top of the file where you have the main() function.
For example, to include a header:

#include "filename.h"
...
/* rest of code containing main function goes here */
...

Or, to include the other source file directly (it works, but is generally considered poor practice):

#include "path/filename.c"

int main()
{
    ...
}
I have a c++ project with multiple source files and multiple header files. I want to submit my project for a programming contest which requires a single source file. Is there an automated way of collapsing all the files into single .cpp file?
For example if I had a.cpp, a.h, b.cpp, b.h etc., I want to get a main.cpp which will compile and run successfully. If I did this manually, could I simply merge the header files and append the source files to each other? Are there gotchas with externs, include dependencies and forward declarations?
I also needed this for a coding contest. Codingame to be precise. So I wrote a quick JavaScript script to do the trick. You can find it here:
https://www.npmjs.com/package/codingame-cpp-merge
I used it in one live contest and one offline game, and it never produced bad results. Feel free to suggest changes or make pull requests for it on GitHub!
In general, you cannot do this. Whilst you can happily paste the contents of header files to the locations of the corresponding #includes, you cannot, in general, simply concatenate source files. For starters, you may end up with naming clashes between things with file scope. And given that you will have copy-pasted header files (with class definitions, etc.) into each source file, you'll end up with classes defined multiple times.
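For example, two source files that are perfectly fine on their own can break when naively concatenated (the names here are hypothetical):

// a.cpp
static int counter = 0;    // file scope: private to a.cpp

// b.cpp
static int counter = 0;    // fine as a separate file

// merged.cpp -- after naive concatenation, the second
// "static int counter = 0;" is a compile error:
// redefinition of 'counter'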
There are much better solutions. As has been mentioned, why not simply zip up your entire project directory (after you've cleaned out auto-generated object files, etc.)? And if you really must have a single source file, then just write a single source file!
Well, this is possible. I have seen many projects combine their source files into a single .h and .c/.cpp, SQLite for example.
But the code must respect some limits; for example, you should not have file-scope static global variables with the same name in different source files, because they will clash once merged.
There may not be a generic tool for combining sources; you may have to write one based on your own code.
Here is an example: the GacLib source pack tool.
The CIL utility is able to do this:
$TIGRESS_HOME/cilly --merge -c x1.c -o x1.o
$TIGRESS_HOME/cilly --merge -c x2.c -o x2.o
$TIGRESS_HOME/cilly --merge -c x3.c -o x3.o
$TIGRESS_HOME/cilly --merge --keepmerged x1.o x2.o x3.o -o merged --mergedout=merged.c
Read the documentation about CIL and its shortcomings. A binary distribution for Mac OS X and Linux is provided with Tigress.
I think a project with separate header and source files is much nicer than one with only a single main file. Not only is it easier to work with and read, it also shows you do a good job of separating your program's modules.
That said, for your situation I suggest the following format; I think you will have to do the merging by hand:
// STL headers
// --- prototype
// monster.h
// prince.h
// --- implementation

int main() {
    // your main function
    return 0;
}
I just found an npm package that works perfectly for me, for exactly that purpose: cpp-merge
I provide the link:
https://www.npmjs.com/package/cpp-merge
To install:
npm install -g cpp-merge
To use:
cpp-merge file.cpp
That will create a merged file containing all the files referenced via #include directives (such as #include "library.cpp") within file.cpp.
Also, it's worth mentioning that the result goes to standard output. For directing it to a file, you can do:
cpp-merge --output output.cpp file.cpp
Check out the documentation for more details!
I don't know of a tool that combines .cpp files together, but I would just zip all of the files up together and send them as a single compressed archive.
If you choose to send an individual file rather than a compressed archive, such as a tarball or a zip file, there are probably a few things you should consider.
First, concatenate the files together, as Thomas Matthews already mentioned. With a few changes, you can typically compile the single file. Remove the #include statements that refer to your own headers, since those headers' contents have now been pasted directly into the file.
You will also have to concatenate these files in their respective dependency order. That is, if a.cpp needs a class declared in b.hpp, then you will most likely need to concatenate in the order Thomas Matthews listed.
That being said, I think the best way to share code is via a public repository, such as GitHub.com, or a compressed archive.