I have a class Game that serves as an interface, and two classes Poker and BlackJack that implement this interface. I have this method that returns a list of Game objects that I can then use:
#include "Poker.h"
#include "BlackJack.h"
...
std::vector<Game*> getGames() {
    std::vector<Game*> gs;
    Game* p = new Poker();
    gs.push_back(p);
    Game* b = new BlackJack();
    gs.push_back(b);
    return gs;
}
At the moment it can only return a list containing these two classes. I want to be able to write more implementations of this interface or get other people's implementations, drop the CPP file in some directory, and be able to call this method, except now with the vector also containing an instance of this new class.
So far I have:
std::string path = "/Games";
std::vector<std::string> filePaths;
for (const auto& file : std::filesystem::directory_iterator(path)) {
    filePaths.push_back(file.path().string());
}
Which creates a list of file paths in some directory.
I have no idea how to continue from here since in order to use these files I would have to add an include statement each time. The end goal is to have a program that users could very easily import their own games onto. How could I go about doing this?
How to include CPP files that don't exist yet?
You cannot. You can only include files that already exist.
In theory, you could achieve what you want using meta-programming. That is: Write a program that writes a header that includes all the headers. Then include the header generated by the meta-program. To update the list, re-run the meta-program. You can automate the build process to do it before compilation.
However, I don't see how it could be a good idea in this case. In order for getGames to change, you would have to change the function anyway. Just add the include directive when you add the code that uses the header... Unless you use meta-programming to generate the function as well, which could be reasonable. But take into consideration the complexity added by the meta-programming before choosing such a design.
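For illustration, the output of such a meta-program might look roughly like this (the file name, the Roulette class, and the inline getGames are all hypothetical; a real generator would emit whatever it finds in the scanned directory):
// all_games_generated.h -- hypothetical file regenerated by the meta-program before each build
#pragma once
#include <vector>
#include "Game.h"
#include "Poker.h"
#include "BlackJack.h"
#include "Roulette.h"   // picked up automatically when Roulette.h appears in the scanned directory

inline std::vector<Game*> getGames() {
    return { new Poker(), new BlackJack(), new Roulette() };
}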
The end goal is to have a program that users could very easily import their own games onto.
If the goal is for the users to do this after your program has been compiled, then it won't work at all. You'll need to describe your goal more thoroughly for me to suggest a better approach.
P.S. Avoid the use of bare owning pointers. In this case, I recommend using std::vector<std::unique_ptr<Game>> instead.
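A minimal sketch of what that change might look like, using the same classes as in the question:
#include <memory>
#include <vector>
#include "Poker.h"
#include "BlackJack.h"

std::vector<std::unique_ptr<Game>> getGames() {
    std::vector<std::unique_ptr<Game>> gs;
    gs.push_back(std::make_unique<Poker>());      // the vector now owns the objects
    gs.push_back(std::make_unique<BlackJack>());
    return gs;                                    // no manual delete needed anywhere
}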
I just started on a few C++ tutorials, and I have run into something that I just can't seem to make much sense of.
In C++ it seems people use a code file and a header file; to me this just seems inconvenient. Why would I want to swap between two files just to write a simple getter method?
Is it considered the "correct" way to use headers in C++? Or is it just the tutorial I have picked up that uses this?
I get the idea of splitting code to make it look more clean, but is it good for anything else other than that?
Thanks in advance.
There are several reasons for using hpp (header) and cpp (code) files. One of them is the following: a compiled library (a dll or so file) cannot be "used" the way a jar file is in Java. If you write a library, you have to provide the declarations of its classes, methods, and so on in the form of an hpp file.
Think about using the class you wrote in other files. If the class definition is in a separate file, you can help the compiler figure out how to use the class by including the header file in the places where you plan to use this code.
The compiler only needs to know whether you are using the classes correctly (it does not care about how to run the code until linking), so all you need to give the compiler is the declaration of the class (the header file) to do the error checking. When you say "include", the preprocessor just copies and pastes the header file contents into the new file, so that the new file now knows how to use the class you wrote.
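A minimal sketch of that split, with a made-up Counter class:
// Counter.h -- the declaration other files (and library users) need to see
#ifndef COUNTER_H
#define COUNTER_H

class Counter {
public:
    void increment();    // declared here...
    int  value() const;
private:
    int count_ = 0;
};

#endif

// Counter.cpp -- the definition, compiled once and then linked in
#include "Counter.h"

void Counter::increment() { ++count_; }    // ...defined here
int  Counter::value() const { return count_; }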
A header file in C++ stores a lot of information. If C++ had been designed so that every single header file were pulled into each program you make, then when you use a function from iostream, for example, the compiler would have to go through every single header file just to find the right one. Instead, C++ has the #include directive, so you can specify where your functions come from.
And when you create a program, you can make your own header files so the code is more nicely organized. Instead of putting a huge number of lines of code in one main source file, you can include others. For example, if you are making a game, you might have one header file for Animals, and in that header file a class for Cats and one for Dogs, giving you cleaner code.
In C/C++, headers are used to share the class structure (among other things) between source files.
So one can write
#include "classFOO.h"
in classBAR.h (or classBAR.cpp) and use classFOO.
So I'm learning C++ and learning to also use SQLite in my practices for data persistence across application runs, which is lots of fun.
But I bumped into this issue:
The program is a Grade Book, a classic Deitel C++ book exercise. I'm structuring my classes as follows:
~/classes/Database.h/cpp // A small wrapper for sqlite3
~/classes/Student.h/cpp // The Student object with name and grades (Uses Database)
~/classes/GradeBook.h/cpp // Takes care of most of the application logic and UI (Uses Database and Student)
~/main.cpp // contains just the main function and base Instances of Database and GradeBook
This is so I can instantiate a single Database object from main() and pass it by reference to GradeBook and Student so they can use the Database functions. I tried all possible orders of includes and, as it turns out, only this order works for me.
Student includes Database.
GradeBook includes Student, gets access to Database.
main.cpp includes GradeBook, gets access to both Database and Student.
The question is, is this right? It seems utterly counter-intuitive that the includes seem to "cascade" backwards from the deepest classes to the main.cpp file. In other words, am I doing this right, or am I missing something?
If so, a little explanation or pointers on how this "cascading" works would be pretty awesome.
Thanks!
First, your header files should use include guards to prevent multiple inclusion:
#ifndef MY_HEADER_H
#define MY_HEADER_H
// code...
#endif // this file will only ever be copied in once to another file
Secondly, you should explicitly include all of the header files that you need to do what you want to do. Relying on header A to include header B for you is just clunky and, since you're using include guards, you never have to worry about including the same file twice.
So, to answer your question, no, it's not "right" in the sense that it could be "better". main.cpp should include all of the header files that it needs. All of them. #include is a simple text substitution mechanism. When you #include a file it is literally pasted in. That's it.
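Applied to the layout in the question, main.cpp might look something like this (the constructors and run() are placeholders for whatever your classes actually provide):
// main.cpp -- explicitly include every header that main() itself uses
#include "classes/Database.h"
#include "classes/GradeBook.h"
// Student.h would be included here too if main() used Student directly

int main() {
    Database db;           // the single Database instance
    GradeBook book(db);    // passed by reference, as described in the question
    book.run();            // placeholder for whatever drives the application
    return 0;
}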
I'm trying to learn C++ for Qt development, and I'm a little scared of header files. What I'd like to know is, what's the best workflow for keeping *.cpp and *.h files synched? For example, is the norm to write the class file and then copy the relevant info over to the header?
Sorry if this doesn't make any sense...I'm just looking for an efficient workflow for this.
Thanks!
Justin
For example, is the norm to write the class file and then copy the relevant info over to the header?
While there is no single standard approach, it's usually a good idea to:
first think about the public interface
put that in the header
implement in the source file accordingly
update the header if needed
Jumping straight into the implementation can make for a painful refactoring later on.
So:
You'll get linker errors if something's in the header, used by a client, and not in the cpp.
You'll get compile errors if something in the cpp is not declared, or is declared differently, in the .h.
What scenario are you worried about?
Create the test code that uses your class/functions (this way you can play with the end results before even writing any code)
Create the header with the class and the methods.
Implement those methods in your cpp file
If you run into the case where you have to change the signature of a method, you will need to manually change it in both places (header + source).
You can shell out a small amount of money and buy Visual Assist X. With Visual Assist X, you right click the method in your cpp file, choose Refactor -> Change Signature. Perform the change and hit Ok. It is changed in both places.
So, in short, there is no way to keep them in sync automatically, but with the right refactor tool, your life will be better.
In general, for the first version I write the class in the .h and, when done, I copy the method declarations to the .cpp and there change them into definitions (i.e. methods and their bodies). In this phase I declare only the public methods of the class, because I'm more concerned about its interface than the internals.
Later, when I'm implementing the public methods, if I need a new private method I begin by calling it wherever it's needed, so I get the parameters well defined. Then I write its declaration in the .h and go back to the .cpp to write its body.
The concept of Tracer Bullet Design, which fits well with this workflow, may also be of interest to you. It's described in the book The Pragmatic Programmer:
http://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X/ref=sr_1_1?ie=UTF8&s=books&qid=1279150854&sr=8-1
This page contains a brief description
http://www.artima.com/intv/tracer2.html
You keep them both open at the same time (I frequently use a horizontal split for this) and when you change one file you change the other file. It's just the same as how you keep the prototypes at the top of a file in sync with the function definitions at the bottom of the file when you're writing a program that fits in a single .cpp file.
My personal style with C++ has always been to put class declarations in an include file and definitions in a .cpp file, very much as stipulated in Loki's answer to C++ Header Files, Code Separation. Admittedly, part of the reason I like this style probably has to do with all the years I spent coding Modula-2 and Ada, both of which have a similar scheme with specification files and body files.
I have a coworker, much more knowledgeable in C++ than I, who is insisting that all C++ declarations should, where possible, include the definitions right there in the header file. He's not saying this is a valid alternate style, or even a slightly better style, but rather this is the new universally-accepted style that everyone is now using for C++.
I'm not as limber as I used to be, so I'm not really anxious to scrabble up onto this bandwagon of his until I see a few more people up there with him. So how common is this idiom really?
Just to give some structure to the answers: Is it now The Way™, very common, somewhat common, uncommon, or bug-out crazy?
Your coworker is wrong: the common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.
There is occasionally some merit to putting code in the header: it can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times since all the code has to be processed every time it is included by the compiler.
Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is in the headers.
Bottom line, you were right, he is wrong.
EDIT: I have been thinking about your question. There is one case where what he says is true: templates. Many newer "modern" libraries such as boost make heavy use of templates and often are "header only." However, this should only be done when dealing with templates, as it is the only way to do it when dealing with them.
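A minimal sketch of why templates end up in headers (the names here are made up): the compiler needs the full template definition at the point of instantiation.
// clamp_to.h -- the whole template lives in the header
#ifndef CLAMP_TO_H
#define CLAMP_TO_H

template <typename T>
T clamp_to(T value, T low, T high) {
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

#endif

// some_file.cpp
// #include "clamp_to.h"
// int x = clamp_to(15, 0, 10);   // instantiates clamp_to<int> here, which requires the body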
EDIT: Some people would like a little more clarification, here's some thoughts on the downsides to writing "header only" code:
If you search around, you will see quite a lot of people trying to find a way to reduce compile times when dealing with boost. For example: How to reduce compilation times with Boost Asio, which is seeing a 14s compile of a single 1K file with boost included. 14s may not seem to be "exploding", but it is certainly a lot longer than typical and can add up quite quickly when dealing with a large project. Header only libraries do affect compile times in a quite measurable way. We just tolerate it because boost is so useful.
Additionally, there are many things which cannot be done in headers only (even boost has libraries you need to link to for certain parts such as threads, filesystem, etc). A primary example is that you cannot have simple global objects in header-only libs (unless you resort to the abomination that is a singleton) as you will run into multiple definition errors. NOTE: C++17's inline variables will make this particular example doable in the future.
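For reference, a rough sketch of that C++17 feature (names made up): marking a namespace-scope variable inline gives it a single shared definition across all translation units that include the header.
// counters.h -- hypothetical header-only global, C++17 or later
#ifndef COUNTERS_H
#define COUNTERS_H

#include <atomic>

inline std::atomic<int> g_request_count{0};   // one object, no multiple-definition error

#endif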
As a final point, when using boost as an example of header only code, a huge detail often gets missed.
Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will cause you to have to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes, you can save hours of recompiling when added up across a day.
The day C++ coders agree on The Way, lambs will lie down with lions, Palestinians will embrace Israelis, and cats and dogs will be allowed to marry.
The separation between .h and .cpp files is mostly arbitrary at this point, a vestige of compiler optimizations long past. To my eye, declarations belong in the header and definitions belong in the implementation file. But, that's just habit, not religion.
Code in headers is generally a bad idea since it forces recompilation of all files that include the header when you change the actual code rather than the declarations. It will also slow down compilation since you'll need to parse the code in every file that includes the header.
A reason to have code in header files is that it's generally needed for the keyword inline to work properly, and when using templates that are instantiated in other cpp files.
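For example (made-up names), a function defined in a header is usually marked inline so that every .cpp including it can see the body without violating the one-definition rule at link time:
// math_utils.h
#ifndef MATH_UTILS_H
#define MATH_UTILS_H

// 'inline' permits this definition to appear in every translation unit that includes the header
inline int square(int x) { return x * x; }

#endif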
What might be informing your coworker is the notion that most C++ code should be templated to allow for maximum usability. And if it's templated, then everything will need to be in a header file, so that client code can see it and instantiate it. If it's good enough for Boost and the STL, it's good enough for us.
I don't agree with this point of view, but it may be where it's coming from.
I think your co-worker is smart and you are also correct.
The useful things I have found about putting everything into the headers are:
No need to write and keep in sync separate headers and sources.
The structure is plain, and the absence of circular dependencies forces the coder to make a "better" structure.
Portable, easy to embed in a new project.
I do agree with the compile-time problem, but I think we should notice that:
Changes to a source file are very likely to change the header files anyway, which leads to the whole project being recompiled again.
Compilation is much faster than it used to be. And if you have a project that takes a long time to build and is built frequently, it may indicate that your project design has flaws. Separating the tasks into different projects and modules can avoid this problem.
Lastly, I just want to support your co-worker; this is just my personal view.
Often I'll put trivial member functions into the header file, to allow them to be inlined. But to put the entire body of code there, just to be consistent with templates? That's plain nuts.
Remember: A foolish consistency is the hobgoblin of little minds.
As Tuomas said, your header should be minimal. To be complete I will expand a bit.
I personally use 4 types of files in my C++ projects:
Public:
Forwarding header: in the case of templates etc., this file gets the forward declarations that will appear in the header.
Header: this file includes the forwarding header, if any, and declares everything that I wish to be public (and defines the classes...)
Private:
Private header: this file is a header reserved for implementation, it includes the header and declares the helper functions / structures (for Pimpl for example or predicates). Skip if unnecessary.
Source file: it includes the private header (or header if no private header) and defines everything (non-template...)
Furthermore, I couple this with another rule: Do not define what you can forward declare. Though of course I am reasonable there (using Pimpl everywhere is quite a hassle).
It means that I prefer a forward declaration over an #include directive in my headers whenever I can get away with them.
Finally, I also use a visibility rule: I limit the scopes of my symbols as much as possible so that they do not pollute the outer scopes.
Putting it all together:
// example_fwd.hpp
// Here necessary to forward declare the template class,
// you don't want people to declare them in case you wish to add
// another template symbol (with a default) later on
class MyClass;
template <class T> class MyClassT;
// example.hpp
#include "project/example_fwd.hpp"
// Those can't really be skipped
#include <string>
#include <vector>
#include "project/pimpl.hpp"
// Those can be forward declared easily
#include "project/foo_fwd.hpp"
namespace project { class Bar; }
namespace project
{
    class MyClass
    {
    public:
        struct Color // Limiting scope of enum
        {
            enum type { Red, Orange, Green };
        };
        typedef Color::type Color_t;

    public:
        MyClass(); // because of pimpl, I need to define the constructor

    private:
        struct Impl;
        pimpl<Impl> mImpl; // I won't describe pimpl here :p
    };

    template <class T> class MyClassT: public MyClass {};
} // namespace project
// example_impl.hpp (not visible to clients)
#include "project/example.hpp"
#include "project/bar.hpp"
template <class T> void check(MyClassT<T> const& c) {}
// example.cpp
#include "example_impl.hpp"
// MyClass definition
The lifesaver here is that most of the time the forward header is useless: it's only necessary in the case of a typedef or a template, and so is the implementation header ;)
To add more fun you can add .ipp files which contain the template implementation (and are included at the end of the .hpp), while the .hpp contains the interface.
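A rough sketch of that .hpp/.ipp split (the file names are just a convention, nothing standard; Stack is made up for illustration):
// stack.hpp -- interface only
#ifndef STACK_HPP
#define STACK_HPP

#include <utility>
#include <vector>

template <typename T>
class Stack {
public:
    void push(T value);
    T    pop();
private:
    std::vector<T> items_;
};

#include "stack.ipp"   // pull the template implementation in at the end of the header
#endif

// stack.ipp -- template definitions, still textually part of the header
template <typename T>
void Stack<T>::push(T value) { items_.push_back(std::move(value)); }

template <typename T>
T Stack<T>::pop() {
    T top = std::move(items_.back());
    items_.pop_back();
    return top;
}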
Apart from templatized code (depending on the project this can be the majority or the minority of files) there is normal code, and here it is better to separate the declarations and definitions. Also provide forward declarations where needed - this may have an effect on compilation time.
Generally, when writing a new class, I will put all the code in the class, so I don't have to look in another file for it. After everything is working, I break the bodies of the methods out into the cpp file, leaving the prototypes in the hpp file.
I personally do this in my header files:
// class-declaration
// inline-method-definitions
I don't like mixing the code for the methods in with the class as I find it a pain to look things up quickly.
I would not put ALL of the methods in the header file. The compiler will (normally) not be able to inline virtual methods and will (likely) only inline small methods without loops (totally depends on the compiler).
Defining the methods in the class is valid... but from a readability point of view I don't like it. Putting the methods in the header does mean that, when possible, they will get inlined.
I think that it's absolutely absurd to put ALL of your function definitions into the header file. Why? Because the header file is used as the PUBLIC interface to your class. It's the outside of the "black box".
When you need to look at a class to reference how to use it, you should look at the header file. The header file should give a list of what it can do (commented to describe the details of how to use each function), and it should include a list of the member variables. It SHOULD NOT include HOW each individual function is implemented, because that's a boat load of unnecessary information and only clutters the header file.
If this new way is really The Way, we might have been running in a different direction in our projects.
Because we try to avoid all unnecessary things in headers. That includes avoiding header cascades. Code in headers will probably need some other header to be included, which will need another header, and so on. If we are forced to use templates, we try to avoid littering headers with template stuff too much.
Also we use "opaque pointer"-pattern when applicable.
With these practices we can do faster builds than most of our peers. And yes... changing code or class members will not cause huge rebuilds.
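A minimal sketch of that opaque-pointer (pimpl) pattern, with made-up names:
// widget.h -- clients only see an opaque pointer, so changing the implementation
// never touches this header and never triggers a rebuild of client code
#ifndef WIDGET_H
#define WIDGET_H

#include <memory>

class Widget {
public:
    Widget();
    ~Widget();              // defined in the .cpp, where Impl is a complete type
    void draw();
private:
    struct Impl;            // defined only in widget.cpp
    std::unique_ptr<Impl> impl_;
};

#endif

// widget.cpp
#include "widget.h"

struct Widget::Impl {
    int frame_count = 0;
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { ++impl_->frame_count; /* real drawing would go here */ }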
I keep all of the implementation out of the class definition, because I want the doxygen comments out of the class definition too.
IMHO, he has merit ONLY if he's doing templates and/or metaprogramming. There are plenty of reasons, already mentioned, to limit header files to just declarations. They're just that... headers. If you want to include code, you compile it as a library and link it up.
Doesn't that really depend on the complexity of the system and the in-house conventions?
At the moment I am working on a neural network simulator that is incredibly complex, and the accepted style that I am expected to use is:
Class definitions in classname.h
Class code in classnameCode.h
executable code in classname.cpp
This splits up the user-built simulations from the developer-built base classes, and works best in that situation.
However, I'd be surprised to see people do this in, say, a graphics application, or any other application whose purpose is not to provide users with a code base.
Template code should be in headers only. Apart from that, all definitions except inlines should be in the .cpp. The best argument for this would be the standard library implementations, which follow the same rule. You would not disagree that the standard library developers are right about this.
I think your co-worker is right as long as he does not go as far as writing executable code in the header.
The right balance, I think, is to follow the path indicated by GNAT Ada, where the .ads file gives a perfectly adequate interface definition of the package for its users and for its children.
By the way, Ted, have you had a look on this forum at the recent question about the Ada binding to the CLIPS library you wrote several years ago, which is no longer available (the relevant web pages are now closed)? Even if it was made for an old CLIPS version, this binding could be a good starting example for somebody who wants to use the CLIPS inference engine within an Ada 2012 program.