In languages like Java, a package name is based on a domain name, like com.foobar.mystuff.
So if you own com.foobar, it is highly unlikely that someone else would use the package name com.foobar, and you can be reasonably sure there will not be a collision.
But in C++, you can choose any namespace name. How do you know that a library you are linking to is not already using a particular namespace name? Is there a way to test it, especially if you don't have access to source code or documentation? Is there some guideline to avoid this problem?
While it may surprise you to find that one of the third-party libraries you are using has the same namespace as your application, it shouldn't cause too many problems, if any.
In the worst-case scenario, you'll have to create a namespace that is specific to your application and create nested namespaces under that main namespace.
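A minimal sketch of that approach, with a hypothetical project name: pick one project-specific top-level namespace and nest everything under it, much like Java's reverse-domain convention.

namespace foobar {
    namespace mystuff {
        class Widget {};          // your code lives here, not at global scope
    }
}

foobar::mystuff::Widget w;        // a collision now requires someone else to
                                  // reuse your exact top-level name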
In C++ the standard library is wrapped in the std namespace and the programmer is not supposed to define anything inside that namespace. Of course the standard include files don't step on each other's names inside the standard library (so it's never a problem to include a standard header).
Then why isn't the whole standard library included by default instead of forcing programmers to write for example #include <vector> each time? This would also speed up compilation as the compilers could start with a pre-built symbol table for all the standard headers.
Pre-including everything would also solve some portability problems: for example, when you include <map> it's defined which symbols are brought into the std namespace, but it's not guaranteed that other standard symbols are not loaded into it as well, and (in theory) you could end up with std::vector also becoming available.
It sometimes happens that a programmer forgets to include a standard header but the program compiles anyway because of an include dependency of the specific implementation. When moving the program to another environment (or just another version of the same compiler) the same source code could then stop compiling.
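A hedged sketch of that trap (whether it compiles depends entirely on the implementation's internal include graph):

#include <iostream>   // on some implementations this transitively pulls in <string>

int main() {
    std::string greeting = "hello";  // compiles only if <string> leaked in;
                                     // a stricter implementation rejects this line
    std::cout << greeting << '\n';
    return 0;
}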
From a technical point of view I can imagine a compiler simply preloading (with mmap) an optimal perfect-hash symbol table for the standard library.
This should be faster to do than loading and doing a C++ parse of even a single standard include file and should be able to provide faster lookup for std:: names. This data would also be read-only (thus probably allowing a more compact representation and also shareable between multiple instances of the compiler).
These are, however, just "shoulds", as I never implemented this.
The only downside I see is that we C++ programmers would lose compilation coffee breaks and Stack Overflow visits :-)
EDIT
Just to clarify: the main advantage I see is for programmers who today, despite the C++ standard library being a single monolithic namespace, are required to know which sub-part (include file) contains which function/class. To add insult to injury, when they make a mistake and forget an include file, the code may still compile or not depending on the implementation (thus leading to non-portable programs).
The short answer is that this is not the way the C++ language is supposed to be used.
There are good reasons for that:
namespace pollution - even if this could be mitigated, because the std namespace is supposed to be self-coherent and programmers are not forced to use using namespace std;, including the whole library and then writing using namespace std; would certainly lead to a big mess (a short sketch of such a clash follows after this list)
forcing programmers to declare the modules they want to use avoids inadvertently calling a wrong standard function, because the standard library is now huge and not all programmers know all of its modules
history: C++ still has a strong inheritance from C, where namespaces do not exist and where the standard library is supposed to be used like any other library.
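As a minimal illustration of the first point (the variable name is hypothetical), a perfectly ordinary identifier becomes ambiguous once every std name is visible unqualified:

#include <algorithm>
using namespace std;      // every std name now competes in unqualified lookup

int count = 0;            // a perfectly reasonable name in application code

int main() {
    count = 5;            // error: reference to 'count' is ambiguous
                          // (::count vs. std::count from <algorithm>)
    return 0;
}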
To go in your direction, the Windows API is an example where you have only one big include (windows.h) that loads many other smaller include files. And in fact, precompiled headers allow that to be fast enough.
So IMHO a new language deriving from C++ could decide to automatically declare the whole standard library. A new major release could also do it, but it could break code that makes intensive use of the using namespace directive and that has custom implementations using the same names as some standard modules.
But all the common languages I know of (C#, Python, Java, Ruby) require the programmer to declare the parts of the standard library they want to use, so I suppose that systematically making every piece of the standard library available is still more awkward than really useful for the programmer, at least until someone finds a way to declare the parts that should not be loaded - that's why I spoke of a new derivative of C++.
Most of the C++ standard library is template-based, which means that the code it generates depends ultimately on how you use it. In other words, there is very little that could be compiled before you instantiate a template like std::vector<MyType> m_collection;.
Also, C++ is probably the slowest language to compile, and there is a lot of parsing work that compilers have to do when you #include a header file that also includes other headers.
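To make that concrete, here is a minimal sketch (MyType is hypothetical): none of std::vector<MyType>'s members exist as compiled code until the template is instantiated with that type, and that happens again in every translation unit that uses it.

#include <vector>

struct MyType { int value; };

std::vector<MyType> m_collection;       // instantiation point: only now does the
                                        // compiler generate vector code for MyType

int main() {
    m_collection.push_back(MyType{42}); // each member used is compiled on demand
    return 0;
}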
Well, first things first: C++ tries to adhere to "you only pay for what you use".
The standard-library is sometimes not part of what you use at all, or even of what you could use if you wanted.
Also, you can replace it if there's a reason to do so: See libstdc++ and libc++.
That means just including it all without question isn't actually such a bright idea.
Anyway, the committee is slowly plugging away at creating a module system (it takes lots of time, and hopefully it will work for C++1z: C++ Modules - why were they removed from C++0x? Will they be back later on?), and when that's done most downsides to including more of the standard library than strictly necessary should disappear, and the individual modules should more cleanly exclude symbols they need not contain.
Also, as those modules are pre-parsed, they should give the compilation-speed improvement you want.
You offer two advantages of your scheme:
Compile-time performance. But nothing in the standard prevents an implementation from doing what you suggest[*] with a very slight modification: that the pre-compiled table is only mapped in when the translation unit includes at least one standard header. From the POV of the standard, it's unnecessary to impose potential implementation burden over a QoI issue.
Convenience to programmers: under your scheme we wouldn't have to specify which headers we need. We do this in order to support C++ implementations that have chosen not to implement your idea of making the standard headers monolithic (which currently is all of them), and so from the POV of the C++ standard it's a matter of "supporting existing practice and implementation freedom at a cost to programmers considered acceptable". Which is kind of the slogan of C++, isn't it?
Since no C++ implementation (that I know of) actually does this, my suspicion is that in point of fact it does not grant the performance improvement you think it does. Microsoft provides precompiled headers (via stdafx.h) for exactly this performance reason, and yet it still doesn't give you an option for "all the standard libraries"; instead it requires you to say what you want in it. It would be dead easy for this or any other implementation to provide an implementation-specific header defined to have the same effect as including all standard headers. This suggests to me that, at least in Microsoft's opinion, there would be no great overall benefit to providing that.
If implementations were to start providing monolithic standard libraries with a demonstrable compile-time performance improvement, then we'd discuss whether or not it's a good idea for the C++ standard to continue permitting implementations that don't. As things stand, it has to.
[*] Except perhaps for the fact that <cassert> is defined to have different behaviour according to the definition of NDEBUG at the point it's included. But I think implementations could just preprocess the user's code as normal, and then map in one of two different tables according to whether it's defined.
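A small sketch of that footnote: the meaning of <cassert> is fixed at the point of inclusion by whether NDEBUG is defined, which is exactly what a single pre-built table could not capture.

#define NDEBUG            // comment this out and the assert becomes active
#include <cassert>

int main() {
    assert(2 + 2 == 5);   // with NDEBUG defined this expands to nothing;
                          // without it, the program aborts with a diagnostic
    return 0;
}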
I think the answer comes down to C++'s philosophy of not making you pay for what you don't use. It also gives you more flexibility: you aren't forced to use parts of the standard library if you don't need them. And then there's the fact that some platforms might not support things like throwing exceptions or dynamically allocating memory (like the processors used in the Arduino, for example). And there's one other thing you said that is incorrect: as long as your class is not itself a template, you are allowed to add a specialization of std::swap to the std namespace for your own class.
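For reference, a minimal sketch of that permission (Widget is a hypothetical class; an alternative, often preferred, is a swap overload in your own namespace found via ADL):

#include <utility>
#include <vector>

class Widget {
public:
    void swap(Widget& other) { data_.swap(other.data_); }
private:
    std::vector<int> data_;
};

namespace std {
    // Explicit specialization for a user-defined type: permitted because it
    // depends on a program-defined type.
    template <>
    void swap<Widget>(Widget& a, Widget& b) { a.swap(b); }
}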
First of all, I am afraid that having a prelude is a bit late to the game. Or rather, seeing as preludes are not easily extensible, we have to content ourselves with a very thin one (built-in types...).
As an example, let's say that I have a C++03 program:
#include <boost/unordered_map.hpp>
using namespace std;
using boost::unordered_map;
static unordered_map<int, string> const Symbols = ...;
It all works fine, but suddenly when I migrate to C++11:
error: ambiguous symbol "unordered_map", do you mean:
- std::unordered_map
- boost::unordered_map
Congratulations, you have invented the least backward compatible scheme for growing the standard library (just kidding, whoever uses using namespace std; is to blame...).
Alright, let's not pre-include them, but still bundle the perfect hash table anyway. The performance gain would be worth it, right?
Well, I seriously doubt it. First of all because the Standard Library is tiny compared to most other header files that you include (hint: compare it to Boost). Therefore the performance gain would be... smallish.
Oh, not all programs are big; but the small ones compile fast already (by virtue of being small) and the big ones include much more code than the Standard Library headers so you won't get much mileage out of it.
Note: and yes, I did benchmark the file look-up in a project with "only" a hundred -I directives; the conclusion was that pre-computing the "include path" to "file location" map and feeding it to gcc resulted in a 30% speed-up (after using ccache already). Generating it and keeping it up-to-date was complicated, so we never used it...
But could we at least include a provision that the compiler could do it in the Standard?
As far as I know, it is already included. I cannot remember if there is a specific blurb about it, but the Standard Library is really part of the "implementation" so resolving #include <vector> to an internal hash-map would fall under the as-if rule anyway.
But they could do it, still!
And lose any flexibility. For example, Clang can use either libstdc++ or libc++ on Linux, and I believe it to be compatible with the Dinkumware derivative that ships with VC++ (or if not completely, at least greatly).
This is another point of customization: if the Standard Library does not fit your needs or your platforms, then by virtue of being mostly treated like any other library, you can replace part or most of it with relative ease.
But! But!
#include "stdafx.h"
If you work on Windows, you will recognize it. This is called a pre-compiled header. It must be included first (or all benefits are lost) and in exchange, instead of parsing files, you are pulling in an efficient binary representation of those parsed files (ie, a serialized AST version, possibly with some type resolution already performed), which saves maybe 30% to 50% of the work. Yep, this is close to your proposal; this is Computer Science for you, there's always someone else who thought about it first...
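A sketch of what such a header typically contains (the contents here are hypothetical; the point is to gather the expensive, rarely-changing includes in one place):

// stdafx.h - parsed once, then reused as a binary image by the compiler
#pragma once

#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Every .cpp file in the project then begins with:
//   #include "stdafx.h"
// and the compiler substitutes the pre-parsed representation instead of
// re-reading and re-parsing all of those headers.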
Clang and gcc have a similar mechanism; though from what I've heard it can be so painful to use that people prefer the more transparent ccache in practice.
And all of these will come to naught with modules.
This is the true solution to this pre-processing/parsing/type-resolving madness. Because modules are truly isolated (ie, unlike headers, not subject to inclusion order), an efficient binary representation (like pre-compiled headers) can be pre-computed for each and every module you depend on.
This not only means the Standard Library, but all libraries.
Your solution, more flexible, and dressed to the nines!
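A sketch of what module-based code looks like, using C++20 header-unit syntax (toolchain support and the build flags needed to pre-build the header units vary):

import <vector>;        // the compiler loads a pre-built binary representation
import <string>;        // no textual re-parsing, no inclusion-order surprises

int main() {
    std::vector<std::string> names{"alpha", "beta"};
    return static_cast<int>(names.size());
}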
One could use an alternative implementation of the C++ Standard Library to the one shipped with the compiler. Or wrap headers with one's definitions, to add, enable or disable features (see GNU wrapper headers). Plain text headers and the C inclusion model are a more powerful and flexible mechanism than a binary black box.
I am writing C++ in Eclipse. When I add a new class it asks for a namespace. I entered boost because I am using Boost. When I looked at the class, Eclipse had surrounded my class with namespace boost{}. I thought namespaces were a Java/C# thing, not a C++ one. Also, I did a little research and it looks like adding to namespace boost is a big no-no. So I deleted namespace boost. One question: is there a good reason why I should have kept namespace boost, and will erasing it haunt me later in the dev cycle?
Thanks, that's what I thought. I don't know why Eclipse wants me to add a namespace.
Erm. @Aaron, "adding" a namespace is good practice, unless you want to be thrown back into the era of C APIs with eternal clashes between library symbols or clumsy libraryx_myfunction naming conventions.
In fact, choosing a proper namespace for your classes is so elementary that I'd rephrase that. It's not
adding a namespace to your class;
Rather it is
creating a namespace for you to (later) add types and functions to.
Namespaces serve to separate your code from LibraryX, and LibraryX from LibraryY, etc. Therefore it is indeed bad practice to choose a namespace name that is likely to collide. Defining user-defined types inside the std namespace is expressly prohibited (or Undefined Behaviour ensues).
Putting stuff into other libraries' namespaces is asking for trouble with
future extensions of the library
internal, undocumented symbols in those libraries
subtle bugs due to ADL (where the library might start finding your function(s) instead of its own without knowing)
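A minimal sketch of the ADL mechanism behind that last point (the names are hypothetical): unqualified calls are looked up in the namespaces of their arguments, which is precisely how a function injected into a library's namespace can get picked up by the library's own internal calls.

#include <iostream>

namespace mylib {
    struct Matrix {};

    // Found via argument-dependent lookup because Matrix lives in mylib.
    void print(const Matrix&) { std::cout << "mylib::print\n"; }
}

int main() {
    mylib::Matrix m;
    print(m);           // unqualified call; ADL resolves it to mylib::print
    return 0;
}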
A good reason why I should keep namespace boost?
Why? Are you extending the Boost library? I don't see any reason to keep namespace boost; in fact, for small projects, you do not need a namespace at all. Namespaces are designed to prevent name conflicts, and for a small project, I doubt you'll have any (be careful about swap, though).
Basically speaking, your question is almost the same as asking why I shouldn't surround my code with namespace std; after all, I am using the standard library, right?
Coming from a C# and Java background, namespaces are nothing new to me, and I have come to love the organizational structure they bring to code.
I've been doing more work in c++ lately and I try to keep my coding habits as modern as possible. I have been toying around with some sandbox code, trying some new things and I would like to get some feedback on something I've been playing around with.
What I did was create a header file that explicitly declares a series of structured namespaces. The reason I did this is that I wanted to lay out a predefined namespace structure to be used across the project. I realize that the namespaces could obviously be declared wherever the classes are defined, but I like the idea of a predefined structure for the namespaces, such as (just an example):
namespace System
{
namespace IO
{
}
namespace Serialization
{
}
namespace Security
{
namespace Cryptography
{
}
}
}
etc. You get the idea.
My question is: Is there anything wrong with this approach? I figured it could be a good practice since it follows good coding practices with encapsulation and naming, etc.
I have seen some strange comments about similar things from old-school C++ devs, such as: "You're trying to combine Java and C++", or "Namespaces are just a way to mangle type names to avoid naming conflicts". I completely disagree with those opinions and don't feel those people fully understand the utility that namespaces can provide.
I don't think there's anything inherently "wrong" with it: you're documenting the intended structure of your namespaces.
But, ultimately, it's also pretty pointless, since nothing about this file stops namespaces with the same names from being [accidentally] created in a different structure, leaving you with a broken hierarchy there with actual types declared in it, and your intended hierarchy here with nothing at all in it.
Consequently I'd probably stick to standalone documentation for this sort of thing.
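A short sketch of why it stays documentation only (the header name is hypothetical): including the structure header does not prevent anyone from growing the hierarchy in an unintended direction.

// namespaces.h declares the intended (empty) hierarchy shown in the question
#include "namespaces.h"

namespace System {
    namespace Networking {      // not part of the intended structure,
        class Socket {};        // yet it compiles without any complaint
    }
}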
Declaring a namespace hierarchy in a header file is not very useful in practice; including the header file in any other files has no effect.
It seems to me you are trying to create a rule for people working on that project to follow, which may or may not stick... it's all a matter of style and preference.
But there is nothing wrong with the approach; it can do no harm.
In my project I use two libraries, v8 and boost. Boost uses the .hpp extension for its headers, while v8 uses the .h extension for its headers.
At the end of the day, my source code starts like this:
#include "v8.h"
#include "boost/filesystem.hpp"
...
In another question I asked about this subject, the general answer was that it is okay, but that I should just be consistent with names.
This code compiles fine, but in terms of coding styles/standards - is it okay? Is there any solution to this problem (like changing all .hpp to .h automatically somehow)?
Thanks. And sorry for those stupid questions.
Don't worry about the inconsistency, it doesn't matter. Too much time is often spent obsessing about such details, and everyone is guilty of it.
Just be consistent with your own coding standards.
You'll eventually use some 3rd-party library, or several, that use different conventions than you do. There's nothing you can do about it, and often two of those libraries will conflict with your standards and with each other. That's not only true for include extensions, but also for naming conventions like function_that_does_something vs FunctionThatDoesSomething. It's fine.
I would definitely advise strongly against trying to change someone else's library to fit your coding standard - e.g. renaming Boost's .hpp files to .h. This is a bad idea, and when you want to upgrade to newer versions of the library it will be a nightmare.
Spend your time solving the problem you're solving in a more elegant way rather than worrying about details like this.
It's fine. Coding standards don't really come into it, since you have to go with what you're given. If the v8 people only provide .h and the Boost people only provide .hpp then, short of copying one set of files to the other extension or providing your own wrapper header files, you have few options.
Both of those options have their downsides for what are really dubious benefits, so I wouldn't concern yourself with the fact that you have to include two different file extensions.