Why do some languages, like C++ and Python, require the namespace of an object be specified even when no ambiguity exists? I understand that there are backdoors to this, like using namespace x in C++, or from x import * in Python. However, I can't understand the rationale behind not wanting the language to just "do the right thing" when only one accessible namespace contains a given identifier and no ambiguity exists. To me it's just unnecessary verbosity and a violation of DRY, since you're being forced to specify something the compiler already knows.
For example:
import foo # Contains someFunction().
someFunction() # imported from foo. No ambiguity. Works.
Vs.
import foo # Contains someFunction()
import bar # Contains someFunction() also.
# foo.someFunction or bar.someFunction? Should be an error only because
# ambiguity exists.
someFunction()
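For comparison, here is a minimal C++ sketch of the same two situations (the namespace and function names are invented for illustration); note that the error appears only at the point of ambiguity:

#include <iostream>

namespace foo { void someFunction() { std::cout << "foo\n"; } }
namespace bar { void someFunction() { std::cout << "bar\n"; } }

using namespace foo;
using namespace bar;

int main()
{
    // someFunction();   // error: call to someFunction is ambiguous
    foo::someFunction(); // qualification resolves the ambiguity
}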
One reason is to protect against accidentally introducing a conflict when you change the code (or for an external module/library, when someone else changes it) later on. For example, in Python you can write
from foo import *
from bar import *
without conflicts if you know that modules foo and bar don't have any variables with the same names. But what if in later versions both foo and bar include variables named rofl? Then bar.rofl will cover up foo.rofl without you knowing about it.
I also like to be able to look up to the top of the file and see exactly what names are being imported and where they're coming from (I'm talking about Python, of course, but the same reasoning could apply for C++).
Python takes the view that 'explicit is better than implicit'.
(Type import this into a Python interpreter.)
Also, say I'm reading someone's code. Perhaps it's your code; perhaps it's my code from six months ago. I see a reference to bar(). Where did the function come from? I could look through the file for a def bar(), but if I don't find it, what then? If Python automatically finds the first bar() available through an import, then I have to search through each imported file to find it. What a pain! And what if the function-finding recurses through the import hierarchy?
I'd rather see zomg.bar(); that tells me where the function is from, and ensures I always get the same one if code changes (unless I change the zomg module).
The problem is about abstraction and reuse: you can't really know that there won't be any future ambiguity.
For example, it's very common to set up different libraries in a project, only to discover that they each have their own string class implementation, called "string".
Your compiler will then complain that there is ambiguity if the libraries are not encapsulated in separate namespaces.
It's then a delightful pleasure to dodge this kind of ambiguity by specifying which implementation (like the standard std::string one) you want to use at each specific instruction or context (read: scope).
And if you think it's obvious in a particular context (read: in a particular function, or a .cpp file in C++, or a .py file in Python - NEVER in C++ header files), you just have to express yourself and say "it should be obvious", adding the using namespace directive (or import *) - until the compiler complains because it isn't.
If you use using in specific scopes, you don't break the DRY rule at all.
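A minimal sketch of that, assuming a hypothetical third-party lib::string:

#include <string>

namespace lib { struct string {}; } // hypothetical third-party string class

void parse()
{
    using std::string;   // scoped using-declaration: applies only inside parse()
    string s = "hello";  // unambiguously std::string, with no repetition
}

void interop()
{
    lib::string raw;     // elsewhere, name the other implementation explicitly
    (void)raw;
}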
There have been languages where the compiler tried to "do the right thing" - Algol and PL/I come to mind. The reason they are not around anymore is that compilers are very bad at doing the right thing, but very good at doing the wrong one, given half a chance!
The ideal this rule strives for is to make creating reusable components easy - and if you reuse your component, you just don't know which symbols will be defined in other namespaces the client uses. So the rule forces you to make your intention clear with respect to further definitions you don't know about yet.
However, this ideal has not been reached for C++, mainly because of Koenig lookup.
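For reference, a minimal illustration of Koenig lookup (argument-dependent lookup), which finds a function in the namespaces of its arguments even without qualification; the names are invented:

#include <iostream>

namespace geo {
    struct Point { int x, y; };
    void print(Point p) { std::cout << p.x << "," << p.y << "\n"; }
}

int main()
{
    geo::Point p{1, 2};
    print(p); // no geo:: qualification, yet geo::print is found via ADL
}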
Is it really the right thing?
What if I have two types ::bat and ::foo::bar
I want to reference the bat type but accidentally hit the r key instead of t (they're right next to each other).
Is it "the right thing" for the compiler to then go searching through every namespace to find ::foo::bar without giving me even a warning?
Or what if I use "bar" as shorthand for the "::foo::bar" type all over my codebase.
Then one day I include a library which defines a ::bar datatype. Suddenly an ambiguity exists where there was none before. And suddenly, "the right thing" has become wrong.
The right thing for the compiler to do in this case would be to assume I meant the type I actually wrote. If I write bar with no namespace prefix, it should assume I'm referring to a type bar in the global namespace. But if it does that in our hypothetical scenario, it'll change what type my code references without even alerting me.
Alternatively, it could give me an error, but come on, that'd just be ridiculous, because even with the current language rules, there should be no ambiguity here, since one of the types is hidden away in a namespace I didn't specify, so it shouldn't be considered.
Another problem is that the compiler may not know what other types exist. In C++, the order of definitions matters.
In C#, types can be defined in separate assemblies, and referenced in your code. How does the compiler know that another type with the same name doesn't exist in another assembly, just in a different namespace? How does it know that one won't be added to another assembly later on?
The right thing is to do what gives the programmer the fewest nasty surprises. Second-guessing the programmer based on incomplete data is generally not the right thing to do.
Most languages give you several tools to avoid having to specify the namespace.
In C++, you have "using namespace foo", as well as typedefs and namespace aliases. If you don't want to repeat the namespace prefix, then don't - use the tools made available by the language so you don't have to.
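A sketch of those tools side by side; the library names are hypothetical, and the nested-namespace definition requires C++17:

#include <vector>

namespace company::product { struct Widget {}; } // C++17 nested-namespace form

using namespace std;                             // using-directive
using Widget = company::product::Widget;         // alias declaration (C++11)
namespace cp = company::product;                 // namespace alias

int main()
{
    vector<int> v;  // std:: elided thanks to the using-directive
    Widget w;       // the alias stands in for the full prefix
    cp::Widget w2;  // a short alias where some qualification is still wanted
    (void)v; (void)w; (void)w2;
}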
This all depends on your definition of "right thing". Is it the right thing for the compiler to guess your intention if there's only one match?
There are arguments for both sides.
Interesting question. In the case of C++, as I see it, provided the compiler flagged an error as soon as there was a conflict, the only problem this could cause would be:
Auto-lookup of all C++ namespaces would remove the ability to hide the names of internal parts of library code.
Library code often contains parts (types, functions, global variables) that are never intended to be visible to the "outside world." C++ has unnamed namespaces for exactly this reason -- to avoid "internal parts" clogging up the global namespace, even when those library namespaces are explicitly imported with using namespace xyz;.
Example: Suppose C++ did do auto-lookup, and a particular implementation of the C++ Standard Library contained an internal helper function, std::helper_func(). Suppose a user Joe develops an application containing a function joe::helper_func() using a different library implementation that does not contain std::helper_func(), and calls his own method using unqualified calls to helper_func(). Now Joe's code will compile fine in his environment, but any other user who tries to compile that code using the first library implementation will hit compiler error messages. So the first thing required to make Joe's code portable is to either insert the appropriate using declarations/directives or use fully qualified identifiers. In other words, auto-lookup buys nothing for portable code.
Admittedly, this doesn't seem like a problem that's likely to come up very often. But since typing explicit using declarations/directives (e.g. using namespace std;) is not a big deal for most people, solves this problem completely, and would be required for portable development anyway, using them (heh) seems like a sensible way to do things.
NOTE: As Klaim pointed out, you would never in any circumstances want to rely on auto-lookup inside a header file, as this would immediately prevent your module from being used at the same time as any module containing a conflicting name. (This is just a logical extension of why you don't do using namespace xyz; inside headers in C++ as it stands.)
Related
On my project I often see people defining global functions in .cpp files, i.e. functions that are not restricted to file scope, class scope, or any particular namespace.
These are clearly local helper functions that the author only wanted to be able to access in that file.
I know this is bad practice, and that the solution is to restrict them to file scope, either by using the static keyword or, better yet, an anonymous namespace.
But my question is, if these functions are not declared in the header file, what can actually go wrong?
I would like to advise these people against this practice, but I feel my argument would have more weight if I could clearly describe what could go wrong. Or even what might already be going wrong that we are not aware of!
Thanks.
One, you are cluttering the namespace. The result can be multiple definitions, i.e. linker errors, and programmers choosing awkward function names to circumvent this. Imagine one source file defining its helper() function, the next one a my_helper() because helper() resulted in an error, then a third an other_helper(), and so on... In any case, the cleaner the namespace, the easier it becomes to understand what is actually going on.
Two, and this is an extension of the above, imagine a helper( int x ) and a helper( long y ), and you can imagine the kind of ambiguity that could arise from this. If you are lucky (and using appropriate warning options), the compiler will warn you about these conditions, but you might end up calling a different function than what you expected.
Three, and this is from a maintainer's point of view, if you see a function that is static or declared in an anonymous namespace, you know that you only have to check the current source file for calls to this function. This makes refactorings that much easier. ("Does anyone actually use this exotic but buggy feature, or can I optimize it away?")
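A minimal sketch of the fix being advocated, with hypothetical file and function names:

// widget.cpp
namespace {
    // internal linkage: invisible to other translation units, so it cannot
    // collide with a helper() defined in some other .cpp file
    int helper(int x) { return x * 2; }
}

int widgetValue(int x) { return helper(x); }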
Ulrich Drepper's paper on shared ELF libraries is relevant for you if you produce dynamic shared objects, usually shared libraries. I assume that some considerations also apply to applications that are just dynamically linked. The paper discusses the GNU tools, but similar concerns will likely apply to other toolchains.
In short there may be build-time, load-time and run-time penalties for global objects and functions.
Build- and load-time penalties are rooted in the number of (string) comparisons needed to resolve dependencies, which are not necessary for locally defined symbols like file-static functions and variables. Drepper discusses this on page 8, using the example of OpenOffice.
The reason for run-time penalties is that ELF specifies that even locally defined global symbols can be replaced (interposed) at run time by definitions in other objects. Therefore the function's code cannot be inlined and further optimized, even though it is visible at compile time, and the function call proper is more complicated than necessary, involving more indirections. See Drepper's paper, pp. 17 and 18.
Every once in a while, I stumble across some code that I'm maintaining that challenges the way I think about my code style. Today was one of those days...
I'm aware of why you would want to use the scope operator to refer to the global scope. In fact, "scope resolution operator without a scope" is a great link telling you why.
However, I saw something that made me think today. All the classes in question were wrapped into a namespace for the project (good!), but I did see copious usage of the global scope operator. Namely, it was used for everything from the C libraries (with the exception of uint8_t and the like... yes, the programmer used the .h version of this library, since apparently the version of g++ they were running still threw warnings about the new C++ standard). Is this useful? I view it as just a waste of characters (it reminds me of using the this pointer... except in the copy constructor and assignment operator, where it helps clarify which object is which). Am I missing something? Sure, someone could come along and name something along the lines of usleep() or stderr (where I saw the most usage of "::"), but won't they know that doing so will probably break something horribly? At what point do you say "screw it" in terms of the scope operator, and just tell yourself that someone who names a function a certain way within your namespace is asking for trouble?
So my question is... what is the "correct" (subjective, I understand) way of using the global scope operator in this context? Should everything not included in std or your own namespaces have the global scope explicitly defined? I tend to err on the side of caution and use "std::" to avoid the using directive, but is anything gained from using the global scope operator here? I tend to think that, for clarity's sake, it does signal that we aren't getting the variable or function in question from the current namespace, but I'm torn between including it and not, given today's developments.
As always, thanks for the help and guidance as I look to make my code cleaner, more readable, and (of course) significantly more awesome.
I use it quite infrequently; only when some ambiguity needs resolving for whatever reason. This is pretty subjective, though.
There may be some occasions (say, inside a template) where you're worried about ADL causing ambiguities only in certain cases:
template <typename T>
void foo(T t)
{
    ::bar(t); // :: just in case there's a `bar` in `T`'s namespace
}
There is almost no correct answer to this, as it's almost totally style related. The one exception: if you think you may later want to change where some declaration(s)/definition(s) are imported from, don't specify any scope when you use them, and import them with a using declaration (to bring in specific names from the namespace) or a using namespace directive (to bring in the entire set).
Mostly the using directive is used as a convenience, but it is also a powerful way to direct which declarations/definitions are used. My preference is to not specify the scope and to import the declarations. Doing this allows for easy changes if ever they are needed, while reducing visual noise. Also, specifying the scope would mean I'd be "locked in" to where I am getting the declarations from (well, I'd have to do a global search-and-replace to change it).
If ever there is a conflict (you try to use a name that has been imported from more than one namespace), the compiler will let you know, so there's no real danger.
Readable code is code that has the least amount of noise. Namespace prefixes normally provide nothing but noise, so the baseline is to not have them at all.
Namespaces were introduced to C++ mainly to handle 3rd-party stuff out of one's control. They allow libraries to drop prefixing, while clients can use terse names by applying using.
Just because you can have the same name in many namespaces does not imply it is a good idea to do so. If a project uses some environment, platform API, library set, whatever, that puts names in the global namespace, those names are best avoided for other purposes - with or without prefixes, they carry mental overhead.
Use of :: is rare in well-shaped code, and frequent uses appear in wrapper classes for that same functionality.
Consider the following cases.
Public library.
You are writing an exportable library with public headers. And you absolutely have no clue in what environment your headers will be included. For example, someone may do:
namespace std = my_space::std;
#include "your_header"
And all your definitions will be corrupted if you simply use std::list<int>, etc. So it's good practice to prepend :: to everything global. This way you can be absolutely sure of what you're using. Of course, you can do using (in C++11) or typedef - but that's the wrong way to go in headers.
Collaborative .cc/.cpp files.
Inside your own code that is not exposed to the public in any way, but is still editable not only by you, it's good practice to announce what you're going to use from outside of your namespace. Say your project allows the use of a number of vector types - not only std::vector. Then, where it's appropriate, you put an alias:
// some includes
#include <vector> // the include actually needed here
using vector = ::std::vector<int>; // C++11 alias declaration
// typedef ::std::vector<int> vector; // C++03 equivalent - use one or the other
namespace my_namespace {
...
} // namespace my_namespace
It may be put after all the includes, or inside specific functions. It not only gives you control over the code, but also makes it readable and shortens expressions.
Another common case is mixing global C functions with C++ code. It's not a good idea to do any typedefs with function names. In this situation you should prepend C functions with the global scope resolution operator :: - to avoid compilation problems when someone implements a function with the same signature in the same namespace.
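A sketch of that C-function case; the colliding app::printf is hypothetical:

#include <stdio.h>

namespace app {
    int printf(const char*) { return -1; } // later addition with a clashing name

    void log()
    {
        ::printf("hello\n"); // the global scope operator picks the C function,
                             // not app::printf
    }
}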
Don't use :: for relative types and namespaces - or you'll lose the benefit of using namespaces at all.
Your own code.
Do what you want and how you want. There is only one good way - the way you feel comfortable with in your own code. Really, don't bother about global scope resolution in this situation, if it's not required to resolve an ambiguity.
Just had a 'Fundamentals of Programming' lecture at uni and was told that the convention for using/declaring functions is to have the main() function at the top of the program, with functions/procedures below it and to use forward declarations to prevent compiler errors.
However, I've always done it the other way - functions at the top, main() at the bottom - without forward declarations, and I don't think I've ever seen it otherwise.
Which is right? Or is it more a case of personal preference? Some clarification would be hugely appreciated.
It's up to you. Personally, I keep main on the bottom, because when I do have other functions in there, it's just one or two little functions or a code snippet.
In real code, you'd have hopefully split your project up (having multiple "unrelated" functions in a file is bad) and so main would likely be nearly alone in the file. You'd just #include the things main needs to get things going and use them.
Your functions may be related to each other. If you simply write them above main() without forward declarations, you have to order them so that each comes after the functions it depends on. In some cases (circular references) it won't even be possible to compile without a forward declaration.
When you forward-declare the functions, you won't run into this issue.
Additionally, having main() as the first function arguably produces more readable code, but that's perhaps just personal preference.
It could also be more readable because another coder gets an overview of the functions they'll find in the file.
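A minimal sketch of the main()-first style with a forward declaration:

#include <iostream>

void greet(); // forward declaration, so main() can come first

int main()
{
    greet();
}

void greet()
{
    std::cout << "hello\n";
}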
If there is a standard, it's to have the function declarations in a .h file which is included. That way, it doesn't matter what the order of functions in the file is. I almost never write functions without a declaration, and it takes me aback when other people do.
The standard most professional C++ developers use is this.
Two files per class: the header file (named, e.g., Class.h), which only declares all of the class's data members and functions; and the implementation file (Class.cpp in the same example), containing only the implementations of those functions.
The main function (if your solution has one) goes into a file named, e.g., Main.cpp, all by itself.
It is OK to put multiple classes or even the entire program into one file when you work on small-scale home or school projects. Because of the small scale, it doesn't really matter how you order the classes or the functions. You can do what is more convenient to you. But if your professor requires students to follow certain code standards for homework (e.g. put main() first), then you have to follow those standards.
This approach is most likely favoured by your professor, and most likely the reason she's teaching it as convention. There are two options here: go with the flow (i.e. don't try to disrupt things and potentially get marked down because your professor feels you're not following "convention"), or explain to her that there is no requirement to do it this way (or any other way), as long as the intention of the code is clear!
The advice to have declarations (function prototypes) is more related to C than to C++, because in C++ there are no implicit function declarations, so you must always declare before use. Therefore you will definitely have at least one function declaration that is not a definition if you have recursion involving more than one function.
For a small project use whatever style you want but be consistent.
For a large project you'll probably need several .cpp files and have the interfaces (be they classes or functions) defined in the header files. Also be consistent at least within a single file.
And the last thing, have I said to be consistent?
I normally use the second form because you have to maintain function declarations - they must have the right signature and name, and imo, they're just excessive typing when you could just define the functions first. No maintenance, no wasted time.
I think the principle that applies here is "Don't Repeat Yourself". If you use a forward declaration, you're repeating yourself unnecessarily.
In C++, declaration and definition of functions, variables and constants can be separated like so:
void someFunc(); // declaration

void someFunc() // definition
{
    // Implementation.
}
In fact, in the definition of classes, this is often the case. A class is usually declared with its members in a .h file, and these are then defined in a corresponding .C file.
What are the advantages & disadvantages of this approach?
Historically this was to help the compiler. You had to give it the list of names before it encountered their use - whether through actual usage or a forward declaration (C's default function prototype aside).
Modern compilers for modern languages show that this is no longer a necessity, so C and C++'s syntax here (as well as Objective-C's, and probably others') is historical baggage. In fact this is one of the big problems with C++ that even the addition of a proper module system will not solve.
Disadvantages are: lots of heavily nested include files (I've traced include trees before; they are surprisingly huge) and redundancy between declaration and definition - all leading to longer coding times and longer compile times (ever compared the compile times of comparable C++ and C# projects? This is one of the reasons for the difference). Header files must be provided for users of any components you provide. Chances of ODR violations. Reliance on the pre-processor (many modern languages do not need a pre-processor step), which makes your code more fragile and harder for tools to parse.
Advantages: not much. You could argue that you get a list of function names grouped together in one place for documentation purposes - but most IDEs have some sort of code-folding ability these days, and projects of any size should be using doc generators (such as doxygen) anyway. With a cleaner, pre-processor-less, module-based syntax it is easier for tools to follow your code and provide this and more, so I think this "advantage" is just about moot.
It's an artefact of how C/C++ compilers work.
As a source file gets compiled, the preprocessor substitutes each #include-statement with the contents of the included file. Only afterwards does the compiler try to interpret the result of this concatenation.
The compiler then goes over that result from beginning to end, trying to validate each statement. If a line of code invokes a function that hasn't been defined previously, it'll give up.
There's a problem with that, though, when it comes to mutually recursive function calls:
void foo()
{
    bar();
}

void bar()
{
    foo();
}
Here, foo won't compile as bar is unknown. If you switch the two functions around, bar won't compile as foo is unknown.
If you separate declaration and definition, though, you can order the functions as you wish:
void foo();
void bar();

void foo()
{
    bar();
}

void bar()
{
    foo();
}
Here, when the compiler processes foo it already knows the signature of a function called bar, and is happy.
Of course compilers could work in a different way, but that's how they work in C, C++ and to some degree Objective-C.
Disadvantages:
None directly. If you're using C/C++ anyway, it's the best way to do things. If you've got a choice of language/compiler, then maybe you can pick one where this is not an issue. The only thing to consider with splitting declarations into header files is to avoid mutually recursive #include-statements - but that's what include guards are for.
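For reference, a minimal include guard; the file and macro names are hypothetical:

// widget.h
#ifndef WIDGET_H
#define WIDGET_H

struct Widget { int value; };

#endif // WIDGET_H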
Advantages:
Compilation speed: As all included files are concatenated and then parsed, reducing the amount and complexity of code in included files will improve compilation time.
Avoid code duplication/inlining: If you fully define a function in a header file, each object file that includes this header and references this function will contain its own version of that function. As a side note, if you want inlining, you need to put the full definition into the header file (on most compilers).
Encapsulation/clarity: A well-defined class/set of functions plus some documentation should be enough for other developers to use your code. There is (ideally) no need for them to understand how the code works - so why require them to sift through it? (The counter-argument that it may be useful for them to access the implementation when required still stands, of course.)
And of course, if you're not interested in exposing a function at all, you can usually still choose to define it fully in the implementation file rather than the header.
The standard requires that when using a function, a declaration must be in scope. This means that the compiler should be able to verify, against a prototype (the declaration in a header file), what you are passing to it. Except, of course, for functions that are variadic - such functions do not validate arguments.
Think of C, when this was not required. At that time, compilers treated a missing return type specification as defaulting to int. Now, assume you had a function foo() which returned a pointer to void. However, since you did not have a declaration, the compiler would think that it had to return an integer. On some Motorola systems, for example, integers and pointers would be returned in different registers. The compiler would then no longer use the correct register, and would instead return your pointer cast to an integer in the other register. The moment you try to work with this pointer - all hell breaks loose.
Declaring functions within the header is fine. But remember, if you declare and define in the header, make sure they are inline. One way to achieve this is to put the definition inside the class definition. Otherwise, prepend the inline keyword. You will otherwise run into ODR violations when the header is included in multiple implementation files.
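A sketch of both ways of satisfying that rule, with hypothetical names:

// math_utils.h
#ifndef MATH_UTILS_H
#define MATH_UTILS_H

// free function defined in a header: must be marked inline to avoid
// multiple-definition (ODR) errors when included from several .cpp files
inline int square(int x) { return x * x; }

struct Adder {
    int add(int a, int b) { return a + b; } // defined inside the class body:
                                            // implicitly inline
};

#endif // MATH_UTILS_H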
There are two main advantages to separating declaration and definition into C++ header and source files. The first is that you avoid problems with the One Definition Rule when your class/functions/whatever are #included in more than one place. Secondly, by doing things this way, you separate interface and implementation. Users of your class or library need only to see your header file in order to write code that uses it. You can also take this one step farther with the Pimpl Idiom and make it so that user code doesn't have to recompile every time the library implementation changes.
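A minimal Pimpl sketch, assuming C++11 and hypothetical names:

// widget.h
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                    // defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;                  // declared here, defined only in widget.cpp
    std::unique_ptr<Impl> pimpl;  // clients don't recompile when Impl changes
};

// widget.cpp would contain, for example:
//   struct Widget::Impl { /* all the private data */ };
//   Widget::Widget() : pimpl(new Impl) {}
//   Widget::~Widget() = default;
//   void Widget::draw() { /* work with pimpl->... */ }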
You've already mentioned the disadvantage of code repetition between the .h and .cpp files. Maybe I've written C++ code for too long, but I don't think it's that bad. You have to change all user code every time you change a function signature anyway, so what's one more file? It's only annoying when you're first writing a class and you have to copy-and-paste from the header to the new source file.
The other disadvantage in practice is that in order to write (and debug!) good code that uses a third-party library, you usually have to see inside it. That means access to the source code even if you can't change it. If all you have is a header file and a compiled object file, it can be very difficult to decide if the bug is your fault or theirs. Also, looking at the source gives you insight into how to properly use and extend a library that the documentation might not cover. Not everyone ships an MSDN with their library. And great software engineers have a nasty habit of doing things with your code that you never dreamed possible. ;-)
Advantage
Classes can be referenced from other files by just including the declaration. Definitions can then be linked later on in the compilation process.
You basically have 2 views on the class/function/whatever:
The declaration, where you declare the name, the parameters, and the members (in the case of a struct/class); and the definition, where you define what the function does.
Amongst the disadvantages is repetition, yet one big advantage is that you can declare your function as int foo(float f) and leave the details in the implementation (= definition). Anyone who wants to use your function foo just includes your header file and links to your library/object file, so library users as well as compilers just have to care about the defined interface. This helps in understanding the interfaces and speeds up compile times.
One advantage that I haven't seen yet: API
Any library or 3rd-party code that is NOT open source (i.e. proprietary) will not have its implementation along with the distribution. Most companies are just plain not comfortable with giving away source code. The easy solution: just distribute the class declarations and function signatures that allow use of the DLL.
Disclaimer: I'm not saying whether it's right, wrong, or justified, I'm just saying I've seen it a lot.
One big advantage of forward declarations is that when used carefully you can cut down the compile time dependencies between modules.
If ClassA.h needs to refer to a data element in ClassB.h, you can often use just a forward reference in ClassA.h and include ClassB.h in ClassA.cc rather than in ClassA.h, thus cutting down a compile-time dependency.
For big systems this can be a huge time saver on a build.
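A sketch using the same file names:

// ClassA.h
class ClassB;            // forward reference instead of #include "ClassB.h"

class ClassA {
public:
    void sync();
private:
    ClassB* partner;     // a pointer only needs the forward declaration
};

// ClassA.cc would then include both headers:
//   #include "ClassA.h"
//   #include "ClassB.h"   // the full definition is needed only here
//   void ClassA::sync() { /* use partner's members */ }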
Disadvantage
This leads to a lot of repetition. Most of the function signature needs to be put in two or more places (as Paulious noted).
Separation gives clean, uncluttered view of program elements.
Possibility to create and link to binary modules/libraries without disclosing sources.
Link binaries without recompiling sources.
When done correctly, this separation reduces compile times when only the implementation has changed.
I come from a C# background, where everything has its own namespace, but this practice appears to be uncommon in the C++ world. Should I wrap my code in its own namespace, the unnamed namespace, or no namespace?
Many C++ developers do not use namespaces, sadly. When I started with C++, I didn't use them for a long time, until I came to the conclusion that I can do better using namespaces.
Many libraries work around namespaces by putting prefixes before names. For example, wxWidgets puts the characters "wx" before everything. Qt puts "Q" before everything. There's nothing really wrong with that, but it requires you to type that prefix over and over again, even when it can be deduced from the context which declaration you mean. Namespaces have a hierarchic order. Names that are lexically closer to the point that references them are found earlier. So if you reference "Window" within your GUI framework, it will find "my::gui::Window" instead of "::Window".
Namespaces enable some nice features that can't be used without them. For example, if you put your class into a namespace, you can define free functions within that namespace. You then call the function without putting the namespace in front, by importing all names - or selectively only some of them - into the current scope (a "using declaration").
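A sketch of such a free function next to its class, borrowing the my::gui::Window naming from above:

#include <iostream>

namespace my { namespace gui {
    class Window {};
    void show(const Window&) { std::cout << "showing\n"; } // free function
}}

int main()
{
    my::gui::Window w;
    show(w); // no my::gui:: prefix needed: found via the argument's namespace
}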
Nowadays, I don't do any project anymore without using them. They make it so easy not to type the same prefix all over again, but still have good organization and avoidance of name-pollution of the global namespace.
It depends. If your code is library code, please wrap it in namespaces; that is the practice in C++. If your code is only a very simple application that doesn't interact with anything else, like a hello world sort of app, there is no need for namespaces, because it's redundant.
And since namespaces aren't required, the code snippets and examples on the web rarely use them - but most real projects do.
I just discovered Google's c++ style guide and they have namespace guidelines.
The whole guide is worth reading, but to summarize, they say:
Add unnamed namespaces to .cc files, but not .h files.
Wrap entire .cc and .h files (after includes/declarations) in named namespaces.
Namespaces do not increment the indent level.
At the closing brace for a namespace write } // namespace.
Don't declare anything in namespace std, because doing so is undefined.
Using-directives (using namespace foo) are forbidden.
Using-declarations are allowed in functions, methods, and classes.
Namespace aliases are allowed anywhere.
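A small .cc sketch that follows those rules; all names are hypothetical:

#include <string>

namespace myproject {
namespace {                 // unnamed namespace: .cc files only
int call_count = 0;         // invisible outside this translation unit
}  // namespace

void Poke() {               // note: the namespace does not add an indent level
  using std::string;        // using-declaration inside a function: allowed
  string label = "poke";
  ++call_count;
  (void)label;
}

}  // namespace myproject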
It really depends upon whether you expect there to be any conflicts.
Two scenarios;
1) If you are creating code that may be used by others (e.g. libraries), then there could be namespace clashes, so using your own namespace is a good idea.
2) If you are using third-party libraries their code may not be namespaced and could conflict with yours.
I would also say that if you expect your code to be sizable and cover many different areas (math/physics/rendering) then using namespaces does make the code easier to grok, particularly for types that are not obviously classified.
We had problems wrapping code in managed C++ that uses our common libraries here.
Some classes had the same names as classes in the .NET System namespace (e.g. System.Console).
We had to do some ugly macro patches to workaround these problems.
Using namespaces at the beginning would have prevented this.
You only really need namespaces if there's a risk of name conflicts - some globally visible function, variable, or class defined more than once. Otherwise you'll do just fine with no namespace - just name your classes so that they don't duplicate runtime library classes, and make global functions/variables static members of some reasonable classes.
I'd say that it's a good idea, if for no other reason than to keep your stuff from getting stepped on when mixed with other third-party code.
I try to go even farther than that by putting as many variables and functions as I can into classes. But that's sometimes harder to do than it should be (compared to other OO languages).
"but this practice appears to be uncommon in the C++ world"
Really? All the code I see seems to be wrapped in a namespace.
I use the same type of convention I see in Java (except I drop the com).
In Java
package com.<Company>.<Product>.<Package>;
In C++
namespace <Company>
{
    namespace <Product>
    {
        namespace <Package>
        {
        }
    }
}
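Since C++17, the same nesting can be written as a single declaration, which is worth knowing about:

namespace <Company>::<Product>::<Package>
{
}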