A macro for long and short function names in C++

I am currently working on a general-purpose C++ library.
I like using full-word function names, and my project has a consistent naming system: functions (or methods) start with a verb, unless they return bool, in which case they start with is_.
The problem is that such long names can be inconvenient for some programmers. Consider this function:
#include "something.h"
int calculate_geometric_mean(int* values)
{
//insert code here
}
I find such names formal and readable, so that is how I name my functions.
However, I designed a simple macro system to let the user switch to shorter names.
#define SHORT_NAMES
#include "something.h"

#ifdef SHORT_NAMES
int calc_geometric_mean(int* values)
#else
int calculate_geometric_mean(int* values)
#endif
{
    // some code
}
Is this wiser than providing aliases (my worry being that each alias of a function takes up memory), or is this solution pure evil?

FWIW, I don't think this dual-naming system adds a lot of value. It does, however, have the potential to cause a lot of confusion (to put it mildly).
In any case, if you are convinced it is a great idea, I would implement it through inline functions rather than macros.
// something.h
int calculate_geometric_mean(int* values); // defined in the .cpp file

inline int calc_geo_mean(int* values) {
    return calculate_geometric_mean(values);
}
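A related option, not from the original answer: with C++17, the short name can also be bound as an inline function-pointer constant, so no second function body exists at all (a minimal sketch; calc_geo_mean is the same illustrative name as above):
// something.h (sketch, assumes C++17 for inline variables)
int calculate_geometric_mean(int* values); // defined in the .cpp file

// the short name is merely another handle to the same function
inline constexpr auto calc_geo_mean = &calculate_geometric_mean;
Calls like calc_geo_mean(values) go through the pointer, which the optimizer will typically fold away.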

What symbols will be exported to the object file/library? What if you attempt to use the other version? Will you distribute two binaries with their own symbols?
So - no, bad idea.

Usually, the purpose behind a naming system is to aid the readability and understanding of the code.
Now, you effectively have 2 systems, each of which has a rationale. You're already forcing the reader/maintainer to keep two approaches to naming in mind, which dilutes the end goal of readability. Never mind the ugly #defines that end up polluting your code base.
I'd say choose one system and stick to it, because consistency is the key. I wouldn't say this solution is pure evil per se - I would say that this is not a solution to begin with.

Related

C++ #define or typedef and functions?

Although #define and typedef (and functions) can be used to achieve the same results, I was wondering whether one of these options is more elegant or faster than the other. Is there any reason why I should prefer typedefs and functions over #define?
#include <cstdint>
#include <cmath>

#define unt uint32_t
#define msin(x) std::sin(x)

namespace LL {
    typedef uint32_t unt;
    float sin(float x) {
        return std::sin(x);
    }
} // namespace LL
I was wondering if one of these options is ... faster
Neither the typedef nor the macro is faster as far as runtime is concerned. In theory one may be faster to compile, but the difference is probably insignificant.
The function is unlikely to be any slower, as long as it is expanded inline.
I was wondering if one of these options is more elegant
The typedef and the function are more elegant. Always use them instead of macros. They are regular identifiers and follow namespace rules; macros are not affected by namespaces.
Function-like macros are especially problematic: their arguments do not behave the way beginners expect them to. Furthermore, error messages will be much more understandable with functions.
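To illustrate the argument problem, here is a minimal sketch (the names are mine, not from the answer):
#include <iostream>

#define SQUARE_MACRO(x) x * x // classic function-like macro pitfall

inline int square_fn(int x) { return x * x; }

int main()
{
    std::cout << SQUARE_MACRO(1 + 2) << '\n'; // expands to 1 + 2 * 1 + 2, prints 5
    std::cout << square_fn(1 + 2) << '\n';    // prints 9, as a beginner would expect
}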
But also consider whether a typedef or a wrapper function is necessary in the first place. It can often be counterproductive to obfuscate the underlying type or function.
Macros are still useful for: header guards (if you dislike #pragma once), repetitive declarations (some may dislike this use case), and cross-system porting (for example, feature detection and system-specific attributes).
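A short sketch of those legitimate uses (the MYLIB_* names are illustrative, not a real library's API):
// mylib_config.h
#ifndef MYLIB_CONFIG_H // header guard
#define MYLIB_CONFIG_H

// cross-system porting: hide a system-specific attribute behind one name
#if defined(_WIN32)
#  define MYLIB_EXPORT __declspec(dllexport)
#else
#  define MYLIB_EXPORT __attribute__((visibility("default")))
#endif

#endif // MYLIB_CONFIG_H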

Is it practical to use Header files without a partner Class/Cpp file in C++

I've recently picked up C++ as part of my course, and I'm trying to understand in more depth the partnership between headers and classes. Every example or tutorial I've looked up on header files uses a class file with a constructor, followed by methods. However, I'm wondering if it's fine to use header files just to hold a group of related functions, without the need to make an object of a class every time you want to use them.
//main file
#include <iostream>
#include "Example.h"
#include "Example2.h"

int main()
{
    // Example 1
    Example a;   // I have to create an object of the class first
    a.square(4); // Then I can call the function

    // Example 2
    square(4);   // I can call the function without the need of a constructor

    std::cin.get();
}
In the first example I create an object and then call the function, using the two files 'Example.h' and 'Example.cpp':
//Example1 cpp
#include <iostream>
#include "Example.h"

void Example::square(int i)
{
    i *= i;
    std::cout << i << std::endl;
}
//Example1 header
class Example
{
public:
    void square(int i);
};
In Example 2, I call the function directly from the file 'Example2.h' below:
//Example2 header
#include <iostream>

void square(int i)
{
    i *= i;
    std::cout << i;
}
Ultimately I guess what I'm asking is whether it's practical to use just a header file to hold a group of related functions without creating a related class file. And if the answer is no, what's the reason behind that? Either way I'm sure I've overlooked something, but as ever I appreciate any kind of insight from you guys on this!
Of course, it's just fine to have only headers (as long as you respect the One Definition Rule, as already mentioned).
You can just as well write C++ sources without any header files.
Strictly speaking, headers are nothing more than pieces of source code stored in files, which might be #included (i.e. pasted) into multiple C++ source files (i.e. translation units). Remembering this basic fact has sometimes been quite helpful for me.
I made the following contrived counter-example:
main.cc:
#include <iostream>

// define float
float aFloat = 123.0;
// make it extern
extern float aFloat;

/* This should be included from a header,
 * but instead I bypass the pre-processor
 * and simply do it by myself.
 */
extern void printADouble();

int main()
{
    std::cout << "printADouble(): ";
    printADouble();
    std::cout << "\n"
        "Surprised? :-)\n";
    return 0;
}
printADouble.cc:
#include <iostream>

/* This should be included from a header,
 * but instead I bypass the pre-processor
 * and simply do it by myself.
 *
 * This is intentionally of the wrong type
 * (to show how it can go wrong).
 */
// use extern aFloat
extern double aFloat;
// make it extern
extern void printADouble();

void printADouble()
{
    std::cout << aFloat;
}
Hopefully, you have noticed that I declared
extern float aFloat in main.cc
extern double aFloat in printADouble.cc
which is a disaster.
Problem when compiling main.cc? No. The translation unit is consistent syntactically and semantically (for the compiler).
Problem when compiling printADouble.cc? No. The translation unit is consistent syntactically and semantically (for the compiler).
Problem when linking this mess together? No. The linker can resolve every needed symbol (it matches names only; data symbols do not encode their types).
Output:
printADouble(): 5.55042e-315
Surprised? :-)
as expected (assuming that, like me, you expected nothing sensible).
Live Demo on wandbox
printADouble() accessed the defined float variable (4 bytes) as a double variable (8 bytes). This is undefined behavior and goes wrong on multiple levels.
So, using headers doesn't enforce modular programming in C++; it merely enables (some kind of) it. (I didn't recognize the difference until I once had to use a C compiler which did not (yet) have a pre-processor. The issue sketched above hit me very hard then, but it was really enlightening, too.)
IMHO, header files are a pragmatic replacement for an essential feature of modular programming (i.e. the explicit definition of interfaces and the separation of interfaces and implementations as a language feature). This seems to have annoyed other people as well. Have a look at A Few Words on C++ Modules to see what I mean.
C++ has a One Definition Rule (ODR). This rule states that functions and objects should be defined only once. Here's the problem: headers are often included more than once. Your square(int) function might therefore be defined twice.
The ODR is not an absolute rule. If you declare square as
//Example2 header
inline void square(int i)
// ^^^
{
    i *= i;
    std::cout << i;
}
then the compiler will inform the linker that there are multiple definitions possible. It's your job then to make sure all inline definitions are identical, so don't redefine square(int) elsewhere.
Templates and class definitions are exempt; they can appear in headers.
C++ is a multi-paradigm programming language; it can be (at least):
procedural (driven by condition and loops)
functional (driven by recursion and specialization)
object oriented
declarative (providing compile-time arithmetic)
See a few more details in this quora answer.
The object-oriented paradigm (classes) is only one of the many you can leverage when programming in C++.
You can mix them all, or just stick to one or a few, depending on what's the best approach for the problem you have to solve with your software.
So, to answer your question:
Yes, you can group a bunch of (preferably inter-related) functions in the same header file. This is more common in the "old" C programming language, or in more strictly procedural languages.
That said, as in MSalters' answer, just be conscious of the C++ One Definition Rule (ODR). Use the inline keyword if you put the definition of a function (its body) in a header, and not only its declaration (templates exempted).
See this SO answer for description of what "declaration" and "definition" are.
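In short, the distinction looks like this (a minimal sketch):
int f(int);                    // declaration: name and type only
int f(int x) { return x + 1; } // definition: provides the body

extern int g; // declaration of a variable
int g = 0;    // definition (allocates storage)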
Additional note
To reinforce the answer, and extend it to other programming paradigms in C++:
in recent years there has been a trend of putting a whole library (functions and/or classes) in a single header file.
This can commonly and openly be seen in open source projects; just go to GitHub or GitLab and search for "header-only".
The common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.
There is occasionally some merit to putting code in the header: it can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times, since all the code has to be processed by the compiler every time it is included.
Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is in the headers.
One exception is templates. Many newer "modern" libraries such as Boost make heavy use of templates and are often header-only. However, this should only be done when dealing with templates, as it is essentially the only way to ship them.
Some downsides of writing header-only code
If you search around, you will see quite a lot of people trying to find ways to reduce compile times when dealing with Boost. For example: How to reduce compilation times with Boost Asio, which describes a 14s compile of a single 1K file with Boost included. 14s may not seem to be "exploding", but it is certainly a lot longer than typical, and it can add up quite quickly in a large project. Header-only libraries do affect compile times in a quite measurable way. We just tolerate it because Boost is so useful.
Additionally, there are many things which cannot be done in headers only (even Boost has libraries you need to link to for certain parts, such as threads, filesystem, etc.). A primary example is that you cannot have simple global objects in header-only libs (unless you resort to the abomination that is a singleton), as you will run into multiple-definition errors. NOTE: C++17's inline variables will make this particular example doable in the future.
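For the record, here is what that looks like once C++17 inline variables are available (a sketch; the variable names are made up):
// config.h - header-only globals, legal since C++17
#include <string>

inline std::string g_app_name = "demo"; // one shared definition across all TUs
inline int g_max_retries = 3;           // no multiple-definition error at link time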
To be more specific about Boost: Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will force you to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes, you can save hours of recompiling when added up across a day.
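A small sketch of the forward-declaration point (the classes are hypothetical):
// widget.h - no #include "engine.h" needed here
class Engine; // forward declaration suffices for pointers and references

class Widget
{
public:
    explicit Widget(Engine& e);
private:
    Engine* engine_; // the full definition is only needed in widget.cpp
};
Touching engine.h now forces a recompile of widget.cpp, but not of every file that merely includes widget.h.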

C++ One Header Multiple Sources

I have a large class Foo [1]:
class Foo {
public:
    void apples1();
    void apples2();
    void apples3();

    void oranges1();
    void oranges2();
    void oranges3();
};
Splitting the class is not an option [2], but the foo.cpp file has grown rather large. Are there any major design flaws to keeping the definition of the class in foo.h and splitting the implementation of the functions into foo_apples.cpp and foo_oranges.cpp?
The goal here is purely readability and organization for myself and other developers working on the system that includes this class.
1"Large" means some 4000 lines, not machine-generated.
2Why? Well, apples and oranges are actually categories of algorithms that operate on graphs but use each other quite extensively. They can be separated but due to the research nature of the work, I'm constantly rewiring the way each algorithm works which I found for me does not (in the early stage) jive well with the classic OOP principles.
Are there any major design flaws to keeping the definition of the class in foo.h and splitting the implementation of the functions into foo_apples.cpp and foo_oranges.cpp.
To pick nits, that should read: are there any major design flaws to keeping the declaration of the class in foo.h and splitting the definitions of the methods into foo_apples.cpp and foo_oranges.cpp?
1) apples and oranges may use the same private helpers. An example of this would be implementation details placed in an anonymous namespace.
In that case, one requirement would be to ensure your static data is not multiply defined. Inline functions are not really a problem if they do not use static data (although their definitions may be multiply exported).
To overcome those problems, you may then be inclined to move that state into the class itself, which could introduce dependencies by exposing data/types that would otherwise have been hidden. In either event, it can increase complexity or force you to write your program differently.
2) It increases the complexity of static initialization.
3) It increases compile times.
The alternative I use (which, btw, many devs detest) in really large programs is to create a collection of exported local headers. These headers are visible only to the package/library. In your example, it can be illustrated by creating the following headers: Foo.static.exported.hpp (if needed) + Foo.private.exported.hpp (if needed) + Foo.apples.exported.hpp + Foo.oranges.exported.hpp.
Then you would write Foo.cpp like so:
#include "DEPENDENCIES.hpp"
#include "Foo.static.exported.hpp" /* if needed */
#include "Foo.private.exported.hpp" /* if needed */
#include "Foo.apples.exported.hpp"
#include "Foo.oranges.exported.hpp"
/* no definitions here */
You can easily adjust how those files are divided based on your needs. If you write your programs using C++ conventions, there are rarely collisions across huge TUs. If you write like a C programmer (lots of globals, preprocessor abuse, low warning levels and free declarations), then this approach will expose a lot of issues you probably won't care to correct.
From a technical standpoint, there is no penalty to doing this at all, but I have never seen it done in practice. It is simply an issue of style, and in that spirit, if it helps you to better read the class, then you would be doing yourself a disservice by not using multiple source files.
Edit: Adding to that though, are you physically scrolling through your source, like, with your middle mouse wheel? As someone else already mentioned, IDEs almost universally let you right-click on a function declaration and jump to the definition. And even if that's not the case for your IDE, and you use Notepad or something, it will at least have Ctrl+F. I would be lost without find and replace.
Yes, you can define the class in one header file and split the function implementations across multiple source files. It is not the usual practice, but it will work and there is no overhead.
If the aim is purely readability, then perhaps it is not a good idea, because it is uncommon to have a class's function definitions spread across multiple source files, and it might confuse someone.
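For reference, the layout under discussion looks roughly like this (a sketch based on the asker's Foo):
// foo.h
#ifndef FOO_H
#define FOO_H
class Foo
{
public:
    void apples1();
    void oranges1();
};
#endif

// foo_apples.cpp
#include "foo.h"
void Foo::apples1() { /* apples algorithms */ }

// foo_oranges.cpp
#include "foo.h"
void Foo::oranges1() { /* oranges algorithms */ }
The linker stitches the member definitions together exactly as it would if they all lived in a single foo.cpp.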
Actually, I don't see any reason to split the implementation, because other developers should work with the interface, not the implementation.
Also, any normal IDE provides an easy way to jump from a function's declaration to its definition, so there is no need to search for function implementations manually.

How to handle optimizations in code

I am currently writing various optimizations for some code. Each of these optimizations has a big impact on the code's efficiency (hopefully) but also on its source. However, I want to keep the ability to enable and disable any of them for benchmarking purposes.
I traditionally use the #ifdef OPTIM_X_ENABLE/#else/#endif method, but the code quickly becomes too hard to maintain.
One can also create SCM branches for each optimization. That's much better for code readability, until you want to enable or disable more than a single optimization.
Is there any other, hopefully better, way to work with optimizations?
EDIT:
Some optimizations cannot work simultaneously. I may need to disable an old optimization to benchmark a new one and see which I should keep.
I would create a branch for an optimization, benchmark it until you know it has a significant improvement, and then simply merge it back to trunk. I wouldn't bother with the #ifdefs once it's back on trunk; why would you need to disable it once you know it's good? You always have the repository history if you want to be able to rollback a particular change.
There are so many ways of choosing which part of your code will execute. Conditional inclusion using the preprocessor is usually the hardest to maintain, in my experience, so try to minimize it if you can. You can separate the functionality (optimized, unoptimized) into different functions, then call one or the other depending on a runtime flag, as in the sketch below. Or you can create an inheritance hierarchy and use virtual dispatch. Of course it depends on your particular situation. Perhaps if you could describe it in more detail you would get better answers.
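For instance, a runtime flag can select between two interchangeable implementations through a function pointer (a sketch; the function names are hypothetical):
#include <iostream>
#include <numeric>
#include <vector>

int sum_naive(const std::vector<int>& v)
{
    int s = 0;
    for (int x : v) s += x;
    return s;
}

int sum_optimized(const std::vector<int>& v)
{
    return std::accumulate(v.begin(), v.end(), 0);
}

int main(int argc, char**)
{
    // pick the implementation at runtime, e.g. via a command-line flag
    auto sum = (argc > 1) ? sum_optimized : sum_naive;
    std::vector<int> v{1, 2, 3};
    std::cout << sum(v) << '\n';
}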
However, here's a simple method that might work for you: create two sets of functions (or classes, whichever paradigm you are using) and separate them into different namespaces, one for optimized code and one for readable code. Then simply choose which set to use with a conditional using-directive. Something like this:
#include <iostream>
#include "optimized.h"
#include "readable.h"

#define USE_OPTIMIZED

#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif

int main()
{
    f();
}
Then in optimized.h:
#include <iostream>

namespace optimized
{
    void f() { std::cout << "optimized selected" << std::endl; }
}
and in readable.h:
#include <iostream>

namespace readable
{
    void f() { std::cout << "readable selected" << std::endl; }
}
This method does unfortunately need to use the preprocessor, but the usage is minimal. Of course you can improve this by introducing a wrapper header:
wrapper.h:
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
Now simply include this header and further minimize the potential preprocessor usage. Btw, the usual separation of header/cpp should still be done.
Good luck!
I would work at the class level (or the file level for C) and embed all the variant versions in the same working software (no #ifdef), choosing one implementation or the other at runtime through some configuration file or command-line option.
It should be quite easy, as optimizations should not change anything at the internal API level.
Another way, if you're using C++, is to instantiate templates to avoid duplicating high-level code, or to select a branch at run-time (this is often acceptable; a few switches here and there are usually not such a big issue).
In the end, the various optimized backends could eventually be turned into libraries.
Unit tests should work, unmodified, with every variant of the implementation.
My rationale is that embedding every variant mostly changes software size, and that's very rarely a problem. This approach also has other benefits: you can easily cope with a changing environment. An optimization for some OS or some hardware may not be one on another. In many cases it will even be easy to choose the best version at runtime.
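A sketch of the template idea, so the high-level code is written only once (the backend names are made up):
struct NaiveBackend { static int work(int x) { return x + x; } };
struct FastBackend  { static int work(int x) { return x << 1; } };

template <class Backend>
int run_pipeline(int x)
{
    // surrounding logic written once, backend chosen at compile time
    return Backend::work(x) + 1;
}

// usage: run_pipeline<FastBackend>(21) or run_pipeline<NaiveBackend>(21)
Each instantiation is as fast as a hand-written variant, and the choice can still sit behind a single runtime branch if needed.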
You may have two (or three, or more) versions of each function you optimise, with names like:
function
function_optimized
which take identical arguments and return the same results.
Then you may #define a selector in some header like:
#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized
#else
#define OPT(f) f
#endif
Then call each function that has an optimized variant as OPT(function)(argument, argument...). This method is not very aesthetic, but it does the job.
You may go further and re-#define the names of all your optimized functions:
#if OPTIM_X_ENABLE
#define foo foo_optimized
#define bar bar_optimized
...
#endif
And leave the caller code as is; the preprocessor does the function substitution for you. I like this one most, because it works transparently while being per-function (and also per-datatype and per-variable) grained, which is enough for most of my cases.
A more exotic method is to make separate .c files for the non-optimized and optimized code and compile only one of them. They may have the same names but different paths, so switching can be done by changing a single option on the command line.
I'm confused. Why don't you just find out where each performance problem is, fix it, and continue? Here's an example.

Does using a C++ namespace increase coupling?

I understand that a C++ library should use a namespace to avoid name collisions, but since I already have to:
#include the correct header (or forward declare the classes I intend to use)
Use those classes by name
Don't these two requirements already convey the same information as a namespace? Using a namespace now introduces a third parameter: the fully qualified name. If the implementation of the library changes, there are now three potential things I need to change. Is this not, by definition, an increase in coupling between the library code and my code?
For example, look at Xerces-C: It defines a pure-virtual interface called Parser within the namespace XERCES_CPP_NAMESPACE. I can make use of the Parser interface in my code by including the appropriate header file and then either importing the namespace using namespace XERCES_CPP_NAMESPACE or prefacing declarations/definitions with XERCES_CPP_NAMESPACE::.
As the code evolves, perhaps there is a need to drop Xerces in favour of a different parser. I'm partially "protected" from the change in the library implementation by the pure-virtual interface (even more so if I use a factory to construct my Parser), but as soon as I switch from Xerces to something else, I need to comb through my code and change all my using namespace XERCES_CPP_NAMESPACE and XERCES_CPP_NAMESPACE::Parser code.
I ran into this recently when I refactored an existing C++ project to split-out some existing useful functionality into a library:
foo.h
class Useful; // Forward declaration

class Foo
{
public:
    Foo(const Useful& u);
    ...snip...
};
foo.cpp
#include "foo.h"
#include "useful.h" // Useful Library
Foo::Foo(const Useful& u)
{
... snip ...
}
Largely out of ignorance (and partially out of laziness) at the time, all of the functionality of useful.lib was placed in the global namespace.
As the contents of useful.lib grew (and more clients started to use the functionality), it was decided to move all the code from useful.lib into its own namespace, called "useful".
The client .cpp files were easy to fix: just add a using namespace useful;
foo.cpp
#include "foo.h"
#include "useful.h" // Useful Library
using namespace useful;
Foo::Foo(const Useful& u)
{
... snip ...
}
But the .h files were really labour intensive. Instead of polluting the global namespace by putting using namespace useful; in the header files, I wrapped the existing forward declarations in the namespace:
foo.h
namespace useful {
    class Useful; // Forward declaration
}

class Foo
{
public:
    Foo(const useful::Useful& u);
    ...snip...
};
There were dozens (and dozens) of files and this ended up being a major pain! It should not have been that difficult. Clearly I did something wrong with either the design and/or implementation.
Although I know that library code should be in its own namespace, would it have been advantageous for the library code to remain in the global namespace, and instead try to manage the #includes?
It sounds to me like your problem is due primarily to how you're (ab)using namespaces, not due to the namespaces themselves.
It sounds like you're throwing a lot of minimally related "stuff" into one namespace, mostly (when you get down to it) because they happen to have been developed by the same person. At least IMO, a namespace should reflect logical organization of the code, not just the accident that a bunch of utilities happened to be written by the same person.
A namespace name should usually be fairly long and descriptive to prevent any more than the most remote possibility of a collision. For example, I usually include my name, date written, and a short description of the functionality of the namespace.
Most client code doesn't need to (and often shouldn't) use the real name of the namespace directly. Instead, it should define a namespace alias, and only the alias name should be used in most code.
Putting points two and three together, we can end up with code something like this:
#include "jdate.h"
namespace dt = Jerry_Coffin_Julian_Date_Dec_21_1999;
int main() {
dt::Date date;
std::cout << "Please enter a date: " << std::flush;
std::cin>>date;
dt::Julian jdate(date);
std::cout << date << " is "
<< jdate << " days after "
<< dt::Julian::base_date()
<< std::endl;
return 0;
}
This removes (or at least drastically reduces) coupling between the client code and a particular implementation of the date/time classes. For example, if I wanted to re-implement the same date/time classes, I could put them in a different namespace, and switch between one and the other just by changing the alias and re-compiling.
In fact, I've used this at times as a kind of compile-time polymorphism mechanism. For one example, I've written a couple versions of a small "display" class, one that displays output in a Windows list-box, and another that displays output via iostreams. The code then uses an alias something like:
#ifdef WINDOWED
namespace display = Windowed_Display;
#else
namespace display = Console_Display;
#endif
The rest of the code just uses display::whatever, so as long as both namespaces implement the entire interface, I can use either one, without changing the rest of the code at all, and without any runtime overhead from using a pointer/reference to a base class with virtual functions for the implementations.
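Spelled out, the pattern looks something like this (a sketch; the show function is made up):
#include <iostream>
#include <string>

namespace Console_Display
{
    inline void show(const std::string& s) { std::cout << s << '\n'; }
}

namespace Windowed_Display
{
    inline void show(const std::string& s) { (void)s; /* e.g. append to a list box */ }
}

#ifdef WINDOWED
namespace display = Windowed_Display;
#else
namespace display = Console_Display;
#endif

int main() { display::show("hello"); }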
The namespace has nothing to do with coupling. The same coupling exists whether you call it useful::UsefulClass or just UsefulClass. The fact that you needed to do all that refactoring work only tells you how much your code depends on your library.
To ease the forwarding you could have written a forward header (there are a couple in the standard library; you can surely find them in other libraries too), say usefulfwd.h, that only forward-declares the library interface (or the implementing classes, or whatever you need). But this has nothing to do with coupling.
Still, coupling and namespaces are simply unrelated. A rose by any other name would smell as sweet, and your classes are just as coupled in any other namespace.
(a) interfaces/classes/functions from the library
Not any more than you already were. Using namespaced library components protects you from namespace pollution.
(b) implementation details inferred by the namespace?
Why? All you should be including is the header useful.h. The implementation should be hidden (residing in useful.cpp, and probably shipped in library form).
You can selectively pull in only those classes that you need from useful.h with using useful::Useful declarations.
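A minimal sketch of that selective-import style (process is a hypothetical client function):
#include "useful.h"

using useful::Useful; // pull in just this one name, not the whole namespace

void process(const Useful& u); // unqualified use is now fine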
I'd like to expand on the second paragraph of David Rodríguez - dribeas' answer (upvoted):
To ease the forwarding you could have written a forward header (there are a couple in the standard library; you can surely find them in other libraries too), say usefulfwd.h, that only forward-declares the library interface (or the implementing classes, or whatever you need). But this has nothing to do with coupling.
I think this points to the core of your problem. Namespaces are a red herring here; you were bitten by underestimating the need to manage syntactic dependencies.
I can understand your "laziness": it is not right to over-engineer (enterprise HelloWorld.java), but if you keep your code low-profile in the beginning (which is not necessarily wrong) and the code proves successful, that success will drag it above its league. The trick is to sense the right moment to switch to (or to employ from the first moment the need appears) a technique that scratches your itch in a forward-compatible way.
Sprinkling forward declarations over a project is just begging for a second and subsequent rounds of this pain. You don't really need to be a C++ programmer to have read the advice "don't forward-declare standard streams, use <iosfwd> instead" (though it's been a few years since this was most relevant; 1999, the VC6 era, definitely). You can hear a lot of painful shrieks from programmers who didn't heed that advice if you pause a little.
I can understand the urge to keep it low-brow, but you must admit that #include <usefulfwd.h> is no more pain than class Useful, and it scales. Just this simple delegation would have spared you N-1 changes from class Useful to class useful::Useful.
Of course, it wouldn't help you with all the uses in the client code. Easy help: if you use a library in a large application, you should wrap the forward headers supplied with the library in application-specific headers. The importance of this grows with the scope of the dependency and the volatility of the library.
src/libuseful/usefulfwd.h
#ifndef LIBUSEFUL_USEFULFWD_H
#define LIBUSEFUL_USEFULFWD_H

namespace useful {
    class Useful;
} // namespace useful

#endif
src/myapp/myapp-usefulfwd.h
#ifndef MYAPP_USEFULFWD_H
#define MYAPP_USEFULFWD_H

#include <usefulfwd.h>

using useful::Useful;

#endif
Basically, it's a matter of keeping the code DRY. You might not like catchy TLAs, but this one describes a truly core programming principle.
If you have multiple implementations of your "useful" library, then isn't it equally probable (if not under your control) that they would use the same namespace, whether it be the global namespace or the useful namespace?
Put another way, using a named namespace versus the global namespace has nothing to do with how "coupled" you are to a library/implementation.
Any coherent library evolution strategy should maintain the same namespace for the API. The implementation could utilize different namespaces hidden from you, and these could change in different implementations. Not sure if this is what you mean by "implementation details inferred by the namespace."
No, you are not increasing the coupling. As others have said, I don't see how the use of a namespace leaks the implementation out.
The consumer can choose to do:
using namespace useful;
using useful::Foo;
useful::Foo* foo = new useful::Foo();
My vote is always for the last one: it's the least polluting.
The first one should be strongly discouraged (by firing squad).
Well, the truth is there's no way to easily avoid entanglement of code in C++. Using the global namespace is the worst idea, though, because then you have no way to pick between implementations: you wind up with a bare Object and no way to say which library's Object you mean. This works OK in-house, because you can edit the source as you go, but if someone ships code like that to a customer, they should not expect them to stay a customer for long.
Once you write a using-directive you may as well be in the global namespace, though it can be convenient in .cpp files. So I'd say you should have everything in a namespace, but in-house it should all be the same namespace. That way other people can still use your code without a disaster.