How can I minimize both boilerplate and coupling in object construction? - c++

I have a C++20 program where the configuration is passed externally via JSON. Following “Clean Architecture”, I would like to transfer the information into a self-defined structure as soon as possible; JSON should only be visible in the “outer ring” and not spread through my whole program. So I want my own Config struct. But I am not sure how to write the constructor in a way that is safe against missing initializations, avoids redundancy, and also keeps the external library out of my core entities.
One way of separation would be to define the structure without a constructor:
struct Config {
    bool flag;
    int number;
};
And then in a different file I can write a factory function that depends on the JSON library.
Config make_config(json const &json_config) {
    return {.flag = json_config["flag"], .number = json_config["number"]};
}
This is fairly safe to write, because one can directly see how the struct field names correspond to the JSON fields, and there isn't much redundancy. But I don't really notice if a field is left uninitialized.
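For illustration (assuming the usual implicit conversions of the JSON library's operator[]): if I forget a field in the designated initializer, the code still compiles and the field is silently value-initialized. Some compilers can warn about this (e.g. via -Wmissing-field-initializers or a similar flag), but I can't rely on that.
Config make_config(json const &json_config) {
    // .number forgotten: compiles anyway, number is value-initialized to 0
    return {.flag = json_config["flag"]};
}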
Another way would be to have an explicit constructor. Clang-tidy would warn me if I forget to initialize a field:
struct Config {
    Config(bool const flag, int const number) : flag(flag), number(number) {}
    bool flag;
    int number;
};
And then the factory would use the constructor:
Config make_config(json const &json_config) {
    return Config(json_config["flag"], json_config["number"]);
}
But now I have to spell out the name of each field five times. And in the factory function the correspondence between arguments and fields is not clearly visible. Sure, the IDE will show parameter hints, but it feels brittle.
A really compact way of writing it would be to have a constructor that takes JSON, like this:
struct Config {
    Config(json const &json_config)
        : flag(json_config["flag"]), number(json_config["number"]) {}
    bool flag;
    int number;
};
That is really short, warns me about uninitialized fields, and the correspondence between fields and JSON keys is directly visible. But I need to include the JSON header in my Config.h file, which I really dislike. It also means that I have to recompile everything that uses the Config class whenever I change the way the configuration is loaded.
Granted, C++ is a language where a lot of boilerplate is needed. In theory I like the second variant the best: it is the most encapsulated, the most separated one. But it is the worst to write and maintain. Given that in realistic code the number of fields is significantly larger, I would sacrifice compilation time for less redundancy and more maintainability.
Is there some alternative way to organize this, or is the most separated variant also the one with the most boilerplate code?

I'd go with the constructor approach, however:
// header, possibly config.h
// only pre-declare!
class json;

struct Config
{
    Config(json const& json_config); // only declare!
    bool flag;
    int number;
};

// now have a separate source file config.cpp:
#include "config.h"
#include <json.h>

Config::Config(json const& json_config)
    : flag(json_config["flag"]), number(json_config["number"])
{ }
A clean approach, and you avoid indirect inclusion of the JSON header. Sure, the constructor is duplicated as declaration and definition, but that's the usual C++ way.
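If the JSON type happens to be nlohmann::json (an assumption on my part; the question never names the library), note that a hand-written class json; forward declaration won't match it, since nlohmann::json is an alias for a class template specialization. The library ships a dedicated forward-declaration header for exactly this purpose, so the same split would look like:
// config.h
#include <nlohmann/json_fwd.hpp>   // declarations only, cheap to include

struct Config
{
    Config(nlohmann::json const& json_config); // only declare!
    bool flag;
    int number;
};

// config.cpp
#include "config.h"
#include <nlohmann/json.hpp>       // full library only here

Config::Config(nlohmann::json const& json_config)
    : flag(json_config["flag"]), number(json_config["number"])
{ }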

Related

Debug-only members

Is there a nice way of including certain members only in the debug build of a program?
I have an indexed data structure of which I use a large number of instances; each carries a status flag that marks when some contents of the data structure have changed but the index hasn't been updated yet.
The status flags are only used to check that all uses of the index call the update functionality after the data has changed. But for performance and storage reasons, since there are lots of instances and the data structure might be changed a lot before update is called, I would like to keep this data only in the debug build.
There are basically two types of operations on these flags:
Setting/Resetting the flag
Asserting that the flag is not set, i.e. that certain parts of the index are still valid.
Are there any nicer ways of achieving this than sprinkling my code with #ifndef NDEBUG statements?
Note: In my special use case, the performance hit might not be that large, but I'm still looking for a general way to approach this, since there are probably much more complex use cases for the same idea.
You can reduce the amount of #ifdef-ing by providing a base class with the debug facilities:
#include <string>

class t_MyDebugHelper
{
#ifdef NDEBUG
public:
    void Set_Something(int value)
    {
        (void) value; // not used
    }
    void Verify_Something()
    {}
#else
private:
    ::std::string m_some_info;
    int m_some_value;
public:
    void Set_Something(int value)
    {
        m_some_value = value;
    }
    void Verify_Something()
    {
        // implementation
    }
#endif
};

class t_MyClass
    : public t_MyDebugHelper
{
public:
    void SomeMethod()
    {
        t_MyDebugHelper::Verify_Something();
        t_MyDebugHelper::Set_Something(42);
        // ...
    }
};
This method does not let you get rid of the #ifdef completely, but it keeps it out of the main code logic. In a release build all the debug helper functions become no-ops, and t_MyDebugHelper won't increase the size of the target class thanks to the empty base class optimization. If the debug helper needs access to t_MyClass methods, CRTP could be applied.
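A minimal sketch of that CRTP variant (names assumed, only the downcast is shown):
template <typename Derived>
class t_MyDebugHelper
{
#ifdef NDEBUG
public:
    void Verify_Something() {}
#else
public:
    void Verify_Something()
    {
        // CRTP: the helper can reach the full object to perform its checks
        Derived const& self = static_cast<Derived const&>(*this);
        (void) self; // ... inspect self's members here ...
    }
#endif
};

class t_MyClass : public t_MyDebugHelper<t_MyClass>
{
public:
    void SomeMethod()
    {
        Verify_Something();
    }
};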

Can Boost Proto adapt structures to a getter setter type API

This seems like a common problem. I've got two massive sets of code that need to be glued together: one that uses simple structs to hold data, the other has APIs that only expose getter/setter methods.
Is it possible to use Boost.Proto to define a mapping that could then be used to auto-generate the code that invokes the getters/setters? Conceptually, the piece that seems most difficult is the synthesis of the function names to call, as that would involve compile-time string concatenation. Other challenges involve mapping enum types from one to the other and custom initialization or conversion code.
Having a drop-in-place Proto-based solution that solves this problem would be a huge benefit to a wide variety of people.
E.g., I have an API with types such as this:
// These classes use getters/setters.
class Wheel
{
    int number_of_lugnuts_;
public:
    void initialize_wheel(bool);
    void set_number_of_lugnuts(int);
    int get_number_of_lugnuts();
};

class Engine
{
public:
    enum Gas_type_t {
        Unleaded,
        Premium
    };
private:
    Gas_type_t gas_type_;
public:
    void initialize_engine(bool);
    void set_gas_type(Gas_type_t);
    Gas_type_t get_gas_type();
};
While I also have millions of lines of code with the same data in simple directly-accessed structs:
// This code has simple data structures.
struct Car
{
    // These POD members are used by a large body of existing code.
    int lugnut_count;
    enum FUEL_TYPES {
        NORMAL_FUEL,
        HI_OCTANE
    };
    FUEL_TYPES fuelType;
};
Now the old-fashioned way is to add a lot of converters:
// The manual way to accomplish this for only the Wheel API.
// This has to be repeated similarly for the Engine API.
// (These are member functions of Car, hence the direct access to lugnut_count.)
void convert_to_wheel(Wheel& w)
{
    w.initialize_wheel(true); // how can initialization be handled?
    w.set_number_of_lugnuts(lugnut_count);
}
void convert_from_wheel(Wheel& w)
{
    lugnut_count = w.get_number_of_lugnuts();
}
But, in the style of Boost.Spirit, I would like to use Proto to create an EDSL allowing me to specify the mapping, and have the compiler generate the repetitive code for me.
I can define enough Proto terminals to get this constructor to compile:
Car()
{
    // So can we define an API mapping like this?
    // Would strings be the only way to accomplish this?
    // This appears structurally similar to how Spirit grammars are defined.
    // This is a very rough attempt because it's unclear if this is possible.
    define_api_mapping<Wheel>
        (initialization_code((_dest_ ->* &Wheel::initialize_wheel)(true)))
        (map_member(lugnut_count) = map_getset("number_of_lugnuts"))
        ;
    define_api_mapping<Engine>
        (initialization_code((_dest_ ->* &Engine::initialize_engine)(true)))
        (map_member(fuelType) = map_getset("gas_type"))
        ;
    define_enum_mapping<FUEL_TYPES>
        (enum_value(NORMAL_FUEL) = enum_value(Engine::Unleaded))
        (enum_value(HI_OCTANE) = enum_value(Engine::Premium))
        ;
}
Conversion could conceptually be a simple function call:
// Declare some objects.
Car c;
Engine e;
Wheel w1, w2;
// Set some values.
c.lugnut_count = 20;
// Convert the old fashioned way.
c.convert_to_wheel(w1);
// Convert the new way.
convert(c, w2);
It's an ugly start, but I'm now baffled by how to mangle the names to generate the calls to the getters and setters.
Is this possible? What would the solution look like?

C++ Best practices for constants

I have a whole bunch of constants that I want access to in different parts of my code, but that I want to have easy access to as a whole:
static const bool doX = true;
static const bool doY = false;
static const int maxNumX = 5;
etc.
So I created a file called "constants.h" and stuck them all in there and #included it in any file that needs to know a constant.
Problem is, this is terrible for compile times, since every time I change a constant, every file that includes constants.h has to be rebuilt. (Also, as I understand it, since they're static, I'm generating a copy of doX/doY/maxNumX in the object code every time I include constants.h in a new .cpp, leading to kilobytes of wasted space in the compiled EXE -- is there any way to verify this?)
So, I want a solution. One that isn't "declare constants only in the files that use them", if possible.
Any suggestions?
The only alternative is to make your constants extern and define them in another .cpp file, but you'll lose potential for optimization, because the compiler won't know their values when compiling each .cpp.
By the way, don't worry about the size increase: for integral types your constants are likely to be inlined directly in the generated machine code.
Finally, that static is not necessary, since const global variables already have internal linkage by default in C++.
You declare them as extern in the header and define them in an implementation file.
That way, when you want to change their value, you modify the implementation file and no full re-compilation is necessary.
The problem in your variant isn't compilation-related, but logic-related: they will not be true globals, since each translation unit gets its own copy of each variable.
EDIT:
The C++-ish way of doing it would actually be to wrap them in a class:
//constants.h
class Constants
{
public:
    static const bool doX;
    static const bool doY;
    static const int maxNumX;
};

//constants.cpp
#include "constants.h"
const bool Constants::doX = true;
const bool Constants::doY = false;
const int Constants::maxNumX = 5;
I think your base assumption is off.
Your other headers are usually organized by keeping together what works together. For example, a class and its related methods or two classes heavily interlinked.
Why group all the constants in a single header? It does not make sense. It's about as bad an idea as a "global.h" header that exists to include every single dependency easily.
In general, the constants are used in a particular context. For example, an enum used as a flag for a particular function:
class File {
public:
    enum class Mode {
        Read,
        Write,
        Append
    };

    File(std::string const& filename, Mode mode);
    // ...
};
In this case, it is only natural that those constants live in the same header as the class they are bound to (and even within the class).
The other category of constants are those that just permeate the whole application. For example:
enum class Direction {
    Up,
    Down,
    Right,
    Left,
    Forward,
    Backward
};
... in a game where you want to express objects' movement relative to the direction they are facing.
In this case, creating one header file for this specific set of constants is fine.
And if you really are worried about grouping those files together:
constants/
    Direction.hpp
    Sandwich.hpp
    State.hpp
And you will neatly sidestep the issue of recompiling the whole application when you add a constant... and if you really do need to, do it: you pay the cost only once, which is better than a wrong-headed design you'll have to live with for the rest of the project.
What is the problem with this usage?
Do not declare a static variable in a header file; it does not do what you think it does.
When you declare a static variable in a header file, a copy of that variable gets created in each translation unit (TU) where you include that header, so each TU sees a different variable. This is the opposite of your expectation of having a global.
Suggested Solution:
You should declare them as extern in a header file and define them in exactly one .cpp file, while including the header in every .cpp file where you want to access them.
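A minimal sketch of that layout, reusing the names from the question:
// constants.h
extern const bool doX;
extern const bool doY;
extern const int  maxNumX;

// constants.cpp
#include "constants.h"
const bool doX = true;   // keeps external linkage because the extern
const bool doY = false;  // declarations from the header were seen first
const int  maxNumX = 5;

// some_other_file.cpp
#include "constants.h"
// ... use doX, doY, maxNumX; changing a value recompiles only constants.cpp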
Good Read:
How should I use extern?
Another approach which is best for compile times (but has some minor run-time cost) is to make the constants accessible via static methods in a class.
//constants.h
class Constants
{
public:
    static bool doX();
    static bool doY();
    static int maxNumX();
};
//constants.cpp
bool Constants::doX() { return true; }
bool Constants::doY() { return false; }
int Constants::maxNumX() { return 42; }
The advantage of this approach is that you only recompile everything if you add/remove/change the declaration of a method in the header, while changing the value returned by any method requires only compiling constants.cpp (and linking, of course).
As with most things, this may or may not be the best fit in your particular case, but it is another option to consider.
The straightforward way is to create plain (non-static) const symbols:
const bool doX = true;
const bool doY = false;
const int maxNumX = 5;
These values will be replaced by the compiler with the given values. That's the most efficient way. It also, of course, leads to recompilation as soon as you modify or add values, but in most cases this should not cause practical problems.
Of course there are different solutions:
Using static consts (or static const class members), the values can be modified without recompiling all referring files - but then the values live in a const data segment and are read at runtime rather than being resolved at compile time. If runtime performance is no issue (as is true for 90% of typical code), that's OK.
The idiomatic C++ way is to use enum classes rather than global const identifiers (as noted by Mathieu). This is more type-safe and otherwise behaves much like const: the symbols are resolved at compile time.

Is there any way to prepare a struct for future additions?

I have the following struct which will be used to hold plugin information. I am very sure this will change (most probably be added to) over time. Is there anything better to do here than what I have done, assuming that this file is going to be fixed?
struct PluginInfo
{
public:
    std::string s_Author;
    std::string s_Process;
    std::string s_ReleaseDate;
    //And so on...

    struct PluginVersion
    {
    public:
        std::string s_MajorVersion;
        std::string s_MinorVersion;
        //And so on...
    };
    PluginVersion o_Version;

    //For things we aren't prepared for yet.
    void* p_Future;
};
Further, is there any precautions I should take when building shared objects for this system. My hunch is I'll run into lots of library incompatibilities. Please help. Thanks
What about this, or am I thinking too simple?
struct PluginInfo2 : public PluginInfo
{
public:
    std::string s_License;
};
In your application you are probably passing around only pointers to PluginInfo, so version 2 is compatible with version 1. When you need access to the version-2 members, you can test the version with either dynamic_cast<PluginInfo2 *> or an explicit pluginAPIVersion member.
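A minimal sketch of the dynamic_cast check (fields trimmed for brevity; note this assumes PluginInfo gains at least one virtual function, e.g. a virtual destructor, because dynamic_cast requires a polymorphic base - the struct as posted has none):
struct PluginInfo
{
    virtual ~PluginInfo() = default;   // makes the type polymorphic
    std::string s_Author;
    // ...
};

struct PluginInfo2 : public PluginInfo
{
    std::string s_License;
};

void usePlugin(PluginInfo* info)
{
    if (auto* v2 = dynamic_cast<PluginInfo2*>(info))
    {
        // version-2 fields such as v2->s_License are available here
    }
    else
    {
        // treat it as a version-1 plugin
    }
}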
Either your plugin is compiled with the same version of the C++ compiler and standard library (otherwise its std::string implementation may not be compatible, and all your string fields will break), in which case you have to recompile the plugins anyway, and adding fields to the struct won't matter.
Or you want binary compatibility with previous plugins, in which case stick to plain data and fixed-size char arrays (or provide an API that allocates the memory for the strings based on a size or a passed-in const char*). In that case it's not unheard of to have a few unused fields in the struct and to rename them to something useful when the need arises. It's also common to have a field in the struct saying which version it represents.
But it's very rare to expect binary compatibility and make use of std::string. You'll never be able to upgrade or change your compiler.
As was said by someone else, for binary compatibility you will most likely restrict yourself to a C API.
The Windows API in many places maintains binary compatibility by putting a size member into the struct:
struct PluginInfo
{
    std::size_t size; // should be sizeof(PluginInfo)
    const char* s_Author;
    const char* s_Process;
    const char* s_ReleaseDate;
    //And so on...

    struct PluginVersion
    {
        const char* s_MajorVersion;
        const char* s_MinorVersion;
        //And so on...
    };
    PluginVersion o_Version;
};
When you create such a beast, you need to set the size member accordingly:
PluginInfo pluginInfo;
pluginInfo.size = sizeof(pluginInfo);
// set other members
When you compile your code against a newer version of the API, where the struct has additional members, its size changes, and that is reflected in its size member. The API functions, when passed such a struct, will presumably first read its size member and branch into different ways of handling the struct, depending on its size.
Of course, this assumes that evolution is linear and new data is always only added at the end of the struct. That is, you will never have different versions of such a type that have the same size.
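For instance, a host-side function might branch roughly like this (the function name is made up for the sketch):
void host_register_plugin(const PluginInfo* info)
{
    if (info->size >= sizeof(PluginInfo))
    {
        // caller was built against this (or a newer) API version:
        // every member this host knows about is present and safe to read
    }
    else
    {
        // older caller: only touch the members that fit within info->size
    }
}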
However, using such a beast is also a nice way of ensuring that users introduce errors into their code. When they re-compile their code against a new API, sizeof(pluginInfo) will adapt automatically, but the additional members won't be set automatically. A reasonable amount of safety can be gained by "initializing" the struct the C way:
PluginInfo pluginInfo;
std::memset( &pluginInfo, 0, sizeof(pluginInfo) );
pluginInfo.size = sizeof(pluginInfo);
However, even putting aside the fact that, technically, zeroing memory might not put a reasonable value into each member (for example, there could be architectures where all bits set to zero is not a valid value for floating point types), this is annoying and error-prone because it requires three-step construction.
A way out would be to design a small and inlined C++ wrapper around that C API. Something like:
class CPPPluginInfo : public PluginInfo {
public:
    CPPPluginInfo()
        : PluginInfo() // initializes all values to 0
    {
        size = sizeof(PluginInfo);
    }

    CPPPluginInfo(const char* author /* other data */)
        : PluginInfo() // initializes all values to 0
    {
        size = sizeof(PluginInfo);
        s_Author = author;
        // set other data
    }
};
The class could even take care of storing the strings pointed to by the C struct's members in a buffer, so that users of the class wouldn't even have to worry about that.
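A rough sketch of that last idea (hypothetical derived class; copying is disabled here because s_Author would otherwise dangle):
class OwningPluginInfo : public PluginInfo
{
public:
    explicit OwningPluginInfo(std::string author)
        : PluginInfo(), author_(std::move(author))
    {
        size = sizeof(PluginInfo);
        s_Author = author_.c_str(); // the C struct sees a plain const char*
    }
    OwningPluginInfo(OwningPluginInfo const&) = delete;
    OwningPluginInfo& operator=(OwningPluginInfo const&) = delete;
private:
    std::string author_; // owns the characters s_Author points to
};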
Edit: Since it seems this isn't as clear-cut as I thought it is, here's an example.
Suppose that very same struct will in a later version of the API get some additional member:
struct PluginInfo
{
    std::size_t size; // should be sizeof(PluginInfo)
    const char* s_Author;
    const char* s_Process;
    const char* s_ReleaseDate;
    //And so on...

    struct PluginVersion
    {
        const char* s_MajorVersion;
        const char* s_MinorVersion;
        //And so on...
    };
    PluginVersion o_Version;

    int fancy_API_version2_member;
};
When a plugin linked to the old version of the API now initializes its struct like this
PluginInfo pluginInfo;
pluginInfo.size = sizeof(pluginInfo);
// set other members
its struct will be the old version, missing the new and shiny data member from version 2 of the API. If it now calls a function of the second API accepting a pointer to PluginInfo, it will pass the address of an old PluginInfo, short one data member, to the new API's function. However, for the version 2 API function, pluginInfo->size will be smaller than sizeof(PluginInfo), so it will be able to catch that, and treat the pointer as pointing to an object that doesn't have the fancy_API_version2_member. (Presumably, internally the host app's API calls the new and shiny type PluginInfo, with the fancy_API_version2_member, and PluginInfoVersion1 is the new name of the old type. So all the new API needs to do is cast the PluginInfo* it got handed by the plugin into a PluginInfoVersion1* and branch off to code that can deal with that dusty old thing.)
The other way around would be a plugin compiled against the new version of the API, where PluginInfo contains the fancy_API_version2_member, plugged into an older version of the host app that knows nothing about it. Again, the host app's API functions can catch that by checking whether pluginInfo->size is greater than the sizeof of their own PluginInfo. If so, the plugin presumably was compiled against a newer version of the API than the host app knows about. (Or the plugin writer failed to properly initialize the size member. See below for how to simplify dealing with this somewhat brittle scheme.)
There are two ways to deal with that: the simplest is to just refuse to load the plugin. Or, if possible, the host app could work with it anyhow, simply ignoring the binary stuff at the end of the PluginInfo object it was passed, which it doesn't know how to interpret.
However, the latter is tricky, since you need to decide this when you implement the old API, without knowing exactly what the new API will look like.
What rwong suggests (std::map<std::string, std::string>) is a good direction. It makes it possible to add arbitrary string fields later on. If you want more flexibility, you might declare an abstract base class
class AbstractPluginInfoElement
{
public:
    virtual ~AbstractPluginInfoElement() = default; // needed for deletion via base pointer
    virtual std::string toString() = 0;
};
and
class StringPluginInfoElement : public AbstractPluginInfoElement
{
    std::string m_value;
public:
    StringPluginInfoElement(std::string value) { m_value = value; }
    virtual std::string toString() { return m_value; }
};
You might then derive more complex classes like PluginVersion etc. and store a map<std::string, AbstractPluginInfoElement*>.
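A small usage sketch of that map (the key name is made up; std::unique_ptr is swapped in for the raw pointer so ownership is handled, which relies on the virtual destructor in the base class above):
#include <map>
#include <memory>
#include <string>

using InfoBag = std::map<std::string, std::unique_ptr<AbstractPluginInfoElement>>;

void example()
{
    InfoBag extras;
    extras["license"] = std::make_unique<StringPluginInfoElement>("MIT");
    std::string license = extras.at("license")->toString(); // "MIT"
}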
One hideous idea:
A std::map<std::string, std::string> m_otherKeyValuePairs; would be enough for the next 500 years.
Edit:
On the other hand, this suggestion is so prone to misuse that it may qualify for a TDWTF.
Another equally hideous idea:
a std::string m_everythingInAnXmlBlob;, as seen in real software.
(hideous == not recommended)
Edit 3:
Advantage: the std::map member is not subject to object slicing. When older source code copies a PluginInfo object that contains new keys in the property bag, the entire property bag is copied.
Disadvantage: many programmers will start adding unrelated things to the property bag, and even starts writing code that processes the values in the property bag, leading to maintenance nightmare.
Here's an idea, not sure whether it works with classes, it for sure works with structs: You can make the struct "reserve" some space to be used in the future like this:
struct Foo
{
    // Instance variables here.
    int bar;

    char _reserved[128]; // Make the class 128 bytes bigger.
};
An initializer would zero out the whole struct before filling it, so fields that newer versions of the struct place inside the "reserved" area start out with sane default (zero) values.
If you only add fields in front of _reserved, reduce its size accordingly, and don't modify or rearrange the other fields, you should be OK. No need for any magic. Older software will not touch the new fields, as it doesn't know about them, and the memory footprint will remain the same.
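A sketch of what a later revision could then look like (the new field name is made up): the new member goes in front of _reserved and the reserve shrinks by the same amount, so the overall size stays put.
struct Foo
{
    int bar;
    int baz;                           // added in a later version
    char _reserved[128 - sizeof(int)]; // shrunk accordingly
};
static_assert(sizeof(Foo) == sizeof(int) + 128,
              "struct size must not change between versions");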

Proper way to make a global "constant" in C++

Typically, the way I'd define a true global constant (lets say, pi) would be to place an extern const in a header file, and define the constant in a .cpp file:
constants.h:
extern const double pi;
constants.cpp:
#include "constants.h"
#include <cmath>
const double pi = std::acos(-1.0);
This works great for true constants such as pi. However, I am looking for a best practice for defining a "constant" that stays fixed throughout a program run but may differ between runs, depending on an input file. An example of this would be the gravitational constant, which depends on the units used. g is defined in the input file, and I would like it to be a global value that any object can use. I've always heard it is bad practice to have non-constant globals, so currently I have g stored in a system object, which is then passed on to all of the objects it generates. However, this seems a bit clunky and hard to maintain as the number of objects grows.
Thoughts?
It all depends on your application size. If you are truly absolutely sure that a particular constant will have a single value shared by all threads and branches in your code for a single run, and that is unlikely to change in the future, then a global variable matches the intended semantics most closely, so it's best to just use that. It's also something that's trivial to refactor later on if needed, especially if you use distinctive prefixes for globals (such as g_) so that they never clash with locals - which is a good idea in general.
In general, I prefer to stick to YAGNI, and don't try to blindly placate various coding style guides. Instead, I first look if their rationale applies to a particular case (if a coding style guide doesn't have a rationale, it is a bad one), and if it clearly doesn't, then there is no reason to apply that guide to that case.
I can understand the predicament you're in, but I am afraid that you are unfortunately not doing this right.
The units should not affect the program, if you try to handle multiple different units in the heart of your program, you're going to get hurt badly.
Conceptually, you should do something like this:
Parse Input
|
Convert into SI metric
|
Run Program
|
Convert into original metric
|
Produce Output
This ensures that your program is nicely isolated from the various unit systems that exist. If one day you somehow add support for the French metric system of the 16th century, you'll only have to configure the conversion steps (adapters) correctly, and perhaps a bit of the input/output (to recognize and print the units), but the heart of the program, i.e. the computation unit, remains unaffected by the new functionality.
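A tiny sketch of the boundary conversion (unit names are just examples; the foot-to-meter factor 0.3048 is the standard definition):
#include <stdexcept>
#include <string>

struct RawLength { double value; std::string unit; }; // as parsed from input

double to_si_meters(RawLength const& in)
{
    if (in.unit == "m")  return in.value;
    if (in.unit == "ft") return in.value * 0.3048;
    throw std::invalid_argument("unknown unit: " + in.unit);
}
// The heart of the program only ever sees the value in meters.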
Now, if you are to use a constant that is not so constant (for example the acceleration of gravity on earth which depends on the latitude, longitude and altitude), then you can simply pass it as arguments, grouped with the other constants.
class Constants
{
public:
    Constants(double g, ....);

    double g() const;
    /// ...

private:
    double mG;
    /// ...
};
This could be made a Singleton, but that goes against the (controversial) Dependency Injection idiom. Personally I stay away from Singletons as much as I can; I usually use some Context class that I pass to each method, which makes it much easier to test the methods independently of one another.
A legitimate use of singletons!
A singleton class constants() with a method to set the units?
You can use a variant of your latter approach, make a "GlobalState" class that holds all those variables and pass that around to all objects:
struct GlobalState {
    float get_x() const;
    float get_y() const;
    // ...
};

struct MyClass {
    MyClass(GlobalState &s)
    {
        // get data from s here
        ... = s.get_x();
    }
};
It avoids globals, if you don't like them, and it grows gracefully as more variables are needed.
It's bad to have globals which change value during the lifetime of the run.
A value that is set once upon startup (and remains "constant" thereafter) is a perfectly acceptable use for a global.
Why is your current solution going to be hard to maintain? You can split the object up into multiple classes as it grows (one object for simulation parameters such as your gravitational constant, one object for general configuration, and so on)
My typical idiom for programs with configurable items is to create a singleton class named "configuration". Inside configuration go things that might be read from parsed configuration files, the registry, environment variables, etc.
Generally I'm against get() methods, but this is my major exception. You typically can't make your configuration items const if they have to be read from somewhere at startup, but you can make them private and expose const get() methods so the client's view of them is const.
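A rough sketch of what I mean (all names made up):
// configuration.h
#include <string>

class Configuration
{
public:
    static Configuration& instance()
    {
        static Configuration cfg; // constructed on first use
        return cfg;
    }

    void load(std::string const& file);         // called once at startup; defined in configuration.cpp
    double gravity() const { return gravity_; } // read-only view for clients

private:
    Configuration() = default;
    double gravity_ = 9.81; // placeholder default
};
Client code then calls Configuration::instance().gravity() and cannot modify the value.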
This actually brings to mind the C++ Template Metaprogramming book by Abrahams & Gurtovoy - is there a better way to manage your data so that you don't get poor conversions from yards to meters or from volume to length, and maybe that class knows about gravity being a form of acceleration?
Also you already have a nice example here, pi = the result of some function...
const double pi = std::acos(-1.0);
So why not make gravity the result of some function, which just happens to read that from file?
double configGravity(); // reads g from a file, defined below

const double gravity = configGravity();

double configGravity() {
    // open some file
    // read the data
    // return result
}
The problem is that, because the global is initialized before main is called, you cannot provide input to the function: which config file? What if the file is missing or doesn't have g in it?
So if you want error handling, you need to go for later initialization; singletons fit that better.
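A sketch of the later-initialization variant (file name and format made up): with a function-local static, the file is read on first use, inside main's call tree, so errors can be handled normally.
#include <fstream>
#include <stdexcept>

double configGravity()
{
    static const double g = [] {
        std::ifstream in("config.dat");
        double value = 0.0;
        if (!(in >> value))
            throw std::runtime_error("config.dat missing or has no g in it");
        return value;
    }(); // the lambda runs once, on the first call
    return g;
}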
Let's spell out some specs. So, you want:
(1) the file holding the global info (gravity, etc.) to outlive your runs of the executable using them;
(2) the global info to be visible in all your units (source files);
(3) your program to not be allowed to change the global info, once read from the file;
Well,
(1) Suggests a wrapper around the global info whose constructor takes an ifstream or file name string reference (hence, the file must exist before the constructor is called and it will still be there after the destructor is invoked);
(2) Suggests a global variable of the wrapper. You may, additionally, make sure that that is the only instance of this wrapper, in which case you need to make it a singleton as was suggested. Then again, you may not need this (you may be okay with having multiple copies of the same info, as long as it is read-only info!).
(3) Suggests a const getter from the wrapper. So, a sample may look like this:
#include <iostream>
#include <string>
#include <fstream>
#include <cstdlib>//for EXIT_FAILURE
using namespace std;
class GlobalsFromFiles
{
public:
    GlobalsFromFiles(const string& file_name)
    {
        //...process file:
        std::ifstream ginfo_file(file_name.c_str());
        if( !ginfo_file )
        {
            //throw SomeException(some_message);//not recommended to throw from constructors
            //(definitely *NOT* from destructors)
            //but you can... the problem would be: where do you place the catcher?
            //so better just display an error message and exit
            cerr<<"Uh-oh...file "<<file_name<<" not found"<<endl;
            exit(EXIT_FAILURE);
        }
        //...read data...
        ginfo_file>>gravity_;
        //...
    }

    double g_(void) const
    {
        return gravity_;
    }

private:
    double gravity_;
};

GlobalsFromFiles Gs("globals.dat");

int main(void)
{
    cout<<Gs.g_()<<endl;
    return 0;
}
Globals aren't evil
Had to get that off my chest first :)
I'd stick the constants into a struct, and make a global instance of that:
struct Constants
{
    double g;
    // ...
};

extern Constants C = { ... };

double Grav(double m1, double m2, double r) { return C.g * m1 * m2 / (r*r); }
(Short names are ok, too, all scientists and engineers do that.....)
I've used the fact that local names (i.e. members, parameters, function locals, ...) take precedence over the global in a few cases as "aspects for the poor":
You could easily change the method to
double Grav(double m1, double m2, double r, Constants const & C = ::C)
{ return C.g * m1 * m2 / (r*r); } // same code!
You could create an
struct AlternateUniverse
{
    Constants C;

    AlternateUniverse()
    {
        PostulateWildly(C); // initialize C to better values
    }

    double Grav(double m1, double m2, double r) { /* same code! */ }
};
The idea is to write code with least overhead in the default case, and preserving the implementation even if the universal constants should change.
Call Scope vs. Source Scope
Alternatively, if you/your devs are more into a procedural rather than OO style, you could use call scope instead of source scope, with a global stack of values, roughly:
#include <deque>

std::deque<Constants> g_constants;

void CalculateCoreTemp(); // declared here, defined below

void InAnAlternateUniverse()
{
    Constants C;
    PostulateWildly(C); // initialize C to alternate values
    g_constants.push_front(C);
    CalculateCoreTemp();
    g_constants.pop_front();
}

void CalculateCoreTemp()
{
    Constants const & C = g_constants.front();
    // ...
}
Everything in the call tree gets to use the "most current" constants. You can call the same tree of routines - no matter how deeply nested - with an alternate set of constants. Of course it should be encapsulated better and made exception safe, and for multithreading you need thread-local storage (so each thread gets its own "stack").
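A sketch of those last two points (a thread-local stack plus an RAII guard so the pop happens even on exceptions):
thread_local std::deque<Constants> g_constants;

struct ConstantsScope
{
    explicit ConstantsScope(Constants const& c) { g_constants.push_front(c); }
    ~ConstantsScope()                           { g_constants.pop_front(); }
};

void InAnAlternateUniverse()
{
    Constants C;
    PostulateWildly(C);
    ConstantsScope scope(C); // popped even if CalculateCoreTemp throws
    CalculateCoreTemp();
}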
Calculation vs. User Interface
We approach your original problem differently: all internal representation and all persistent data use SI base units. Conversion takes place at input and output (e.g. even though the typical size is given in millimeters, it's always stored in meters).
I can't really compare, but it works very well for us.
Dimensional Analysis
Other replies have at least hinted at Dimensional Analysis, such as the respective Boost Library. It can enforce dimensional correctness, and can automate the input / output conversions.