How to store a boost::quantity with possibly different boost::dimension - c++

I am using the boost::units library to enforce physical consistency in a scientific project. I have read and tried several examples from the Boost documentation. I am able to create my dimensions, units and quantities. I did some calculations, and it works very well. It is exactly what I expected, except that...
In my project, I deal with time series which have several different units (temperature, concentration, density, etc.) based on six dimensions. In order to allow safe and easy unit conversions, I would like to add a member to each channel class representing the dimensions and units of the time series. Also, data treatment (import, conversion, etc.) is user-driven, and therefore dynamic.
My problem is the following: because of the boost::units design, quantities within a homogeneous system but with different dimensions have different types. Therefore you cannot directly declare a member such as:
boost::units::quantity channelUnits;
The compiler will complain that you have to specify the dimension as a template parameter. But if you do so, you will not be able to store different types of quantities (that is, quantities with different dimensions).
Then I looked at the boost::units::quantity declaration to find out whether there is a base class I could use polymorphically. I haven't found one; instead I discovered that boost::units relies heavily on template metaprogramming, which is not an issue in itself but does not fit my dynamic needs, since everything is resolved at compile time rather than at run time.
After more reading, I tried to wrap different quantities in a boost::variant object (nice to meet it for the very first time).
typedef boost::variant<
    boost::units::quantity<dim1>,
    ...
> channelUnitsType;
channelUnitsType channelUnits;
I performed some tests and it seems to work. But I am not confident with boost::variant and the visitor-pattern.
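For illustration, here is a minimal sketch of what such a variant plus a visitor could look like; it uses two arbitrary SI dimensions (length and mass) as stand-ins for the project's real channel dimensions:

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/si/mass.hpp>
#include <boost/units/io.hpp>
#include <boost/variant.hpp>
#include <iostream>

// Hypothetical variant covering the dimensions a channel may carry.
typedef boost::variant<
    boost::units::quantity<boost::units::si::length>,
    boost::units::quantity<boost::units::si::mass>
> channelUnitsType;

// A visitor with a templated operator() works for every alternative;
// here it just prints the stored quantity with its unit symbol.
struct print_visitor : boost::static_visitor<void>
{
    template <class Q>
    void operator()(const Q& q) const { std::cout << q << '\n'; }
};

int main()
{
    channelUnitsType u = 2.0 * boost::units::si::meters;   // holds a length
    boost::apply_visitor(print_visitor(), u);              // prints "2 m"
    u = 3.0 * boost::units::si::kilograms;                 // now holds a mass
    boost::apply_visitor(print_visitor(), u);              // prints "3 kg"
}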
My questions are the following:
Is there another - maybe better - way to get run-time type resolution?
Is dynamic_cast one of them? Unit conversions will not happen very often, and only a few data items are concerned.
If boost::variant is a suitable solution, what are its drawbacks?

Going deeper into my problem, I read two articles that point toward a solution:
Kostadin Damevski, Expressing Measurements Units in Interfaces for Scientific Component Software;
Lingxiao Jiang, A Practical Type System for Validating Dimensional Units Correctness of C Programs.
The first gives good ideas for the interface implementation. The second gives a complete overview of what you must cope with.
I keep in mind that boost::units is a complete and efficient way to enforce dimensional consistency at compile time without runtime overhead. However, for run-time dimensional consistency involving dimension changes, you do need a dynamic structure that boost::units does not provide. So here I am: designing a units class that will exactly fit my needs. More work to do, more satisfaction at the end...
About the original questions:
boost::variant works well for this job (it provides the dynamism boost::units is missing). Furthermore, it can be serialized out of the box, so it is an effective approach. But it adds a layer of abstraction for a simple - I am not saying trivial - task that could be done by a single class.
Casting is achieved with boost::get<>() on the variant instead of dynamic_cast<>.
boost::any could be easier to use, but serialization becomes hard.

I have been thinking about this problem and came up with the following conclusion:
1. Implement type erasure (pros: nice interfaces; cons: memory overhead)
It looks impossible to store a general quantity with a common dimension without overhead; that would break one of the design principles of the library. Even type erasure won't help here.
2. Implement a convertible type (pros: nice interfaces; cons: operational overhead)
The only way I see without storage overhead is to choose a conventional (possibly hidden) system that all units are converted to and from; see the sketch below. There is no memory overhead, but there is a multiplication overhead in almost every access to the values, a tremendous number of conversions, and some loss of precision for quantities with large exponents (think of conversions involving Avogadro's number).
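A minimal sketch of what such a conventional hidden system might look like (the class and field names here are hypothetical, not part of boost::units): the value is always stored in SI base units next to a run-time signature of base-dimension exponents, and every query in a non-SI unit pays a conversion.

#include <array>
#include <stdexcept>

// Hypothetical run-time quantity: the value is stored in SI base units,
// together with the exponents of the seven SI base dimensions
// (length, mass, time, current, temperature, amount, luminous intensity).
struct dynamic_quantity {
    double value_in_si;             // canonical, hidden representation
    std::array<int, 7> dimensions;  // e.g. velocity = {1, 0, -1, 0, 0, 0, 0}
};

// Every query in a non-SI unit pays one division -- the "operational
// overhead" mentioned above. factor_to_si is the scale of the requested
// unit, e.g. 1000.0 for kilometres.
inline double value_in(const dynamic_quantity& q, double factor_to_si,
                       const std::array<int, 7>& expected_dims)
{
    if (q.dimensions != expected_dims)
        throw std::invalid_argument("dimension mismatch");  // run-time check
    return q.value_in_si / factor_to_si;
}

// Usage: a length of 2500 m, queried in kilometres.
// dynamic_quantity d{2500.0, {1, 0, 0, 0, 0, 0, 0}};
// double km = value_in(d, 1000.0, {1, 0, 0, 0, 0, 0, 0});  // 2.5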
3. Allow implicit conversions (pros: nice interfaces; cons: harder to debug, unexpected operational overheads)
Another option, mostly on the practical side, to alleviate the problem is to allow implicit conversion at the interface level; see here: https://groups.google.com/d/msg/boost-devel-archive/JvA5W9OETt8/5fMwXWuCdDsJ
4. Template/generic code (pros: no runtime or memory overhead, conceptually correct, philosophy follows that of the library; cons: harder to debug, ugly interfaces, possible code bloat, lots of template parameters everywhere)
If you ask the library designers, they will probably tell you that you need to make your functions generic. This is possible, but it complicates the code. For example:
template<class Length>
auto square(Length l) -> decltype(l*l){return l*l;}
I use C++11 to simplify the example here (it is possible to do it in C++98), and also to show that this is becoming easier to do in C++11 (and even simpler in C++14 with decltype(auto)).
I know that this is not the type of code you had in mind, but it is consistent with the design of the library. You may think: well, how do I restrict this function to physical lengths and not something else? The answer is that you don't need to; however, if you insist, in the worst case...
template<class Length,
         typename std::enable_if<
             std::is_same<typename boost::units::get_dimension<Length>::type,
                          boost::units::length_dimension>::value, int>::type = 0>
auto square(Length l) -> decltype(l*l){return l*l;}
(In better cases decltype will do the SFINAE job.)
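As a usage sketch of the generic approach (restating the unconstrained square for self-containment), the deduced return type follows the dimensional algebra of the library:

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/si/area.hpp>

using namespace boost::units;

template<class Length>
auto square(Length l) -> decltype(l * l) { return l * l; }

int main()
{
    quantity<si::length> side = 3.0 * si::meters;
    quantity<si::area> a = square(side);  // deduced return type is an area
    // Note: with the unconstrained version, square(2.0) also compiles for a
    // plain double -- which is exactly what the enable_if guard rules out.
    (void)a;
}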
In my opinion, option 4, possibly combined with 3, is the most elegant way ahead.
References:
https://www.boost.org/doc/libs/1_69_0/boost/units/get_dimension.hpp

Related

Looking for a concept related to "either or" in a data structure (almost like std::variant)

I'm modernizing the code that reads and writes to a custom binary file format now.
I'm allowed to use C++17 and have already modernized large parts of the code base.
There are mainly two problems at hand.
1. binary selectors (my own name)
2. cased selectors (my own name as well)
For #1 it is as follows:
One particular bit in the binary string decides which of two completely different structs you read/write.
For example, if bit 17 is set to true, it means bits 18+ should be streamed with Struct1.
But if bit 17 is false, bits 18+ should be streamed with Struct2.
Struct1 and Struct2 are completely different with minimal overlap.
For #2 it is basically the same but as follows:
Given x bits in the bit stream, you have a pool of completely different structs. The number of structs can be anywhere in the range [0, 2**x] (inclusive).
For instance, in one case you might have 3 bits and 5 structs.
But in another case, you might have 3 bits and 8 structs.
Again the overlap between the structs is minimal.
I'm currently using std::variant for this.
For case #1, it would be just two structs std::variant<Struct1, Struct2>
For case #2, it would be just a flat list of the structs again using std::variant.
The selector I use is naturally the index into the variant, but it needs to be remapped to/from the bit pattern that is actually written to or read from the format.
Have any of you used or encountered some better strategies for these cases?
Is there a generally known approach to solve this in a much better way?
Is there a generally known approach to solve this in a much better way?
Nope, it's highly specific.
Have any of you used or encountered some better strategies for these cases?
The bit patterns should be encoded in the type, somehow. Almost all the (de)serialization can be generic so long as the required information is stored somewhere.
For example,
template <uint8_t BitPattern, typename T>
struct IdentifiedVariant;
// ...
using Field1 = std::variant< IdentifiedVariant<0x01, Field1a>,
                             IdentifiedVariant<0x02, Field1b> >;
I've absolutely used types like this in the past to automate all the boilerplate, but the details are extremely specific to the format and rarely reusable.
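As a hedged sketch of how the selector-to-type mapping can drive generic (de)serialization (IdentifiedVariant is fleshed out here as a simple wrapper, and make_from_selector plus the Field1a/Field1b placeholders are illustrative, not from any library):

#include <cstdint>
#include <stdexcept>
#include <variant>

// Hypothetical pairing of a wire-format selector value with a payload type.
template <std::uint8_t BitPattern, typename T>
struct IdentifiedVariant {
    static constexpr std::uint8_t pattern = BitPattern;
    T value;
};

struct Field1a { /* ... */ };  // placeholder payloads
struct Field1b { /* ... */ };

using Field1 = std::variant< IdentifiedVariant<0x01, Field1a>,
                             IdentifiedVariant<0x02, Field1b> >;

// Build the alternative whose pattern matches the selector read from the
// stream; the walk over alternatives is unrolled at compile time (C++17).
template <typename Variant, std::size_t I = 0>
Variant make_from_selector(std::uint8_t selector)
{
    if constexpr (I < std::variant_size_v<Variant>) {
        using Alt = std::variant_alternative_t<I, Variant>;
        if (Alt::pattern == selector)
            return Variant{std::in_place_index<I>};
        return make_from_selector<Variant, I + 1>(selector);
    } else {
        throw std::runtime_error("unknown selector");
    }
}

// Usage: Field1 f = make_from_selector<Field1>(0x02);  // holds Field1b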
Note that even though you can't overlay your variant type on a buffer, there's no need for (de)serialization to proceed bit by bit. There's hardly any speed penalty as long as the data has already been read into a buffer; if you really need to go full zero-copy, you can have your FieldNx types keep a pointer into the buffer and decode fields on demand.
If you want your data to be bit-contiguous you can't use std::variant. You will need to use std::bitset or manage the memory completely manually. But that isn't practical, because your structs will not be byte-aligned, so you will have to do every read/write manually, bit by bit. This greatly reduces speed, so I only recommend this route if you want to save every bit of memory even at the cost of speed. And with this kind of storage it is hard to find the nth element; you have to iterate.
std::variant<T1,T2> will waste a bit of space because it always uses enough space to store the larger alternative, but using it instead of bit manipulation will improve the speed and readability of the code (and it will be easier to write).

std::bitset<N> implementation causes size overhead

It seems that std::bitset<N> is an array of unsigned longs under the hood, which means there is a (heavy?) overhead when N is small: sizeof(std::bitset<8>) is 8 bytes!
Is there a reason why the type of the underlying array itself is not a template parameter? Why does the implementation not use uint32_t/uint16_t/uint8_t when more appropriate? I do not see anything that would prevent this.
I am guessing I am missing a particular reason, but I'm unsure how to look for it, or maybe there is no reason at all? Since this is such a simple container, I am not able to understand how the zero-overhead principle of C++ seems to have been set aside here.
GCC Impl: https://gcc.gnu.org/onlinedocs/gcc-4.6.2/libstdc++/api/a00775_source.html
I believe clang is similar (used sizeof to confirm)
I am not able to understand how the zero overhead principle of C++ seems to be avoided here.
The zero-overhead principle is a principle, not an absolute rule of C++.
Many people use std::vector in contexts where a compile-time fixed capacity would be useful. Such a type could hold only two pointers instead of three and thus be 50% smaller. Many people use std::string in contexts where an immutable string would work just as well, if not better; it would reduce the size of the string (ignoring SSO) as well as its complexity. And so forth.
These all represent inefficiencies relative to the standard type. No standard library type can handle every possible usage scenario. The goal for such types is to be broadly useful, not perfect.
There is nothing preventing someone from writing a bitset-style type with the exact same interface which has a user-provided underlying type. But the standard has no such type.
Indeed, there's nothing preventing implementations of bitset from choosing an underlying type based on the given number of bits. Your implementation doesn't do that, but it could have.
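For instance, nothing in the interface prevents a bitset-like type from selecting its word type from N; a minimal sketch (small_bitset is a made-up name, not a standard or Boost type):

#include <cstddef>
#include <cstdint>
#include <type_traits>

// Hypothetical fixed-size bit container that picks the smallest unsigned
// word able to hold N bits, unlike typical std::bitset implementations.
template <std::size_t N>
struct small_bitset {
    using word_type =
        std::conditional_t<N <= 8,  std::uint8_t,
        std::conditional_t<N <= 16, std::uint16_t,
        std::conditional_t<N <= 32, std::uint32_t, std::uint64_t>>>;

    static constexpr std::size_t bits_per_word = 8 * sizeof(word_type);
    word_type words[(N + bits_per_word - 1) / bits_per_word] = {};

    bool test(std::size_t pos) const {
        return (words[pos / bits_per_word] >> (pos % bits_per_word)) & 1u;
    }
    void set(std::size_t pos) {
        words[pos / bits_per_word] |= word_type(1) << (pos % bits_per_word);
    }
};

static_assert(sizeof(small_bitset<8>) == 1, "one byte for 8 bits");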

Is it a good idea to base a non-owning bit container on std::vector<bool>? std::span?

In a couple of projects of mine I have had an increasing need to deal with contiguous sequences of bits in memory - efficiently (*). So far I've written a bunch of inline-able standalone functions, templated on the choice of a "bit container" type (e.g. uint32_t), for getting and setting bits, applying 'or' and 'and' to their values, locating the container, converting lengths in bits to sizes in bytes or lengths in containers, etc. ... it looks like it's class-writing time.
I know the C++ standard library has a specialization of std::vector<bool>, which is considered by many to be a design flaw, as its iterators do not expose actual bools but rather proxy objects. Whether that's a good idea or a bad one for a specialization, it's definitely something I'm considering: an explicit bit-proxy class which will hopefully "always" be optimized away (with a nice greasing-up of constexpr, noexcept and inline). So I was thinking of possibly adapting std::vector code from one of the standard library implementations.
On the other hand, my intended class:
Will never own the data / the bits - it'll receive a starting bit container address (assuming alignment) and a length in bits, and won't allocate or free.
It will not be able to resize the data, dynamically or otherwise - not even while retaining the same amount of space, like std::vector::resize(); its length will be fixed during its lifespan/scope.
It shouldn't know anything about the heap (and should work when there is no heap).
In this sense, it's more like a span class for bits. So maybe I should start out with a span then? I don't know; spans are still not standard, and there are no proxies in spans...
So what would be a good basis (edit: NOT a base class) for my implementation? std::vector<bool>? std::span? Both? None? Or - maybe I'm reinventing the wheel and this is already a solved problem?
Notes:
The bit sequence length is known at run time, not compile time; otherwise, as @SomeProgrammerDude suggests, I could use std::bitset.
My class doesn't need to "be-a" span or "be-a" vector, so I'm not thinking of specializing any of them.
(*) - So far not SIMD-efficiently but that may come later. Also, this may be used in CUDA code where we don't SIMDize but pretend the lanes are proper threads.
Rather than std::vector or std::span I suspect an implementation of your class would share more in common with std::bitset, since it is pretty much the same thing, except with a (fixed) runtime-determined size.
In fact, you could probably take a typical std::bitset implementation and move the <size_t N> template parameter into the class as a size_t size_ member (or whatever name you like), and you'll have your dynamic bitset class with almost no changes. You may want to get rid of anything you consider cruft, like the constructors that take std::string and friends.
The last step is then to remove ownership of the underlying data: basically you'll remove the creation of the underlying array in the constructor and maintain a view of an existing array with some pointers.
If your clients disagree on which underlying unsigned integer type to use for storage (what you call the "bit container"), then you may also need to make your class a template on this type, although it would be simpler if everyone agreed on, say, uint64_t.
As far as std::vector<bool> goes, you don't need much from it: everything that vector does that you want, std::bitset probably does too. The main thing vector adds is dynamic growth, but you've said you don't want that. vector<bool> has the proxy-object concept to represent a single bit, but so does std::bitset.
From std::span you take the idea of non-ownership of the underlying data, but I don't think this actually represents a lot of underlying code. You might want to consider the std::span approach of having either a compile-time known size or a runtime provided size (indicated by Extent == std::dynamic_extent) if that would be useful for you (mostly if you sometimes use compile-time sizes and could specialize some methods to be more efficient in that case).
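For concreteness, a rough sketch of such a non-owning bit view (bit_span and its members are hypothetical names; the word type is assumed to be uint64_t, as suggested above):

#include <cstddef>
#include <cstdint>

// Hypothetical non-owning bit view: a pointer into existing storage plus a
// length in bits, with a proxy reference for single-bit writes.
class bit_span {
public:
    using word_type = std::uint64_t;                    // the "bit container"
    static constexpr std::size_t word_bits = 8 * sizeof(word_type);

    bit_span(word_type* data, std::size_t bit_count)
        : data_(data), bit_count_(bit_count) {}         // no allocation, no ownership

    // Proxy, in the spirit of vector<bool>::reference / bitset::reference.
    class reference {
    public:
        reference(word_type& word, word_type mask) : word_(word), mask_(mask) {}
        operator bool() const { return (word_ & mask_) != 0; }
        reference& operator=(bool b) {
            if (b) word_ |= mask_; else word_ &= ~mask_;
            return *this;
        }
    private:
        word_type& word_;
        word_type mask_;
    };

    std::size_t size() const { return bit_count_; }
    bool operator[](std::size_t i) const {
        return (data_[i / word_bits] >> (i % word_bits)) & 1u;
    }
    reference operator[](std::size_t i) {
        return reference(data_[i / word_bits], word_type(1) << (i % word_bits));
    }

private:
    word_type* data_;
    std::size_t bit_count_;
};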

How many arguments can theoretically be passed as parameters in c++ functions?

I was wondering if there was a limit on the number of parameters you can pass to a function.
I'm just wondering because I have to maintain functions with 5+ arguments here at my job.
And is there a critical threshold in the number of arguments, performance-wise, or does the cost grow linearly?
Neither the C nor C++ standard places an absolute requirement on the number of arguments/parameters you must be able to pass when calling a function, but the C standard suggests that an implementation should support at least 127 parameters/arguments (§5.2.4.1/1), and the C++ standard suggests that it should support at least 256 parameters/arguments (§B/2).
The precise wording from the C standard is:
The implementation shall be able to translate and execute at least one program that
contains at least one instance of every one of the following limits.
So, one such function must be successfully translated, but there's no guarantee that compilation will succeed if your code attempts to do the same (though it probably will in a modern implementation).
The C++ standard doesn't even go that far, only going so far as to say that:
The bracketed number following each quantity is recommended as the minimum for that quantity. However, these quantities are only guidelines and do not determine compliance.
As far as what's advisable: it depends. A few functions (especially those using variadic parameters/variadic templates) accept an arbitrary number of arguments of (more or less) arbitrary types. In this case, passing a relatively large number of parameters can make sense because each is more or less independent from the others (e.g., printing a list of items).
When the parameters are more...interdependent, so you're not just passing a list or something on that order, I agree that the number should be considerably more limited. In C, I've seen a few go as high as 10 or so without being terribly unwieldy, but that's definitely starting to push the limit even at best. In C++, it's generally enough easier (and more common) to aggregate related items into a struct or class that I can't quite imagine that many parameters unless it was in a C-compatibility layer or something on that order, where a more...structured approach might force even more work on the user.
In the end, it comes down to this: you're going to either have to pass a smaller number of items that are individually larger, or else break the function call up into multiple calls, passing a smaller number of parameters to each.
The latter can tend to lead toward a stateful interface, that basically forces a number of calls in a more or less fixed order. You've reduced the complexity of a single call, but may easily have done little or nothing to reduce the overall complexity of the code.
In the other direction, a large number of parameters may well mean that you've really defined the function to carry out a large number of related tasks instead of one clearly defined task. In this case, finding more specific tasks for individual functions to carry out, and passing a smaller set of parameters needed by each may well reduce the overall complexity of the code.
It seems like you're veering into subjective territory, considering that C varargs are (usually) passed mechanically the same way as other arguments.
The first few arguments are placed in CPU registers, under most ABIs. How many depends on the number of architectural registers; it may vary from two to ten. In C++, empty classes (such as overload dispatch tags) are usually omitted entirely. Loading data into registers is usually "cheap as free."
After registers, arguments are copied onto the stack. You could say this takes linear time, but such operations are not all created equal. If you are going to be calling a series of functions on the same arguments, you might consider packaging them together as a struct and passing that by reference.
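A small sketch of that struct-packing advice (the names here are made up for illustration):

#include <string>

// Hypothetical parameter bundle: instead of passing five loosely related
// arguments, group them and pass the bundle by const reference.
struct RenderOptions {
    int width = 0;
    int height = 0;
    double gamma = 1.0;
    bool antialias = false;
    std::string title;
};

void render(const RenderOptions& opts) { /* ... */ }  // one parameter instead of five

// Usage:
// RenderOptions opts;
// opts.width = 800; opts.height = 600; opts.title = "plot";
// render(opts);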
To literally answer your question, the maximum number of arguments is an implementation-defined quantity, meaning that the ISO standard requires your compiler manual to document it. The C++ standard also recommends (Annex B) that no implementation balk at less than 256 arguments, which should be Enough For Anyone™. C requires (§5.2.4.1) support for at least 127 arguments, although that requirement is normatively qualified such as to weaken it to only a recommendation.
It is not really dirty; sometimes you can't avoid using 4+ arguments while maintaining stability and efficiency. If possible, the number should be minimized for the sake of clarity (perhaps by using structs), especially if you think some function is becoming a god construct (a function that runs most of the program; these should be avoided for the sake of stability). If that is the case, functions that take large numbers of arguments are pretty good indicators of such constructs.

How to efficiently implement infinity and minus infinity supporting arithmetic in C++?

The trivial solution would be:
class Number
{
public:
    bool isFinite();
    bool isPositive();
    double value();
    ...
private:
    double value_;
    bool isFinite_;
    bool isPositive_;
    ...
};
The thing that worries me is efficiency:
From Effective C++: 55 Specific Ways to Improve Your Programs and Designs (3rd Edition) by Scott Meyers:
Even when small objects have inexpensive copy constructors, there can
be performance issues. Some compilers treat built-in and user-defined
types differently, even if they have the same underlying
representation. For example, some compilers refuse to put objects
consisting of only a double into a register, even though they happily
place naked doubles there on a regular basis. When that kind of thing
happens, you can be better off passing such objects by reference,
because compilers will certainly put pointers (the implementation of
references) into registers.
Is there a way to bypass the efficiency problem? For example a library that uses some assembly language magic?
There is very little reason to implement a Number class around double. The double format already implements infinity, NaN, and the sign as part of the raw basic double type.
Second, you should write your code aiming for correctness first, and then try to optimize later, at which point you can look at factoring out specific data structures and rewriting code and algorithms.
Modern compilers are typically very efficient at generating good code, and typically do a far better job than most human programmers.
For your specific example, I would just use doubles as they are rather than wrapping them in a class. They're well adapted and well defined for handling infinities.
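For example, the built-in double already behaves the way the proposed Number class would (a small demonstration, assuming IEEE 754 doubles):

#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    const double inf = std::numeric_limits<double>::infinity();

    std::cout << (inf > 1e308) << '\n';          // 1: +inf is greater than any finite double
    std::cout << (-inf < -1e308) << '\n';        // 1: -inf is less than any finite double
    std::cout << (1.0 / inf) << '\n';            // 0
    std::cout << std::isinf(inf - 1.0) << '\n';  // 1: still infinite
    std::cout << std::isnan(inf - inf) << '\n';  // 1: inf - inf is NaN
    std::cout << std::isfinite(2.5) << '\n';     // 1
}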
In a more general sense, you should use the trivial solution and only worry about performance when (or, more likely, if) it becomes a problem.
That means code it up and test it in many of the scenarios you're going to use it for.
If it still performs within the bounds of your performance requirements, don't worry about trying to optimise it. And you should have performance requirements slightly more specific than "I want it to run as fast as possible" :-)
Remember that efficiency doesn't always mean "as fast as possible, no matter the cost". It means achieving your goals without necessarily sacrificing other things (like readability or maintainability).
If you take a complete operation that makes the user wait 0.1 seconds and optimise it to the point where it's ten times faster, the user won't notice that at all (I say "complete" because obviously the user would notice a difference if it was done ten thousand times without some sort of interim result).
And remember, measure, don't guess!