I am trying to write high-performance code that uses random numbers, generated by a Mersenne Twister. It takes roughly 5 ns to generate a random unsigned long long. These values are used to produce doubles; however, generating a double through a distribution takes ~40 ns.
Looking at the STL source, the doubles produced by a distribution come from calls to std::generate_canonical, which involves std::ceil and std::log2 operations; I believe these are the costly part.
These operations are unnecessary: they compute the number of bits needed per call to the RNG, which is known at compile time. I have written my own implementation that omits these calls, and the time to generate a double drops to ~15 ns.
Is it possible to specialize a templated STL function? If so, how is this achieved? My attempts so far result in the original function still being used. I would like to specialize this STL function because I still want to use the distributions in <random>.
This is in Visual C++, though once the code has been developed it will run on Linux under either GCC or ICC. If the method for generating doubles on Linux is different (and quicker), this problem is irrelevant.
Edit 1:
I believe all distributions requiring a double call std::generate_canonical. This function creates a double in the range [0,1), building up the required precision by iteratively adding the results of calls to the RNG's operator(). The log2 and ceil are used to calculate the number of iterations.
MSVC std::generate_canonical
// FUNCTION TEMPLATE generate_canonical
template<class _Real,
    size_t _Bits,
    class _Gen>
_Real generate_canonical(_Gen& _Gx)
{   // build a floating-point value from random sequence
    _RNG_REQUIRE_REALTYPE(generate_canonical, _Real);

    const size_t _Digits = static_cast<size_t>(numeric_limits<_Real>::digits);
    const size_t _Minbits = _Digits < _Bits ? _Digits : _Bits;

    const _Real _Gxmin = static_cast<_Real>((_Gx.min)());
    const _Real _Gxmax = static_cast<_Real>((_Gx.max)());
    const _Real _Rx = (_Gxmax - _Gxmin) + static_cast<_Real>(1);

    const int _Ceil = static_cast<int>(_STD ceil(
        static_cast<_Real>(_Minbits) / _STD log2(_Rx)));
    const int _Kx = _Ceil < 1 ? 1 : _Ceil;

    _Real _Ans = static_cast<_Real>(0);
    _Real _Factor = static_cast<_Real>(1);

    for (int _Idx = 0; _Idx < _Kx; ++_Idx)
    {   // add in another set of bits
        _Ans += (static_cast<_Real>(_Gx()) - _Gxmin) * _Factor;
        _Factor *= _Rx;
    }

    return (_Ans / _Factor);
}
My Simplified Version
template<size_t _Bits>
double generate_canonical(std::mt19937_64& _Gx)
{   // build a floating-point value from random sequence
    const double _Gxmin = static_cast<double>((_Gx.min)());
    const double _Gxmax = static_cast<double>((_Gx.max)());
    const double _Rx = (_Gxmax - _Gxmin) + static_cast<double>(1);

    double _Ans = (static_cast<double>(_Gx()) - _Gxmin);
    return (_Ans / _Rx);
}
This function is defined inside namespace std {}.
Edit 2:
I found a solution please see my answer below.
Sorry, but specializing Standard Library function templates like this is not allowed; doing so results in undefined behavior.
However, you can use alternative distributions; C++ has well-defined interfaces between generators and distributions.
Oh, and just to eliminate the possibility of a beginner's error (since you don't show code): make sure you are not creating a new distribution for every number.
By writing an explicit specialization with all template parameters fixed, and declaring the functions inline, it is possible to create a user-defined version of std::generate_canonical.
User defined std::generate_canonical:
namespace std {

template<>
inline double generate_canonical<double, static_cast<size_t>(-1), std::mt19937>(std::mt19937& _Gx)
{   // build a floating-point value from two 32-bit draws
    const double _Gxmin = static_cast<double>((_Gx.min)());
    const double _Rx = (static_cast<double>((_Gx.max)()) - _Gxmin) + static_cast<double>(1);

    double _Ans = (static_cast<double>(_Gx()) - _Gxmin);
    _Ans += (static_cast<double>(_Gx()) - _Gxmin) * _Rx;
    return (_Ans / (_Rx * _Rx));
}

template<>
inline double generate_canonical<double, static_cast<size_t>(-1), std::mt19937_64>(std::mt19937_64& _Gx)
{   // build a floating-point value from a single 64-bit draw
    const double _Gxmin = static_cast<double>((_Gx.min)());
    const double _Rx = (static_cast<double>((_Gx.max)()) - _Gxmin) + static_cast<double>(1);
    return ((static_cast<double>(_Gx()) - _Gxmin) / _Rx);
}

} // namespace std
The second parameter, static_cast<size_t>(-1), should be changed to whatever value the specific library uses; this is the value for VC++ but may differ for GCC, which means the code is not portable.
This function has been defined for std::mt19937 and std::mt19937_64, and appears to be picked up correctly by the STL distributions.
Results:
double using std::generate_canonical

Generating 400000000 doubles using standard MT took: 17625 milliseconds
This is equivalent to: 44.0636 nanoseconds per value
Generating 400000000 doubles using 64bit MT took: 11958 milliseconds
This is equivalent to: 29.8967 nanoseconds per value

double using new generate_canonical

Generating 400000000 doubles using standard MT took: 4843 milliseconds
This is equivalent to: 12.1097 nanoseconds per value
Generating 400000000 doubles using 64bit MT took: 2645 milliseconds
This is equivalent to: 6.61362 nanoseconds per value
Related
I need to subtract an extremely small double x from 1, i.e. to calculate 1-x in C++ for 0 < x < 1e-16. Because of machine precision restrictions, for small enough x I will always get 1-x = 1. A simple solution would be to switch from double to some more precise format like long. But because of some restrictions I can't switch to more precise number formats.
What is the most efficient way to get an accurate value of 1-x, where x is an extremely small double, if I can't use more precise formats and I need to store the result of the subtraction as a double? In practice I would like to avoid percentage errors greater than 1% (between the double representation of 1-x and its actual value).
P.S. I am using Rcpp to calculate the quantiles of the standard normal distribution via the qnorm function. This function is symmetric around 0.5, being much more accurate for values close to 0. Therefore, instead of qnorm(1-(1e-30)) I would like to calculate -qnorm(1e-30), but to derive 1e-30 from 1-(1e-30) I need to deal with a precision problem. The restriction to double is due to the fact that, as far as I know, it is not safe to use more precise numeric formats in Rcpp. Note that my inputs to qnorm are in a sense exogenous, so I can't derive 1-x from x during some preliminary calculation.
Simple solution is to switch from double to some more precise format like long [presumably, long double]
In that case you have no solution: long double is an alias for double on all modern machines. I stand corrected: GCC and ICC do still support a wider long double; only MSVC's cl dropped support for it a long time ago.
So you have two solutions, and they're not mutually exclusive:
Use an arbitrary precision library instead of the built-in types. They're orders of magnitude slower, but if that's the best your algorithm can work with then that's that.
Use a better algorithm, or at least rearrange your equation's variables, so the need doesn't arise in the first place. Use distribution and cancellation rules to avoid the problem entirely. Without a more in-depth description of your problem we can't help you, but I can tell you with certainty that double is more than enough to let us model airplane AI and flight parameters anywhere in the world.
Rather than resorting to an arbitrary precision solution (which, as others have said, would potentially be extremely slow), you could simply create a class that extends the inherent precision of the double type by a factor of (approximately) two. You would then only need to implement the operations that you actually need: in your case, this may only be subtraction (and possibly addition), which are both reasonably easy to achieve. Such code will still be considerably slower than using native types, but likely much faster than libraries that use unnecessary precision.
Such an implementation is available (as open-source) in the QD_Real class, created some time ago by Yozo Hida (a PhD Student, at the time, I believe).
The linked repository contains a lot of code, much of which is likely unnecessary for your use-case. Below, I have shown an extremely trimmed-down version, which allows creation of data with the required precision, shows an implementation of the required operator-() and a test case.
#include <iostream>

class ddreal {
private:
    static inline double Plus2(double a, double b, double& err) {
        double s = a + b;
        double bb = s - a;
        err = (a - (s - bb)) + (b - bb);
        return s;
    }
    static inline void Plus3(double& a, double& b, double& c) {
        double t3, t2, t1 = Plus2(a, b, t2);
        a = Plus2(c, t1, t3);
        b = Plus2(t2, t3, c);
    }
public:
    double x[2];
    ddreal() { x[0] = x[1] = 0.0; }
    ddreal(double hi) { x[0] = hi; x[1] = 0.0; }
    ddreal(double hi, double lo) { x[0] = Plus2(hi, lo, x[1]); }
    ddreal& operator -= (ddreal const& b) {
        double t1, t2, s2;
        x[0] = Plus2(x[0], -b.x[0], s2);
        t1 = Plus2(x[1], -b.x[1], t2);
        x[1] = Plus2(s2, t1, t1);
        t1 += t2;
        Plus3(x[0], x[1], t1);
        return *this;
    }
    inline double toDouble() const { return x[0] + x[1]; }
};

inline ddreal operator-(ddreal const& a, ddreal const& b)
{
    ddreal retval = a;
    return retval -= b;
}
int main()
{
    double sdone{ 1.0 };
    double sdwee{ 1.0e-42 };
    double sdval = sdone - sdwee;
    double sdans = sdone - sdval;
    std::cout << sdans << "\n"; // Gives zero, as expected

    ddreal ddone{ 1.0 };
    ddreal ddwee{ 1.0e-42 };
    ddreal ddval = ddone - ddwee; // Can actually hold 1 - 1.0e-42 ...
    ddreal ddans = ddone - ddval;
    std::cout << ddans.toDouble() << "\n"; // Gives 1.0e-42

    ddreal ddalt{ 1.0, -1.0e-42 }; // Alternative initialization ...
    ddreal ddsec = ddone - ddalt;
    std::cout << ddsec.toDouble() << "\n"; // Gives 1.0e-42
    return 0;
}
Note that I have deliberately neglected error-checking and other overheads that would be needed for a more general implementation. Also, the code I have shown has been 'tweaked' to work more optimally on x86/x64 CPUs, so you may need to delve into the code at the linked GitHub, if you need support for other platforms. (However, I think the code I have shown will work for any platform that conforms strictly to the IEEE-754 Standard.)
I have tested this implementation, extensively, in code I use to generate and display the Mandelbrot Set (and related fractals) at very deep zoom levels, where use of the raw double type fails completely.
Note that, though you may be tempted to 'optimize' some of the seemingly pointless operations, doing so will break the system. Also, this must be compiled with the /fp:precise (or /fp:strict) flag (with MSVC), or the equivalent(s) for other compilers; using /fp:fast will break the code completely.
I have a constant integer, steps, which is calculated using the floor function of the quotient of two other constant variables. However, when I attempt to use it as the length of an array, Visual Studio tells me it must be a constant value and that the current value cannot be used as a constant. How do I make this a "true" constant that can be used as an array length? Is the floor function the problem, and is there an alternative I could use?
const int simlength = 3.154*pow(10,7);
const float timestep = 100;
const int steps = floor(simlength / timestep);

struct body bodies[bcount];

struct body {
    string name;
    double mass;
    double position[2];
    double velocity[2];
    double radius;
    double trace[2][steps];
};
It is not possible with the standard library's std::pow and std::floor function, because they are not constexpr-qualified.
You can probably replace std::pow with a hand-written implementation my_pow that is marked constexpr. Since you are just trying to take the power of integers, that shouldn't be too hard. If you are only using powers of 10, floating point literals may be written in the scientific notation as well, e.g. 1e7, which makes the pow call unnecessary.
The floor call is not needed, since float/double to int conversion already does flooring implicitly. More correctly, it truncates, which for non-negative values is equivalent to flooring.
Then you should also replace the const with constexpr in the variable declarations to make sure that the variables are usable in constant expressions:
constexpr int simlength = 3.154*my_pow(10,7); // or `3.154e7`
constexpr float timestep = 100;
constexpr int steps = simlength / timestep;
Theoretically only float requires this change, since there is a special exception for const integral types, but it seems more consistent this way.
Also, I have a feeling that there is something wrong with the types of your variables. Lengths and step counts should be determined by integer types and operations alone, not by floating-point ones. Floating-point operations are not exact and introduce errors relative to mathematically precise calculation on the real numbers. It is easy to get unexpected off-by-one (or worse) errors this way.
You cannot define an array of a class type before defining the class.
Solution: Define body before defining bodies.
Furthermore, you cannot use undefined names.
Solution: Define bcount before using it as the size of the array.
Is the floor function the problem, and is there an alternative I could use?
std::floor is one problem. There's an easy solution: Don't use it. Converting a floating point number to integer performs similar operation implicitly (the behaviour is different in case of negative numbers).
std::pow is another problem. It cannot be replaced as trivially in general, but in this case we can use a floating point literal in scientific notation instead.
Lastly, non-constexpr floating point variable isn't compile time constant. Solution: Use constexpr.
Here is a working solution:
constexpr int simlength = 3.154e7;
constexpr float timestep = 100;
constexpr int steps = simlength / timestep;
P.S. trace is a very large array. I would recommend against such large member variables, because it is easy for a user of the class not to notice that detail, and they are likely to create instances of the class in automatic storage. This is a problem because large objects in automatic storage are prone to cause stack overflow errors. Using std::vector instead of an array is an easy solution. If you do use std::vector, then as a side effect the requirement of a compile-time-constant size disappears and you will no longer have trouble using std::pow etc.
Because simlength is 3.154*10-to-the-7th, and because timestep is 10-squared, then the steps variable's value can be written as:
3.154e7 / 1e2 == 3.154e5
And, adding a type-cast, you should be able to write the array as:
double trace[2][(int)(3.154e5)];
Note that this is HIGHLY IRREGULAR, and should have extensive comments describing why you did this.
Try switching to constexpr:

constexpr int simlength = 3.154e7;
constexpr float timestep = 1e2;
constexpr int steps = simlength / timestep;

struct body {
    string name;
    double mass;
    double position[2];
    double velocity[2];
    double radius;
    double trace[2][steps];
};
Is it possible in C++ to create a union that would let me do something like this ...
union myTime {
    long millis;
    double seconds;
};
BUT, have it somehow do the conversion so that if I input times in milliseconds, and then call seconds, it will take the number and divide it by 1000, or conversely, if I input the number in seconds, then call millis, it would multiply the number by 1000...
So that:
myTime.millis = 1340;
double s = myTime.seconds;
Where s would equal 1.34
or
myTime.seconds = 2.5;
long m = myTime.millis;
Where m would = 2500
Is this possible?
A union just provides different representations of the same value (the same bytes), so you can't attach any smart logic to it.
In this case, you can define a class with conversion functions (both for initialization and for getting the data).
class myTime {
public:
    myTime(long millis);
    double as_seconds() const;
    static myTime from_seconds(double seconds);
};
Notice that, as mentioned in other answers, you can use std::chrono types for time conversions (C++11 and above).
To answer the question as asked: no. Unions are a lower-level construct that simply allows multiple object representations to live in the same memory; in your example, the long and the double share the same address.
They are not, however, smart enough to perform a conversion of any kind automatically. Accessing the inactive member of a union is actually undefined behavior in most cases (there are exceptions when you have a common initial sequence in a standard-layout object).
Even if the behavior were well-defined, the value you would see in the double would be the double interpretation of the byte pattern used to represent 1340.
If your problem is specifically to do with converting millis to seconds, as per your example, have you considered using std::chrono::duration units? These units are designed specifically for automatically doing these conversions between time units for you -- and you are capable of defining durations with custom representations (such as double).
Your example in your problem could be rewritten:
using double_seconds = std::chrono::duration<double>;
const auto millis = std::chrono::milliseconds{1340};
const auto m = double_seconds{millis};
// m contains 1.340
You can if you abuse the type system a bit:

union myTime {
    double seconds;
    class milli_t {
        double seconds;
    public:
        milli_t& operator=(double ms) {
            seconds = ms / 1000.0;
            return *this;
        }
        operator double() const { return seconds * 1000; }
    } millis;
};
Now if you do
myTime t;
t.millis = 1340;
double s = t.seconds;
s would equal 1.34
and
myTime t;
t.seconds = 2.5;
long m = t.millis;
m would be 2500, exactly as you desire.
Of course, why you would want to do this is unclear.
I have an integer of type uint32_t and would like to divide it by the maximum value of uint32_t, obtaining the result as a float (in the range 0..1).
Naturally, I can do the following:
float result = static_cast<float>(static_cast<double>(value) / static_cast<double>(std::numeric_limits<uint32_t>::max()));
This is, however, quite a lot of conversions along the way, and the division itself may be expensive.
Is there a way to achieve the above operation faster, without division and excess type conversions? Or maybe I shouldn't worry because modern compilers are able to generate an efficient code already?
Edit: division by MAX+1, effectively giving me a float in range [0..1) would be fine too.
A bit more context:
I use the above transformation in a time-critical loop, with the uint32_t values produced by a relatively fast random-number generator (such as pcg). I expect that the conversions/divisions in the transformation may have a noticeable, albeit not overwhelming, negative impact on the performance of my code.
This sounds like a job for:
std::uniform_real_distribution<float> dist(0.f, 1.f);
I would trust that to give you an unbiased conversion to float in the range [0, 1) as efficiently as possible. If you want the range to be [0, 1] you could use this:
std::uniform_real_distribution<float> dist(0.f, std::nextafter(1.f, 2.f))
Here's an example with two instances of a not-so-random number generator that generates min and max for uint32_t:
#include <iostream>
#include <limits>
#include <random>

struct ui32gen {
    constexpr ui32gen(uint32_t x) : value(x) {}
    uint32_t operator()() { return value; }
    static constexpr uint32_t min() { return 0; }
    static constexpr uint32_t max() { return std::numeric_limits<uint32_t>::max(); }
    uint32_t value;
};

int main() {
    ui32gen min(ui32gen::min());
    ui32gen max(ui32gen::max());
    std::uniform_real_distribution<float> dist(0.f, 1.f);

    std::cout << dist(min) << "\n";
    std::cout << dist(max) << "\n";
}
Output:
0
1
Is there a way to achieve the operation faster, without division
and excess type conversions?
If you want to manually do something similar to what uniform_real_distribution does (but much faster, and slightly biased towards lower values), you can define a function like this:
// [0, 1) the common range
inline float zero_to_one_exclusive(uint32_t value) {
    static const float f_mul =
        std::nextafter(1.f / float(std::numeric_limits<uint32_t>::max()), 0.f);
    return float(value) * f_mul;
}
It uses multiplication instead of division since that often is a bit faster (than your original suggestion) and only has one type conversion. Here's a comparison of division vs. multiplication.
If you really want the range to be [0, 1], you can do like below, which will also be slightly biased towards lower values compared to what std::uniform_real_distribution<float> dist(0.f, std::nextafter(1.f, 2.f)) would produce:
// [0, 1] the not so common range
inline float zero_to_one_inclusive(uint32_t value) {
    static const float f_mul = 1.f / float(std::numeric_limits<uint32_t>::max());
    return float(value) * f_mul;
}
Here's a benchmark comparing uniform_real_distribution to zero_to_one_exclusive and zero_to_one_inclusive.
Two of the casts are superfluous. You don't need to cast to float when you assign to a float anyway. It is also sufficient to cast just one of the operands to avoid integer arithmetic. So we are left with:
float result = static_cast<double>(value) / std::numeric_limits<uint32_t>::max();
This last cast you cannot avoid (otherwise you would get integer arithmetic).
Or maybe I shouldn't worry because modern compilers are able to
generate an efficient code already?
Definitely a yes and a no! Yes, trust the compiler to know best how to optimize code, and write for readability first. And no, don't blindly trust: look at the compiler's output, compare different versions, and measure them.
Is there a way to achieve the above operation faster, without division
[...] ?
Probably yes. Dividing by std::numeric_limits<uint32_t>::max() is so special that I wouldn't be too surprised if the compiler had some tricks. My first approach would again be to look at the compiler's output, and maybe compare the output of different compilers. Only if that output turns out to be suboptimal would I bother with manual bit-fiddling.
For further reading this might be of interest: How expensive is it to convert between int and double? . TL;DR: it actually depends on the hardware.
If performance were a real concern I think I'd be inclined to represent this 'integer that is really a fraction' in its own class and perform any conversion only where necessary.
For example:
#include <iostream>
#include <cstdint>
#include <limits>
struct fraction
{
    using value_type = std::uint32_t;

    constexpr explicit fraction(value_type num = 0) : numerator_(num) {}

    static constexpr auto denominator() -> value_type { return std::numeric_limits<value_type>::max(); }
    constexpr auto numerator() const -> value_type { return numerator_; }

    constexpr auto as_double() const -> double {
        return double(numerator()) / denominator();
    }

    constexpr auto as_float() const -> float {
        return float(as_double());
    }

private:
    value_type numerator_;
};
auto generate() -> std::uint32_t;
int main()
{
    auto frac = fraction(generate());

    // use/manipulate/display frac here ...

    // ... and finally convert to double/float if necessary
    std::cout << frac.as_double() << std::endl;
}
However, if you look at the code generation on godbolt, you'll see that the conversion is handled by the CPU's floating-point instructions. I'd be inclined to measure performance before you risk wasting time on premature optimisation.
I have a very simple use case for Boost.Units, but I am not sure if there is a better/easier way to get the same done.
I want to convert between the same units, but different ratios. For example, hertz to kilohertz to megahertz.
From my understanding, I first must define units with my specific ratios:
typedef boost::units::make_scaled_unit<si::frequency, scale<10, static_rational<0> > >::type Hertz_unit;
typedef boost::units::make_scaled_unit<si::frequency, scale<10, static_rational<3> > >::type KiloHertz_unit;
typedef boost::units::make_scaled_unit<si::frequency, scale<10, static_rational<6> > >::type MegaHertz_unit;
Then create quantities that represent the units:
typedef boost::units::quantity<Hertz_unit   , double> Hertz;
typedef boost::units::quantity<KiloHertz_unit, double> KiloHertz;
typedef boost::units::quantity<MegaHertz_unit, double> MegaHertz;
Finally, some constants and literals:
BOOST_UNITS_STATIC_CONSTANT( Hz, Hertz_unit    );
BOOST_UNITS_STATIC_CONSTANT(KHz, KiloHertz_unit);
BOOST_UNITS_STATIC_CONSTANT(MHz, MegaHertz_unit);

Hertz     operator"" _Hz (long double val) { return Hertz    (val * Hz);  }
KiloHertz operator"" _KHz(long double val) { return KiloHertz(val * KHz); }
MegaHertz operator"" _MHz(long double val) { return MegaHertz (val * MHz); }
Now I can use the quantities:
Hertz     freq_1 = (10 * Hz);
KiloHertz freq_2 = (10 * KHz);
MegaHertz freq_3 = (10 * MHz);
// OR
Hertz     freq_4 = 10.0_Hz;
KiloHertz freq_5 = 10.0_KHz;
MegaHertz freq_6 = 10.0_MHz;
// Convert between units
Hertz freq_7 = static_cast<Hertz>(10 * KHz);
Is this how Boost.Units should be used, or am I missing something that might make it easier to use?
Are there not already defined units/quantities that I can use somewhere hidden in a header? Or should this be done for all my units that I use?
Do I need to know/remember that kilo is scale<10, static_rational<3> >, or is this already defined and available?
There are a few different predefined "systems" that make things easier to use and avoid needing to define your own units and scales.
While this code doesn't involve frequencies, you should be able to adapt it to your needs (there is a boost/units/systems/si/frequency.hpp header, for example):
#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/si/prefixes.hpp>
using boost::units::si::meters;
using boost::units::si::milli;
typedef boost::units::quantity<boost::units::si::length> length;
static const auto millimeters = milli * meters;
// ...
auto x = length(5 * millimeters);
auto mm = double(x / meters * 1000.0);
You used to be able to do this without the explicit cast to length (although you then needed to explicitly type the variable as length instead of using auto), but at some point this was made to require an explicit cast.
In theory you shouldn't need to do the conversion from meters to mm manually in that second line, but the obvious construction x / millimeters produces compile errors for which I never managed to figure out a good workaround (the scale doesn't cancel out like it should).
(You can also use x.value() rather than x / meters, but I don't like that approach as it will still compile and give you surprising results if the base unit of x wasn't what you were expecting. And it still doesn't solve the mm conversion issue.)
Alternatively you might want to consider something like this answer, although that's mostly geared to using a single alternative scale as your base unit.
Here's another method using frequencies and multiple quantity types:
#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/frequency.hpp>
#include <boost/units/systems/si/prefixes.hpp>
using boost::units::si::hertz;
using boost::units::si::kilo;
using boost::units::si::mega;
static const auto kilohertz = kilo * hertz;
static const auto megahertz = mega * hertz;
typedef boost::units::quantity<boost::units::si::frequency> Hertz;
typedef boost::units::quantity<decltype(kilohertz)> KiloHertz;
typedef boost::units::quantity<decltype(megahertz)> MegaHertz;
// ...
auto freq_1 = Hertz(10 * hertz);
auto freq_2 = KiloHertz(10 * kilohertz);
auto freq_3 = MegaHertz(10 * megahertz);
auto freq_4 = KiloHertz(freq_3);
// freq1.value() == 10.0
// freq2.value() == 10.0
// freq3.value() == 10.0
// freq4.value() == 10000.0
You can do conversions and maths on these fairly easily; in this context the value() is probably the most useful as it will naturally express the same unit as the variable.
One slightly unfortunate behaviour is that the default string output for these units is presented as inverse seconds (e.g. freq_2 prints as "10 k(s^-1)"). So you probably just want to avoid using that.
And yes, the operator"" works as well, so you could substitute these in the above:
Hertz operator"" _Hz(long double val) { return Hertz(val * hertz); }
KiloHertz operator"" _kHz(long double val) { return KiloHertz(val * kilohertz); }
MegaHertz operator"" _MHz(long double val) { return MegaHertz(val * megahertz); }
auto freq_1 = 10.0_Hz;
auto freq_2 = 10.0_kHz;
auto freq_3 = 10.0_MHz;
auto freq_4 = KiloHertz(freq_3);
For consistency you can also define Hertz in terms of hertz, but due to quirks it's a little more tricky than the others; this works though:
typedef boost::units::quantity<std::decay_t<decltype(hertz)>> Hertz;
typedef boost::units::quantity<std::decay_t<decltype(kilohertz)>> KiloHertz;
typedef boost::units::quantity<std::decay_t<decltype(megahertz)>> MegaHertz;