g++ std::chrono assertion break - c++

I'm having trouble with some code from a C++ project. It includes the std::chrono library and keeps breaking at the following assertion:
static_assert(system_clock::duration::min() < system_clock::duration::zero(), "a clock's minimum duration cannot be less than its epoch");
The assertion fails on both a Debian machine with g++ 6.3.0 and on a Windows 10 PC with Cygwin and g++ 7.3.0.
I've also tried a simple example including the chrono header in an online C++ compiler. By itself it compiles without problems, but manually comparing the minimum and zero durations of the system clock gives the result that should trigger the assert as well.
I've searched about the issue and found some clues pointing to related problems caused by the POSIX TZ environment variable that holds timezone info. I tried unsetting it and setting it to its correct value, but it had no effect on the assert.
I'd appreciate any pointers or suggestions.
Edit: While std::chrono::milliseconds::zero() has (as expected) a value of 0, the value of std::chrono::milliseconds::min() is -9223372036854775808, or -2^63, which I think is the minimum possible value for a long long (possible overflow?).

After some tests I realized the assert was being triggered on both systems only when using g++ through the testing software we use, since the same code compiled outside it did not fail the assertion with the same compilers.
It turns out that the software uses the EDG parser, which needs the option --64_bit_target to avoid triggering the assert. Unfortunately, the option is not covered in the parser documentation, so I can't tell why the issue happens without it.
The question probably doesn't have much value now, but I didn't want to delete it since people already wrote answers that may be of interest to someone.

A duration can be negative, as you found with the highly negative value of …::min(). The assertion is incorrect, almost like asserting that -1 must be greater than zero.
The C++17 spec declares an abs() function for finding an absolute duration, and discusses its applicability with signed and unsigned representations:
23.17.5.9 duration algorithms [time.duration.alg]
template <class Rep, class Period> constexpr duration<Rep, Period> abs(duration<Rep, Period> d);
1 Remarks: This function shall not participate in overload resolution unless numeric_limits<Rep>::is_signed is true.
2 Returns: If d >= d.zero(), return d, otherwise return -d.

I have two suggestions:
In general, assert failures only happen in debug builds, so if you just want the build to succeed, you can build a release version to avoid the problem.
Confirm that your Debian and Windows machines are set to the proper time zone.

Related

GCC Assembly "+t"

I'm currently testing some inline assembly in C++ on an old compiler (GCC circa 2004) and I wanted to perform the square root function on a floating point number. After trying and searching for a successful method, I came across the following code
float r3(float n)
{
    __asm__("fsqrt" : "+t" (n));
    return n;
}
which worked. The issue is, even though I understand the assembly instructions used, I'm unable to find any documentation on what the "+t" constraint means on the n variable. My impression is that it is a way to treat the variable n as both the input and the output operand, but I was unable to find any information on it. So, what exactly is the "t" constraint and how does it work here?
+
Means that this operand is both read and written by the instruction.
(From here)
t
Top of 80387 floating-point stack (%st(0)).
(From here)
+ means you are reading and writing the register.
t means the value is on the top of the 80387 floating point stack.
References:
GCC manual, Extended Asm has general information about constraints - search for "constraints"
GCC manual, Machine Constraints has information about the specific constraints supported on each architecture - search for "x86 family"

What is the maximum value I can pass to std::thread::sleep_for() and sleep_until()?

This question on sleeping forever has an answer that mentions this:
std::this_thread::sleep_until(
    std::chrono::time_point<std::chrono::system_clock>::max());
and this:
std::this_thread::sleep_for(
    std::chrono::system_clock::duration::max());
Running this code on Visual C++ 2017 RC actually doesn't sleep at all. I haven't checked out the sleep_until() case, so I'm not sure what's going on there.
In the sleep_for() case, the given duration seems to be converted to an absolute time by adding it to system_clock::now(), which is then forwarded to sleep_until(). The problem is that the addition overflows, giving a time in the past.
Looking at the C++17 draft in 30.3.2, neither sleep_until() nor sleep_for() seem to mention limits. There is nothing relevant in Timing specifications (30.2.4). As for duration::max(), it is described in duration_values (20.17.4.3) as: "The value returned shall compare greater than zero()", which isn't helpful at all.
Honestly, I was rather surprised to see sleep_for() fail for system_clock::duration::max(), as it is a construct that makes perfect sense to me.
What is the highest value I can pass to those functions that has a well-defined behaviour?
Technically speaking, std::chrono::system_clock::duration::max() should sleep for a very long time (longer than you or your grandchildren will live). And the standard enforces that.
But practically, implementors are still learning how to deal with overflow induced by chrono conversions among durations of different precisions. So bugs are common.
It might be more practical to sleep for 9'000h (a little over a year). There's no way this is going to cause overflow. And it is surely "forever" for your application.
However, don't hesitate to send a bug report to your vendor complaining that std::chrono::system_clock::duration::max() doesn't work. It should. It is just tricky to make it work correctly. And making it work isn't portable, so it isn't reasonable to ask you to write some wrapper to do it.
Motivated by isanae's excellent comment below which asks for references:
30.3.3 [thread.thread.this]/p7 which describes sleep_for says:
Effects: Blocks the calling thread for the relative timeout (30.2.4) specified by rel_time.
30.2.4 [thread.req.timing] which is a specification of all the timing requirements in the thread support library, says:
2 Implementations necessarily have some delay in returning from a timeout. Any overhead in interrupt response, function return, and scheduling induces a “quality of implementation” delay, expressed as duration Di. Ideally, this delay would be zero. Further, any contention for processor and memory resources induces a “quality of management” delay, expressed as duration Dm. The delay durations may vary from timeout to timeout, but in all cases shorter is better.
3 The member functions whose names end in _for take an argument that specifies a duration. These functions produce relative timeouts. Implementations should use a steady clock to measure time for these functions.330 Given a duration argument Dt, the real-time duration of the timeout is Dt + Di + Dm .
Ok, so now I'm amused, because we aren't talking about a member function. We're talking about a namespace-scope function. This is a defect. Feel free to submit one.
But the spec provides no grace to overflow. The spec (nearly) clearly says that the implementation can't return until after the specified delay. It is vague on how much after, but clear on that it can't return before.
If you "bug" STL and he isn't cooperative, just refer him to me, and we will work it out. :-) Perhaps there is a standards bug I'm not seeing, and should be fixed. If so, I can help you file the bug against the standard instead of against VS. Or maybe VS has already addressed this issue, and the fix is available in an upgrade.
If this is a bug in VS, please let STL know that I am more than happy to assist in fixing it. There are different tradeoffs in addressing this issue on different platforms.
At the moment, I can't swear that there isn't a bug of this class in my own implementation (libc++). So no high-horse here. It is a difficult area for a std::lib to get right.
Update
I've looked at the libc++ sleep_for and sleep_until. sleep_for correctly handles the overflow by sleeping for a "long time" (as much as the OS can handle). sleep_until has the overflow bug.
Here is a very lightly tested fixed sleep_until:
template <class _Clock, class _Duration>
void
sleep_until(const chrono::time_point<_Clock, _Duration>& __t)
{
    using namespace chrono;
    using __ldsec = duration<long double>;
    _LIBCPP_CONSTEXPR time_point<_Clock, __ldsec> _Max =
        time_point<_Clock, nanoseconds>::max();
    time_point<_Clock, nanoseconds> __ns;
    if (__t < _Max)
    {
        __ns = time_point_cast<nanoseconds>(__t);
        if (__ns < __t)
            __ns += nanoseconds{1};
    }
    else
        __ns = time_point<_Clock, nanoseconds>::max();
    mutex __mut;
    condition_variable __cv;
    unique_lock<mutex> __lk(__mut);
    while (_Clock::now() < __ns)
        __cv.wait_until(__lk, __ns);
}
The basic strategy is to do the overflow check using a long double representation which not only has a very large maximum representable value, but also uses saturation arithmetic (has an infinity). If the input value is too big for the OS to handle, truncate it down to something the OS can handle.
On some platforms it might not be desirable to resort to floating point arithmetic. One might use __int128_t instead. Or there is a more involved trick of converting to the "least common multiple" of the input and the native duration before doing the comparison. That conversion will only involve division (not multiplication) and so can't overflow. However it will not always give accurate answers for two values that are nearly equal. But it should work well enough for this use case.
For those interested in the latter (lcm) strategy, here is how to compute that type:
namespace detail
{
    template <class Duration0, class ...Durations>
    struct lcm_type;

    template <class Duration>
    struct lcm_type<Duration>
    {
        using type = Duration;
    };

    template <class Duration1, class Duration2>
    struct lcm_type<Duration1, Duration2>
    {
        template <class D>
        using invert = std::chrono::duration
                       <
                           typename D::rep,
                           std::ratio_divide<std::ratio<1>, typename D::period>
                       >;

        using type = invert<typename std::common_type<invert<Duration1>,
                                                      invert<Duration2>>::type>;
    };

    template <class Duration0, class Duration1, class Duration2, class ...Durations>
    struct lcm_type<Duration0, Duration1, Duration2, Durations...>
    {
        using type = typename lcm_type<
            typename lcm_type<Duration0, Duration1>::type,
            Duration2, Durations...>::type;
    };
}  // namespace detail
One can think of lcm_type<duration1, duration2> as the opposite of common_type<duration1, duration2>. The former finds a duration which the conversion to only divides. The latter finds a duration which the conversion to only multiplies.
It's unspecified, and it will overflow
I've had discussions with Billy O'Neal, one of the Visual C++ standard library developers, and Howard Hinnant, lead author of libc++. My conclusion is that the _for and _until family from the threading library will overflow in unspecified ways and you should not try to pass largish values to them. Whether the standard is under-specified on that subject is unclear to me.
The problem
All timed functions1 take either a duration or a time_point. Both are defined by their underlying type (representation) and ratio (period). The period can also be considered a "unit", such as a second or nanosecond.
There are two main places where overflow can happen:
Before the platform-specific call, and
During the conversion to a platform-specific type
Before the call
It is possible to avoid overflow in this situation, like Howard mentions in his answer, but "implementors are still learning how to deal with overflow induced by chrono conversions among durations of different precisions".
Visual C++ 2017, for example, implements sleep_for() in terms of sleep_until() by adding the given duration to the current time returned by system_clock::now(). If the duration is too large, this addition overflows. Other libraries, such as libstdc++, don't seem to have this problem.
The system call
Once you go deep enough, you'll have to interact with whatever platform you're on to do the actual work. This is where it gets messy.
On libstdc++, for example, the call to sleep_for() ends up in nanosleep(), which takes a timespec. This is a simplified version of it:
auto s = duration_cast<seconds>(time);
auto ns = duration_cast<nanoseconds>(time - s);
timespec ts = { s.count(), ns.count() };
nanosleep(&ts, &ts);
It's easy to overflow this: you just have to pass a time that is longer than LLONG_MAX seconds:
std::this_thread::sleep_for(hours::max());
This overflows the duration_cast into seconds and sets ts.tv_sec to -3600, which doesn't sleep at all because nanosleep() fails on negative values. It gets even better with sleep_until(), which tries to call nanosleep() in a loop, but it keeps failing, so it takes 100% of the processor for the duration of the wait.
The same thing happens in the Visual C++ 2017 library. Ignoring the overflow in sleep_for() because it adds the duration to the current time, it ends up calling Sleep, which takes an unsigned 32-bit value in milliseconds.
Even if it called something more flexible like NtWaitForSingleObject() (which it might in the future), it's still only a signed 64-bit value in 100-nanosecond increments and can still overflow.
Bugs and limitations
I personally consider an overflow in the <chrono> library itself to be a bug, such as Visual C++'s implementation of sleep_for() in terms of sleep_until(). I think whatever value you give should end up untouched right up to the final conversion before calling into a platform-specific function.
Once you get there, though, if the platform doesn't support sleeping for the duration you're asking for, there is no real solution. As <chrono> is prohibited from throwing exceptions, I accept that overflowing is a possibility. Although this then becomes undefined behaviour, I wish implementations would be a bit more careful in handling overflows, such as libstdc++'s various failures to handle EINVAL and its spinning in a tight loop.
Visual C++
I'm quoting a few things from the emails I got from Billy O'Neal because they add the point of view of a standard library developer:
Are you saying that this:
this_thread::sleep_for(system_clock::duration::max());
is undefined behaviour by the standard?
As far as I can tell, yes. It's kind of a grey area -- no maximum allowable range is really specified for these functions, but given their nature of accepting arbitrary time_point/duration, which may be backed by some user-supplied bignum type of which the standard library has no knowledge, a conversion to some underlying time_point/duration type is essentially mandated. <chrono>'s design treats dealing with overflows as a non-goal (see duration_cast, for example, which outright prohibits implementing "as if infinity" and similar).
The standard [...] doesn't give us any way to report failure to convert here -- the behavior is literally undefined. We are explicitly prohibited from throwing exceptions, we have no way of reasoning about what happens if you exceed LLONG_MAX, and so our only possible responses are "as if infinity" or go directly to std::terminate(), do not pass go, do not collect $200.
libstdc++ and libc++ are targeting platforms for which system_clock actually maps to something the platform understands, where Unix timestamps are the law of the land. We are not targeting such a platform, and are obligated to map to/from "DWORD milliseconds" and/or FILETIME.
About the only thing I can think of might be a reasonable use case for this thing would be to have some kind of sentinel value which means "infinity," but if we want to go there the standard should introduce a named constant and describe the behavior thereof.
I'd rather solve your direct problem (wanting a time value to be a sentinel for infinity) rather than attempting to mandate overflow checking. Overflow checking when you don't know anything about the types involved can get really expensive (in both complexity and run time), but checking for a magic constant (e.g. chrono::duration<rep, period>::max() or chrono::time_point<clock, duration>::max()) should be cheap.
It also looks like a future update (ABI incompatible) would make major changes to <thread> so it doesn't overflow in sleep_for() anymore, but it is still limited by what the Windows API supports. Something like NtWaitForSingleObject() does support 64-bit values, but signed, because it supports both relative (negative) and absolute (positive) times.
1 By "timed functions", I mean any function for which 30.2.4 [thread.req.timing] applies, such as this_thread::sleep_for() and this_thread::sleep_until(), but also stuff in timed_mutex, recursive_timed_mutex, condition_variable, etc.

Is there any tool for C++ which will check for common unspecified behavior?

Often one makes assumptions about a particular platform one is coding on, for example that signed integers use two's complement storage, or that (0xFFFFFFFF == -1), or things of that nature.
Does a tool exist which can check a codebase for the most common violations of these kinds of things (for those of us who want portable code but don't have strange non-two's-complement machines)?
(My examples above are specific to signed integers, but I'm interested in other errors (such as alignment or byte order) as well)
There are various levels of compiler warnings that you may wish to have switched on, and you can treat warnings as errors.
If there are other assumptions you know you make at various points in the code you can assert them. If you can do that with static asserts you will get failure at compile time.
I know that CLang is very actively developing a static analyzer (as a library).
The goal is to catch errors at analysis time, though the exact extent of the errors caught is not yet clear to me. The library is called "Checker", and T. Kremenek is responsible for it; you can ask about it on the clang-dev mailing list.
I don't have the impression that there is any kind of reference about the checks being performed, and I don't think it's mature enough yet for a production tool (given the rate of change going on), but it may be worth a look.
Maybe a static code analysis tool? I used one a few years ago and it reported errors like this. It was not perfect and still limited but maybe the tools are better now?
update:
Maybe one of these:
What open source C++ static analysis tools are available?
update2:
I tried FlexeLint on your example (you can try it online using the Do-It-Yourself Example on http://www.gimpel-online.com/OnlineTesting.html) and it complains about it but perhaps not in a way you are looking for:
5 int i = -1;
6 if (i == 0xffffffff)
diy64.cpp 6 Warning 650: Constant '4294967295' out of range for operator '=='
diy64.cpp 6 Info 737: Loss of sign in promotion from int to unsigned int
diy64.cpp 6 Info 774: Boolean within 'if' always evaluates to False [Reference: file diy64.cpp: lines 5, 6]
Very interesting question. I think it would be quite a challenge to write a tool to flag these usefully, because so much depends on the programmer's intent/assumptions.
For example, it would be easy to recognize a construct like:
x &= -2; // round down to an even number
as being dependent on twos-complement representation, but what if the mask is a variable instead of a constant "-2"?
Yes, you could take it a step further and warn of any use of a signed int with bitwise &, any assignment of a negative constant to an unsigned int, and any assignment of a signed int to an unsigned int, etc., but I think that would lead to an awful lot of false positives.
[ sorry, not really an answer, but too long for a comment ]

gcc optimization? bug? and its practial implication to project

My questions are divided into three parts
Question 1
Consider the below code,
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    const int v = 50;
    int i = 0X7FFFFFFF;

    cout << (i + v) << endl;
    if (i + v < i)
    {
        cout << "Number is negative" << endl;
    }
    else
    {
        cout << "Number is positive" << endl;
    }
    return 0;
}
No specific compiler optimisation options (-O flags) are used; the basic compilation command g++ -o test main.cpp produces the executable.
This seemingly very simple code has odd behaviour on a 64-bit SUSE system with gcc 4.1.2. The expected output is "Number is negative", yet only on that system the output is "Number is positive".
After some amount of analysis and doing a 'disass' of the code, I find that the compiler optimises in the below format -
Since i is the same on both sides of the comparison and cannot change within the same expression, remove i from the equation.
The comparison then reduces to if (v < 0), where v is a positive constant, so during compilation the else branch is selected directly; no cmp/jmp instructions can be found.
I see that the behaviour is only in gcc 4.1.2 SUSE 10. When tried in AIX 5.1/5.3 and HP IA64, the result is as expected.
Is the above optimisation valid?
Or, is using the overflow mechanism for int not a valid use case?
Question 2
Now when I change the conditional statement from if (i + v < i) to if ((i + v) < i), the behaviour is the same. With this, at least, I would personally disagree: since additional parentheses are provided, I expect the compiler to create a temporary of the built-in type and then compare, thus nullifying the optimisation.
Question 3
Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system's behaviour. Of course, from a business perspective, it is very ineffective to test all lines of code again just because of a compiler upgrade.
I think for all practical purposes these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak into production.
Can anyone suggest any possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base?
PS :
When the const on v is removed from the code, the optimisation is not done by the compiler.
I believe it is perfectly fine to use the overflow mechanism to find out whether the variable is within 50 of the MAX value (in my case).
Update(1)
What do I want to achieve? Variable i would be a counter (a kind of syncID). If I do offline operations (50 operations), then during startup I would like to reset my counter; for this I am checking the boundary value (to reset it) rather than adding to it blindly.
I am not sure if I am relying on the hardware implementation. I know that 0X7FFFFFFF is the maximum positive value. All I am doing is, by adding a value to this, I am expecting the return value to be negative. I don't think this logic has anything to do with the hardware implementation.
Anyways, all thanks for your input.
Update(2)
Most of the input states that I am relying on the low-level overflow behaviour. I have one question regarding that:
If that is the case, for an unsigned int how do I validate and reset the value during underflow or overflow? E.g., if v = 10 and i = 0X7FFFFFFE, I want to reset i = 9. Similarly for underflow?
I would not be able to do that unless I check for negativity of the number. So my claim is that int must return a negative number when a value is added to +MAX_INT.
Please let me know your inputs.
It's a known problem, and I don't think it's considered a bug in the compiler. When I compile with gcc 4.5 with -Wall -O2 it warns
warning: assuming signed overflow does not occur when assuming that (X + c) < X is always false
Although your code does overflow.
You can pass the -fno-strict-overflow flag to turn that particular optimization off.
Your code produces undefined behavior. The C and C++ languages have no "overflow mechanism" for signed integer arithmetic. Your calculations overflow signed integers, so the behavior is immediately undefined. Considering it from a "bug in the compiler or not" position is no different from attempting to analyze the i = i++ + ++i examples.
The GCC compiler has an optimization based on that part of the specification of the C/C++ languages. It is called "strict overflow semantics" or something like that. It is based on the fact that adding a positive value to a signed integer in C++ either produces a larger value or results in undefined behavior. This immediately means that the compiler is perfectly free to assume that the sum is always larger. The general nature of that optimization is very similar to the "strict aliasing" optimizations also present in GCC. They both resulted in some complaints from the more "hackerish" parts of the GCC user community, many of whom didn't even suspect that the tricks they were relying on in their C/C++ programs were simply illegal hacks.
Q1: Perhaps, the number is indeed positive in a 64bit implementation? Who knows? Before debugging the code I'd just printf("%d", i+v);
Q2: The parentheses are only there to tell the compiler how to parse an expression. This is usually done in the form of a tree, so the optimizer does not see any parentheses at all. And it is free to transform the expression.
Q3: That's why, as a C/C++ programmer, you must not write code that assumes particular properties of the underlying hardware, such as, for example, that an int is a 32-bit quantity in two's complement form.
What does the line:
cout<<(i + v)<<endl;
output in the SUSE example? Are you sure you don't have 64-bit ints?
OK, so this was almost six years ago and the question is answered. Still, I feel there are some bits that have not been addressed to my satisfaction, so I'll add a few comments, hopefully for the good of future readers of this discussion. (Such as myself when I got a search hit for it.)
The OP specified using gcc 4.1.2 without any special flags. I assume the absence of the -O flag is equivalent to -O0. With no optimization requested, why did gcc optimize away code in the reported way? That does seem to me like a compiler bug. I also assume this has been fixed in later versions (for example, one answer mentions gcc 4.5 and the -fno-strict-overflow flag). The current gcc man page states that -fstrict-overflow is enabled by -O2 or higher.
In current versions of gcc, there is an option -fwrapv that enables you to use the sort of code that caused trouble for the OP. Provided of course that you make sure you know the bit sizes of your integer types. From gcc man page:
-fstrict-overflow
.....
See also the -fwrapv option. Using -fwrapv means that integer signed overflow
is fully defined: it wraps. ... With -fwrapv certain types of overflow are
permitted. For example, if the compiler gets an overflow when doing arithmetic
on constants, the overflowed value can still be used with -fwrapv, but not otherwise.

How to detect an overflow in C++?

I just wonder if there is some convenient way to detect if overflow happens to any variable of any default data type used in a C++ program during runtime? By convenient, I mean no need to write code to follow each variable if it is in the range of its data type every time its value changes. Or if it is impossible to achieve this, how would you do?
For example,
float f1=FLT_MAX+1;
cout << f1 << endl;
doesn't give any error or warning in either compilation with "gcc -W -Wall" or running.
Thanks and regards!
Consider using Boost's numeric conversion library, which gives you negative_overflow and positive_overflow exceptions (see its examples).
Your example doesn't actually overflow in the default floating-point environment on an IEEE-754-compliant system.
On such a system, where float is 32 bit binary floating point, FLT_MAX is 0x1.fffffep127 in C99 hexadecimal floating point notation. Writing it out as an integer in hex, it looks like this:
0xffffff00000000000000000000000000
Adding one (without rounding, as though the values were arbitrary precision integers), gives:
0xffffff00000000000000000000000001
But in the default floating-point environment on an IEEE-754 compliant system, any value between
0xfffffe80000000000000000000000000
and
0xffffff80000000000000000000000000
(which includes the value you have specified) is rounded to FLT_MAX. No overflow occurs.
Compounding the matter, your expression (FLT_MAX + 1) is likely to be evaluated at compile time, not runtime, since it has no side effects visible to your program.
In situations where I need to detect overflow, I use SafeInt<T>. It's a cross platform solution which throws an exception in overflow situations.
SafeInt<float> f1 = FLT_MAX;
f1 += 1; // throws
It is available on codeplex
http://www.codeplex.com/SafeInt/
Back in the old days when I was developing in C++ (199x) we used a tool called Purify. Back then it was a tool that instrumented the object code and logged everything 'bad' during a test run.
I did a quick google and I'm not quite sure if it still exists.
As far as I know, several open source tools nowadays do more or less the same.
Check out Electric Fence and Valgrind.
Clang provides -fsanitize=signed-integer-overflow and -fsanitize=unsigned-integer-overflow.
http://clang.llvm.org/docs/UsersManual.html#controlling-code-generation