std::chrono behaves differently on ARM - C++

The following code is a hacky production fix I am using due to time constraints. I have a static function that is called from many places, far more often than intended, and it is causing another section of the application to choke. As a quick fix I want to limit the calls to the overworked section to once every two seconds. This works just fine on x86 using clang or gcc.
#include <chrono>
#include <iostream>
#include <unistd.h>
#include <thread>
#include <mutex>
static void staticfunction()
{
    static std::mutex mutex;
    static auto t0(std::chrono::high_resolution_clock::now());
    std::unique_lock<std::mutex> lg_mutex(mutex, std::try_to_lock);
    if( lg_mutex.owns_lock())
    {
        auto t1 = std::chrono::high_resolution_clock::now();
        if( 2000 <= std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() )
        {
            // Make a check in other section of application
            std::cout << "Check true after " << std::dec
                      << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
                      << " ms.\n";
            t0 = std::chrono::high_resolution_clock::now();
        }
    }
}

int main()
{
    while(true) {
        std::thread t1(staticfunction);
        std::thread t2(staticfunction);
        std::thread t3(staticfunction);
        std::thread t4(staticfunction);
        t1.join();
        t2.join();
        t3.join();
        t4.join();
    }
    return 0;
}
Prints
Check true after 2000 ms.
Check true after 2000 ms.
Check true after 2002 ms.
Check true after 2005 ms.
....
However, for our ARM controller I cross-compiled using Linaro 7.1, and now the condition in the if statement isn't satisfied until 10 seconds have passed. I was curious and compared against 1 second instead of two (using a duration_cast to seconds instead of milliseconds doesn't change anything), and if(1 <= ....count()) was true after half a second.
Is this a bug in the Linaro compiler, or are the clocks on our ARM controller off? The cross-compile flags are -mcpu=cortex-a7 -mfloat-abi=hard -marm -march=armv7ve, if that makes a difference.
EDIT: multithreaded, same output.
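For reference, here is a minimal sketch (not the production code) of the same throttle written against std::chrono::steady_clock, which is guaranteed to be monotonic:
#include <chrono>
#include <mutex>

static bool throttled_check()
{
    static std::mutex mutex;
    static auto t0 = std::chrono::steady_clock::now();
    std::unique_lock<std::mutex> lock(mutex, std::try_to_lock);
    if (!lock.owns_lock())
        return false;
    auto t1 = std::chrono::steady_clock::now();
    if (t1 - t0 < std::chrono::seconds(2))
        return false;
    t0 = t1;
    return true; // caller performs the expensive check at most once every two seconds
}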

Related

command to kill/stop the program if it runs more than a certain time limit

If I have C++ code with an infinite loop inside, I want a command that will kill the execution after a certain time.
So I came up with something like this:
g++ -std=c++20 -DLOCAL_PROJECT solution.cpp -o solution.exe & solution.exe & timeout /t 0 & taskkill /im solution.exe /f
The problem with this is that it executes the program first, so because of the infinite loop it never even reaches the timeout and taskkill parts.
Does anybody have a solution to this, or an alternative to timeout?
I am using Windows 10 and my compiler is GNU GCC 11.2.0.
Also, in case there is no TLE, I don't want taskkill to show this error:
ERROR: The process "solution.exe" not found.
Your main loop could exit after a certain time limit, if you're confident it is called regularly enough.
#include <chrono>
using namespace std::chrono_literals;
using Clock = std::chrono::system_clock;

int main()
{
    auto timeLimit = Clock::now() + 1s;
    while (Clock::now() < timeLimit) {
        //...
    }
}
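A variant of the same idea (my sketch, not from the answer above): use std::chrono::steady_clock, which cannot jump if the wall clock is adjusted, and poll it only every few thousand iterations to keep the overhead low.
#include <chrono>
using namespace std::chrono_literals;
using Clock = std::chrono::steady_clock;

int main()
{
    auto timeLimit = Clock::now() + 1s;
    for (long iter = 0; ; ++iter) {
        if (iter % 4096 == 0 && Clock::now() >= timeLimit)
            break;
        //... work ...
    }
}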
Alternatively, you could launch a thread from main that throws an exception after a certain delay:
#include <chrono>
#include <iostream>
#include <thread>
using namespace std::chrono_literals;

struct TimeOutException {};

int main()
{
    std::thread([]{
        std::this_thread::sleep_for(1s);
        std::cerr << "TLE" << std::endl;
        throw TimeOutException{};
    }).detach();
    //...
}
terminate called after throwing an instance of 'TimeOutException'
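That message is terminate() being called because the exception escapes the detached thread's top-level function. A variant (my sketch, not from the original answer) ends the process directly instead, so the exit status is well defined:
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>
using namespace std::chrono_literals;

int main()
{
    std::thread([]{
        std::this_thread::sleep_for(1s);
        std::cerr << "TLE" << std::endl;
        std::_Exit(1); // end the whole process immediately with a known exit code
    }).detach();
    //... actual work ...
}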

how to use std::atomic_signal_fence() with semaphore and volatile?

std::atomic_signal_fence() Establishes memory synchronization ordering ... between a thread and a signal handler executed on the same thread.
-- cppreference
To find an example illustrating this, I looked at bames53's similar question on Stack Overflow. However, the answer there may not suit my x86_64 environment, since x86_64 has a strong memory model and forbids Store-Store reordering ^1. That example executes correctly even without std::atomic_signal_fence() in my x86_64 environment.
So, following Jeff Preshing's post, I made a Store-Load reordering example suitable for x86_64. The example code is not that short, so I opened this question instead of appending it to bames53's question.
main() and signal_handler() run in the same thread (i.e. they share the same tid) in a single-core environment, and main() can be interrupted at any time by signal_handler(). If no signal fences are used, the ordering of X = 1; r1 = Y; in the generated binary may be swapped when compiled with g++ -O2 (the Store(X)-Load(Y) sequence is optimized into Load(Y)-Store(X)); the same goes for Y = 1; r2 = X;. So if main() is interrupted just after the Load(Y), we can end up with r1 == 0 and r2 == 0, in which case line (C) in the code below will assert-fail. But if lines (A) and (B) are uncommented, the assert should never fail, since the signal fences protect the synchronization between main() and signal_handler(), to the best of my understanding.
#include <atomic>
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <semaphore.h>
#include <signal.h>
#include <unistd.h>

sem_t endSema;
// volatile int synchronizer;
int X, Y;
int r1, r2;

void signal_handler(int sig) {
    signal(sig, SIG_IGN);
    Y = 1;
    // std::atomic_signal_fence(std::memory_order_seq_cst); // (A) if uncommented, assert still may fail
    r2 = X;
    signal(SIGINT, signal_handler);
    sem_post(&endSema); // if changed to the following, assert never fails
    // synchronizer = 1;
}

int main(int argc, char* argv[]) {
    std::srand(std::time(nullptr));
    sem_init(&endSema, 0, 0);
    signal(SIGINT, signal_handler);
    for (;;) {
        while (std::rand() % std::stol(argv[1]) != 0); // argv[1] ~ 1000'000
        X = 1;
        // std::atomic_signal_fence(std::memory_order_seq_cst); // (B) if uncommented, assert still may fail.
        r1 = Y;
        sem_wait(&endSema); // if changed to the following, assert never fails
        // while (synchronizer == 0); synchronizer = 0;
        std::cout << "r1=" << r1 << " r2=" << r2 << std::endl;
        if (r1 == 0) assert(r2 != 0); // (C)
        Y = 0; r1 = 0; r2 = 0; X = 0;
    }
    return 0;
}
Firstly, a semaphore is used to synchronize main() with signal_handler(). In this version, the assert always fails after around 30 received SIGINTs, with or without the signal fence. It seems that std::atomic_signal_fence() does not work as I expected.
Secondly, if the semaphore is replaced with volatile int synchronizer, the program seems to never fail, with or without the signal fence.
What's wrong with the code? Did I misunderstand the cppreference doc? Or is there a more proper example for this topic on x86_64 where I can observe the effects of std::atomic_signal_fence()?
Below is some relevant info:
compiling & running env: CentOS 8 (Linux 4.18.0) x86_64 single CPU core.
Compiler: g++ (GCC) 8.3.1 20190507
Compiling command g++ -std=c++17 -o ordering -O2 ordering.cpp -pthread
Run with ./ordering 1000000, then keep pressing Ctrl-C to invoke the signal handler.
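As an aside, the reordering claim can be checked in isolation. Here is a minimal sketch (mine, not from the question) that can be built with g++ -O2 -S to see whether the store to X is moved below the load of Y:
#include <atomic>

int X, Y;
int r1;

void store_then_load()
{
    X = 1;
    // std::atomic_signal_fence(std::memory_order_seq_cst); // uncommenting forbids the compiler reordering
    r1 = Y;
}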

boost interprocess scoped lock with timer blocks even though it should return

I have some code in my application that makes use of a Boost interprocess scoped lock with timers. When the mutex is acquired in one thread, a second thread trying to acquire it for a few milliseconds should fail and log something to screen.
I don't know why, but with Boost 1.50 this doesn't work anymore.
With the code below I can see that thread #2 doesn't print "ERROR" but is completely stuck.
Am I missing something here?
I am using Linux kernel 2.6.32 with g++.
Could it have something to do with UTC? I read in the Boost docs that the time used by such a lock is UTC, and in Date Time I am reading right now about local_adjustor and conversion from local time to UTC and vice versa.
AFG
#include <iostream>
#include <unistd.h>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

namespace bi = boost::interprocess;

void lock_test( bi::named_mutex& mt, bool long_sleep ) {
    boost::posix_time::ptime pt =
        boost::posix_time::microsec_clock::local_time()
        + boost::posix_time::milliseconds(100);
    bi::scoped_lock<bi::named_mutex> l( mt, pt );
    if( l.owns() ){
        std::cout << "Locked" << std::endl;
    }
    else{
        std::cout << "ERROR" << std::endl;
        std::cout.flush();
        return;
    }
    if( long_sleep ){
        while(true) { sleep(1); std::cout << "[]"; std::cout.flush(); }
    }
}

int main(){
    bi::named_mutex m_mutex( bi::open_or_create, "ciao"
                           , bi::permissions( 0666 ));
    boost::thread t1 = boost::thread( &lock_test
                                    , boost::ref( m_mutex), true );
    sleep(4);
    boost::thread t2 = boost::thread( &lock_test
                                    , boost::ref( m_mutex), false );
    while(true){ sleep(1); }
}
It looks like if I switch from boost::posix_time::microsec_clock::local_time() to boost::posix_time::microsec_clock::universal_time(), everything works fine.
You should use boost::get_system_time(); there are quite a few examples using it. Though I can't find an authoritative source, I used microsec_clock exactly as you do and got similar problems. I just discovered the bug, though, and will update when I've tested the fix.
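For illustration, here is a minimal sketch (mine, not from the answer, assuming the includes and the bi namespace alias from the example above) of the timed acquisition using boost::get_system_time(), which already returns a UTC-based ptime:
#include <boost/thread/thread_time.hpp> // boost::get_system_time()

void lock_test_with_deadline( bi::named_mutex& mt ) {
    // Absolute deadline in UTC, which is what the interprocess timed lock expects.
    boost::system_time deadline =
        boost::get_system_time() + boost::posix_time::milliseconds(100);
    bi::scoped_lock<bi::named_mutex> l( mt, deadline );
    if( !l.owns() ){
        std::cout << "ERROR" << std::endl; // timed out waiting for the mutex
    }
}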

Boost thread_interrupted exception terminate()s with MinGW gcc 4.4.0, OK with 3.4.5

I've been "playing around with" boost threads today as a learning exercise, and I've got a working example I built quite a few months ago (before I was interrupted and had to drop multi-threading for a while) that's showing unusual behaviour.
When I initially wrote it I was using MinGW gcc 3.4.5, and it worked. Now I'm using 4.4.0 and it doesn't; incidentally, I've tried again using 3.4.5 (I kept that version in a separate folder when I installed 4.4.0) and it still works.
The code is at the end of the question; in summary, it starts two counter objects off in two child threads (these objects simply increment a variable, sleep for a bit, and repeat ad infinitum - they count). The main thread waits for the user via cin.get(), then interrupts both threads, waits for them to join, and outputs the result of both counters.
Compiled with 3.4.5 it runs as expected.
Compiled with 4.4.0 it runs until the user input, then dies with a message like the one below - it seems the interrupt exceptions are killing the entire process?
terminate called after throwing an instance of '
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
boost::thread_interrupted'
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
From what I read, I think that any (?) uncaught exception that is allowed to propagate out of a child thread will kill the process? But, I'm catching the interrupts here, aren't I? At least I seem to be when using 3.4.5.
So, firstly, have I understood how interrupting works?
And, any suggestions as to what is happening and how to fix?
Code:
#include <iostream>
#include <cstdlib>
#include <boost/thread/thread.hpp>
#include <boost/date_time.hpp>

//fixes a linker error for boost threads in 4.4.0 (not needed for 3.4.5)
//found via Google, so not sure on validity - but does fix the link error.
extern "C" void tss_cleanup_implemented() { }

class CCounter
{
private:
    int& numberRef;
    int step;
public:
    CCounter(int& number, int setStep) : numberRef(number), step(setStep) { }
    void operator()()
    {
        try
        {
            while( true )
            {
                boost::posix_time::milliseconds pauseTime(50);
                numberRef += step;
                boost::this_thread::sleep(pauseTime);
            }
        }
        catch( boost::thread_interrupted const& e )
        {
            return;
        }
    }
};

int main( int argc , char *argv[] )
{
    try
    {
        std::cout << "Starting counters in secondary threads.\n";
        int number0 = 0,
            number1 = 0;
        CCounter counter0(number0, 1);
        CCounter counter1(number1, -1);
        boost::thread threadObj0(counter0);
        boost::thread threadObj1(counter1);
        std::cout << "Press enter to stop the counters:\n";
        std::cin.get();
        threadObj0.interrupt();
        threadObj1.interrupt();
        threadObj0.join();
        threadObj1.join();
        std::cout << "Counter stopped. Values:\n"
                  << number0 << '\n'
                  << number1 << '\n';
    }
    catch( boost::thread_interrupted& e )
    {
        std::cout << "\nThread Interrupted Exception caught.\n";
    }
    catch( std::exception& e )
    {
        std::cout << "\nstd::exception thrown.\n";
    }
    catch(...)
    {
        std::cout << "\nUnexpected exception thrown.\n";
    }
    return EXIT_SUCCESS;
}
Solved.
It turns out that adding the compiler flag -static-libgcc removes the problem with 4.4.0 (and has no apparent effect with 3.4.5) - or at least in this case the program returns the expected results.

How to get the time in milliseconds in C++

In Java you can do this:
long now = (new Date()).getTime();
How can I do the same but in C++?
Because C++0x is awesome
#include <chrono>
namespace sc = std::chrono;
auto time = sc::system_clock::now(); // get the current time
auto since_epoch = time.time_since_epoch(); // get the duration since epoch
// I don't know what system_clock returns
// I think it's uint64_t nanoseconds since epoch
// Either way this duration_cast will do the right thing
auto millis = sc::duration_cast<sc::milliseconds>(since_epoch);
long now = millis.count(); // just like java (new Date()).getTime();
This works with gcc 4.4+. Compile it with --std=c++0x. I don't know if VS2010 implements std::chrono yet.
There is no such method in standard C++ (in standard C++, there is only second-accuracy, not millisecond). You can do it in non-portable ways, but since you didn't specify I will assume that you want a portable solution. Your best bet, I would say, is the boost function microsec_clock::local_time().
I like to have a function called time_ms defined as such:
// Used to measure intervals and absolute times
typedef int64_t msec_t;
// Get current time in milliseconds from the Epoch (Unix)
// or the time the system started (Windows).
msec_t time_ms(void);
The implementation below should work in Windows as well as Unix-like systems.
#if defined(__WIN32__)
#include <windows.h>

msec_t time_ms(void)
{
    return timeGetTime();
}
#else
#include <sys/time.h>

msec_t time_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (msec_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}
#endif
Note that the time returned by the Windows branch is milliseconds since the system started, while the time returned by the Unix branch is milliseconds since 1970. Thus, if you use this code, only rely on differences between times, not the absolute time itself.
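For example, a usage sketch (do_work() is a hypothetical placeholder) that relies only on differences between times, as recommended:
msec_t start = time_ms();
do_work();                             // hypothetical workload
msec_t elapsed_ms = time_ms() - start; // meaningful on both the Windows and Unix branches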
You can try this code (taken from the Stockfish chess engine source code, GPL):
#include <iostream>
#include <cstdio>
#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
#  include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
    gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
    return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
#  include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
    return t.time * 1000LL + t.millitm;
}
#endif

int main() {
    sys_time_t t;
    system_time(&t);
    long long currentTimeMs = time_to_msec(t);
    std::cout << "currentTimeMs:" << currentTimeMs << std::endl;
    getchar(); // wait for keyboard input
}
Standard C++ does not have a time function with subsecond precision.
However, almost every operating system does. So you have to write code that is OS-dependent.
Win32:
GetSystemTime()
GetSystemTimeAsFileTime()
Unix/POSIX:
gettimeofday()
clock_gettime()
Boost has a useful library for doing this:
http://www.boost.org/doc/libs/1_43_0/doc/html/date_time.html
ptime microsec_clock::local_time() or ptime second_clock::local_time()
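As an illustration of the POSIX route, a minimal sketch (mine, not part of the original answer) using clock_gettime():
#include <cstdint>
#include <ctime>

// Milliseconds since the Unix Epoch, read from the wall clock.
// POSIX; on older glibc you may need to link with -lrt.
int64_t millis_since_epoch()
{
    timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return static_cast<int64_t>(ts.tv_sec) * 1000 + ts.tv_nsec / 1000000;
}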
Java:
package com.company;

public class Main {
    public static void main(String[] args) {
        System.out.println(System.currentTimeMillis());
    }
}
c++:
#include <stdio.h>
#include <windows.h>

__int64 currentTimeMillis() {
    FILETIME f;
    GetSystemTimeAsFileTime(&f);
    // FILETIME counts 100-nanosecond intervals since 1601-01-01 (UTC).
    __int64 ticks = ((__int64)f.dwHighDateTime << 32LL) + (__int64)f.dwLowDateTime;
    // Shift the origin to the Unix Epoch (1970-01-01) and convert to milliseconds.
    return (ticks - 116444736000000000LL) / 10000;
}

int main() {
    printf("%lli\n", currentTimeMillis());
    return 0;
}