Calculation of Unix time from UTC time on Ubuntu - C++

I am using Ubuntu 18.04 on my machine. My NTP daemon is configured to use gpsd as a time source. The time provided by gpsd does not account for leap seconds, but NTP adjusts it and provides UTC with leap seconds applied, so my system clock is synced to UTC by NTP. According to the documentation, std::chrono::system_clock::now provides time since 1970 and does not count leap seconds.
My question is: does the kernel adjust for leap seconds when we call this? Or does the time queried from std::chrono::system_clock::now already contain the same leap-second-adjusted time coming from NTP?

system_clock and NTP both "handle" leap seconds the same way. Time simply stops while a leap second is being inserted. Here I'm speaking of the time standard, and not of any particular implementation.
An implementation of NTP might not stop for a whole second during a leap second insertion. Instead it might delay itself by small fractions of a second for hours both before and after a leap second insertion such that the sum of all delays is one second. This is known as a "leap second smear".
So you could say that both system_clock and NTP ignore leap seconds in that if you have two time points t0 and t1 in these systems and if t0 references a time prior to a leap second insertion and t1 references a time after that leap second insertion, then the expression t1-t0 gives you a result that does not count the inserted leap second. The result is 1 less than the number of physical seconds that has actually transpired.
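To make that concrete, here is a small illustration (the two hard-coded timestamps are simply the Unix times for 2016-12-31 23:59:59 UTC and 2017-01-01 00:00:00 UTC, the seconds on either side of a real leap second insertion):

#include <chrono>
#include <cstdio>

int main()
{
    using namespace std::chrono;

    // 2016-12-31 23:59:59 UTC and 2017-01-01 00:00:00 UTC as Unix timestamps.
    auto t0 = system_clock::from_time_t(1483228799);
    auto t1 = system_clock::from_time_t(1483228800);

    // Prints 1, even though two physical (SI) seconds elapsed between these
    // instants, because 23:59:60 was inserted in between.
    std::printf("%lld\n",
                static_cast<long long>(duration_cast<seconds>(t1 - t0).count()));
}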
A GPS satellite "ignores" leap seconds in a completely different way than system_clock and NTP. The GPS "clock" keeps ticking right through a leap second, almost completely ignoring it. However GPS weeks are always exactly 604,800 seconds (86,400 * 7), even if a leap second was inserted into UTC that week.
So to convert GPS weeks (and GPS time of week) to UTC, one has to know the total number of leap seconds that have been inserted since the GPS epoch (First Sunday of January 1980). I believe gpsd does this transformation for you when it provides you a UTC time point.
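For illustration, a rough sketch of that conversion (gps_to_unix and the hard-coded leap-second count are my own example, not part of gpsd, and the count has to be updated whenever a new leap second is announced):

#include <cstdint>
#include <cstdio>
#include <ctime>

// 1980-01-06 00:00:00 UTC (the GPS epoch) expressed as a Unix timestamp.
constexpr std::int64_t GPS_EPOCH_UNIX = 315964800;
// Leap seconds inserted into UTC since the GPS epoch (18 as of 2017).
constexpr std::int64_t GPS_UTC_LEAP_SECONDS = 18;

std::time_t gps_to_unix(std::int64_t gps_week, std::int64_t time_of_week_s)
{
    // GPS time ticks straight through leap seconds, so subtract the
    // accumulated leap seconds to land back on the UTC/Unix time line.
    return static_cast<std::time_t>(
        GPS_EPOCH_UNIX + gps_week * 604800 + time_of_week_s - GPS_UTC_LEAP_SECONDS);
}

int main()
{
    std::time_t t = gps_to_unix(2086, 0);   // an arbitrary example week
    std::printf("%s", std::asctime(std::gmtime(&t)));
}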

Related

What does the timeGetTime() function really return?

I'm wondering what the timeGetTime() function really returns. I powered on my system about 15 minutes ago, yet timeGetTime() returns 257052531 milliseconds, which is about 71 hours!
The documentation says:
The timeGetTime function retrieves the system time, in milliseconds. The system time is the time elapsed since Windows was started.
So my system has been running for only ~15 minutes! How could it return ~71 hours?

QTimer long timeout

Qt 5.7 32-bit on Windows 10 64-bit
Long-period timer
The interval of a QTimer is given in milliseconds as a signed integer, so the maximum interval which can be set is a little more than 24 days (2^31 / (1000*3600*24) = 24.85).
I need a timer with intervals going far beyond this limit.
So my question is, which alternative do you recommend? std::chrono (C++11) does not seem suitable, as it does not have an event handler.
Alain
You could always create your own class which uses multiple QTimers for the duration they are valid and just counts how many have elapsed.
Pretty simple problem. If you can only count to 10 and you need to count to 100 - just count to 10 ten times.
I would implement this in the following way:
Upon timer start, note the current time in milliseconds like this:
m_timerStartTime = QDateTime::currentMSecsSinceEpoch()
Then I would start a timer with some large interval, such as 10 hours, and attach a handler function to the timer that simply compares the time elapsed since the start to see whether we are due:
if (QDateTime::currentMSecsSinceEpoch() - m_timerStartTime > WANTED_DELAY_TIME) {
    // Execute the timer payload
    // Stop the interval timer
}
This simple approach could be improved in several ways. For example, to keep the timer running even if the application is stopped and restarted, simply save the timer start time in a setting or other persistent storage, and read it back in at application start-up.
And to improve precision, simply change the interval from the timer handler function in the last iteration so that it hits the intended end time exactly (instead of overshooting by up to 10 hours).
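Putting those pieces together, a minimal sketch of the idea might look like this (LongTimer, the expired signal, and the 10-hour wake-up interval are illustrative choices, not an existing Qt class):

#include <QObject>
#include <QTimer>
#include <QDateTime>

class LongTimer : public QObject
{
    Q_OBJECT
public:
    explicit LongTimer(qint64 delayMs, QObject *parent = nullptr)
        : QObject(parent), m_delayMs(delayMs)
    {
        m_startMs = QDateTime::currentMSecsSinceEpoch();
        connect(&m_timer, &QTimer::timeout, this, &LongTimer::check);
        m_timer.start(10 * 60 * 60 * 1000);          // coarse 10-hour wake-ups
    }

signals:
    void expired();                                  // the "timer payload"

private slots:
    void check()
    {
        qint64 elapsed = QDateTime::currentMSecsSinceEpoch() - m_startMs;
        if (elapsed >= m_delayMs) {
            m_timer.stop();
            emit expired();
        } else if (m_delayMs - elapsed < m_timer.interval()) {
            // Last lap: shrink the interval so we hit the end time exactly
            // instead of overshooting by up to the coarse interval.
            m_timer.start(static_cast<int>(m_delayMs - elapsed));
        }
    }

private:
    QTimer m_timer;
    qint64 m_startMs;
    qint64 m_delayMs;
};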

Wait for a time

I have a requirement to start my application at a certain time. I don't want to put it in a cron job. My executable is an application and I would like it to start on 2011-Jan-20.
So I have to run it as
./app --date 2011-Jan-20
The problem is: how do I calculate the time difference between the current time and the date supplied in the command-line option?
I don't want to write my own function. Are there any built-in functions available for this kind of time difference? (C and Linux)
I know you're expecting a C answer but this might interest you:
Since you're on Linux, the system already provides an efficient way to schedule one-off tasks: at
In your case, a user who would like to run their task on 20.01.2011 at 8 AM would just type:
echo "./app" | at 08:00 20.01.2011
The task will be run using the credentials of the user. Note that at also accepts relative time directives such as at now +1 day. It is a powerful tool which ships with most Linux distributions by default.
The list of scheduled jobs can be displayed using:
atq
And you can even remove scheduled jobs using:
atrm
Hope this helps.
You can calculate the difference between the start time and now in milliseconds and then wait for that many milliseconds by passing that number as a timeout argument to select() or epoll().
To calculate the difference, one way is to first convert your date string to a struct tm using strptime() and then pass it to mktime(), which gives you the number of seconds since the Unix epoch (1970-01-01 00:00:00). Then get the current time using gettimeofday() or clock_gettime(); they also report time passed since the Unix epoch. Convert the start time and the current time to seconds and subtract the values, as sketched below.
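A rough sketch along those lines (sleep() stands in here for the select()/epoll() timeout mentioned above, and the "%Y-%b-%d" format string is just one way to match a date like 2011-Jan-20):

#include <cstdio>
#include <cstring>
#include <ctime>
#include <time.h>       // strptime() (POSIX)
#include <unistd.h>     // sleep()

int main(int argc, char *argv[])
{
    if (argc < 3 || std::strcmp(argv[1], "--date") != 0) {
        std::fprintf(stderr, "usage: %s --date YYYY-Mon-DD\n", argv[0]);
        return 1;
    }

    std::tm tm = {};
    if (strptime(argv[2], "%Y-%b-%d", &tm) == nullptr) {
        std::fprintf(stderr, "could not parse date: %s\n", argv[2]);
        return 1;
    }
    tm.tm_isdst = -1;                       // let mktime() work out DST

    std::time_t start = std::mktime(&tm);   // local midnight on the given date
    std::time_t now = std::time(nullptr);

    if (start > now)
        sleep(static_cast<unsigned>(start - now));  // or pass the delta to select()/epoll()

    // ... run the real work of the application here ...
    return 0;
}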

Synchronize system clock with UTC time using C under Windows OS

I have a UTC time which is coming in over UDP, and I have a program to calculate the day and time from that UTC value. How can I set my system clock to that day and time? Kindly give me some direction so that I can make it possible.
I am using Windows OS.
To set the current system time, use the SetSystemTime Win32 API function.
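A minimal sketch of that call (the field values are placeholders for whatever you decode from the UDP message; note that SetSystemTime interprets the SYSTEMTIME as UTC and the process needs the SE_SYSTEMTIME_NAME privilege, i.e. it must run elevated):

#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEMTIME st = {};
    st.wYear   = 2023;     // placeholder values decoded from the UDP message
    st.wMonth  = 6;
    st.wDay    = 15;
    st.wHour   = 12;       // UTC, not local time
    st.wMinute = 30;
    st.wSecond = 0;

    if (!SetSystemTime(&st))
        std::printf("SetSystemTime failed, error %lu\n", GetLastError());
    return 0;
}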

Modify time for simulation in C++

I am writing a program which simulates an activity, and I am wondering how to speed up time for the simulation. Let's say 1 hour in the real world is equal to 1 month in the program.
Thank you.
The program is actually similar to a restaurant simulation where you don't really know when customers come. Let's say we pick a random number (2-10) of customers every hour.
It depends on how it gets the time now.
For example, if it calls the Linux time() function, just replace that with your own function (like mytime) which returns speedier times. Perhaps mytime calls time and multiplies the returned time by whatever factor makes sense; 1 hour = 1 month is a factor of 720 (24 hours * 30 days). Treating the moment the program starts as the time origin should also be accounted for:
#include <ctime>

time_t t0;                          // wall-clock time at program start

time_t mytime(void *)               // stand-in for time(NULL) in the simulation
{
    // Seconds since the program started, magnified by 720,
    // so one real hour corresponds to one simulated month.
    return 720 * (time(NULL) - t0);
}

int main()
{
    t0 = time(NULL);                // record the origin at program initialization

    for (;;)
    {
        time_t sim_time = mytime(NULL);
        // ... drive the simulation with sim_time ...
    }
}
You just do it. You decide how many events take place in an hour of simulation time (e.g., if an event takes place once a second, then after 3600 simulated events you've simulated an hour of time). There's no need for your simulation to run in real time; you can run it as fast as you can calculate the relevant numbers.
It sounds like you are implementing a Discrete Event Simulation. You don't even need to have a free-running timer (no matter what scaling you may use) in such a situation. It's all driven by the events. You have a priority queue containing events, ordered by the event time. You have a processing loop which takes the event at the head of the queue, and advances the simulation time to the event time. You process the event, which may involve scheduling more events. (For example, the customerArrived event may cause a customerOrdersDinner event to be generated 2 minutes later.) You can easily simulate customers arriving using random().
The other answers I've read thus far are still assuming you need a continuous timer, which is usually not the most efficient way of simulating an event-driven system. You don't need to scale real time to simulation time, or have ticks. Let the events drive time!
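A bare-bones sketch of such an event loop (the Event struct, the lambdas, and the one-hour horizon are all illustrative; customerArrived/customerOrdersDinner follow the naming used above):

#include <cstdio>
#include <functional>
#include <queue>
#include <random>
#include <vector>

struct Event {
    double time;                           // simulated seconds
    std::function<void()> action;
    bool operator>(const Event &other) const { return time > other.time; }
};

int main()
{
    // Min-heap of pending events, ordered by simulated time.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> events;
    std::mt19937 rng(std::random_device{}());

    // Schedule one hour's worth of arrivals: a random number (2-10) of customers.
    std::uniform_int_distribution<int> how_many(2, 10);
    std::uniform_real_distribution<double> within_hour(0.0, 3600.0);
    int n = how_many(rng);
    for (int i = 0; i < n; ++i) {
        double arrival = within_hour(rng);
        events.push({arrival, [&events, arrival] {
            std::printf("customerArrived       t=%7.1fs\n", arrival);
            // The arrival schedules a follow-up event 2 minutes later.
            events.push({arrival + 120.0, [arrival] {
                std::printf("customerOrdersDinner  t=%7.1fs\n", arrival + 120.0);
            }});
        }});
    }

    // Processing loop: jump straight to the time of the next event.
    double now = 0.0;
    while (!events.empty()) {
        Event e = events.top();
        events.pop();
        now = e.time;                      // advance simulated time to the event
        e.action();                        // may schedule further events
    }
    std::printf("simulation finished at t=%.1fs\n", now);
}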
If the simulation is data dependent (like a stock market program), just speed up the rate at which the data is pumped. If it is something that depends on time() calls, you will have to do something like wallyk's answer (assuming you have the source code).
If time in your simulation is discrete, one option is to structure your program so that something happens "every tick".
Once you do that, time in your program is arbitrarily fast.
Is there really a reason for having a month of simulation time correspond exactly to an hour of time in the real world? If yes, you can always process the number of ticks that correspond to a month, and then pause for the appropriate amount of time to let the hour of "real time" finish.
Of course, a key variable here is the granularity of your simulation, i.e. how many ticks correspond to a second of simulated time.
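A small sketch of that pacing idea (the one-second tick and the one-real-hour-per-simulated-month budget are the assumptions from the question):

#include <chrono>
#include <thread>

int main()
{
    using namespace std::chrono;

    const long ticks_per_month = 30L * 24 * 3600;   // one tick = one simulated second
    const auto real_budget = hours(1);              // one real hour per simulated month

    auto start = steady_clock::now();
    for (long tick = 0; tick < ticks_per_month; ++tick) {
        // advance the simulation by one tick here
    }
    auto elapsed = steady_clock::now() - start;
    if (elapsed < real_budget)
        std::this_thread::sleep_for(real_budget - elapsed);   // pad out the real hour
}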