Boost timer limit - C++

I need to measure an elapsed time (or some sort of timestamp - doesn't matter if it's the system time or something that started from 0) in milliseconds, and was interested in using the boost::cpu_timer class to do this.
Is it unwise to use this class for an extended period of time (i.e. a week straight of non-stop measuring)? Is there an alternative solution?
From my experience with getting the system timestamp, I've gradually come to the conclusion that obtaining the timestamp in milliseconds (which is what I need) every couple of milliseconds is incredibly slow and expensive.

I think boost::chrono or std::chrono solves this problem better.
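For example, a minimal sketch using std::chrono::steady_clock, which is monotonic and therefore safe for a week of non-stop measuring (it never jumps when the system clock is adjusted):

#include <chrono>
#include <iostream>

int main()
{
    // steady_clock is monotonic: adjustments to the wall clock
    // do not affect long-running measurements.
    auto start = std::chrono::steady_clock::now();

    // ... the work being measured ...

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start);
    std::cout << ms.count() << " ms elapsed\n";
}

On typical platforms a steady_clock::now() call is also cheap, so sampling it every couple of milliseconds should not be the bottleneck you saw with formatted system timestamps.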

Related

How to determine the time an actuator has been enabled during the past x minutes?

I need to find the most efficient approach to the following. If someone can point me in the right direction, I can write the code myself.
Environment
I am using an ESP32 and working in Arduino C++.
What I want to achieve
I want to track the amount of time an actuator has been on over the past x minutes. This is to prevent the actuator from overheating.
My idea
Store the current time in an array every time the actuator turns on (it is on for a fixed amount of time). When the oldest measurement is older than x minutes, it is removed from the array. If the array exceeds a certain size (i.e. the actuator has been on for a certain number of minutes), a cool-down period is started. In code, I imagine something roughly like the sketch below.
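(The names, buffer size, and threshold here are just placeholders I made up.)

const unsigned long WINDOW_MS = 10UL * 60UL * 1000UL; // x = 10 minutes, for example
const size_t MAX_EVENTS = 64;  // worst-case activations within the window
const size_t THRESHOLD = 30;   // activations that should trigger a cool-down

unsigned long onTimes[MAX_EVENTS]; // timestamps of recent activations
size_t head = 0, count = 0;        // circular buffer state

void startCoolDown(); // assumed to exist elsewhere

void recordActivation() {
    // drop measurements older than the window
    while (count > 0 && millis() - onTimes[head] > WINDOW_MS) {
        head = (head + 1) % MAX_EVENTS;
        --count;
    }
    // store the new activation time
    if (count < MAX_EVENTS) {
        onTimes[(head + count) % MAX_EVENTS] = millis();
        ++count;
    }
    // too many activations within the window: start the cool-down
    if (count >= THRESHOLD) {
        startCoolDown();
    }
}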
However, I feel there must be a more efficient or easier way to achieve this. How would you go about it?
Thanks in advance.
If possible, using a temperature sensor is the easiest way.
With an array, there will be a problem with its size, especially if you want to count in minutes. For counting, there is also an easier way, as follows:
T is the total ON time in the last x minutes, as you expected. During initialization, it is 0.
If the actuator is ON, then on every check cycle (maybe every second or smaller, depending on your requirements), T is increased by the cycle time.
If the actuator is OFF and T > 0, T is decreased by the cycle time; if T = 0, there is nothing more to subtract.
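A minimal Arduino-style sketch of this counting idea (the cycle time, the limit, and the function names are my assumptions, not part of your setup):

bool actuatorIsOn();  // assumed to exist elsewhere
void startCoolDown(); // assumed to exist elsewhere

const unsigned long CYCLE_MS = 100;                 // check cycle
const unsigned long LIMIT_MS = 5UL * 60UL * 1000UL; // allowed ON time before cool-down

unsigned long T = 0;         // total ON time counter, 0 at initialization
unsigned long lastCheck = 0;

void setup() {}

void loop() {
    unsigned long now = millis();
    if (now - lastCheck >= CYCLE_MS) {
        lastCheck = now;
        if (actuatorIsOn()) {
            T += CYCLE_MS;        // ON: increase T by the cycle time
        } else if (T >= CYCLE_MS) {
            T -= CYCLE_MS;        // OFF: decrease T by the cycle time
        } else {
            T = 0;                // nothing more to subtract
        }
        if (T >= LIMIT_MS) {
            startCoolDown();      // T exceeded the budget: cool down
        }
    }
}

This needs only one variable instead of an array, at the cost of being an approximation of "ON time in the last x minutes" rather than an exact sliding window.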

Calculating download speed in libcurl?

So I've been using libcurl to have a little go with HTTP requests like GET, and I have managed to create the progress function callback to see how much has been downloaded. However, what I don't know is the formula to calculate download speed as you go (similar to how browsers show you the download speed, e.g. Chrome).
I originally thought of using this:
downloadSpeed = amountCurrentlyDownloaded / secondsSinceDownloadStarted
Similar to the
speed = distance / time
formula. However, this isn't accurate. For example, if the download has stalled completely, downloadSpeed only drifts down slightly instead of dropping to zero.
So what is the correct formula to calculate download speed?
Think of a car. Do you want to know the average speed for the trip, or do you want to know your current speed? Your formula gives average speed.
Since you are receiving data in increments, you can't just read off a current speed the way a speedometer does. Instead, you could update every few seconds, and when you do, divide the number of bytes received since the last update by the time since the last update (you need a higher-precision timer than seconds).
Perhaps you want to display both the current and average speeds. That's just a question of what will "feel" best to your users.
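As an illustration, here is a sketch using libcurl's CURLOPT_XFERINFOFUNCTION progress callback: it samples roughly once per second and divides the bytes received since the last sample by the time since the last sample. The URL and names are placeholders.

#include <curl/curl.h>
#include <chrono>
#include <cstdio>

struct SpeedState {
    std::chrono::steady_clock::time_point lastTime;
    curl_off_t lastBytes = 0;
};

static int xferinfo(void* clientp, curl_off_t dltotal, curl_off_t dlnow,
                    curl_off_t, curl_off_t)
{
    auto* s = static_cast<SpeedState*>(clientp);
    auto now = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(now - s->lastTime).count();
    if (secs >= 1.0) { // update about once per second
        double speed = (dlnow - s->lastBytes) / secs; // bytes per second
        std::printf("current: %.0f B/s (%lld of %lld bytes)\n",
                    speed, (long long)dlnow, (long long)dltotal);
        s->lastTime = now;
        s->lastBytes = dlnow;
    }
    return 0; // returning non-zero aborts the transfer
}

int main()
{
    CURL* curl = curl_easy_init();
    if (!curl) return 1;
    SpeedState state{std::chrono::steady_clock::now()};
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
    curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo);
    curl_easy_setopt(curl, CURLOPT_XFERINFODATA, &state);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); // enable the callback
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}

Averaging the last few samples (or using an exponential moving average) smooths the number, which is closer to the steady figure browsers display.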

ColdFusion Execution Time Accuracy

I found a very old thread from 2004 reporting that the execution times listed in ColdFusion debugging output are only accurate to about 16 ms. That is, when you turn debugging output on and look at execution times, you're seeing an estimate rounded to the nearest multiple of 16 ms. I can still see this today with ACF10. When refreshing a page, most times bounce between multiples of 15-16 ms.
Here are the questions:
1. Starting at the bottom, when ColdFusion reports 0 ms or 16 ms, does that always mean somewhere between 0 and 16 ms, but not over 16 ms?
2. When ColdFusion reports 32 ms, does this mean somewhere between 17 and 32?
3. ColdFusion lists everything separately by default rather than as an execution tree where callers include many functions. When determining the execution cost higher up the tree, is it summing the "inaccurate" times of the children, or is this a realistic cost of the actual time all the child processes took to execute?
4. Can we use cftimers or getTickCount() to actually get accurate times, or are these also estimates?
Sometimes, you'll see that 3 functions took 4 ms each for a total of 12 ms, or even a single call taking 7 ms. Why does it sometimes seem "accurate"?
I will now provide some guesses, but I'd like some community support!
1. Yes.
2. Yes.
3. ColdFusion reports, to the nearest 16 ms, the total time the process took, rather than summing the child processes.
4. cftimers and getTickCount() are more accurate.
5. I have no idea.
In Java, you either have System.currentTimeMillis() or System.nanoTime().
I assume getTickCount() merely returns System.currentTimeMillis(). It's also what ColdFusion uses to report debugging output execution times. You can find numerous StackOverflow questions complaining about the inaccuracy of System.currentTimeMillis(), because it reports from the operating system. On Windows, the accuracy can vary quite a bit; some say up to 50 ms. It doesn't take leap ticks into account, either. However, it is fast. Queries seem to report either something from the JDBC driver, the SQL engine, or another method, as they are usually accurate.
As an alternative, if you really want increased accuracy, you can use this:
currentTime = CreateObject("java", "java.lang.System").nanoTime()
That is less performant than currentTimeMillis(), but it is precise down to nanoseconds. You can divide by 1000 to get to microseconds. You'll want to wrap in precisionEvaluate() if you are trying to convert to milliseconds by dividing by 1000000.
Please note that nanoTime() is not accurate to the nanosecond; it is just precise to the nanosecond. Its accuracy is merely an improvement over currentTimeMillis().
This is more a comment than an answer, but I can't comment yet.
In my experience the minimum execution time for a query is 0 ms or 16 ms. It is never 8 ms or 9 ms. For fun you can try this:
<cfset s = getTickCount()>
<cfset sleep(5)>
<cfset e = getTickCount() - s>
<cfoutput>#e#</cfoutput>
I tried it with different values, and the expected output and the actual output always differ by somewhere between 0 ms and 16 ms, no matter what value is used. It seems that ColdFusion (Java) is accurate with a margin of about 16 ms.

C++ and Windows: Is SYSTEMTIME always based on the Gregorian calendar?

I have a SYSTEMTIME struct. This struct may contain either a UTC time or a local time that was returned from a Windows API function at some prior point in time.
In C++ I am calculating the day of the year based on the SYSTEMTIME that a function returns, in other words, how many days have passed since Jan 1. In order to do that I need to be mindful of the extra day during leap years, February 29. That's all easy enough if I know that the SYSTEMTIME is always based on the Gregorian calendar.
If a user in a foreign country uses some other calendar system, wouldn't I have a problem calculating the day of the year? I can't seem to reproduce this on my machine to test the theory, and I don't even know if it's plausible. Any Microsoft experts that can help me out here?
Maybe a better question would be is there already a Windows API function that calculates the day of the year based on a SYSTEMTIME? I can't find one.
The closest thing I could find searching is this JavaScript question, which is interesting but I think very different from what I'm asking. I won't see any replies to this question until tomorrow (Monday), so if there are any follow-up questions I will answer them then.
Thanks!
edit: I found this article but it still doesn't answer the question:
OS level support for non-Gregorian calendars? - Sorting it all Out - Site Home - MSDN Blogs
In looking at SYSTEMTIME on MSDN, it says:
Retrieves the current system date and time. The system time is expressed in Coordinated Universal Time (UTC).
It seems that regardless, SYSTEMTIME works in the Gregorian calendar.
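Since the fields of SYSTEMTIME are Gregorian, the day-of-year calculation can be done directly. Here is a minimal sketch (I'm not aware of a dedicated Windows API call for this):

#include <windows.h>
#include <iostream>

// Day of the year (1-based), using the Gregorian rules SYSTEMTIME follows.
int DayOfYear(const SYSTEMTIME& st)
{
    // cumulative days before each month in a non-leap year
    static const int daysBefore[12] =
        { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };
    int day = daysBefore[st.wMonth - 1] + st.wDay;
    const bool leap = (st.wYear % 4 == 0 && st.wYear % 100 != 0)
                      || (st.wYear % 400 == 0);
    if (leap && st.wMonth > 2)
        ++day; // count February 29 once we are past February
    return day;
}

int main()
{
    SYSTEMTIME st;
    GetSystemTime(&st); // UTC
    std::cout << "Day of year: " << DayOfYear(st) << '\n';
}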
Best of luck, I hope that I was of help.

How do I set a timer 2 seconds into the future with boost?

So I have a function which takes "const boost::posix_time::time_duration& t" as a parameter, and I can't find any information on how to do something as simple as setting the duration 2 seconds (or whatever) into the future. Lots of complex stuff is explained on boost.org, but I can't find any info on how to get the current time in some manageable form to which I can add a couple of seconds. This has to be something really simple, but I can't figure it out...
time_duration doesn't hold a time as in "a specific point in time". It holds a length of time. If you want that length to be 2 seconds, you can do this:
t = boost::posix_time::seconds(2);
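And if what you actually need is a point in time two seconds from now, rather than a duration, here is a short sketch of the distinction:

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;

    time_duration t = seconds(2);                           // a length of time
    ptime deadline = microsec_clock::universal_time() + t;  // a point in time

    std::cout << "now + 2s = " << to_simple_string(deadline) << '\n';
}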