Is there an equivalent to the Microseconds() function we can find in the Carbon framework?
/*
 *  Microseconds()
 *
 *  Summary:
 *    Determines the number of microseconds that have elapsed since
 *    system startup time.
 *
 *  Discussion:
 *    Return a value representing the number of microseconds since some
 *    point in time, usually since the system was booted. One
 *    microsecond is 1 * 10^-6 seconds, and so there are one million
 *    ( 1,000,000 ) microseconds per second. For reference, in one
 *    microsecond light can travel about 850 feet in a vacuum.
 *
 *    Microseconds() doesn't necessarily advance while the computer is
 *    asleep, so it should not be used for long duration timings.
 *
 *  Parameters:
 *
 *    microTickCount:
 *      The number of microseconds elapsed since system startup.
 *
 *  Availability:
 *    Mac OS X: in version 10.0 and later in CoreServices.framework
 *    CarbonLib: in CarbonLib 1.0 and later
 *    Non-Carbon CFM: in InterfaceLib 7.1 and later
 */
Microseconds is a Core Service, and it is always accessible from Carbon. (Carbon sits on top of Core Services).
It originates from classic Mac OS as well.
I need to calculate the overall CPU usage of my Linux device over some time (1-5 seconds) and a list of processes with their respective CPU usage times. The program should be designed and implemented in C++. My assumption would be that the sum of all process CPU times is equal to the total value for the whole CPU. For now the CPU I am using is multi-core (2 cores).
According to How to determine CPU and memory consumption from inside a process?, it is possible to calculate all "jiffies" available in the system since startup using the values for "cpu" in /proc/stat. If you sample those values at two points in time and compare user, nice, system and idle between the two samples, you can calculate the average CPU usage in that interval. The formula would be
totalCPUUsage = ((user_aft - user_bef) + (nice_aft - nice_bef) + (system_aft - system_bef)) /
((user_aft - user_bef) + (nice_aft - nice_bef) + (system_aft - system_bef) + (idle_aft - idle_bef)) * 100 %
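The formula above can be sketched in C++ like this; CpuTimes and totalCpuUsage are illustrative names, and the four fields are assumed to come from the aggregated "cpu" line of /proc/stat:

```cpp
// Jiffy counters taken from the aggregated "cpu" line of /proc/stat.
struct CpuTimes {
    long long user, nice_, system, idle;
};

// Average CPU usage (in percent) between two snapshots taken
// at the start and the end of the measurement interval.
double totalCpuUsage(const CpuTimes& bef, const CpuTimes& aft) {
    long long busy  = (aft.user - bef.user)
                    + (aft.nice_ - bef.nice_)
                    + (aft.system - bef.system);
    long long total = busy + (aft.idle - bef.idle);
    return total > 0 ? 100.0 * busy / total : 0.0;
}
```

With bef = {100, 0, 50, 850} and aft = {160, 0, 90, 1750}, the busy delta is 100 jiffies out of a total of 1000, so the function returns 10.0.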
According to How to calculate the CPU usage of a process by PID in Linux from C?, the jiffies used by a single process can be calculated by adding utime and stime from /proc/${PID}/stat (columns 14 and 15 in that file). When I calculate this sum and divide it by the total number of jiffies in the analyzed interval, I would assume the formula for one process to be
processCPUUsage = ((process_utime_aft - process_utime_bef) + (process_stime_aft - process_stime_bef)) /
((user_aft - user_bef) + (nice_aft - nice_bef) + (system_aft - system_bef) + (idle_aft - idle_bef)) * 100 %
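Extracting utime and stime is slightly tricky because the comm field (column 2) may contain spaces and parentheses, so it is safest to scan from the last ')'. A sketch (parseProcTimes is an illustrative name):

```cpp
#include <sstream>
#include <string>
#include <utility>

// Returns {utime, stime} (columns 14 and 15) from one line of
// /proc/<pid>/stat. Scanning starts after the last ')' so that a
// comm field containing spaces doesn't shift the columns.
std::pair<long long, long long> parseProcTimes(const std::string& line) {
    std::istringstream in(line.substr(line.rfind(')') + 1));
    std::string skip;
    for (int i = 0; i < 11; ++i)   // skip columns 3 (state) .. 13 (cmajflt)
        in >> skip;
    long long utime = 0, stime = 0;
    in >> utime >> stime;
    return {utime, stime};
}
```

For the sample line "1234 (my prog) S 1 1234 1234 0 -1 4194304 100 0 0 0 250 120" this returns {250, 120}, even though the process name contains a space.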
When I now sum up the values for all processes and compare the result to the overall calculated CPU usage, the aggregated value is slightly higher most of the time (although the two values are quite close under all CPU loads).
Can anyone explain the reason for that? Are there CPU resources that are used by more than one process and thus counted twice or more in my accumulation? Or am I simply missing something here? I cannot find any further hint in the Linux man page for the proc file system (https://linux.die.net/man/5/proc) either.
Thanks in advance!
I'm writing a music generator in C++, and I'm currently working on BPM. To get the amount of time to wait between notes, I'm using 60 / bpm, but this evaluates to zero. I have checked to make sure that bpm is declared, and it is. Trying bpm / 60 for some reason gives 2. Why is this?
Because 60 / 120 is 0 when both operands are integers. (Inferring bpm = 120 from bpm / 60 = 2.) You will need to use 60.0 / bpm, for example, to get a floating-point result.
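A minimal illustration of the integer-division trap (secondsPerBeat is an illustrative name):

```cpp
// 60 / bpm truncates to 0 for any bpm above 60, because both operands
// are ints; promoting one operand to double keeps the fraction.
double secondsPerBeat(int bpm) {
    return 60.0 / bpm;
}
```

As ints, 60 / 120 is 0 and 120 / 60 is 2, while secondsPerBeat(120) yields 0.5.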
I work with a SoapUI project and I have one question. In the following example I've got 505 requests in 5 seconds with thread count = 5. I would like to understand how this count has been calculated.
For example, if I want 1000 requests in 1 minute, what settings should I use in the variance strategy?
Regards, Evgeniy
The variance strategy, as the name implies, varies the number of threads over time. Within the specified interval the thread count increases and decreases according to the variance value, thus simulating a realistic real-time load on the target web service.
How the variance is calculated: it is not the mathematical variance formula, just a multiplication. (If threads = 10 and variance = 0.5, then 10 * 0.5 = 5, so the thread count will be incremented and decremented by 5.)
For example:
Threads = 20
variance = 0.8
Strategy = variance
interval = 60
limit = 60 seconds
The above will vary the thread count by 16 (because 20 * 0.8 = 16): the count will increase to 36, decrease to 4, and end at the original 20 within the 60 seconds.
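The multiplication above can be sketched as follows (varianceSwing is an illustrative name, not a SoapUI API):

```cpp
// The amount by which the thread count swings up and down:
// simply threads * variance, truncated to a whole number of threads.
int varianceSwing(int threads, double variance) {
    return static_cast<int>(threads * variance);
}
```

varianceSwing(20, 0.8) is 16, so 20 threads oscillate between 4 and 36; varianceSwing(10, 0.5) is 5, matching the earlier example.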
If your requirement is to start with 500 threads and hit 1000, set your variance to 2, and so on.
Reference link:
Check the third bullet, "simulating different types of load", on the SoapUI site.
Book for reference:
Web Services Testing with soapUI by Charitha Kankanamge
I have the following doubt about the usage of the tm_isdst flag in the tm structure. As per the man pages and search results, I understand that its value is interpreted as follows:
A. A value of 0 indicates DST is not in effect for the represented time
B. A value of 1 indicates DST is in effect
C. A value of -1 causes mktime to check whether DST is in effect or not.
It is this third point that confuses me: how can mktime accurately figure out whether DST has to be applied?
For example
My Time Zone = GMT + 3:00
DST shifting = +1 Hour at 5:00 AM in January (to keep it simple)
Current UTC time = "01/Jan/2012 00:00:00"
UTC time in seconds time_t timetUTC = X seconds
Hence my time is = "01/Jan/2012 03:00:00"
As time passes, my time value changes as follows
"01/Jan/2012 04:00:00" (X + 1 * 60 * 60)
"01/Jan/2012 05:00:00" (X + 2 * 60 * 60)
"01/Jan/2012 05:59:59" (X + 2 * 60 * 60 + 59 * 60 + 59)
"01/Jan/2012 05:00:00" (X + 3 * 60 * 60)
"01/Jan/2012 06:00:00" (X + 4 * 60 * 60)
As per my understanding
tm tmMyTime = localtime_r(X + 2 * 60 * 60) will set tmMyTime.tm_isdst to 0
tm tmMyTime = localtime_r(X + 3 * 60 * 60) will set tmMyTime.tm_isdst to 1
This way, even though all other components of tm structure are equal in both cases,
mktime(tmMyTime) can return proper UTC value, depending on tm_isdst value.
Now, if I set tmMyTime.tm_isdst = -1, what value would mktime return? I read about TZ variable, time database etc etc. In spite of all that, logically how can mktime() figure out whether to apply DST correction or not for those tm values that can occur twice?
We do not have DST in our time zone, hence I am not very sure whether my understanding is correct. Please correct me if I am wrong. Your help is much appreciated.
In short: it is implementation dependent.
mktime knows the DST rules by checking the locale.
For the bigger part of the year, mktime can figure out whether DST applies for a particular local time. The problem is indeed the "duplicate" hour when DST shifts backwards (in your example 05:00:00 - 05:59:59). For this local time range, given tm_isdst = -1, mktime cannot know whether DST is in effect or not. Which one is chosen differs from one implementation to another. With the GNU version of mktime, the UTC value before the shift is returned.
tm_isdst cannot in general resolve ambiguous times. That's because many time zones have one-time transitions that don't switch between DST and non-DST at all; they just change the offset and zone abbreviation. So both times (before and after the transition) have the same tm_isdst.
Some other zones change tm_isdst when switching between summer and winter time but don't change the abbreviation (Australia/Melbourne, for example).
I think it's going to be somewhat dependent on your platform. On Windows, at least, the mktime() documentation states "The C run-time library assumes the United States's rules for implementing the calculation of Daylight Saving Time", so it has a table of rules somewhere from which it can determine when DST started/ended in any given year.
You're lucky not to have DST where you are. In my world, which is real-time data acquisition and presentation, DST is a big nuisance!
I have so far used Periodic build in Hudson where the schedule * * * * * builds the project every minute and 5 * * * * builds the project every x:05, x+1:05 etc.
But what is the way to build the project every 5 minutes (or any given time period)?
Thanks
*/5 * * * *
will run the build every 5 minutes. The */N step syntax works for any period, e.g. */15 * * * * for every 15 minutes.