Explain Karma unit test times - unit-testing

I've been searching the web for this for two days and I found nothing. Maybe I'm looking in the wrong way — I don't know...
So here it is: what do the two times on my console mean when running a Karma+Jasmine+PhantomJS unit test?
... Executed 1 of 1 SUCCESS (0.878 secs / 0.112 secs)
At first I thought the second time was the total unit test time (for example, when running multiple tests), but sometimes the first time turns out to be larger, sometimes not...
Anyone?

total time / net time
net time = only test execution (in the browser)
total time = how long it took since Karma noticed the file change till the final result was printed (net time + communication with the browser + loading the files in the browser)
See karma/lib/reporters/base.js

Related

phpseclib with Laravel: unstable connection to switch

I am using phpseclib to connect to a Cisco switch.
$ssh = new SSH2('169.254.170.30',22);
$ssh->login('lemi', 'a');
Sometimes it connects very quickly several times in a row (when I reload the page), but other times I get the error message "Maximum execution time of 60 seconds exceeded", also several times in a row. I am doing this in Laravel. I do not want to extend the execution time. I can't figure out where the problem is; I think it has something to do with the sockets... I am getting the error on this line (3031) of SSH2.php:
$raw = stream_get_contents($this->fsock, $this->decrypt_block_size);
Any suggestions?

FIO runtime different than gettimeofday()

I am trying to measure the execution time of the FIO benchmark. Currently, I am doing so by wrapping the FIO call between gettimeofday() calls:
gettimeofday(&startFioFix, NULL);
FILE* process = popen("fio --name=randwrite --ioengine=posixaio --rw=randwrite --size=100M --direct=1 --thread=1 --bs=4K", "r");
gettimeofday(&doneFioFix, NULL);
and calculate the elapsed time as:
double tstart = startFioFix.tv_sec + startFioFix.tv_usec / 1000000.;
double tend = doneFioFix.tv_sec + doneFioFix.tv_usec / 1000000.;
double telapsed = (tend - tstart);
Now, the question(s) is:
The telapsed time is different from (larger than) the runt reported by FIO. Can you please help me understand why? The difference can be seen in the FIO output:
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.2.8
Starting 1 thread
randwrite: (groupid=0, jobs=1): err= 0: pid=3862: Tue Nov 1 18:07:50 2016
write: io=102400KB, bw=91674KB/s, iops=22918, runt= 1117msec
...
and the telapsed is:
telapsed: 1.76088 seconds
what is the actual time taken by FIO execution:
a) runt given by FIO, or
b) the elapsed time measured by gettimeofday()
How does FIO measure its runt? (This question is probably linked to 1.)
PS: I have tried replacing gettimeofday() with std::chrono::high_resolution_clock::now(), but it behaves the same (by the same, I mean it also gives a larger elapsed time than runt).
Thank you in advance, for your time and assistance.
A quick point: gettimeofday() on Linux uses a clock that doesn't necessarily tick at a constant interval and can even move backwards (see http://man7.org/linux/man-pages/man2/gettimeofday.2.html and https://stackoverflow.com/a/3527632/4513656 ) - this may make telapsed unreliable (or even negative).
Your gettimeofday/popen/gettimeofday measurement (telapsed) is going to include: the fio process start-up (i.e. fork+exec on Linux), fio initialisation (e.g. thread creation because I see --thread, ioengine initialisation), the fio job itself (runt), fio stopping, and process shutdown. You are comparing this to just runt, which is only one sub-component of telapsed. It is unlikely that all the non-runt components happen instantly (i.e. take up 0 usecs), so the expectation is that runt will be smaller than telapsed. Try running fio with --debug=all just to see all the things it does in addition to actually submitting I/O for the job.
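To make that breakdown concrete, here is a rough C++ sketch of that kind of wrapper (not your exact code): it uses a monotonic clock as suggested above, and it assumes the second timestamp is only taken after fio's output has been consumed and pclose() has waited for the process - otherwise the measurement would not even cover the whole run, since popen() returns as soon as the child is started.
#include <chrono>
#include <cstdio>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;  // monotonic, unlike gettimeofday()

    auto start = clock::now();

    // Everything from here to the pclose() below lands in telapsed:
    // process start-up, fio initialisation, the job itself (runt), teardown.
    FILE* process = popen(
        "fio --name=randwrite --ioengine=posixaio --rw=randwrite "
        "--size=100M --direct=1 --thread=1 --bs=4K", "r");
    if (!process) return 1;

    char buf[4096];
    while (fgets(buf, sizeof(buf), process) != nullptr) {
        fputs(buf, stdout);  // fio's own "runt=" only covers the job phase
    }
    pclose(process);  // wait for fio to exit completely

    std::chrono::duration<double> telapsed = clock::now() - start;
    std::cout << "telapsed: " << telapsed.count() << " seconds\n";
    return 0;
}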
This is difficult to answer because it depends on what you mean when you say "fio execution" and why (i.e. the question is hard to interpret in an unambiguous way). Are you interested in how long fio actually spent trying to submit I/O for a given job (runt)? Are you interested in how long it takes your system to start/stop a new process that just so happens to try and submit I/O for a given period (telapsed)? Are you interested in how much CPU time was spent submitting I/O (none of the above)? So, because I'm confused, I'll ask you some questions instead: what are you going to use the result for, and why?
Why not look at the source code? https://github.com/axboe/fio/blob/7a3b2fc3434985fa519db55e8f81734c24af274d/stat.c#L405 shows runt comes from ts->runtime[ddir]. You can see it is initialised by a call to set_epoch_time() (https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L1664 ), is updated by update_runtime() ( https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L371 ) which is called from thread_main().

Resuming PHPUnit test in Moodle from specific percentage or test number

I am running PHPUnit tests (vendor/bin/phpunit) on Moodle; it took about 30 minutes, reached (2867 / 4261) 68%, and broke with a fatal error.
I have fixed the fatal error, and now I wish to continue/resume from 68%, i.e. 2867/4261 (where it broke last time), rather than running from the beginning or running each test one by one.
Any possibility?
I am following the guideline here

How can I periodically execute some function if this function takes a long time to run (less than the period)

I want to run a function, for example func(), exactly once per second. However, the running time of func() is about 500 ms. How can I do that? I know that if the running time of the function were low, I could write a while loop in func() and sleep() for 1 second after each execution. But now the running time is high. What should I do to ensure func() runs exactly once per second? Thanks.
You do:
Take the current time in start_time.
Perform your job
Take the current time in end_time
Wait for (1 second + start_time - end_time)
That way, you can perform your task every second reliably (a sketch of this follows below). If the task takes less time, you will wait longer, and vice versa. Note, however, that this assumes your task always takes less than 1 second to execute. In real code, you want to check for that before the sleep statement.
Implementation details depend on the platform.
Note that using this method still results in a small drift due to the time it takes to compute step 4. A more accurate alternative would be to synchronize on integer multiples of one second. That way, over thousands of cycles you would not drift.
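Here is a minimal C++ sketch of those four steps (assuming std::chrono is available and that func() always finishes in under a second, as noted above):
#include <chrono>
#include <thread>

void func();  // your ~500 ms task

int main() {
    using namespace std::chrono;
    while (true) {
        auto start_time = steady_clock::now();      // 1. take the current time
        func();                                     // 2. perform the job
        auto end_time = steady_clock::now();        // 3. take the current time again
        auto wait = seconds(1) - (end_time - start_time);
        if (wait > steady_clock::duration::zero())  // guard: only sleep if func() took < 1 s
            std::this_thread::sleep_for(wait);      // 4. wait for (1 second + start_time - end_time)
    }
}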
It depends on the level of accuracy you need.
If you want a brute-force, easy-to-code solution, you can get the time before the first run of the function and save it in a variable (start_time). Create a repeat counter (repeat_number) that stores the next repeat number. Then you can do roughly this:
1) next_run_time = ++repeat_number*1sec + start_time;
2) func();
3) wait_time = next_run_time - current_time;
4) sleep(wait_time)
5) goto 1;
This approach prevents the timing error from accumulating across iterations; a C++ version is sketched below.
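A rough sketch of those five steps (assuming std::chrono; sleep_until replaces the explicit wait_time computation in steps 3 and 4):
#include <chrono>
#include <thread>

void func();  // the task to run once per second

int main() {
    using namespace std::chrono;
    auto start_time = steady_clock::now();
    long repeat_number = 0;
    while (true) {
        auto next_run_time = start_time + seconds(++repeat_number);  // step 1
        func();                                                      // step 2
        std::this_thread::sleep_until(next_run_time);                // steps 3-5
    }
}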
But for a real application you should find some event framework or library.

Occasional Date or timezone discrepancy in hudson or maven with jodatime

I hope the following explanation makes sense, because it's a weird problem we're facing and it is hard to describe.
We have a Maven project which gets built in Hudson and contains some unit tests where dates are used and asserted. The Hudson server runs on Solaris. Now, occasionally (about 30% of the time) the unit tests using dates fail because 3.5 hours are deducted from the specified time in the unit test, and hence the asserts start failing. The other 70% of the time everything works fine, although nothing at all changed in the code, and we run the Hudson job several times an hour.
I added the following code to a unit test to check the time:
@Test
public void testDate() {
    System.out.println("new DateMidnight(2011, 1, 5).toDate();");
    System.out.println(new DateMidnight(2011, 1, 5).toDate());
    System.out.println(new DateMidnight(2011, 1, 5).toDate().getTime());

    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.YEAR, 2011);
    cal.set(Calendar.MONTH, 0);
    cal.set(Calendar.DAY_OF_MONTH, 5);
    cal.set(Calendar.HOUR, 0);
    cal.set(Calendar.MINUTE, 0);
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    System.out.println("cal.getTime();");
    System.out.println(cal.getTime());
    System.out.println(cal.getTime().getTime());
}
So basically it should print the same thing whether using Joda-Time or plain old Calendar. This is the case in 70% of the runs; for the other 30% I get the following printouts:
Running TestSuite
new DateMidnight(2011, 1, 5).toDate();
Tue Jan 04 21:30:00 MET 2011
1294173000000
cal.getTime();
Wed Jan 05 12:00:00 MET 2011
1294225200000
So, Calendar keeps the correct date and time, but Joda-Time deducts 3.5 hours.
Local Maven tests never appear to pose this problem, and we can't figure out what could be causing it. In particular, we can't think of a single reason why the tests sometimes pass and sometimes fail without any change to the code, Hudson, or the server settings.
Also, we run the Maven install with Cobertura, which means the unit tests are run twice. It also happens that they pass the first time and fail the second, or the other way around, or that they fail both times.
Thanks for any solutions or tips to track down the cause,
Stijn
You may also be running into some version of SUREFIRE-533, which should be solvable by setting the TZ environment variable in the environment that is running Hudson. Please report back on the issue if this helps.
Perhaps Hudson is running on 3 servers, and 1 has a different version of the time zone data in the JDK from the other 2? (check the JVM version in detail). Remember that Joda-Time also has its own version of the time-zone data, and the two may be different.
The problem seems to be fixed now.
We've upgraded our Joda-Time version from 5.14.2 to 5.14.6.
Since then I ran the build automatically every half hour, and after about 100 runs we never encountered the problem again.