I am trying to run all the unit tests in a folder using EUnit, but the timeout always seems to reset to 5 seconds.
e.g.
Module:
-module(example).
-include_lib("eunit/include/eunit.hrl").

main_test() ->
    % sleep for 10 seconds
    ?assertEqual(true, begin timer:sleep(10000), true end).
Command line:
Eshell V5.7.3 (abort with ^G)
1> c(example).
{ok,example}
2> eunit:test({timeout, 15, example}).
Test passed.
ok
3> eunit:test({timeout, 15, {dir, "."}}).
example: main_test (module 'example')...*timed out*
undefined
=======================================================
Failed: 0. Skipped: 0. Passed: 0.
One or more tests were cancelled.
error
As you can see, running {timeout, 15, example} works but not {timeout, 15, {dir, "."}}. Does anyone have a clue?
To me that makes sense: the timeout for an entire directory is probably not related to the timeouts for the individual tests.
I would write the test like this:
main_test_() ->
    % sleep for 10 seconds
    {timeout, 15, ?_assertEqual(true, begin timer:sleep(10000), true end)}.
(The trailing underscore turns the function into a test generator, so it returns a test representation instead of running the test directly; it's all in the EUnit manual. I don't think there is any other way to specify a timeout on the test itself.)
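With the timeout attached to the test representation itself, the default 5-second limit no longer applies, no matter how the tests are invoked. A rough sketch of what I would expect in the shell after recompiling:
1> c(example).
{ok,example}
2> eunit:test({dir, "."}).
Test passed.
ok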
Related
Using Base.Test for my unit tests, I am surprised by the immediate exit right after the first test failure.
Let's consider this runtest.jl file:
using Base.Test

@testset "First" begin
    # test fails
    @test false
end;

@testset "Second" begin
    # never run...
    @test true
end;
The output of julia runtest.jl is always the following (the second test set is never run):
First: Test Failed
Expression: false
Stacktrace:
[1] macro expansion at /home/picaud/Temp/runtests.jl:14 [inlined]
[2] macro expansion at ./test.jl:860 [inlined]
[3] anonymous at ./<missing>:?
Test Summary: | Fail Total
First | 1 1
ERROR: LoadError: Some tests did not pass: 0 passed, 1 failed, 0 errored, 0 broken.
My question: how to run and report all test results even if some tests fail?
Reading the Julia doc Working-with-Test-Sets, it seems one must systematically use nested test sets:
Typically a large number of tests are used to make sure functions work
correctly over a range of inputs. In the event a test fails, the
default behavior is to throw an exception immediately. However, it is
normally preferable to run the rest of the tests first to get a better
picture of how many errors there are in the code being tested.
and later this quote:
The @testset() macro can be used to group tests into sets. All the
tests in a test set will be run, and at the end of the test set a
summary will be printed.
Applied to the previous example, this
using Base.Test

@testset "All tests" begin

    @testset "First" begin
        @test false
    end;

    @testset "Second" begin
        # is run, ok
        @test true
    end;

end;
will run all tests:
First: Test Failed
Expression: false
Stacktrace:
[1] macro expansion at /home/picaud/Temp/runtests.jl:5 [inlined]
[2] macro expansion at ./test.jl:860 [inlined]
[3] macro expansion at /home/picaud/Temp/runtests.jl:4 [inlined]
[4] macro expansion at ./test.jl:860 [inlined]
[5] anonymous at ./<missing>:?
Test Summary: | Pass Fail Total
All tests | 1 1 2
First | 1 1
Second | 1 1
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0 errored, 0 broken.
That is a long write-up containing a simple question. The answer is also simple: yes, adding an outer test set is the de facto standard for achieving this.
The logs say that 2 test workers were used; is there a way to configure the maximum to be 1?
Run Settings
...
NumberOfTestWorkers: 2
Using a manual script like the one below works, but it gets messy when the solution contains many assemblies.
test_script:
- nunit3-console.exe Gu.Persist.Core.Tests\bin\Release\Gu.Persist.Core.Tests.dll --result=myresults.xml;format=AppVeyor --workers=1
- ...
AppVeyor generates the nunit3-console command line without any --workers switch. I believe the number of workers is decided by the NUnit console itself. As I understand it, if you remove the Parallelizable attribute from your tests, only one worker will be used.
We are running automated unit tests in our Bamboo build, but the build sometimes fails even though our log indicates that all tests pass. I've done some Googling and am currently getting nowhere. Does anyone have a clue as to why vstest.console.exe is returning a value other than 0?
Thanks a ton!
Here are the last few lines of the log:
build 26-May-2016 14:11:25 Passed ReInitializeConnection
build 26-May-2016 14:11:25 Passed UserIdentifier_CRUD
build 26-May-2016 14:11:25 Results File: D:\build-dir\AVENTURA-T2-COREUNITTESTS\TestResults\bamboo_svc_BUILDP02 2016-05-26 14_10_58.trx
build 26-May-2016 14:11:25
build 26-May-2016 14:11:25 Total tests: 159. Passed: 159. Failed: 0. Skipped: 0.
build 26-May-2016 14:11:25 Test Run Successful.
build 26-May-2016 14:11:25 Test execution time: 27.3562 Seconds
simple 26-May-2016 14:11:32 Failing task since return code of [C:\Program Files\Bamboo\temp\AVENTURA-T2-COREUNITTESTS-345-ScriptBuildTask-2971562088758505573.bat] was 255 while expected 0
simple 26-May-2016 14:11:32 Finished task 'Run vstest.console.exe' with result: Failed
This isn't the solution I wanted, but it does keep my build from failing when the return code is something other than 0 even though all the tests pass. At the end of our test command I add:
if %ERRORLEVEL% NEQ 0 (
    echo Failure Reason Given is %errorlevel%
    exit /b 0
)
All this does is catch the error coming out of vstest.console.exe and return 0 instead of 255. If anyone ever figures this out, I would greatly appreciate knowing why the return code is something other than 0.
As indicated in a comment on the question, I've come up against this issue in my company's test automation too.
In our case, vstest would return 1 when tests failed, but then occasionally return 255. In the case of the 255 return, the test TRX output would not be generated.
In our situation, we are running integration tests that spawn child processes. The child processes have output handlers attached that write to the test context. The test starts the process, then uses the WaitForExit(int milliseconds) method to wait for it to complete.
The output handlers then execute on a different thread, but they hold a reference to the test context so they can write their output.
This can cause issues in two ways:
In the documentation for WaitForExit(int milliseconds) on MSDN, it states:
When standard output has been redirected to asynchronous event handlers, it is possible that output processing will not have completed when this method returns. To ensure that asynchronous event handling has been completed, call the WaitForExit() overload that takes no parameter after receiving a true from this overload.
This means that it's possible that the output handlers are writing to the context after the test is complete.
When the timeout expires, the process continues to run in the background, and therefore might also be able to write to the test context.
The fix in our case was threefold:
1. After the call to WaitForExit(int), either kill the process (on timeout) or call WaitForExit() again (when it did not time out).
2. Deregister the output event handlers from the Process object.
3. Dispose of the Process object properly (with using).
The specifics of your case might be different from ours, but look for threaded tests where (a) a thread might still execute after the test is complete and (b) that thread writes to the test output.
I am running PHPUnit tests (vendor/bin/phpunit) on Moodle. The run took about 30 minutes, reached (2867 / 4261) 68%, and broke with a fatal error.
I have fixed the fatal error; now I wish to continue/resume from 68%, i.e. 2867/4261 (where it broke last time), rather than running from the beginning or running each test one by one.
Is that possible?
I am following the guideline here.
I am trying to learn Erlang concurrency programming.
This is an example program taken from Erlang.org, but there are no instructions on how to run it.
I run it this way:
1> counter:start().
<0.33.0>
But I do not know how to call the other functions so that the process started by counter:start() does its work according to the messages it receives.
How can I confirm that two or more processes have really been spawned?
Another question: how do I print out a received message in a function?
-module(counter).
-export([start/0,loop/1,increment/1,value/1,stop/1]).

%% First the interface functions.
start() ->
    spawn(counter, loop, [0]).

increment(Counter) ->
    Counter ! increment.

value(Counter) ->
    Counter ! {self(),value},
    receive
        {Counter,Value} ->
            Value
    end.

stop(Counter) ->
    Counter ! stop.

%% The counter loop.
loop(Val) ->
    receive
        increment ->
            loop(Val + 1);
        {From,value} ->
            From ! {self(),Val},
            loop(Val);
        stop ->          % No recursive call here
            true;
        Other ->         % All other messages
            loop(Val)
    end.
Any help will be appreciated.
Thanks.
You just call the other functions in the module you created, like this:
C = counter:start(),
counter:increment(C),
counter:increment(C),
io:format("Value: ~p~n", [counter:value(C)]).
You can run pman:start() to bring up the (GUI) process manager to see which processes you have.
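Regarding the other question (how to print a received message in a function): one simple approach is to call io:format/2 inside the receive clauses of loop/1. The sketch below is only an illustrative variant of the loop, not part of the original erlang.org example:
%% Same loop as above, but printing every message before handling it.
loop(Val) ->
    receive
        increment ->
            io:format("~p received increment~n", [self()]),
            loop(Val + 1);
        {From, value} ->
            io:format("~p received value request from ~p~n", [self(), From]),
            From ! {self(), Val},
            loop(Val);
        stop ->
            io:format("~p received stop~n", [self()]),
            true;
        Other ->
            io:format("~p received unexpected message ~p~n", [self(), Other]),
            loop(Val)
    end.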
In addition to what Emil said, you can use the i() command to verify which processes are running. Let's start three counters:
1> counter:start().
<0.33.0>
2> counter:start().
<0.35.0>
3> counter:start().
<0.37.0>
And run i():
...
<0.33.0> counter:loop/1 233 1 0
counter:loop/1 2
<0.35.0> counter:loop/1 233 1 0
counter:loop/1 2
<0.37.0> counter:loop/1 233 1 0
counter:loop/1 2
...
As you can see, the above processes (33, 35 and 37) are happily running and they're executing the counter:loop/1 function. Let's stop process 37:
4> P37 = pid(0,37,0).
<0.37.0>
5> counter:stop(P37).
stop
Checking the new list of processes:
6> i().
You should verify it's gone.
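If you prefer a programmatic check over reading the i() listing, the built-in erlang:is_process_alive/1 works as well. A small sketch continuing the session above (the stop message is asynchronous, but the process will normally have exited by the time you run the next command):
7> erlang:is_process_alive(P37).
false
8> erlang:is_process_alive(pid(0,33,0)).
true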