We are running automated unit tests in our Bamboo build, but the build sometimes fails even though the log indicates that all tests are passing. I've done some Googling and am currently getting nowhere. Does anyone have a clue as to why VSTest.Console.exe is returning a value other than 0?
Thanks a ton!
Here are the last few lines of the log:
build 26-May-2016 14:11:25 Passed ReInitializeConnection
build 26-May-2016 14:11:25 Passed UserIdentifier_CRUD
build 26-May-2016 14:11:25 Results File: D:\build-dir\AVENTURA-T2-COREUNITTESTS\TestResults\bamboo_svc_BUILDP02 2016-05-26 14_10_58.trx
build 26-May-2016 14:11:25
build 26-May-2016 14:11:25 Total tests: 159. Passed: 159. Failed: 0. Skipped: 0.
build 26-May-2016 14:11:25 Test Run Successful.
build 26-May-2016 14:11:25 Test execution time: 27.3562 Seconds
simple 26-May-2016 14:11:32 Failing task since return code of [C:\Program Files\Bamboo\temp\AVENTURA-T2-COREUNITTESTS-345-ScriptBuildTask-2971562088758505573.bat] was 255 while expected 0
simple 26-May-2016 14:11:32 Finished task 'Run vstest.console.exe' with result: Failed
This isn't the solution I wanted but it does keep my build from failing if the return code is something other than 0 and all the tests are passing. At the end of our test command I add:
if %ERRORLEVEL% NEQ 0 (
    echo Failure Reason Given is %errorlevel%
    exit /b 0
)
All this does is catch the non-zero return code coming out of vstest.console.exe and return 0 instead of 255. If anyone ever figures this out, I would greatly appreciate knowing why the return code is something other than 0.
As indicated in a comment to the question, I've come up against the issue in the test automation for my company too.
In our case, vstest would return 1 when tests failed, but then occasionally return 255. In the case of the 255 return, the test TRX output would not be generated.
In our situation, we are running integration tests that spawn child processes. The child processes have output handlers attached that write to the test context. The test starts the process, then uses the WaitForExit(int milliseconds) method to wait for it to complete.
The output handlers are then executed on a different thread, but hold a reference to the test context so they can write their output there.
This can cause issues in two ways:
In the documentation for WaitForExit(int milliseconds) on MSDN, it states:
When standard output has been redirected to asynchronous event handlers, it is possible that output processing will not have completed when this method returns. To ensure that asynchronous event handling has been completed, call the WaitForExit() overload that takes no parameter after receiving a true from this overload.
This means that it's possible that the output handlers are writing to the context after the test is complete.
When the timeout expires, the process continues to run in the background, and therefore might also be able to write to the test context.
The fix in our case was threefold:
After the call to WaitForExit(int), either kill the process (timeout case) or call WaitForExit() again (non-timeout case).
Deregister the output event handlers from the Process object.
Dispose of the Process object properly (with using).
The specifics of your case might be different to ours, but look for threaded tests where (a) the thread might execute after the test is complete and (b) writes to the test output.
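For illustration, here is the same wait-then-kill-then-drain pattern as a hedged sketch in Python's subprocess module rather than the .NET Process API the answer above actually uses; "./child" is just a placeholder for whatever the test spawns:

import subprocess

# Sketch of the pattern described above, in Python rather than .NET.
# "./child" is a placeholder for the process the test spawns.
proc = subprocess.Popen(["./child"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    out, err = proc.communicate(timeout=30)  # wait, but with a timeout
except subprocess.TimeoutExpired:
    proc.kill()                              # timed out: kill the child
    out, err = proc.communicate()            # drain any remaining output
# From here on nothing can write to the test output behind the test's back.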
These are the tasks in tasks.py:
from celery import shared_task

@shared_task
def add(x, y):
    return x * y

@shared_task
def verify_external_video(video_id, media_id, video_type):
    return True
I am calling verify_external_video 1000+ times from a custom Django command I run from the CLI:
verify_external_video.delay("1", "2", "3")
In Flower, I am then monitoring the success or failure of the jobs. A random number of jobs fail, others succeed...
Those that fail do so for two reasons that I just cannot understand:
NotRegistered('lstv_api_v1.tasks.verify_external_video')
If it's not registered, why are 371 of them succeeding?
and...
TypeError: verify_external_video() takes 1 positional argument but 3 were given
Again, a mystery, as I quit Celery and Flower, and run them AGAIN from scratch before running my CLI Django command. There is no code living anywhere where verify_external_video() takes 1 parameter. And if this is the case... why are SOME of the calls successful?
This type of failure isn't sequential. I can have 3 successful jobs, followed by one that does not succeed, followed by success again, so it's not a timing issue.
I'm at a loss here.
In short: I had a number of rogue Celery worker processes still running around from previous "violent" CTRL-Cs, which had prevented graceful termination of what was running. Those stale workers kept consuming from the queue with an old version of the task code, which explains both errors: whether a given job succeeded depended on which worker happened to pick it up.
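If you suspect the same thing, one quick check is to ask every responding worker what it has registered and what it is currently executing. This is only a sketch, and the import path below is a placeholder for wherever your Celery app instance actually lives:

from lstv_api_v1.celery import app  # placeholder import; point this at your own Celery app

inspector = app.control.inspect()
print(inspector.registered())  # task names each responding worker knows about
print(inspector.active())      # tasks each worker is currently executing

A stale worker will show up here with an old or unexpected task list.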
In the logs it says that 2 test workers were used. Is there a way to configure the maximum to be 1?
Run Settings
...
NumberOfTestWorkers: 2
Using a manual script like below works but gets messy when the solution contains many assemblies.
test_script:
- nunit3-console.exe Gu.Persist.Core.Tests\bin\Release\Gu.Persist.Core.Tests.dll --result=myresults.xml;format=AppVeyor --workers=1
- ...
AppVeyor generates the nunit3-console command line without any --workers switch. I believe the number of workers is decided by the NUnit console itself. As I understand it, if you remove the Parallelizable attribute from your tests, only one worker will be used.
I need guidance on implementing a WHILE loop with Kettle/PDI. The scenario is:
(1) I have some data (maybe thousands, or even thousands of thousands of rows) in a table, to be validated against a remote server.
(2) Read them and look them up on the remote server; I use the Modified Java Script step for this, as the remote-server lookup validation is defined in an external Java JAR file (I can use the "Change number of copies to start..." option on the Modified Java Script step and set it to 5 or 10).
(3) Update the result in the database table. There will be 50 to 60% connection-failure cases in each session.
(4) Repeat steps 1 to 3 until everything is updated to success.
(5) Stop looping on the Nth cycle; this is to avoid very long or infinite looping. N may be 5 or 10.
How do I design such a WHILE loop in Pentaho Kettle?
Have you seen this link? It gives a pretty detailed explanation of how to implement a while loop.
You need a parent job with a sub-transformation that checks the condition and returns a variable to the job indicating whether to abort or to continue.
I want to run some code (or an external executable) for a specified amount of time. For example, in Fortran I can
call system('./run')
Is there a way I can restrict its run to, let's say, 10 seconds? For example, as follows:
call system('./run', 10)
I want to do it from inside the Fortran code. The example above is for the system command, but I also want to do it for some other subroutines of my code, for example,
call performComputation(10)
where performComputation will be able to run only for 10 seconds. The system it will run on is Linux.
thanks!
EDITED
Ah, I see - you want to call a part of the current program for a limited time. I see a number of options for that...
Option 1
Modify the subroutines you want to run for a limited time so that they take an additional parameter: the number of seconds they may run. Then modify each subroutine to record the system time on entry; inside its processing loop, get the time again and, if the difference exceeds the allowed number of seconds, break out of the loop and return to the caller.
On the downside, this requires you to change every subroutine. It will exit the subroutine cleanly though.
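As a rough illustration of that deadline check, here is a sketch in Python rather than Fortran for brevity; in Fortran the SYSTEM_CLOCK or CPU_TIME intrinsics would play the role of time.monotonic() here:

import time

def perform_computation(max_seconds):
    # Record the deadline once, on entry.
    deadline = time.monotonic() + max_seconds
    result = 0
    for step in range(100_000_000):      # the routine's normal work loop
        result += step                    # ... one unit of work ...
        if time.monotonic() > deadline:   # time budget exhausted: stop cleanly
            break
    return result

perform_computation(10)   # returns after roughly 10 seconds at most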
Option 2
Take advantage of a threading library - e.g. pthreads. When you want to call a subroutine with a timeout, create a new thread that runs alongside your main program in parallel and execute the subroutine inside that thread of execution. Then in your main program, sleep for 10 seconds and then kill the thread that is running your subroutine.
This is quite easy and doesn't require changes to all your subroutines. It is not that elegant in that it chops the legs off your subroutine at some random point, maybe when it is least expecting it.
Imagine time running down the page in the following example, and the main program actions are on the left and the subroutine actions are on the right.
MAIN                               SUBROUTINE YOUR_SUB
... something ..
... something ...
f_pthread_create(,,,YOUR_SUB,)     start processing
sleep(10)                          ... calculate ...
                                   ... calculate ...
                                   ... calculate ...
f_pthread_kill()
... something ..
... something ...
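The same shape, sketched in Python rather than the pthread bindings the diagram above assumes; multiprocessing.Process stands in for the worker thread here, since a separate process can be terminated safely:

import multiprocessing
import time

def your_sub():
    while True:            # stand-in for the long-running subroutine
        time.sleep(1)      # ... calculate ...

if __name__ == "__main__":
    worker = multiprocessing.Process(target=your_sub)
    worker.start()         # f_pthread_create(...): start processing
    worker.join(10)        # sleep(10): give the subroutine ten seconds
    if worker.is_alive():
        worker.terminate() # f_pthread_kill(): stop it mid-flight
        worker.join()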
Option 3
Abstract out the subroutines you want to call and place them into their own separate executables, then proceed as per my original answer below.
Whichever option you choose, you are going to have to think about how you get the results from the subroutine you are calling - will it store them in a file? Does the main program need to access them? Are they in global variables? The reason is that if you are going to follow options 2 or 3, there will not be a return value from the subroutine.
Original Answer
If you don't have the timeout command available, you can do
call system('./run & sleep 10; kill $!')
Yes, there is a way. Take a look at the Linux command timeout.
# run command for 10 seconds and then send it SIGTERM kill message
# if not finished.
call system('timeout 10 ./run')
Example
# finishes in 10 seconds with a return code of 0 to indicate success.
sleep 10
# finishes in 1 second with a return code of `124` to indicate timed out.
timeout 1 sleep 10
You can also choose the type of kill signal you want to send by specifying the -s parameter. See man timeout for more info.
I'm using the Parallel Computing Toolbox (PCT) in combination with the SimBiology toolbox in MATLAB 2012b. I'm receiving an intermittent error message when I run my script with a remote pool of workers, but not with a local pool of workers:
Caught std::exception Exception message is:
vector::_M_range_check
Error using parallel_function (line 589)
Error in remote execution of remoteParallelFunction : RUNTIME_ERROR
Error in PSOFit (line 486)
parfor ns = 1:r.NumSwp
Error in PSOopt_driver (line 209)
PSOFit(ObjFuncName,LB,UB,PSOopts);
The error does not occur when I comment out the call to the function sbiosimulate (a SimBiology function for model evaluation).
I have a couple of ideas:
I've introduced some sort of race condition that causes a problem in accessing the model variables (is this possible in MATLAB?)
Model compilation in SimBiology is sometimes but not always compatible with the PCT, and I've hit some sort of edge case
Since sbiosimulate evaluates compiled C++ code, for some inputs there might be a bug in the source that generates the exception
I am aware of this.
I'm a developer of SimBiology. I believe this is a bug that was introduced into SimBiology's C++ code in the R2012a release. The bug is triggered when a simulation ends without producing any simulation results. This can sometimes occur when the model is configured to report only particular times (using the OutputTimes options) AND the simulation is configured to end after a particular amount of real time (using the MaximumWallClock option). Basically, the simulation "times out" before it ever gets a chance to log the first output time.
One way to work around this problem is to always include time 0 in the OutputTimes. This time will always get logged before evaluating the MaximumWallClock criterion, preventing the bug from getting triggered. I am also contacting this user directly and will work on fixing the bug in a future release.