I recently started implementing automatic tests for my code, and I noticed that the CI does not catch compiler warnings - jobs are shown as successful even when the compiler emits warnings.
I initially added a compiler flag to turn warnings into errors (e.g. -Werror) together with allow_failure: true, but the problem is that the compiler then stops at the first warning-turned-error and does not go through the entire compilation.
I then used the trick explained here to write the warnings into a file, and then test whether the file is non-empty:
- make 2> >(tee make.warnings)
- test ! -s make.warnings
After the whole compilation is done, this makes the job fail if any warnings were written to the file. With allow_failure: true this works both when I have no errors/warnings and when I have only warnings. However, if I have real errors, the job is also shown as merely allowed to fail in CI and does not stop the pipeline, because of allow_failure: true.
I could not find a way to make allow_failure: true depend on something run in the script (without creating a new stage), or on some condition (e.g., whether the file is empty or not). Is there a simple way to do this that I'm missing?
Since there is no condition per se, check out GitLab 13.8 (January 2021):
Control job status using exit codes
You can use the allow_failure keyword to prevent failed jobs from causing an entire pipeline to fail.
Previously, allow_failure only accepted boolean values of true or false but we’ve made improvements in this release.
Now you can use the allow_failure keyword to look for specific script exit codes.
This gives you more flexibility and control over your pipelines, preventing failures based on the exit codes.
See Documentation and Issue.
If you can express your condition as the exit status of a script, that is enough to make allow_failure more specific.
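As a sketch, the warnings-file trick from your question can then reserve a dedicated exit code for the warnings-only case (137 here is an arbitrary choice) and allow only that code to fail:

compile:
  stage: build
  script:
    # mirror stderr into a file while still showing it in the job log
    - make 2> >(tee make.warnings)
    # warnings but no hard error: fail with the reserved code
    - if [ -s make.warnings ]; then exit 137; fi
  allow_failure:
    exit_codes: 137

A real compile error makes the make line itself fail with a different exit code, so the pipeline still stops; only the warnings-only failure is tolerated.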
So, I have a simple project with one Program (user-written code) and one Add-In (a custom task - the very useful System Command Executor add-in, which you can read about here).
When some condition occurs in the Program step, I want to stop the whole Project. I can stop the Program itself, for example, using
ENDSAS
or
ABORT ABEND
but even if the Program stops immediately (with an error message in the log), the Add-In still executes! How can I disable execution of the Add-In on a condition (an error or anything else)?
Thanks
P.S. The Project will be executed in batch mode.
If you're using an EG process flow, as you presumably are, then you need to set up a condition to run the add-in task.
Right-click your program, select 'Condition', then 'Add'. Let's say you have a macro variable that is set to 0 if you have no problems, or to some other value if you have a problem. (You could also use any of the automatic SYSERR-type macro variables.)
Then, put your add-in task under "Then, run this task". You can add an Else or Else If if you want, or just leave that as None.
Then EG won't run your task unless the macro variable indicates success.
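Building on the SYSERR idea, a minimal sketch (run_ok is just an illustrative name) that you would put at the end of the user-written Program:

/* capture the return code of the preceding step: 0 means it ran clean */
%let run_ok = &syserr;

Then make the condition test run_ok = 0 and put the System Command Executor task under "Then, run this task".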
Basically I created a new test file in a particular package with some bare-bones test structure - no actual tests, just an empty struct type that embeds suite.Suite and a function that takes a *testing.T and calls suite.Run() on that struct. This immediately caused all our other tests to start failing non-deterministically.
The nature of the failures was database unique-key integrity violations on inserts and deletes into a single Postgres DB. This leads me to believe that the tests were being run concurrently without calling our setup methods to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
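For reference, the new file looked roughly like this (package and type names changed):

package foo_test

import (
    "testing"

    "github.com/stretchr/testify/suite"
)

// Empty suite: no test methods yet, just the embedded suite.Suite.
type FooSuite struct {
    suite.Suite
}

// Entry point discovered and run by the go test tool.
func TestFooSuite(t *testing.T) {
    suite.Run(t, new(FooSuite))
}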
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my use is that go test runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests run in parallel with the other packages'. That definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works and has nothing to do with testify. Our tests were being run with ./..., which causes the underlying go test tool to run each package's tests in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems the best immediate solution is to use the -p=1 flag to limit the number of packages run in parallel, as shown below.
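For us that meant invoking the whole suite as:

go test -p 1 ./...

(-p is a build flag that caps how many test binaries run in parallel, so packages now execute one at a time.)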
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that the packages/test files happened to be sorted and run in an order where the concurrency never became an issue before the new packages/files were added.
We just "upgraded" from Visual Studio 2008 to Visual Studio 2012. We updated our unit tests and now they pass when running them individually but when I try to Run All, I got the following error:
The active Test Run was aborted because the execution process exited unexpectedly. To investigate further, enable local crash dumps either at the machine level or for process vstest.executionengine.appcontainer.x86.exe. Go to more details: http://go.microsoft.com/fwlink/?linkid=232477
So I went to the link and followed the instructions to add the registry key to enable local crash dumps. The error message then changed to:
The active Test Run was aborted because the execution process exited unexpectedly. Check the execution process logs for more information.
If the logs are not enabled, then enable the logs and try again.
Apparently it noticed the changes I made in the registry to enable crash dumps. However, when I looked in %LOCALAPPDATA%\CrashDumps, no files were being created.
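For reference, the per-process values I added (following the linked instructions) were roughly:

reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\vstest.executionengine.appcontainer.x86.exe" /v DumpFolder /t REG_EXPAND_SZ /d "%LOCALAPPDATA%\CrashDumps"
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\vstest.executionengine.appcontainer.x86.exe" /v DumpType /t REG_DWORD /d 2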
If I run one test at a time (or even a few tests at a time), I can get them all to pass. The problem is only with Run All.
Has anyone else encountered similar problems? If so, how did you solve them?
Essentially the same question was asked on MSDN, but the answer was something like "click the link to the crash dump". That answer doesn't help me because I don't see any link to the crash dump and I am unable to get the crash dump to be generated.
This question on StackOverflow is also similar, and ended up resulting in a bug being logged on Microsoft Connect (which looks to be deferred for some reason), but my problem might be different because my code has nothing to do with "async tasks" (I don't think).
EDIT: The problem went away, seemingly on its own, but the problem was likely an exception that wasn't being caught in the unit test code, as some of the answers below suggest. However, I'm still confused as to why the problem only appeared with Run All, and not when running smaller groups of tests or Debug All.
I had the same problem: the tests failed for apparently no reason. Later I found that a buggy method was causing a StackOverflowException. When I fixed my bug, the VS bug disappeared.
Maybe it works most of the time because you don't run the faulty code.
The best workaround I have so far is to debug all. This is done via TEST -> Debug -> All Tests. It's obviously slower but it doesn't crash.
This can happen with certain errors, such as a StackOverflowException. Presumably this crashes the test runner, so it can't continue when it hits a test that triggers the problem.
The solution, therefore, is to run all tests in debug (from the Test -> Debug menu) and Visual Studio will show errors like these.
For anyone else who may need this in the future: my test runner was crashing when a console-specific command (Environment.Exit(-1);) was executed via the unit test. Even running in debug mode would just crash - I could not get a useful error message.
So my scenario differs from the main question's in that (a) debug didn't work at all and (b) Run All vs. running individually made no difference. That is because my error scenario always arose, whereas the stack overflows of the original question did not.
The bottom line: the test runner is bad and will crash if it finds something it doesn't like. You need to manually isolate the failure and work out what the Bad Thing™ is.
For someone else looking for this: I had some code that was calling System.Environment.Exit(123), and I was unaware of this. So check for any code that terminates the process.
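In my case the offending code looked something like this (simplified; the class and method names are made up):

using System;

static class Preconditions
{
    public static void Validate(string[] args)
    {
        if (args.Length == 0)
        {
            // Environment.Exit doesn't throw - it terminates the vstest host
            // process itself, taking the whole test run down with it.
            Environment.Exit(123);
        }
    }
}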
I've just had the same problem. It turned out it was my code - there was an infinite loop of WCF service calls. In your case it might be something else, so my proposal is to either remember (logs in your version control system?) or figure out (by excluding different tests from the run, e.g. by bisection) which place in the code leads to this behavior. And voilà! That is the cause of the problem and, at the same time, the bug in the code.
UPDATE
As for the questions in your EDIT: it could happen that running smaller groups of tests didn't reproduce the issue. In that case, given that those groups together included all the tests, one can assume that some tests interfere with each other. Maybe some static data or fields in a test class?
As for running tests in debug mode - I'm not surprised. The Visual Studio test runner behaves differently in "Run" mode vs. "Debug" mode.
I had a similar problem, except that it wasn't a StackOverflowException. It was caused by my project under test using Entity Framework while the NUnit project did not reference the EntityFramework and EntityFramework.SqlServer assemblies. Adding references to those Entity Framework assemblies fixed it.
Just had the same problem. Closing and reopening Visual Studio fixed it for me.
I am using Google Test in a C++ project. Some functions use assert() to check for invalid input parameters. I have already read about death tests (What are Google Test, Death Tests) and started using them in my test cases.
However, I wonder if there is a way to suppress the runtime errors caused by failing assertions. At the moment, each failing assertion creates a pop-up window that I have to close every time I run the tests. As my project grows, this behaviour disturbs the workflow to an unacceptable degree, and I am tempted to stop testing assert()-assertions altogether.
I know there are possibilities to disable assertions in general, but it seems more convenient to suppress the OS-generated warnings from inside the testing framework.
OK, I found the solution myself: you have to select the threadsafe death-test style. Just add the following line to your test code:
::testing::FLAGS_gtest_death_test_style = "threadsafe";
You can do this either for all tests in the test binary or for the affected tests only; the latter is faster. I got this from the updated FAQ: Googletest AdvancedGuide.
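For example (checked_sqrt and its assertion message are made up):

#include <cassert>
#include <cmath>
#include <gtest/gtest.h>

// Hypothetical function that guards its input with assert().
int checked_sqrt(int x) {
    assert(x >= 0 && "negative input");
    return static_cast<int>(std::sqrt(x));
}

TEST(CheckedSqrtDeathTest, RejectsNegativeInput) {
    // Override the style for this test only; googletest restores the
    // flag's previous value after the test finishes.
    ::testing::FLAGS_gtest_death_test_style = "threadsafe";
    EXPECT_DEATH(checked_sqrt(-1), "negative input");
}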
Here's the scenario. I'm debugging my own app (C/C++), which uses a library developed by another team in the company. An assertion fails when my code generates some edge case. It's a pain because the assertion is not formulated correctly: the library function is actually working OK, but I get all these interruptions where I just have to continue (lots of them, as it's in a loop) before I can get to the stuff I'm actually interested in. I have to use the debug version of the library when debugging, for other reasons, and the other team won't fix this until the next release (hey, it works on our machine).
Can I tell the debugger to ignore the breakpoints raised by this section of code (i.e., can it auto-continue for me)?
If the code is triggering breakpoints on its own (via __debugbreak or int 3), you cannot use conditional breakpoints, as the breakpoints are not known to Visual Studio at all. However, you may be able to disable any such breakpoints you are not interested in by modifying the code from the debugger. Probably not what you want, because you need to repeat this in each debugging session, but it may still be better than nothing. For more information, read How to disable a programmatical breakpoint / assert?.
There's no good way to automatically ignore ASSERT() failures in a debug library. If that's the one you have to use, you're just going to have to convince the other team that this needs to be fixed now - or, if you have the source for the library, you could fix or remove the assertions yourself just to get your work done in the meantime.
You can add an exception handler around the call(s) to the library, catch the EXCEPTION_BREAKPOINT exception and do nothing.
Example 2 in the following link seems to be what you want to do:
http://msdn.microsoft.com/en-us/library/ms681409(VS.85).aspx
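For illustration, the wrapper could look like this (library_call stands in for the real library function; here it simulates the failing assertion with DebugBreak):

#include <windows.h>

// Stand-in for the library function whose assertion raises int 3.
static void library_call() { DebugBreak(); }

// Filter: handle breakpoint exceptions only; let everything else propagate.
static int BreakpointFilter(DWORD code)
{
    return code == EXCEPTION_BREAKPOINT ? EXCEPTION_EXECUTE_HANDLER
                                        : EXCEPTION_CONTINUE_SEARCH;
}

void call_library_quietly()
{
    __try {
        library_call();
    }
    __except (BreakpointFilter(GetExceptionCode())) {
        // Breakpoint exception swallowed; execution continues here.
    }
}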
You can use conditional breakpoints. Some links:
http://support.microsoft.com/kb/308469
http://dotnettipoftheday.org/tips/conditional_breakpoint.aspx