When I run PHPUnit, some tests fail. I want to re-run only the failed tests, not the ones that passed. Is there a way to do that?
I can filter tests manually, but I want this to happen automatically.
Thanks
For anyone else with the same problem, the following link is useful: Re-run last failed test in PHPUnit.
PHPUnit does not keep track of failed and passed tests; results are reported on the fly. Having a feature like that could undermine the whole idea of automated testing. Think about it: you automate your tests because you want to be warned when a change breaks your code, but you only know whether something broke when you run the full automated suite. There is no guarantee that the fix you made for one test case won't break another.
PHPUnit helps you make sure your code still works even after you fix whatever was causing a particular test case to fail.
On a few simple Groovy classes I included the JUnit tests in with the class itself. If you annotate the test methods with @Test (from JUnit) and run the main-less class with "groovy MyClass.groovy", it automatically runs the unit tests.
I like this because it requires zero overhead (no additional files or junk code, just one annotation).
The question is: can I tell it to run a single test method? I tried "groovy MyClass.groovy myMethod", but I didn't really expect that to work. I also tried -Dtest=myMethod, which I also didn't have much hope for.
Is there a trick someone knows? I suppose that since it is just a .groovy file I could comment out the tests I don't want to run, or add a main that calls the various tests, but I'm wondering if there is a way to leverage the automatic JUnit test running already built into Groovy.
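For context, here is a minimal sketch of the kind of self-testing class described above (the class and method names are just illustrative):

import org.junit.Test
import static org.junit.Assert.assertEquals

// No main() method: running "groovy MyClass.groovy" detects the JUnit
// @Test annotations and runs the test methods automatically.
class MyClass {
    int add(int a, int b) { a + b }

    @Test
    void testAdd() {
        assertEquals(5, new MyClass().add(2, 3))
    }

    @Test
    void testAddNegative() {
        assertEquals(-1, new MyClass().add(2, -3))
    }
}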
I would like to generate statistics from the test run, and keep track of these four numbers:
failed, passed, inconclusive, and ignored tests.
My question is: is it possible to get the number of skipped/ignored tests (that is, test methods decorated with [Ignore])?
I'm not aware of any working solutions for this yet.
Please see this issue in the Microsoft tracker: "mstest.exe report file (trx) does not list ignored tests". Its status is "Closed as Deferred".
However, the current design seems to make more sense to me. Statistics for test runs should only include the tests that are supposed to be executed; tests marked with [Ignore] should not be considered part of the test run. If people have intentionally excluded some tests from the run, why would they want to see them in the results?
Personally, though, I'd like to see this feature. More statistics won't hurt, after all.
I have a huge set of unit tests that I'm running against our software at work, using IntelliJ IDEA. I'm writing these tests with Spock and Groovy, but they use JUnit under the hood, so anything that applies to JUnit should (in theory) apply here as well.
To figure out what the coverage is, I right-click on the root of the code I want to run and select "Test in 'example' with Coverage".
IntelliJ then proceeds to run the 270-odd tests we have in this part of the project. The problem is that it isn't running a couple of them, and I can't figure out why for the life of me.
I've googled the issue and turned up nothing substantial. The test list in IntelliJ only tells me that these couple of tests didn't start; it doesn't give a reason at all. I tried checking the logs, but all they say about these particular tests is that IntelliJ is trying to open them, with no comment on whether that succeeded or why they aren't running.
Could someone just point me in a useful direction? I need to expand my test base's coverage, and being able to see what code is actually covered is pretty critical.
Some Clarification:
The tests can be run on their own, and they do compute coverage for themselves when run that way. They only fail to run when I run them as a batch.
I'm pretty sure it's not a display issue, because I get several different reports telling me that the tests were not run. Beyond the frozen yellow spinner, IntelliJ also warns me with a red "attention" bubble that the tests did not run.
I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:
void testCount() {
    mockDomain(UserAccount)
    new UserAccount(firstName: "Ken").save()
    new UserAccount(firstName: "Bob").save()
    new UserAccount(firstName: "Dave").save()
    assertEquals(3, UserAccount.count())
}
For some reason, I get 0 back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects weren't saved. Is there any way to get some feedback here? That's a really crappy thing to have happen....
The second question is for those who use IDEA: what should I be running, IDEA's JUnit tests or the Grails targets? I have both options.
Also, why does IDEA say my tests pass and show a green light even though the test above actually fails? This will really drive me nuts if I have to check the HTML test reports every time I run my tests.....
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this. This causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
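As a rough sketch (assuming the same UserAccount domain class as above, and that the mocked save() honors these arguments in your Grails version), the test could surface validation problems like this:

void testCount() {
    mockDomain(UserAccount)

    // Throws a validation exception immediately if a constraint is violated,
    // instead of silently returning null.
    new UserAccount(firstName: "Ken").save(failOnError: true)
    new UserAccount(firstName: "Bob").save(failOnError: true)

    // Or inspect the errors yourself to see exactly which field failed.
    def dave = new UserAccount(firstName: "Dave")
    if (!dave.save()) {
        dave.errors.allErrors.each { println it }
    }

    assertEquals(3, UserAccount.count())
}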
I personally haven't found IDEA's JUnit test runner to be particularly useful when working with Grails. It's probably fine to use it for unit tests. For integration tests, you might consider setting up an Ant target in "debug" mode to run your tests.
Over time, running tests starts to take so long that I tend to run them exclusively from the command line to avoid the additional overhead IntelliJ adds.
Regarding your unit test, I'm pretty sure you would need to run it as an integration test to get a non-zero count.
I'm not sure exactly which unit test class you're using, but since GORM is not bootstrapped in unit tests, I'm not sure the domain object mocking supports incrementing the count.
Your test would likely pass as an integration test provided that your domain objects validate.
Add flush: true to your save calls:
new UserAccount(firstName: "Ken").save(flush:true)
...
Grails sets the flush mode of the Hibernate session to manual, so changes are not automatically persisted after the action returns and before the view is rendered. This allows views to access lazy-loaded collections and relationships, and prevents changes from automatically being persisted.
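Applied to the test from the question (run as an integration test, where a real Hibernate session is involved), that looks roughly like this:

void testCount() {
    // flush: true forces each save to be written to the database immediately,
    // so count() sees all three rows.
    new UserAccount(firstName: "Ken").save(flush: true)
    new UserAccount(firstName: "Bob").save(flush: true)
    new UserAccount(firstName: "Dave").save(flush: true)
    assertEquals(3, UserAccount.count())
}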
I am seriously having an unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have is that Grails doesn't give the developer immediate feedback when .save() fails inside an integration test. Say you have a domain class with 12 fields and one of them violates a constraint without your knowing it when you create the instance... it just doesn't save. Naturally, the test code afterward fails.
This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself.
So I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can easily be enforced for everyone working on the project... and it's kind of bloaty. It would be nice to turn this on automatically for code running as part of a unit test.
Integration tests run slowly. I cannot understand how one integration test that saves one object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests that talk to an actual database and do DbUnit dumps after every test to run in about the same time! This is dumb.
It is hard to run all of the unit tests, but not the integration tests, in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output from Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to dig through the file system to hunt down the stupid HTML file. What a pain.
Then, once you've found the HTML file and clicked through to the failing test, it tells you a line number. Since these reports are not in the IDE, you can't just click the stack trace to jump to that line of code... you have to go back and find it yourself. ARGGH!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it a system-wide default for save() (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in tests, you can put it in the environment-specific stanza in Config.groovy. I actually like it as a project-wide behavior.
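For example, a minimal sketch of scoping it to the test environment in Config.groovy:

// Config.groovy
environments {
    test {
        // save() throws a validation exception on constraint violations,
        // but only when running under the test environment.
        grails.gorm.failOnError = true
    }
}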
I'm sure there's a way to turn failOnError on/off within a defined scope, but I haven't investigated how to do it yet (it might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace and get right to the issues. The latest version of IntelliJ even collapses the majority of metaclass cruft that isn't interesting when trying to fix issues.
If you haven't already done this to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have Grails regenerate your configuration:
grails integrate-with --intellij
Then open the .ipr file that gets generated.