I am unit-testing my CakePHP plugins. However, I have run into the following situation.
My plugin's tests work well, but when I activate another plugin (whose fixtures (tables) are not loaded), my original tests won't pass anymore!
How do I 'fix' this? Is it the responsibility of the second plugin, which makes the other tests fail, or do I have to prepare my main plugin for situations like this?
Hopefully I have described the situation clearly...
EDIT
Let's make it clear :)
I have tested a controller in plugin 'A'. It passes all tests, so that's great.
But when I load plugin 'B' into my system and test the same controller from plugin 'A', it fails, because plugin 'B' wants a specific table that doesn't exist, since my test didn't load its fixture.
This raised the question: how should I test? Should I focus only on plugin 'A', or keep in mind that plugin 'B' could possibly join the system (which is very complicated)...
Greetz
The best approach is to test your plugins in isolation; that means installing only plugin A and running its tests, and in another place installing plugin B and running its tests.
If plugin B affects plugin A in a way that makes tests fail, you will need to fix your tests to account for those cases if you want all tests to be run in the same test suite - for example by loading plugin B's fixtures in the affected test cases, as sketched below.
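A minimal sketch of that fix, written in the style of a CakePHP 2.x controller test (the class and fixture names here are placeholders, not anything from your actual plugins):

<?php
// Hypothetical test in plugin A that also loads a fixture from plugin B,
// so the table plugin B expects exists while the tests run.
class SomeControllerTest extends ControllerTestCase {

    // Placeholder fixture names following CakePHP's
    // 'plugin.<plugin_name>.<fixture_name>' convention.
    public $fixtures = array(
        'plugin.a.user',
        'plugin.b.b_setting',
    );

    public function testIndex() {
        $result = $this->testAction('/a/some/index');
        $this->assertNotEmpty($result);
    }
}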
Related
When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, what I want is to be able to have tests run automatically by a tool that will:
run tests based on the files I have changed, and not the whole suite
know whether to start a test run based on whether one is currently taking place
work with existing tests written with unittest
work with something like factory_boy to let me use factories instead of fixtures
I've used nose before, as well as pytest, and I'm comfortable using both - but I haven't used many of pytest's extensive set of plugins.
What are my options for this?
I have a problem with executing tests in JUnit. Imagine you have one test case class with, for example, 100 tests, no test suite, and no main program - the test case class tests a device on a COM port. The JUnit project is in NetBeans. I want to run tests - but not all of them at the same time; I would like to choose which tests to run before the actual testing.
I once saw something like that in Eclipse - but it wasn't my project, and I don't know how it was done or how to do the same thing in NetBeans. It was a separate window popping up before the tests ran. In this window there were checkboxes with the names of the methods carrying the @Test annotation, and you could choose the tests you wanted to run and click Run - so it let you run exactly what you wanted.
Does anyone know how to do this in NetBeans? Is there a library or plugin for it?
Any help will be appreciated.
You can take a look at Run single test from a JUnit class using command-line. It does allow you to specify what test you want to run given a class with multiple test cases in it. Being command-line you can then script your own test suite that runs the specific ones you want.
I also noticed your other question Junit: changing sequence of test running. With the scripting approach you can actually control the order of your testing.
This approach does not take advantage of Eclipse's or NetBeans' JUnit test runners though, so it is a very specific workaround.
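For illustration, here is a minimal sketch of that scripting idea using JUnit 4's JUnitCore and Request.method (the DeviceTest class and its method names are placeholders standing in for your real 100-test class):

import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;
import static org.junit.Assert.assertTrue;

public class SelectiveRunner {

    // Stand-in for the real test case class that talks to the COM port.
    public static class DeviceTest {
        @Test public void testOpenPort()    { assertTrue(true); }
        @Test public void testSendCommand() { assertTrue(true); }
        @Test public void testNotSelected() { assertTrue(true); } // not picked below
    }

    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();
        // Only the methods listed here are run, in exactly this order -
        // which also gives you control over the test sequence.
        String[] picked = { "testOpenPort", "testSendCommand" };
        for (String name : picked) {
            Result result = core.run(Request.method(DeviceTest.class, name));
            System.out.println(name + ": failures=" + result.getFailureCount());
        }
    }
}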
NetBeans nowadays supports running single tests.
As written in the fairly old book xUnit Test Patterns, NUnit 2.0 did not create a new test fixture instance for each test; because of that, if tests manipulated some state of the fixture, that state became shared and could cause various bad side effects.
Is this still the case? I tried to find it on the official site but failed, and I haven't used NUnit for a while.
The fixture instance is created once for all of the tests in that fixture.
For a given fixture class, a FixtureSetup method is run once for all of the tests in the fixture, and a Setup method is run once before each test. So any state that needs to be reset should be reset in a Setup method (or in TearDown, which is run at the end of each test). A sketch of the two follows.
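As a small illustration in C# (NUnit 2.x attribute names; CounterTests and its fields are made up for the example):

using NUnit.Framework;

[TestFixture]
public class CounterTests
{
    private int shared;   // one fixture instance, so this survives across tests
    private int perTest;  // reset before every test in SetUp

    [TestFixtureSetUp]    // runs once for the whole fixture ([OneTimeSetUp] in NUnit 3)
    public void FixtureSetup()
    {
        shared = 0;
    }

    [SetUp]               // runs before each test
    public void Setup()
    {
        perTest = 0;
    }

    [Test]
    public void FirstTest()
    {
        shared++;
        perTest++;
        Assert.AreEqual(1, perTest);
    }

    [Test]
    public void SecondTest()
    {
        // perTest was reset by SetUp, but shared may still hold whatever
        // FirstTest left behind, because the fixture instance is reused.
        Assert.AreEqual(0, perTest);
    }
}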
Since NUnit 3.13 you can configure this with the FixtureLifeCycle attribute:
LifeCycle.SingleInstance - a single fixture instance is created and shared by all test cases
LifeCycle.InstancePerTestCase - a new fixture instance is created for each test case
https://docs.nunit.org/articles/nunit/writing-tests/attributes/fixturelifecycle.html
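A minimal sketch of the 3.13 behaviour (the fixture and field names are invented for the example):

using NUnit.Framework;

// With InstancePerTestCase every test method gets a brand new fixture
// object, so instance fields can no longer leak state between tests.
[FixtureLifeCycle(LifeCycle.InstancePerTestCase)]
[TestFixture]
public class IsolatedTests
{
    private int counter;  // starts at 0 for every single test case

    [Test]
    public void FirstTest()
    {
        Assert.AreEqual(0, counter);
        counter++;
    }

    [Test]
    public void SecondTest()
    {
        // Would fail under the default SingleInstance lifecycle if
        // FirstTest ran first; passes here because this is a new instance.
        Assert.AreEqual(0, counter);
    }
}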
I found that this was an issue that affected me, and I also found this link, which provides a bit of history on the issue:
https://blogs.msdn.microsoft.com/jamesnewkirk/2004/12/04/why-variables-in-nunit-testfixture-classes-should-be-static
I think one of the biggest screw-ups that was made when we wrote NUnit V2.0 was to not create a new instance of the test fixture class for each contained test method.
I have not yet tested this in V3 to see if it has changed.
I am trying to start unit testing an MVC2 project, which uses the Entity Framework. When I run my "hello world" test, it fails saying this:
The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid.
How can I pass the connection data (which were generated by the Entity Framework and are in the main Web.config) to the testing project?
Thanks
Depending on what unit testing framework you use, you could try adding an app.config to your test project with the right settings for EF; a sketch follows. This works with xUnit.net, and I'm pretty sure most other test frameworks also support this.
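A minimal app.config sketch, assuming an ObjectContext named "MyEntities", a model file named "Model", and a database "MyDb" - all placeholders; copy the real <add> element from your main Web.config and keep the name identical to the one your generated context expects:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <!-- EF connection strings embed the model metadata as well as the
         underlying provider connection string. -->
    <add name="MyEntities"
         connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDb;Integrated Security=True&quot;"
         providerName="System.Data.EntityClient" />
  </connectionStrings>
</configuration>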
For completeness, I do need to warn you that tests that touch the database aren't unit tests but integration tests. Those are useful too, but they can become a hassle to maintain when your code changes. It's usually a good idea to test small pieces of code in isolation; this gets around problems like the one you describe, because you won't need to access the database at all.
I would recommend using Dev Magic Fake to mock the UI without needing to use Entity Framework or even a database. Using Dev Magic Fake, you can run your MVC project and run the unit tests without needing any DAL.
For more information: http://devmagicfake.codeplex.com/
Thanks
I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:
void testCount() {
mockDomain(UserAccount)
new UserAccount(firstName: "Ken").save()
new UserAccount(firstName: "Bob").save()
new UserAccount(firstName: "Dave").save()
assertEquals(3, UserAccount.count())
}
For some reason, I get 0 returned back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects weren't stored. Is there any way to get some feedback here? That's a really crappy thing to have happen....
The second question is for those who use IDEA. What should I be running - IDEA's junit tests, or grails targets? I have two options.
Also, why does IDEA say that my tests pass and give me a green light even though the test above actually fails? This will really drive me nuts if I have to check the HTML test reports every time I run my tests.....
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this. This causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
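For example, a small sketch against the same UserAccount class as above (whether each save succeeds still depends on your actual constraints):

void testSaveFeedback() {
    mockDomain(UserAccount)

    // Throws a validation exception instead of silently returning null
    // if a constraint is violated.
    new UserAccount(firstName: "Ken").save(failOnError: true)

    // Alternatively, inspect the validation errors yourself:
    def user = new UserAccount(firstName: "Bob")
    if (!user.save()) {
        user.errors.allErrors.each { println it }
    }
}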
I personally haven't found IDEA's JUnit test runner to be particularly useful when working with Grails. It is likely fine to use the test runner for unit tests. For integration tests you might consider setting up an Ant target in "debug" mode to run your tests.
Over time, running the tests starts to take so long that I tend to run them exclusively from the command line, to avoid the additional overhead IntelliJ adds.
Regarding your unit test, I am pretty sure you would need to run an integration test to get a count that is not zero.
I'm not sure exactly what unit test setup you're using, but since GORM is not bootstrapped in unit tests, I'm not sure the domain object mocking supports incrementing the count.
Your test would likely pass as an integration test provided that your domain objects validate.
Add flush: true to your save() call:
new UserAccount(firstName: "Ken").save(flush:true)
...
Grails sets the flush mode of the Hibernate session to manual, so the change is not persisted when the action returns, but it is before the view is rendered. This allows views to access lazy-loaded collections and relationships, and prevents changes from being persisted automatically.