I'm working with Symfony + Doctrine + PHPUnit, with the NetBeans IDE. Here's my current approach to unit testing.
The setUp() method loads the test fixtures from .yml files.
The tearDown() method deletes all data from the models. This is done by looping through an array of all my models' names and running something like Doctrine_Query::delete($modelName)->execute().
This seems to work, but I'm just curious if this is the correct way to do it. I am essentially clearing all tables after each test method by specifying the models/tables to 'delete all' from, roughly as in the sketch below.
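A minimal sketch of what I mean (the model names and fixture directory here are just placeholders for my real ones):

class ModelTestCase extends PHPUnit_Framework_TestCase
{
    // Placeholder list of model names; the real array covers every model in the schema.
    protected $models = array('Article', 'Comment', 'Author');

    protected function setUp()
    {
        // Reload the YAML fixtures before each test.
        Doctrine_Core::loadData(sfConfig::get('sf_test_dir').'/fixtures');
    }

    protected function tearDown()
    {
        // 'Delete all' from every model/table after each test.
        foreach ($this->models as $modelName) {
            Doctrine_Query::create()->delete($modelName)->execute();
        }
    }
}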
Q1: I am just wondering if this is the correct way...
Q2: This works nicely in the NetBeans IDE, but does not seem to work via "./symfony test:unit". Am I missing something, or does the CLI only work with lime?
./symfony test:unit runs symfony's own test suite, which uses lime as its test framework, not PHPUnit.
NetBeans uses PHPUnit for its integrated test support. Hopefully NetBeans will add support for the symfony test suite with the incoming symfony support in NetBeans 6.8.
If you want to use PHPUnit with symfony, check out: PHPUnit plugin. Just a note that this one only seems to work with 1.2.x; for 1.4.x, which is what I'm currently using at work, check out: Another PHPUnit plugin. This last one is in beta, but according to the author it works for 1.4.x. I'll be trying it out soon, and if I remember, I'll come back here and post my findings. It's honestly not too hard to back out if you decide not to keep it, so trying it is easy.
If you happen to try it, please post your findings; I'd be really interested in hearing your thoughts. I'm finding lime to be lame (HAH!), as it just makes mocking a chore.
I'm trying it externally with PHPUnit, no plugin. I'm using Doctrine, and I am having quite a problem. If I run ONE PHPUnit test method (written by me), it's great. The second one, not so good. It may be the way I'm using Doctrine. It seems like even though I delete everything from the database between PHPUnit method calls and restore the fixtures file, all in setUp(), Doctrine remembers previous values.
It doesn't matter whether I flush the connection, unset the parent object that is being erroneously 'remembered', call refreshRelated(), etc.; I still get old values on the first assignment to a relationship:
$parent = new ParentType();
// set parent values
$child = new ChildType();
// set child values
$child['Parent'] = $parent;
$child->save();
The database reflects everything fine; it's Doctrine inside PHPUnit that's not working. I haven't tried it OUTSIDE of PHPUnit yet (after all, test before use, right?), but I may have to do that to see whether it's Doctrine or PHPUnit.
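For reference, this is roughly what my 'reset everything between test methods' attempt looks like in setUp() (a sketch only; the model names and fixture path are placeholders, and the calls are the Doctrine 1.x ones I've been trying):

protected function setUp()
{
    $conn = Doctrine_Manager::getInstance()->getCurrentConnection();

    // Delete everything from the database (model names are placeholders).
    foreach (array('ParentType', 'ChildType') as $modelName) {
        Doctrine_Query::create()->delete($modelName)->execute();
    }

    // Try to drop whatever Doctrine still holds in its identity map,
    // so previously loaded objects are not handed back.
    $conn->clear();

    // Restore the fixtures file.
    Doctrine_Core::loadData(dirname(__FILE__).'/fixtures/test_data.yml');
}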
Related
When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, what I want is a tool that will:
know what tests to run based on the files I have changed, and not run everything
know whether to run the test command based on whether a test run is currently taking place
work with existing tests written with unittest
work with something like factory_boy to let me use factories instead of fixtures (see the sketch below)
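For context, by 'factories instead of fixtures' I mean something like this factory_boy sketch (the model and field names are made up for illustration):

import factory
from django.test import TestCase
from myapp.models import Author  # hypothetical model

class AuthorFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Author

    name = factory.Sequence(lambda n: "author-%d" % n)

class AuthorTest(TestCase):
    def test_author_has_a_name(self):
        author = AuthorFactory()  # creates and saves a row, no fixture file needed
        self.assertTrue(author.name.startswith("author-"))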
I've used nose before, and pytest, and I'm comfortable using both, but I haven't used many of pytest's extensive set of libraries.
What are my options for this?
I am unit testing my CakePHP plugins. However, I have run into this situation.
My plugin's tests work well, but when I activate another plugin (whose fixtures/tables are not loaded), my original tests won't pass anymore!
How do I 'fix' this? Is it the responsibility of the second plugin that makes the other tests fail, or do I have to prepare my main plugin for situations like this?
Hopefully I described the situation clearly...
EDIT
Let's make it clear :)
I have tested a controller in plugin 'A'. It passes all tests, so that's great.
But when I load plugin 'B' into my system and test the same controller from plugin 'A', it fails because plugin 'B' wants a specific table that doesn't exist, because my test didn't load its fixture.
This gave me the question: how should I test? Should I focus only on plugin 'A', or keep in mind that plugin 'B' could possibly join the system (which is very complicated)...
Greetz
The best idea is to test your plugins in isolation: install only plugin A and run its tests, and in another place install only plugin B and run its tests.
If plugin B affects plugin A in a way that makes its tests fail, you will need to fix your tests to account for those cases if you want all tests to be run in the same test suite.
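For example (a hedged sketch in CakePHP 2.x style; the plugin, fixture, and controller names are placeholders), one way to account for it is to pull plugin B's fixture into plugin A's test case so the table it needs exists:

class ArticlesControllerTest extends ControllerTestCase {

    public $fixtures = array(
        'plugin.a.article',    // plugin A's own fixture
        'plugin.b.b_setting'   // the table plugin B expects to exist
    );

    public function testIndex() {
        $result = $this->testAction('/a/articles/index', array('return' => 'vars'));
        $this->assertArrayHasKey('articles', $result);
    }
}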
As written in a fairly old book, xUnit Test Patterns, NUnit 2.0 did not create a new test fixture instance for each test, and because of that, if tests manipulated some fixture state, that state became shared and could cause various bad side effects.
Is this still the case? I tried to find it on the official site but failed, and I haven't used NUnit for a while.
The fixture object is created once and shared by all of the tests in that fixture.
For a given fixture class, a FixtureSetup method is run once for all of the tests in the fixture, and a Setup method is run once before each test. So any state that needs to be reset should be reset in a Setup method (or in TearDown, which runs at the end of each test).
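To illustrate, here is a sketch using the NUnit 2.x attribute names (in NUnit 3 TestFixtureSetUp became OneTimeSetUp):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class FixtureLifetimeExample
{
    private List<int> _shared;   // lives on the single fixture instance
    private int _perTest;

    [TestFixtureSetUp]           // runs once, before any test in this fixture
    public void FixtureSetup()
    {
        _shared = new List<int>();
    }

    [SetUp]                      // runs before every test: reset per-test state here
    public void Setup()
    {
        _perTest = 0;
    }

    [Test]
    public void FirstTest()
    {
        _shared.Add(1);          // mutation of fixture state is visible to later tests
        _perTest++;
        Assert.AreEqual(1, _perTest);
    }

    [Test]
    public void SecondTest()
    {
        Assert.AreEqual(0, _perTest);   // reset by Setup
        // _shared may already contain FirstTest's value, because the instance is shared
    }
}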
Since 3.13 you can configure this with the [FixtureLifeCycle] attribute:
LifeCycle.SingleInstance: a single instance is created and shared by all test cases
LifeCycle.InstancePerTestCase: a new instance is created for each test case
https://docs.nunit.org/articles/nunit/writing-tests/attributes/fixturelifecycle.html
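A minimal sketch of the InstancePerTestCase option (NUnit 3.13 or later):

using NUnit.Framework;

[FixtureLifeCycle(LifeCycle.InstancePerTestCase)]
[TestFixture]
public class IsolatedCounterTests
{
    private int _counter;   // fresh for every test, because each test gets its own fixture instance

    [Test]
    public void TestA()
    {
        _counter++;
        Assert.That(_counter, Is.EqualTo(1));
    }

    [Test]
    public void TestB()
    {
        _counter++;
        Assert.That(_counter, Is.EqualTo(1));   // unaffected by TestA
    }
}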
I found that this was an issue that affected me, and I also found this link, which provides a bit of history on the issue:
https://blogs.msdn.microsoft.com/jamesnewkirk/2004/12/04/why-variables-in-nunit-testfixture-classes-should-be-static
I think one of the biggest screw-ups that was made when we wrote NUnit V2.0 was to not create a new instance of the test fixture class for each contained test method.
I have not yet tested this in V3 to see if it has changed.
I'm playing with MVC2 and Entity Framework CTP4, using code-only persistence. I've created some unit tests in MSTest for my domain objects, including some to see how persistence works in this paradigm. I'm using SQL Server CE 4.0 for these tests. This works fine, except for one problem: data seems to be persisted between tests within the same class.
I have previous experience with Java, Hibernate Annotations, and HSQLDB, and in that case the DB is created and torn down on each test execution. With SQL Compact, however, I have a couple of tests that use the same test data fixture and end up with constraint violations if I run them both.
I can fix this via some hacks to drop tables/delete data explicitly within [TestCleanup], but what is the proper way to ensure that I start with a fresh DB for each test when using SQL Compact in this case? I'm sure the answer is simple, but I can't seem to find it anywhere. Thanks.
EDIT: For the moment, I'm doing this, which works, but I don't like it. Better ideas are welcome:
[TestCleanup]
public void teardown()
{
    // Drop and recreate the test database after every test.
    mgr.Database.DeleteIfExists();
    mgr.Database.Create();
}
I think a better approach is to move the code you have in teardown into a [TestInitialize] method, which gets called before each test executes. Compare this with [ClassInitialize], which gets called once for the entire fixture.
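A sketch of what that looks like (MSTest attributes; "MyContext" here stands in for whatever code-first context "mgr" is in your tests):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DomainPersistenceTests
{
    private MyContext mgr;   // hypothetical code-first context class

    [TestInitialize]         // runs before each test method
    public void Setup()
    {
        mgr = new MyContext();
        mgr.Database.DeleteIfExists();   // start every test against an empty database
        mgr.Database.Create();
    }

    [TestCleanup]            // runs after each test method
    public void Teardown()
    {
        mgr.Dispose();
    }
}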
I am more familiar with NUnit, and found this table helpful for mapping NUnit attributes to MSTest:
http://blogs.msdn.com/b/nnaderi/archive/2007/02/01/mstest-vs-nunit-frameworks.aspx
I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and one of them violates a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail.
This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself.
So I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that runs as part of a unit test.
Integration tests run slowly. I cannot understand how one integration test that saves one object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests that talk to an actual database and do DbUnit dumps after every test to run in about the same time! This is dumb.
It is hard to run all the unit tests and not integration tests in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to dig through the file system to hunt down the stupid HTML file. What a pain.
Then once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you have to go back and find it yourself. ARGGH!#!#!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it the system-wide default for save() (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in tests, you can put it in the environment-specific stanza in Config.groovy, as sketched below. I actually like this as a project-wide behavior.
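For example, the test-only version would look roughly like this in Config.groovy:

environments {
    test {
        // only fail loudly on save() during test runs
        grails.gorm.failOnError = true
    }
}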
I'm sure there's a way that you could turn failOnError on/off within a defined scope, but I haven't investigated how to do it yet (might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace to get right to the issue. The latest version of IntelliJ even collapses the majority of metaclass cruft that isn't interesting when trying to fix issues.
If you haven't already done this to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have Grails regenerate your configuration:
grails integrate-with --intellij
Then open the .ipr file that gets generated.