What is the equivalent of autotest/guard for django - django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django so I can enjoy the same workflow?
Specifically, I want something that will:
run tests based on the files I have changed, and not the whole suite
know whether to kick off the test command based on whether a test run is currently taking place
work with existing tests written with unittest
work with something like factory_boy, so I can use factories instead of fixtures
I've used nose before, and pytest, and I'm comfortable using both - but I haven't used many of pytest's extensive set of libraries.
What are my options for this?
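For reference, the sort of test I want to keep re-running automatically is a plain unittest-style Django test plus a factory_boy factory - something roughly like this (the model, factory and assertion here are just placeholders, not code from my project):

import factory
from django.contrib.auth.models import User
from django.test import TestCase

class UserFactory(factory.django.DjangoModelFactory):
    # a factory instead of a fixture
    class Meta:
        model = User
    username = factory.Sequence(lambda n: "user%d" % n)

class SignupTest(TestCase):
    def test_new_users_start_inactive(self):
        user = UserFactory(is_active=False)
        self.assertFalse(User.objects.get(pk=user.pk).is_active)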

Related

CQ - Writing Server-side JUnit tests

I've been trying to write a JUnit test case for one of my Java classes, which creates a page with some given properties in CQ. For that, it needs to get references to the SlingRepository and ResourceResolverFactory. I was using this to get an idea of how to achieve it. The document says that a POST to "http://$HOST:$PORT/system/sling/junit/" is used to execute tests on the server side. But in CQ I get a 404 error for this path.
Is there an alternative URL for this in CQ? Or I would really appreciate it if anyone could suggest a better approach.
Thanks
One approach is to use a Sling test runner to execute the JUnit tests via a browser. This is the approach you are mentioning. We had to first install the code in this JAR (org.apache.sling.junit.core) to add the code that allows the URL you listed to work. Once that code is there, this URL will let you run tests using the test runner's built-in page to run/display tests: http://localhost:4502/system/sling/junit/. My team did this for a while, but we soon moved to a different approach--using the IntelliJ IDE to develop the Java code for CQ and write the JUnit tests, then executing them within the IDE using its built-in JUnit test runner. The same approach works in Eclipse. For our team this approach was superior because it allowed developers to remain in context in the IDE without having to switch to a browser to run the tests.
The key is being able to resolve the references to classes that are installed/available via CQ, such as the SlingRepository and ResourceResolverFactory classes--and other stuff we commonly used, such as the Resource, ResourceResolver, Node, and Session classes. We use a CQ extension (http://helpx.adobe.com/experience-manager/kb/HowToUseCQ5AsMavenRepository.html) to allow our CQ instance to act like a Maven repository. This allows us to export the CQ JARs so we can then reference them as dependencies in the Java projects we create whenever we may need to use some of the classes available via CQ itself.
Once we had set up the project dependencies, we were able to write code--and corresponding unit tests--within the IntelliJ IDE. We were able to run the tests within the IDE, allowing developers to remain in context and work on the code that will run in CQ just like they work on any other Java code (including things like running tests in debug mode or with code coverage, running single tests, running all tests in a class, using keyboard shortcuts to kick off tests, etc.). For us this approach had many advantages over the browser-based Sling test runner, so I recommend it.
Some potential considerations:
Exporting from CQ as a Maven repo may not give the best performance--you may want to add things to your own Maven repo for faster access
You may want to script some of the steps so adding project dependencies is not a manual process, but rather is something done via an automated process
You could even export all CQ JARs--or add some scripting to parse out and repackage only the public classes--and make any CQ class available to your Java projects

How do I unit test UI via console

I know this has been asked many times but I want to be specific.
I used to use Selenium. After googling, it looks like I can run it via the console and it gives me a bunch of text output, but I'd rather not parse that; I want a pass/fail type of thing.
Every once in a while I like to run all of my unit tests on the UI, not the code. I don't just want to submit a form with certain values; I want to see that if I click this image, the drop-down box beside it pops out, and if I select a name, it ends up in the form, which I'll then submit after running a few other things.
The reason I'd like this is that certain features MUST ALWAYS be working, so I'm OK with adjusting the unit tests every time I modify the UI for those features. For the rest, the unit tests in code that check the business logic will be enough, as those UIs are always changing or not very important.
It would be nice if something could kick off Firefox and Chrome (or WebKit), but that's not required.
Like I said, I'd like pass/fail - some kind of easy text to parse. Complex output is OK, as I know regex, but I don't want to have to figure out where one unit test ends and the next starts.
If you're using Java/Maven - I wrote a Maven plugin for Selenium that should do what you want:
https://github.com/willwarren/selenium-maven-plugin. You generate the tests in Firefox + Selenium, then save the files to a directory in your Maven project.
If you're not using maven you can use the project that I built upon:
http://code.google.com/p/selenium4j
From the Readme:
We use Selenium IDE to record our tests. We then saved the test cases into our project in the following fashion (note: currently the code from selenium4j only supports one level, so don't nest your folders):
./src/test/selenium
|-signin
  |-LoginGoodPassword.html
  |-LoginBadPassword.html
  |-selenium4j.properties
We didn't save the test suites, as Maven takes care of finding your tests.
The selenium4j.properties contains setup information about:
# the web site being tested
webSite=http://yourwebapp:8080
# A comma separated values of the WebDrivers being used. Accepted drivers:
# HtmlUnitDriver, FirefoxDriver, ChromeDriver, InternetExplorerDriver
driver=FirefoxDriver
# How many times we want to iterate and test
loopCount=1
The Selenium Maven plugin, which is bound to the process-test-resources phase, then converts these HTML files into JUnit 4 tests in your src/test/java folder.
So you end up with:
./src/test/java
|-signin
  |-firefox
    |-LoginGoodPasswordTest.java
    |-LoginBadPasswordTest.java

JUnit: running chosen tests instead of all of them

I have a problem with executing tests in JUnit. Imagine you have one test case class with, say, 100 tests, no test suite and no main program - the test case class tests a device on a COM port. The JUnit project is in NetBeans. I want to run tests - but not all of them at the same time; I would like to choose which tests to run before the actual testing.
I once saw something like that in Eclipse - but it wasn't my project, and I don't know how it was done or how to do the same thing in NetBeans. It was a separate window popping up before the tests were run. In this window there were checkboxes with the names of the methods annotated with @Test, and you could choose the tests you wanted to run and click Run - so it let you run exactly what you wanted.
Does anyone know how to do this in NetBeans? Is there a library or plugin for it?
Any help will be appreciated.
You can take a look at Run single test from a JUnit class using command-line. It does allow you to specify what test you want to run given a class with multiple test cases in it. Being command-line you can then script your own test suite that runs the specific ones you want.
I also noticed your other question Junit: changing sequence of test running. With the scripting approach you can actually control the order of your testing.
This approach does not take advantage of Eclipse's or NetBeans' JUnit test runners though, so it is a very specific workaround.
NetBeans nowadays supports running single tests.

Setting up proper testing for Django for TDD

I've been ignoring the need to test my project for far too long.
So I spent more than a day looking for ways to implement tests for my current apps and trying to get some TDD going for new apps.
I found a lot of "tutorials" with the steps: "1. Install this 2. Install that 3. Install thisnthat 4. Done!",
but no one seems to talk about how to structure your tests, both file- and code-wise.
And no one ever talks about how to set up a CI server, or just how to integrate the testing with the deployment of your project.
A lot of people mention Fabric, virtualenv and nose - but no one describes how to make them work together as a whole.
What I keep finding is detailed information about how you set up a proper Rails environment with testing and CI etc...
Does anyone else feel that the Django community lacks in this area, or is it just me? :)
Oh, and does anyone else have any suggestions on how to do it?
As I see it, there are several parts to the problem.
One thing you need is good unit tests. The primary characteristic of unit tests is that they are very fast, so that they can test the combinatorial possibilities of function inputs and branch coverage. To get that speed, and to maintain isolation between tests even if they are running in parallel, unit tests should not touch the database, network or file system. Such tests are hard to write in Django projects, because the Django ORM makes it so convenient to scatter database access calls throughout your product code; hence any tests of Django code will inevitably hit the database. Ideally, you should approach this by limiting the database access in your product code to a thin data access layer built on top of the Django ORM, which exposes methods pertinent to your application. Another approach is for your tests to mock out the ORM calls. In the worst case, you will give up on this: your unit tests become integration tests - they actually hit the database, cross multiple layers of your architecture, and take minutes to run, which discourages developers from running them frequently enough.
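As a rough sketch of that "thin data access layer" idea (the function names and module layout below are invented for illustration, not taken from the original answer): only one function touches the ORM, and the business logic can then be unit-tested with that function mocked out, so the test never needs a database.

from unittest import TestCase
from unittest.mock import patch  # the external "mock" package on older Pythons

def get_active_users():
    # the only place that touches the ORM; imported lazily so the
    # pure-Python test below never needs a configured database
    from django.contrib.auth.models import User
    return list(User.objects.filter(is_active=True))

def active_user_report():
    # business logic only - no ORM calls, so it can be tested in isolation
    return {"count": len(get_active_users())}

class ActiveUserReportTest(TestCase):
    def test_counts_users_without_touching_the_database(self):
        with patch(__name__ + ".get_active_users", return_value=[object(), object()]):
            self.assertEqual(active_user_report()["count"], 2)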
The implication of this is that writing integration tests is easy - the canonical style of Django tests covers this perfectly.
The final, and hardest, part of the problem is running your acceptance tests. The defining characteristic of acceptance tests is that they invoke your application end-to-end, as a user does in production, to prove that your application actually works. Canonical Django tests using the Django test runner fall short of this. They do not issue actual HTTP requests (instead, they examine the URL config to figure out what middleware and view would get called to handle a particular request, and then call it, in process). This means that such tests are not testing your web server config, nor any JavaScript, nor rendering in the browser, etc. To test this, you need something like Selenium.
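For illustration, a minimal acceptance test along these lines might combine Django's LiveServerTestCase with the Selenium WebDriver - the URL path and the element it looks for below are placeholders, not anything from the original answer:

from django.test import LiveServerTestCase
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPageAcceptanceTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.browser = webdriver.Firefox()  # a real browser, driven over HTTP

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super().tearDownClass()

    def test_login_form_renders_in_a_real_browser(self):
        # issues an actual HTTP request against the live test server
        self.browser.get(self.live_server_url + "/accounts/login/")
        self.assertTrue(self.browser.find_element(By.TAG_NAME, "form").is_displayed())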
Additionally, we have many server-side processes, such as cron jobs, which use code from our Django project. Acceptance tests which involve these processes should invoke the jobs just like cron does, as a new process.
Both these scenarios have some problems. Firstly, you can't simply run such tests under the Django test runner. If you try to do so, you will find that the test data you have written during the test setup (either using the Django fixtures mechanism, or by simply calling "MyModel().save()" in a test) is in a transaction which your product code, running in a different process, is not party to. So your tests have to commit the changes they make before the product code can see them. This interferes with the clean-up between tests that Django's test runner helpfully does, so you have to switch it into a different mode, which does explicit deletes rather than rolling back. Sadly, this is much slower. At last night's London Django user group, a couple of Django core developers assured me that this scenario also has other complications (which I confess I don't know the details of), and that it is best to avoid them by not running acceptance tests within the Django test runner at all, but creating them as a completely stand-alone set of tests.
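In current Django, that "explicit deletes rather than rolling back" mode is what TransactionTestCase does. A rough sketch of the kind of test described above, with an invented model and management command name, might look like this:

import subprocess
from django.test import TransactionTestCase
from myapp.models import Job  # hypothetical model, for illustration only

class NightlyJobAcceptanceTest(TransactionTestCase):
    # TransactionTestCase really commits writes and cleans up by flushing
    # tables afterwards, so a separately spawned process can see the rows
    def test_external_process_sees_committed_data(self):
        Job.objects.create(name="nightly-report")
        # invoke the job exactly as cron would: in a fresh process
        # (that process still has to be pointed at the test database - see below)
        subprocess.check_call(["python", "manage.py", "run_nightly_jobs"])
        self.assertTrue(Job.objects.get(name="nightly-report").processed)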
If you do this, then your immediate problem is that you have lost the benefits the Django test runner provides: namely, it creates a test database and cleans it between each test. You will have to create some equivalent mechanism for yourself. You will need your product code to run against a test database if it is being invoked as part of a test. You need to be absolutely certain that if product code is run as part of a test, even on a production box, then it can NEVER accidentally touch the production database, so this mechanism has to be absolutely failsafe. Forgetting to set an environment variable in a test setup, for example, should not cause a blooper in this regard.
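One way to sketch that failsafe requirement (the variable names and database values below are all placeholders): make the settings refuse to guess, so a missing flag aborts the run instead of silently falling through to the production database.

# settings.py (sketch)
import os

APP_ENV = os.environ.get("APP_ENV")

if APP_ENV == "test":
    DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3",
                             "NAME": "acceptance_test.sqlite3"}}
elif APP_ENV == "production":
    DATABASES = {"default": {"ENGINE": "django.db.backends.postgresql_psycopg2",
                             "NAME": "myproject"}}
else:
    # fail closed: never default to a database when the environment is unknown
    raise RuntimeError("APP_ENV must be explicitly set to 'test' or 'production'")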
This is all before even considering the complications that arise from deployment, having parts of your project in different repos, dependent on each other, creating pip-installable packages, etc.
All in all, I'd love to hear from someone who feels they have found a good solution to this problem. It is far from a trivial issue as some commenters imply.
Harry Percival is creating a Django / TDD / Selenium tutorial (and an accompanying workshop, if you live in London). His book reads like a hands-on tutorial and goes into great detail on the subject:
https://www.obeythetestinggoat.com/book/part1.harry.html
In my experience, fine-grained unit tests for web apps are not worth it: the setup/teardown is too expensive and the tests are too fragile. The only exception is isolated components, especially those with clear inputs and outputs and complicated algorithms. Do unit-test those down to the smallest details.
I had the best testing experience using a semi-functional testing tool called testbrowser, which simulates browser actions in Python. For integration with Django, install the homophony app (disclaimer: I am the author of the app).
Testbrowser may be a little too coarse for test-driven development, but it's the best testing tool of the ones I have used so far. Most importantly, it scales up fairly well, whereas unit tests and browser-based functional test tools tend to become very brittle as your app grows in size.
As for a CI tool, go with Buildbot or Jenkins.
I use a combination of Django's excellent extension of the Python unittest framework for testing APIs / models / helper functions, and Selenium for in-browser testing. Selenium has great instructions for how to set it up and write tests in Python.
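For instance, the unittest-extension half of that combination looks roughly like this (the URL and assertion are placeholders); Selenium then covers the in-browser half:

from django.test import TestCase

class ApiSmokeTest(TestCase):
    # django.test.TestCase extends unittest.TestCase, adding a test client
    # and per-test database isolation
    def test_health_endpoint_returns_ok(self):
        response = self.client.get("/api/health/")  # placeholder URL
        self.assertEqual(response.status_code, 200)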

Unit testing an MVC project with EF?

I am trying to start unit testing an MVC2 project, which uses the Entity Framework. When I run my "hello world" test, it fails saying this:
The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid.
How can I pass the connection data (which were generated by the Entity Framework and are in the main Web.config) to the testing project?
Thanks
Depending on what unit-testing framework you use, you could try adding an app.config to your test project with the right settings for EF. This works with xUnit.net, and I'm pretty sure most other test frameworks also support this.
For completeness I do need to warn you that tests that touch the database aren't unit tests but integration tests. Those are useful too, but can become a hassle to maintain when your code changes. It's usually a good idea to test small pieces of code in isolation; this gets around problems like the one you describe because you won't need to access the database at all.
I would recommend using Dev Magic Fake to mock the UI without needing to use Entity Framework or even a DB. Using Dev Magic Fake, you can run your MVC project and run the unit tests without the need for any DAL.
For more information: http://devmagicfake.codeplex.com/
Thanks