There are a lot of resources out there for unit testing in Dart using the old unittest library, but I can't find much about the test library, which was just released at the end of last year.
In unittest you could call useHtmlConfiguration() or useHtmlEnhancedConfiguration(), which would serve up test results on localhost:8081 (or whatever port you used with pub serve). The new library doesn't seem to have that, or at least it's not well documented. So my first question is: is there a good way to run unit tests in the browser by visiting localhost:8081, as with the old library, or does everything have to be done from the terminal?
I'm able to run the tests with pub run test:test --pub-serve=8081 -p firefox, but I'm just wondering if anyone has some "best practices" with this library to share since it's so new.
The test package only prints progress and test results to the console. The most recent versions of WebStorm/IntelliJ provide a GUI for running the tests.
The readme at https://pub.dartlang.org/packages/test is quite comprehensive about how to use the package.
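For browser tests specifically, the readme describes a pub-serve-based workflow; a rough sketch (the port and browser here are just examples) is:

    pub serve --port=8081
    pub run test --pub-serve=8081 -p firefox

The results are still reported to the console rather than to a web page, but the tests compile and run against the assets served by pub serve.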
I wouldn't say the new test package is that new anymore. I've been using it for more than a year, AFAIR, and almost all packages maintained by the Dart team (and probably most others) have already migrated to it.
I've been trying to write a JUnit test case for one of my Java classes, which creates a page with some given properties in CQ. For that, it needs to get references to SlingRepository and ResourceResolverFactory. I was using this to get an idea of how to achieve it. The document says that a POST to "http://$HOST:$PORT/system/sling/junit/" is used to execute tests on the server side. But in CQ I get a 404 error for this path.
Is there an alternative URL in CQ for this? I'd also really appreciate it if anyone could suggest a better approach.
Thanks
One approach is to use a Sling test runner to execute the JUnit tests via a browser, which is the approach you are mentioning. We had to first install the code in the org.apache.sling.junit.core JAR to add the code that allows the URL you listed to work. Once that code is there, the runner's built-in page at http://localhost:4502/system/sling/junit/ lets you run and display tests. My team did this for a while, but we soon moved to a different approach--using the IntelliJ IDE to develop the Java code for CQ and write the JUnit tests, then executing them within the IDE using the built-in JUnit test runner. The same approach works in Eclipse. For our team this approach was superior because it allowed developers to remain in context in the IDE without having to switch to a browser to run the tests.
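As an aside on the browser-based runner: once org.apache.sling.junit.core is installed, you can also drive it from the command line. A rough sketch, assuming default admin credentials and a hypothetical test-selector package:

    # List the tests the runner can see
    curl -u admin:admin http://localhost:4502/system/sling/junit/
    # Execute matching tests and return the results as JSON
    curl -u admin:admin -X POST http://localhost:4502/system/sling/junit/com.example.myapp.tests.json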
The key is being able to resolve the references to classes that are installed/available via CQ, such as the SlingRepository and ResourceResolverFactory classes--and other stuff we commonly used, such as the Resource, ResourceResolver, Node, and Session classes. We use a CQ extension (http://helpx.adobe.com/experience-manager/kb/HowToUseCQ5AsMavenRepository.html) to allow our CQ instance to act like a Maven repository. This allows us to export the CQ JARs so we can then reference them as dependencies in the Java projects we create whenever we may need to use some of the classes available via CQ itself.
Once we set up the project dependencies, we were able to write code--and corresponding unit tests--within the IntelliJ IDE. We were able to run the tests within the IDE, allowing developers to remain in context and work on the code that will run in CQ just like any other Java code (including things like running tests in debug mode or with code coverage, running single tests, running all tests in a class, using keyboard shortcuts to kick off tests, etc.). For us this approach had many advantages over the browser-based Sling test runner, so I recommend it.
Some potential considerations:
- Exporting from CQ as a Maven repo may not give the best performance--you may want to add things to your own Maven repo for faster access
- You may want to script some of the steps so adding project dependencies is not a manual process, but rather something done via an automated process
- You could even export all CQ JARs--or add some scripting to parse out and repackage only the public classes--and make any CQ class available to your Java projects
Is there a best way to run the Jasmine HTML reporter with Browserify-style code? I also want to be able to run this headless with PhantomJS, hence the need for the HTML reporter.
I've created a detailed example project which addresses Jasmine testing (among other things) - see https://github.com/amitayd/grunt-browserify-jasmine-node-example. There's a discussion at my blog post.
The approach there was to create one Browserify bundle for the main source code (where all the modules are exposed), and one for the tests, which marks the main source code as external (so the test bundle requires the modules from the main bundle). The tests can then be run either in PhantomJS or in a real browser.
I don't think there's a jasmine-browserify package yet, and it doesn't really match Browserify/NPM's way of doing things (avoiding global exports).
For now, I just include /node_modules/jasmine-reporters/ext/jasmine.js and jasmine-html.js at the top of my <head>, and require all my specs in a top-level spec_entry.js that I then use as the entry point for a Browserify bundle placed right after them in the <head>. (Note that if the entry point is not top-level, you'll have a bad time due to a long-standing, gnarly bug in Browserify.)
This plays nicely with jasmine-node as long as you don't assume the presence of a global document or window. However, you do have to remember to register your specs in that spec_entry.js, unless you want to hack Browserify to get it to crawl your directories for .spec.js files.
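For reference, a minimal spec_entry.js is just a list of requires (the spec filenames here are hypothetical); the specs themselves use the describe/it globals set up by the Jasmine scripts in the <head>:

    // spec_entry.js - entry point for the Browserify test bundle.
    // Each required spec registers itself with the global Jasmine env.
    require('./foo.spec.js');
    require('./bar.spec.js');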
I'd be very interested in a more elegant solution, though, that would transparently work with jasmine-node and browserify.
If you use grunt-watchify, there's no need to create spec_entry.js. Just use require in your specs, and then bundle them with grunt-watchify:
watchify: {
  test: {
    src: './spec/**/*Spec.js',
    dest: 'spec/spec-bundle.js'
  }
},
jasmine: {
  test: {
    options: {
      specs: 'spec/spec-bundle.js'
    }
  }
},
Then run your tests with
grunt.registerTask('test', ['watchify:test','jasmine:test']);
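With that setup, each spec simply requires whatever it tests; a small sketch (the module path and API are hypothetical):

    // spec/mathSpec.js - picked up by the spec/**/*Spec.js glob above
    var math = require('../src/math'); // hypothetical module under test

    describe('math.add', function() {
      it('adds two numbers', function() {
        expect(math.add(1, 2)).toBe(3);
      });
    });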
As all the above answers are a little outdated (which of course doesn't mean they no longer work), I'd like to point to https://github.com/nikku/karma-browserify, a preprocessor for the Karma runner. It combines test files with all their required dependencies; the resulting Browserify bundle is handed to Karma, which runs it based on your configuration. Be aware that you can choose any modern test framework (Jasmine, Mocha, ...) and browsers (PhantomJS, Chrome, ...). This is probably exactly what you need :)
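To give an idea, a minimal karma.conf.js sketch, assuming the karma-browserify, karma-jasmine, and karma-phantomjs-launcher plugins are installed (the globs and choices are just examples):

    // karma.conf.js
    module.exports = function(config) {
      config.set({
        frameworks: ['browserify', 'jasmine'],
        files: ['spec/**/*Spec.js'],
        preprocessors: {
          'spec/**/*Spec.js': ['browserify']
        },
        browsers: ['PhantomJS']
      });
    };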
You may also want to look into Karma. It's really simple to set up, and it will watch for changes and rerun your tests. Check out this sample project that uses Karma to test a browserify/react project; you just need to add a few dependencies and create a karma.conf.js file.
https://github.com/TYRONEMICHAEL/react-component-boilerplate
I'm just getting started with a project that combines GWT, Google App Engine and the Google Eclipse plugin. Where is the best place to store my tests? I normally keep my code organized Maven-style, with src/main/java, and tests in src/test/java. The default setup I get from the plugin dumped my source directly into src, which I'm not too fond of, but I'd prefer not to fight against the tools. What's the "standard" place to put unit tests in such a project?
Solution:
- create src/main/java, and move the existing code under there
- create src/test/java, and add your tests there
- go to Project -> Properties -> Java Build Path, and add the new locations as Source Folders
I've faced a kind of problem with GAE testing: some tests require appengine-testing.jar, which conflicts with the main appengine-api-xxx.jar of the project. That way, I was able to run tests for GAE, but it conflicted with a normal run/debug launch. To be able to run the app on my local machine, I had to remove appengine-testing.jar, and then a lot of compilation errors appeared in my test/ classes.
If you want my advice, put your test classes in another project (where you can use the JARs without conflict).
Otherwise, if you got it to work, please tell me how you did it.
Thanks a lot.
Put it where it pains you least.
GWT on Google App Engine is pretty new at this point; you are optimistic to expect there is a "standard" place, especially since you've already found an inconsistency in what the tools do.
Since you've already accepted the source starting at "src/", why not put the test source in "test/"? This is certainly standard in many contexts.
I think everyone here would agree that in order to be considered a professional software house there are a number of fundamental things you must have in place.
There is no doubt that one of these things is a build server; the question is how far you need to go.
What are the minimum requirements for the build server? (Somewhere to just compile?)
What is the ultimate goal for your build server? (Scheduled, source control integration, auto deployment to test / live servers)
Where is a good place to start assuming you have nothing at the moment?
It would be great if we could list out a few simple tasks that an amateur developer could take on board in order to set them on the right track to a fully functional build server.
It would also be good to hear about people that feel they have a "complete" system setup that performs all the functionality they require and how they went about setting it all up from scratch.
You can start by looking into CruiseControl.
There's also CruiseControl.NET if that's your poison.
Essentially though, you need the following ingredients:
A dedicated environment (Virtual Machine/server. Don't use a developer's machine, unless it's just you. Even then, run a VM if you can. Much easier to move it to a server when/if one becomes available in your organisation)
A source control system that supports labelled/tagged revisions (for example, Subversion+TortoiseSVN)
Build scripts. These can be batch files that launch devenv.exe or msbuild.exe from the command line, or you can use something like Ant or NAnt (see the sketch below)
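For the simplest case, such a build script can be a one-line batch file; a sketch (the solution name and configuration are hypothetical):

    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release

or, using the Visual Studio IDE's command-line mode:

    devenv MySolution.sln /build Release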
In this scenario, CruiseControl acts as the Continuous Integration server and can make sure that builds happen as you check in your code. This means you find out the build is broken sooner than if you only had nightly builds. You should probably also have nightly builds, though.
Hudson is a great CI server.
We run a farm locally, but we started by downloading hudson.war and doing
java -jar hudson.war
It integrates with SCM and bug tracking systems; it is really awesome.
You'll need some disk space if you want to keep old builds.
Enjoy - it is the most straightforward CI solution so far.
HTH,
Hubert.
If you're using CruiseControl, the place to start is an Ant build.xml that does the job manually.
You need a version control system that can do labelled check-outs.
You need JUnit tests that can be run with the Ant junit task and generate HTML reports.
I'd say you'd have to start by implementing a build strategy so you can build your code in a structured way - I use NAnt.
For a basic build server, use one of the CI offerings out there that monitors your source control and triggers a build whenever a change is detected, e.g. CruiseControl.
Once you get the basic build together, add the running of your unit tests after a successful build.
The most successful system I've had in place had 3 different builds:
- one that fired on a check-in - all this did was build the code
- an on-demand one that would build the application, generate the installer, and then put the installer onto a shared drive for the testers to pick up
- a daily build that fired at 10pm. This:
  - ran some code generation to build DB and C# code from a UML model
  - built the code
  - created a new build verification test user on a test Oracle instance
  - ran the application schema into the DB
  - fired off a bunch of unit tests
  - cleaned up the DB user (if the tests were successful)
  - ran coverage analysis to build a report of the unit code coverage
The software we used for this was NAnt, CruiseControl.NET, a custom code generation system, a custom app to build an Oracle schema, and NCover for the code analysis.
Start by having a read of Martin Fowler's excellent paper on Continuous Integration.
We built such a system for a major project (>2,000 kSLOC) and it proved invaluable.
HTH
cheers,
Rob
Cruise, Maven, Hudson etc. are all great, but it's always worth having a stopgap solution.
You should have a batch file, shell script, or simply written instructions that will allow you to run a build from any machine. We have had build servers become unavailable in the past, and the ability to switch quickly to another machine was invaluable!
The build machine doesn't need a high spec unless you have a monster project. We try to keep our build times down to 10 minutes (including unit tests), and we have a pretty big project.
Don't be tempted to create or write your own build system because "none of the tools out there are good enough". All modern build systems allow you to write plugins to do custom stuff.
I'm using CruiseControl.NET and an MSBuild build script.
I can use the build script manually to get the latest version of the codebase and build it very easily from the command line. (This is very handy if you are working on an application that consists of multiple solutions.)
Next to that, my CruiseControl.NET build server uses this build script as well. It checks at a regular interval whether changes have been committed to source control.
If that happens, CC.NET performs the 'get-latest' task that I've defined in the build script, builds everything, executes unit tests, and performs static code analysis (FxCop).
My 'build server' is just an old workstation: a Pentium 4, 3 GHz with 1 GB of RAM, and it does its job perfectly.
One additional thing that I would find interesting is the ability to automatically deploy a new version or build a setup.
I haven't done that yet, since I'm not sure whether it is a good idea, nor have I found a good strategy for doing so ...
I mean, is deploying a new version of some components into production for a mission-critical application a good idea? I don't think so ...
I think this is a good place to start:
http://confluence.public.thoughtworks.org/display/CC/Home (the home of CruiseControl)
At least, that's where I started looking when setting up my build server. :)
Roughly in order - minimal/least sophisticated through more sophisticated:
- able to get a specific set of source onto any machine
- able to build that source (with no problems)
- able to schedule a build each night, or at some other defined interval, with no user intervention
- one (or more) dedicated build servers (not shared as a QA or dev machine)
- able to do a build after each check-in/commit
- notify interested parties of the build status after a build
- provide build status at any time
- create installers as part of the build
- ability to deploy to live if the build is good
- run unit tests
- run tests on the product
- report the results of those tests
- static code analysis and reporting
- ...

And the list goes on and on
Don't be afraid to just start with batch files, shell scripts, or other ad-hoc means. People made perfectly good software before the CI craze; there were plenty of good processes before Hudson and CruiseControl. (I am not knocking those or others - I use Hudson, among others - but don't miss the point: these things are here to help you, not to become an overbearing process.)
I couldn't give you all the details about how we set our build server up (I was only involved at the start), but:
We started with an in-house system, implemented in ASP.NET and a .NET Windows Service, using NAnt to do the actual builds. Actually, most of the workflow was implemented in NAnt (e.g. emailing people, copying stuff around, etc.).
We moved to JetBrains TeamCity (there's a free cut-down version available), which is still serving us well.
We use it for builds triggered by a commit: these just build the binaries and run the unit tests. From here, we can do a complete build, which does the MSI as well. From there, we have system test builds that run more in-depth tests, across an environment built with virtual machines (with a separate domain controller, SQL Server box, etc.). When the system tests pass, the build is made available to our QA department for manual testing and some regression tests that we've not automated yet.
In the Java space I've tested most of the available build environments. The issue with automated builds is that you quite often end up spending a fair amount of time following them up. After we switched to the commercial Bamboo from Atlassian, we found that we have to spend a lot less time pampering the build box, which in our case turns out to be very good economy. Bamboo also supports clustering, so you can add inexpensive boxes as needs evolve.
Try to find something that fits in with your existing build practices - e.g. it's not going to be a good fit to use an Ant-based build server if you're using Maven!
Ideally, it should just be able to monitor your source control system, check out the code, build, run some tests, and publish the results without you being aware of it - or at least not till it reports a failure. Personally, I'd suggest Hudson (https://hudson.dev.java.net/) as a good starting point, as it's easy to get installed and running and has a decent UI.
We start by writing batch scripts that will run on the developer's machine. Once we have all the processes automated, we move them to the build server.
On the tools side, we are currently moving from CruiseControl to TFS.