Does anyone know of a way to do continuous integration with R programming? I'm aware of tools like the svUnit package to do the unit tests, but has anyone tried to run these with Hudson/Jenkins?
I do not see any particular problem. These things tend to be scripted, so you could just:
point to the top of your repository,
N minutes after each check-in, loop over the source directories,
invoke R CMD check on each (a minimal sketch of such a driver script follows this answer).
Your package has to be set up to use unit tests, for which you can use:
RUnit, which was the initial unit-testing package for R; it is widely used,
testthat, a newer package by Hadley, used by many of his packages,
svUnit by Philippe, which AFAIK never caught on quite as much as the other two.
That is really not any different from continuous integration with compiled languages. Your question is really about how to do unit testing within R, and that question has been covered before.
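For what it's worth, here is a minimal sketch of that scripted loop in Python; the repository path and package layout are assumptions, so adapt them to your setup. The CI server's SCM trigger takes care of the "N minutes after each check-in" part; the script only needs to return a non-zero exit code to fail the build.

    #!/usr/bin/env python
    """Minimal CI driver sketch: run `R CMD check` on every package
    directory found under the repository root. Paths are assumptions."""
    import subprocess
    import sys
    from pathlib import Path

    REPO_ROOT = Path("/path/to/your/repository")  # hypothetical location

    failures = []
    for pkg in sorted(REPO_ROOT.iterdir()):
        # treat any directory containing a DESCRIPTION file as an R package
        if pkg.is_dir() and (pkg / "DESCRIPTION").exists():
            result = subprocess.run(["R", "CMD", "check", str(pkg)])
            if result.returncode != 0:
                failures.append(pkg.name)

    if failures:
        print("R CMD check failed for:", ", ".join(failures))
        sys.exit(1)  # non-zero exit marks the Hudson/Jenkins build as failed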
At the office we have been using Hudson/Jenkins for quite a while. I contributed the integration of svUnit with Jenkins, so I would strongly advise you to try svUnit before anything else.
Have a look at the two libraries I maintain, logging and delftfews, or at my attempt to follow zoo and redistribute it on github.
I have not been doing housekeeping recently, so the three scripts (in zoo, logging, delftfews) are all slightly different. The one in my zoo version will stop if any test fails; this is practical when you are running R CMD check, but probably less of a good idea when doing continuous integration.
Hudson/Jenkins supports running bash scripts, and I think you can use that as the entry point into your R world. In R, a simple way to keep the results is to use sink("toYourFile.txt") and then use the CI's result-display function to show toYourFile.txt.
I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of a set of models, one per component. Each test model instantiates a component, and connects it to some very simple helper classes that provide the necessary inputs and outputs.
This works fine when using it interactively in the OMEditor, but I'm looking for a more automated solution with pass/fail criteria etc.
Should I start writing .mos scripts, or is there another/better way?
Thanks.
I like how the OpenModelica testing results look; see
https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored, but I have no idea whether that comes from a testing library or is self-made.
In general, I find it kind of sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem); instead everybody seems to cook up their own solution (or tries to), and all you find is some Modelica conference papers (often without a trace of an implementation) or an unmaintained library of unknown status.
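The core of that kind of regression testing is at least easy to sketch: simulate, export the result trajectories (e.g. to CSV), and compare them point-wise against stored reference trajectories within a tolerance. A rough Python sketch, assuming both files share the same time grid and the file/signal names are made up:

    """Sketch: compare a new simulation result against a stored reference.
    Both files are assumed to be CSVs with identical sampling."""
    import csv

    def read_signal(path, column):
        """Return the named column of a CSV file as a list of floats."""
        with open(path, newline="") as f:
            return [float(row[column]) for row in csv.DictReader(f)]

    def signals_match(reference, actual, rel_tol=1e-3, abs_tol=1e-6):
        """True if both trajectories agree point-wise within tolerance."""
        if len(reference) != len(actual):
            return False
        return all(abs(r - a) <= max(abs_tol, rel_tol * abs(r))
                   for r, a in zip(reference, actual))

    # hypothetical file and signal names
    ref = read_signal("reference/AD_DA_conversion.csv", "resistor.v")
    new = read_signal("results/AD_DA_conversion.csv", "resistor.v")
    print("PASS" if signals_match(ref, new) else "FAIL")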
Off the top of my head, I found/know of (some already linked in other answers here)
OM testing
JModelica testing (seems to only test for compiler errors?)
Xogeny test (Some tests of the library itself fail for me. Also, does not seem to include a test runner)
MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
UnitTesting (apparently some kind of predecessor of XogenyTest. Also, no sources/implementation found)
Optimica Testing Toolkit (apparently a commercial product by Modelon)
SystemModeler VerificationTest
buildingspy Python package, for regression testing among other things. Under the umbrella of the Berkeley Modelica Buildings Library. (Simulation only with Dymola)
Modelica_Requirements library -- define requirements for simulation. (claimed to be open source and implemented, but apparently not available anywhere)
... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of those to choose, which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.
I prefer using a .mos script; it works pretty well when you further integrate your test framework into a continuous integration tool. BuildingPy is a good example of this; though it isn't wired into CI tools itself, it's still a good tool.
Here's a reference for a good framework design:
UnitTesting: A Library for Modelica Unit Testing
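As a sketch of how a .mos script can be wired into a CI job: the job just runs the OpenModelica compiler on the script and turns the output into a pass/fail exit code. The script name and the failure markers below are assumptions, not a fixed OpenModelica convention; adjust them to however your .mos script reports results.

    """Sketch: run an OpenModelica .mos test script from a CI job and fail
    the build if the script reports an error. The file name is hypothetical."""
    import subprocess
    import sys

    proc = subprocess.run(
        ["omc", "runTests.mos"],        # omc executes the .mos script
        capture_output=True, text=True
    )
    print(proc.stdout)
    print(proc.stderr, file=sys.stderr)

    # Treat a non-zero exit or an "Error" message as a failed build; exactly
    # what to look for depends on how your .mos script reports its results.
    failed = proc.returncode != 0 or "Error" in proc.stderr
    sys.exit(1 if failed else 0)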
If you have Mathematica and SystemModeler you can run the simulation from Mathematica and use the VerificationTest "function" to test:
VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01].
Multiple tests can then be simulated in a TestReport[].
I work with a team that develops MPI-based C++ numerical applications. The group uses cxxtest for constructing individual unit tests or small suites, but 1) there are some complications aggregating across directories with cxxtest's usual features, and 2) there are some integration tests that are simply easier to implement "from the outside" by launching mpirun from a single Python thread.
We would like to use py.test as the glue that holds this together, since it advertises itself as being able to run non-Python tests (though I could be convinced to jump to nose).
Can anyone get me started on the best practice for doing this? Again, since it seems to be one of the advertised features of py.test I'd love to go about it the way that was originally envisioned.
Thanks,
Eli
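One way to get started with the "from the outside" approach is to treat each MPI integration test as an ordinary pytest test function that launches mpirun as a subprocess and asserts on its exit code; the binary name, rank count, and timeout below are assumptions. pytest then collects and reports these alongside any pure-Python tests, and a CI server only has to invoke pytest.

    """Sketch: driving an MPI-based C++ test binary from pytest.
    The binary path and arguments are hypothetical placeholders."""
    import shutil
    import subprocess
    import pytest

    MPIRUN = shutil.which("mpirun")

    @pytest.mark.skipif(MPIRUN is None, reason="mpirun not on PATH")
    def test_solver_integration():
        # launch the compiled test binary under MPI with 4 ranks
        proc = subprocess.run(
            [MPIRUN, "-np", "4", "./build/solver_tests"],
            capture_output=True, text=True, timeout=300
        )
        # surface the program's output in the pytest report on failure
        assert proc.returncode == 0, proc.stdout + proc.stderr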
This guide from Feb 2014 has some worked examples of using pytest to run C tests, perhaps it will help.
Check out pytest-cpp; it might be exactly what you need.
You might also be interested in Saru; it's a minimal testing framework that lets you write your tests in Python and C++:
https://github.com/squishyhumans/saru/wiki/Writing-tests
In one of my Django projects I have a suite of unit tests based on the TransactionTestCase class (which takes much longer than TestCase). It is impossible to run the tests after each change in the code, because running them all takes more than half an hour. Some time ago we looked for an easy continuous integration tool that could (at least) run the tests on a test server and email errors to the team members (we have a code repository, of course, and we don't need auto-deployment at the moment). Do you have working solutions or ideas for how to accomplish this?
We wrote a 'super extra simple CI server' which does nothing more than run the tests and send email reports (it is integrated with our code repository). But since we have had some problems with our not-ideal simple tool recently, I'm wondering whether you have successfully implemented similar scenarios in your working environment?
I'm looking for something lightweight, easy to install and use.
Disclaimer: I don't know Django. But I do know that I use Hudson as my continuous integration tool for a number of languages and platforms. I found it easy to install and configure on both Windows and Linux (set and forget) and was impressed with the number of plugins available.
Basically, if what you want to do can be automated by a script file, then you can use Hudson. It really is worth checking out.
It took me only a few minutes to set it up so that I get an email if, and only if, something goes wrong, although you might want to do something else (for which there probably exists a plugin). Hudson also plays well with other tools like Bugzilla, all major version control tools, etc.
Have you considered having two kinds of tests - basic and advanced - and adding an extra Django command that runs only the basic tests, which are fast? That way you can do basic testing on small changes and run the full test suite only when you are about to commit/push changes.
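On recent Django versions (1.10+) you don't even need a custom command for that split; the test runner's tag support covers it. A rough sketch, with made-up test class names and the slow, transaction-based tests tagged accordingly:

    # tests.py -- sketch of splitting fast and slow tests with tags
    from django.test import TestCase, TransactionTestCase, tag

    class BasicValidationTests(TestCase):        # fast: runs inside a transaction
        def test_slug_is_generated(self):
            ...  # placeholder

    @tag("slow")
    class PaymentFlowTests(TransactionTestCase): # slow: real transaction behaviour
        def test_full_checkout(self):
            ...  # placeholder

Then run `python manage.py test --exclude-tag=slow` on every change, and the full suite (including the slow TransactionTestCase-based tests) only on the CI server or before a push.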
Our installer program is going to be installing a number of system services, under both Windows and UNIX, using JavaServiceWrapper. There will be a class responsible for creating JavaServiceWrapper config files, installing the services, etc.
Can I have some suggestions on how to unit-test this class?
I would not struggle too much with unit testing such a class, rather I would go for integration / smoke tests. You need these anyway to verify that your installation works properly - preferably not only on your own machine, but also in the target environment, in real life, before you are about to demonstrate it to your boss and most important client :-)
Update: I assume that the class in question would not contain much complicated logic, rather just gluing together different pieces supplied by other APIs. However, if this is not the case, and you feel you can't easily test a significant part of its functionality via integration tests, you can still try unit testing with good ol' mocks and/or dependency injection.
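If you do go the unit-test route, the usual trick is to keep the class from touching the real filesystem or service manager directly: inject those collaborators and assert on the interactions. A rough sketch in Python with unittest.mock (the class and method names are invented purely for illustration; the same shape works with any Java mocking framework):

    """Sketch: unit-testing a 'glue' installer class by injecting fakes.
    ServiceInstaller, write_config and install_service are hypothetical names."""
    from unittest.mock import Mock

    class ServiceInstaller:
        def __init__(self, filesystem, wrapper):
            self.filesystem = filesystem    # writes config files
            self.wrapper = wrapper          # talks to JavaServiceWrapper

        def install(self, name, main_class):
            config = f"wrapper.java.mainclass={main_class}\n"
            self.filesystem.write_config(f"{name}.conf", config)
            self.wrapper.install_service(name, f"{name}.conf")

    def test_install_writes_config_and_registers_service():
        fs, wrapper = Mock(), Mock()
        ServiceInstaller(fs, wrapper).install("my-service", "com.example.Main")
        fs.write_config.assert_called_once_with(
            "my-service.conf", "wrapper.java.mainclass=com.example.Main\n")
        wrapper.install_service.assert_called_once_with(
            "my-service", "my-service.conf")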
Lol! Found this last night: Environmentally Friendly Deployment. I really think the more complex your deployment is, the more you need to validate your environment.
I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like 'option strict does not allow IFoo to cast to type IFoo'. I believe I have finally fixed it, but after the bug came back and I struggled to make it go away again, with little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs, that most people are using NUnit, xUnit, or MbUnit. We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing and have some training planned, but first I would like to come up with a set of standard tools and best practices. To this end I've been looking online to work out the right infrastructure for both a build server and dev machines. I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest, and even have some links to people moving TO MSTest from NUnit.
This is making me re-think my decision, so I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Any known limitations if I want to integrate with a build/CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the third-party frameworks to each other without even giving MSTest much of a chance. Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Any limitations here that I should be aware of before committing to MSTest?
I should also note that we are using VSS =(. I'm hoping we can axe this soon, hopefully as part of, maybe even the first step of, setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .vsmdi files are particularly annoying; just running MS Test creates anywhere from one to five of them and adds them to the solution file. That means churn on your .sln in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files - tracking test-run history and such - but I don't find them particularly useful for anything but a single developer. You should use your build server and CI for that sort of thing!
Second, you either must have Team Foundation Server to run your unit tests as part of CI, or you have to have a copy of Visual Studio installed on your build server if you use, for example, Cruise Control.NET. See this Stack Overflow question for details.
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully at our company. We are currently setting up standardised build processes, and so far we have had good success with TeamCity. For continuous integration we use out-of-the-box TeamCity configurations; for the actual release builds we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration and also because all our devs can use it without installing any 3rd-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
As to what I suspect your problem might be: it sounds like an obscure problem I have had before, where incorrect DLL references (e.g. adding explicit references via Browse to projects in your solution instead of using project references) lead to out-of-date problems that only show up after clean checkouts or builds.
The other really suspect issue I have found before is when you have some visual component or control with a public property of some custom type that is being serialised in the form's .resx file. I typically need to flag such properties with an attribute like DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden), which means the IDE will not try to generate setters for the property value (which is typically some object graph). Just a thought - could be way out.
I trust the tools, and they don't really lie about there being a genuine problem; they only misrepresent it or report it as something completely obscure. It sounds to me like that is what you have: the error message doesn't make sense if all is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.