Some unit tests not starting in IntelliJ, need help diagnosing issue

I have a huge group of unit tests that I'm running against our software where I work. I'm running them using IntelliJ IDEA. The tests are written in Spock and Groovy, but they run on top of JUnit, so anything that applies to JUnit should apply here as well... (in theory)
To figure out what my coverage is, I'm right-clicking the root of the code I want to run and simply selecting "Test in 'example' with Coverage".
IntelliJ then tries to run the 270-odd tests we have in this part of the project. The problem is that it isn't running a couple of them, and I can't figure out why for the life of me.
I've googled the issue and turned up nothing substantial. The list of tests in IntelliJ only tells me that these couple of tests didn't start; it doesn't give a reason at all. I tried checking the logs, but all they say about these particular tests is that IntelliJ is trying to open them, with no indication of whether that succeeded or why the tests aren't running.
Could someone point me in a useful direction? I need to expand my test base's coverage, and being able to see what code is actually covered is pretty critical...
Some Clarification:
The tests can be run on their own, and they do compute coverage for themselves if you do so. They only fail to start when I run them as part of a batch.
I'm pretty sure it's not a display issue, because I get several different indications that the tests were not run: besides the frozen yellow spinner, IntelliJ also warns me with a red "attention" bubble that the tests did not run.

Related

How do I get SonarQube to analyse Code Smells in unit tests but not count them for coverage reports?

I have a C++ project being analysed with the commercial SonarQube plugin.
My project has, in my mind, an artificially high code coverage percentage, because both the "production" source code lines and the unit test code lines are counted. It is quite difficult to write many unit test lines that are not run as part of the unit testing, so they give an immediate boost to the coverage reports.
Is it possible to have the Unit Test code still analysed for Code Smells but not have it count towards the test coverage metric?
I have tried setting the sonar.tests=./Tests parameter (where ./Tests is the directory with my test code). This seems to exclude the test code from all analysis, leaving smells undetected. I would rather check that the test code is of good quality than hope it is obeying the rules applied to the project.
I tried adding sonar.test.inclusions=./Tests/* in combination with the above. However, either I got the file path syntax wrong, or setting this variable causes a complete omission of the test code, so that it no longer appears under the 'Code' tab at all, as well as being excluded from analysis.
The documentation on Narrowing the Focus of what is analysed is not at all clear on what the expected behaviour is, at least to me. Any help would be greatly appreciated, as going through every permutation will be quite confusing.
Perhaps I should just accept the idea that with ~300 lines of "production" code and 900 lines of stubs, mocks, and unit tests, a value of 75% test coverage could mean running 0 lines of "production" code. I checked, and currently my very simple application is at about that ratio of test code to "production" code. I'd expect the ratio to move more towards 50:50 over time, but it might not.
One solution I found was to have two separate SonarQube projects for a single repository. The first you set up in the normal way, with the test code excluded via sonar.tests=./Tests. The second you make a "-test" project, where you exclude all your production code.
This adds some admin and setup but guarantees that coverage for the normal project is a percentage of only the production code and that you have SonarQube Analysis performed on all your test code (which can also have coverage tracked and would be expected to be very high).
I struggle to remember where I found this suggestion a long time ago. Possibly somewhere in the SonarQube Community Forum, which is worth a look if you are stuck on something.
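As a rough sketch of what the two sets of scanner properties might look like (directory names and project keys below are placeholders, and the exact behaviour may differ with the commercial C++ plugin, so treat this as a starting point rather than a verified configuration):

    # Project 1: "production" project - coverage reflects only production code
    sonar.projectKey=myapp
    sonar.sources=./Src
    sonar.tests=./Tests

    # Project 2: "-test" project - test code analysed as ordinary sources
    sonar.projectKey=myapp-test
    sonar.sources=./Tests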

Running Isolated Tests Without Relying on Terminal? IntelliJ IDEA

I understand that I should use the Surefire plugin for unit tests and Failsafe for integration tests. I can run unit tests with mvn test and integration tests with mvn verify, but this annoys me for two reasons:
I'd prefer to be able to select any test class (or method in that class) and run it individually with a simple click, rather than typing it into the terminal every time.
The terminal returns the test results in ugly black/white paragraphs, requiring me to sift through them. I'd much prefer to have the results returned in a visually organized manner, similar to what I get if I right-click on the test class in IntelliJ and click 'Run DemoTest'. This produces:
I find the error results much easier to sift through, for example it shows red/green #Test results on the left, and on the right it cleanly organizes the error into
Expected : 3
Actual : 1
I'm sure there are advantages to using the terminal for automated test runs later on, closer to production, but during development I don't find the terminal conducive to my tinkering.
How do I benefit from IntelliJ's visual feedback of test results, while simultaneously ensuring unit & integration tests are run separately, and preserving my freedom to pick and choose which test classes and test methods I can run at any time?
I'm assuming I can't have my cake and eat it too. Please explain.
If you are using the IntelliJ "Maven Projects" tool window, you can very easily toggle the execution of Maven-integrated tests on and off.
Via "Run/Debug Configurations" you can create test executions that match your reqirement for a comfortable UI.
After these steps, there is a new entry in the "Run/Debug Configurations" drop-down list. When you start the new JUnit test configuration, the defined tests are executed and the results are presented in exactly the same manner as the screenshot in your question.
The options in my second screenshot allow a very flexible definition of the scope. You don't have to go to every Java file and click the green arrows in the editor view.
This configuration isn't tied to any Maven configuration, and you can use it at any time in your coding process.
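For reference, the Maven side of the split described in the question usually relies on naming conventions: Surefire picks up unit tests (e.g. *Test classes) during mvn test, while Failsafe picks up integration tests (e.g. *IT classes) during mvn verify. A minimal pom.xml sketch, with plugin versions omitted and the class-naming pattern assumed rather than taken from the question:

    <build>
      <plugins>
        <!-- Surefire: runs unit tests (*Test.java by default) in the test phase -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
        </plugin>
        <!-- Failsafe: runs integration tests (*IT.java by default) in the verify phase -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

With that in place, the IntelliJ run configurations above only decide which classes get run and how the results are displayed; they don't interfere with the Maven phase split.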

Mark job as unstable when all tests skip

I have a test scenario where it's possible that, intermittently, all tests within a test run will be skipped (intentionally). In this case I want Jenkins to mark the build as UNSTABLE. At the moment it marks the job as PASSED, which causes issues when we want quick visual feedback (via a dashboard) on which jobs need attention, as all we see are green jobs.
Background:
Tests are written in Python 2.7.
Test runner used is Nose.
Test results are output using the --with-xunit flag in Nose.
It's a single job whose sole purpose is to run the tests.
Hoping there is a solution, as I've yet to find an obvious one. Cheers.
I would suggest a post-build Groovy script to investigate the results and mark the build as unstable. Take a look at this similar question:
Jenkins Groovy Script finding null testResultAction for a successful run
The plug-in page (which contains a lot of examples):
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
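As a rough sketch of what such a post-build script could look like (untested here; the manager object and badge methods come from the Groovy Postbuild plugin, and the exact test-result classes depend on your Jenkins version, so verify the names before relying on them):

    // Groovy Postbuild step: if every published test was skipped,
    // downgrade the build result to UNSTABLE and explain why.
    def testResult = manager.build.getAction(hudson.tasks.junit.TestResultAction.class)
    if (testResult != null) {
        int total = testResult.totalCount
        int skipped = testResult.skipCount
        if (total > 0 && skipped == total) {
            manager.buildUnstable()
            manager.addWarningBadge("All ${total} tests were skipped")
        }
    }

This assumes the Nose xunit XML is already being published via the JUnit result publisher, so that a test-result action exists on the build.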

What is the best way to test Grails using IDEA?

I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and one of them violates a constraint you didn't know about when you created the instance... it just doesn't save. Naturally, the test code afterward is going to fail.
This is most troublesome because the thing under test is probably fine... the real risk and pain lies in the setup code for the test itself.
So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloated. It'd be nice to have this turned on automatically for code running as part of a unit test.
Integration tests run slowly. I cannot understand how one integration test that saves one object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests that talk to an actual database and do DbUnit dumps after every test to run in about the same time! This is dumb.
It is hard to run all the unit tests but not the integration tests in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to go digging through the file system to hunt the stupid HTML file down. What a pain.
Then once you've found the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you have to go back and find it yourself. ARGGH!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it a system-wide default for save() (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in tests, you can put it in the environment-specific stanza in Config.groovy, as shown below. I actually like it as a project-wide behavior.
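A minimal sketch of that stanza (placed in Config.groovy, assuming you only want the behaviour while tests run):

    // Config.groovy
    environments {
        test {
            grails.gorm.failOnError = true
        }
    }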
I'm sure there's a way you could turn failOnError on/off within a defined scope, but I haven't investigated how to do it yet (might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace to get right to the issues. The latest version of IntelliJ even collapses the majority of metaclass cruft that isn't interesting when trying to fix issues.
If you haven't done this already to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have grails regenerate your configuration:
grails integrate-with --intellij
Then open the .ipr file that gets generated.

How to deal with code coverage?

The other day we had a heated discussion between developers and project leads about code coverage tools and the use of the corresponding reports.
Do you use code coverage in your projects, and if so, why? If not, why not?
Is code coverage a fixed part of your builds or continuous integration, or do you just use it from time to time?
How do you deal with the numbers derived from the reports?
We use code coverage to verify that we aren't missing big parts in our testing efforts. Once per milestone or so, we run a full coverage report and spend a few days analyzing the results, adding test coverage for the areas we missed.
We don't run it every build because I don't know that we would analyze it on a regular enough basis to justify that.
We analyze the reports for large blocks of unhit code. We've found this to be the most efficient use. In the past we would try to hit a particular code coverage target, but after a certain point the returns diminish sharply. Instead, it's better to use code coverage as a tool to make sure you didn't forget anything.
1) Yes, we do use code coverage.
2) Yes, it is part of the CI build (why wouldn't it be?).
3) The important part: we don't look for 100% coverage. What we do look for is buggy/complex code; that's easy to spot from your unit tests, and the devs/leads will know the delicate parts of the system. We make sure the coverage of such areas is good and increases over time, rather than decreasing as people hack in more fixes without the requisite tests.
Code coverage tells you how big your "bug catching" net is, but it doesn't tell you how big the holes are in your net.
Use it as an indicator to gauge your testing efforts but not as an absolute metric.
It is possible to write tests that give you 100% coverage and yet do not verify anything at all.
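A contrived Spock example of that point (hypothetical names; the feature method executes every line of add() yet never checks its result, so line coverage reports 100% while nothing is actually verified):

    import spock.lang.Specification

    class AddSpec extends Specification {
        int add(int a, int b) { a + b }

        def "covers add but verifies nothing"() {
            when:
            add(2, 2)              // every line of add() is now "covered"

            then:
            noExceptionThrown()    // passes no matter what add() returns
        }
    }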
The way to look at code coverage is to see how much is NOT covered and find out why. Code coverage simply tells us which lines of code are being hit when the unit tests run; it does not tell us whether the code works correctly. 100% code coverage is a nice number, but in medium/large projects it is very hard to achieve.
I like to measure code coverage on any non-trivial project. As has been mentioned, try not to get too caught up in achieving an arbitrary/magical percentage. There are better metrics, such as riskiness based on complexity, coverage by package/namespace, etc.
Take a look at this sample Clover dashboard for similar ideas.
We run it as part of the build, and we check that coverage does not drop below some value, like 85%.
I also automatically generate a list of the top 10 largest uncovered methods, to know what to start covering.
Many teams switching to Agile/XP use code coverage as an indirect way of gauging the ROI of their test automation efforts.
I think of it as an experiment - there's a hypothesis that "if we start writing unit tests, our code coverage will improve" - and it makes sense to collect the corresponding observations automatically via CI, report them in a graph, etc.
You use the results to detect rough spots: if the trend toward more coverage levels off at some point, for instance, you might stop to ask what's going on. Perhaps the team has trouble writing tests that are relevant.
We use code coverage to ensure that we have no major holes in our tests, and it's run nightly in our CI.
Since we also have a full set of Selenium web tests that run all the way through the stack, we do an additional coverage trick:
We set up the web application with coverage instrumentation running, then run the full automated battery of Selenium tests. Some of these are smoke tests only.
When the full suite of tests has been run, we can identify suspected dead code simply by looking at the coverage and inspecting the code. This is really nice when working on large projects, because big branches of dead code can accumulate over time.
We don't really have any fixed metrics on how often we do this, but it's all set up to run with a keypress or two.
We do use code coverage; it is integrated into our nightly build. There are several tools to analyze the coverage data; commonly they report:
statement coverage
branch coverage
MC/DC coverage
We expect to reach 90%+ statement and branch coverage. MC/DC coverage, on the other hand, gives the test team a broader picture. For any uncovered code, we require justification records.
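A hedged illustration of how those criteria differ, using a made-up guard with two conditions:

    // Illustrative only: one decision made up of two conditions.
    boolean shouldShip(boolean paid, boolean inStock) {
        if (paid && inStock) {
            return true
        }
        return false
    }
    // Statement coverage: (true, true) plus any one failing input executes every line.
    // Branch coverage: the if must evaluate to both true and false at least once.
    // MC/DC: each condition must be shown to independently flip the outcome,
    // e.g. compare (true, true) with (false, true) and with (true, false),
    // so at least three test cases are needed for this one decision.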
I find it depends on the code itself. I won't repeat Joel's statements from SO podcast #38, but the upshot is 'try to be pragmatic'.
Code coverage is great in core elements of the app.
I look at the code as a dependency tree. If the leaves work (e.g. basic UI, or code calling a unit-tested DAL) and I've tested them as I developed or updated them, there is a good chance they will keep working; and if there's a bug, it won't be difficult to find or fix, so the time taken to mock up some tests would probably be time wasted. Yes, there is a risk that updates to the code they depend on may affect them, but again, it's a case-by-case thing, and unit tests for the code they depend on should cover it.
When it comes to the trunk or branches of the code, yes, code coverage of functionality (as opposed to each function) is very important.
For example, I was recently on a team that built an app that required a bundle of calculations to work out carbon emissions. I wrote a suite of tests that covered each and every calculation, and in doing so was happy to see that the dependency injection pattern was working fine.
Inevitably, due to a government act change, we had to add a parameter to the equations, and all 100+ tests broke.
I realised that in updating them, over and above testing for a typo (which I could have tested once), I was unit/regression-testing mathematics, so I ended up spending the time building another area of the app instead.
1) Yes, we do measure simple node coverage, because:
it is easy to do with our current project* (Rails web app)
it encourages our developers to write tests (some come from backgrounds where testing was ad-hoc)
2) Code coverage is part of our continuous integration process.
3) The numbers from the reports are used to:
enforce a minimum level of coverage (95%, otherwise the build fails)
find sections of code which should be tested
There are parts of the system where testing is not all that helpful (usually where you need to make use of mock-objects to deal with external systems). But generally having good coverage makes it easier to maintain a project. One knows that fixes or new features do not break existing functionality.
*Details for setting up required coverage for Rails: Min Limit 95 Ahead