I was hoping someone could explain how coverage.py works, since after reading the documentation I am still rather confused. I am attempting to measure code coverage of a TestCase class, and the results haven't been very logical; in particular, after commenting out large chunks of tests, the percent missing remains the same. I am working in the community edition of PyCharm and haven't been able to find any alternatives to coverage.py, so if you could recommend another option, that would be appreciated too.
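From the documentation, the basic command-line usage seems to be something along these lines (the test module name below is just a placeholder for my own tests):

    coverage run -m unittest tests.test_views
    coverage report -m
    coverage html   # writes an annotated report to htmlcov/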
I have spent hours looking for this and can't figure it out. I have a program I've made to which I would like to add voice recognition (all it does is a few simple commands like time, date, things like that... it's just for fun), and I know I have some form of SAPI on my computer because I had to include sapi.h to get the voice synthesis to work (and that works fine, by the way), but I can't figure out for the life of me how to use the voice recognition.
It appears people have already asked about C++ voice recognition on here, so I apologize if this is just a duplicate, but none of the others seemed to answer my question. Perhaps I'm just missing something (I'm fairly new to C++, so it's very possible), but I could really use some help here.
Thanks a bunch!
----edit----
The code in the link has an issue on my computer: it can't find the file "atlbase.h", which of course is causing all sorts of other problems (hopefully these will all be resolved when I fix the atlbase.h problem). I found this site, which seems to offer an explanation that shows up on quite a few other sites and appears to work, but I don't know how to get to the file that everyone is changing.
https://answers.unrealengine.com/questions/12757/error-cannot-find-atlbaseh-when-compiling-in-vs201.html
Could someone please tell me where the file they're all changing is located?
My Django app is in need of some automated testing.
Many of the views produce tabular data (from a generic list view). I have created fixtures to test some of the more complex cases that have been causing subtle bugs.
What should I be using to test the value in a specific table cell (or column)?
There seem to be a lot of testing tools/libraries out there: the Django test client, Selenium, Nose. A lot of them seem to be aimed at unit testing (whereas I am not finding many bugs at this level); I am looking more toward integration testing. Reading the documentation for all of these libraries is going to take a while before I find what I want.
So can someone advise which libraries/tools I should use to check the final output values in my list view's tabular output? I would like to give a URL and confirm that the value in a particular row/column of the returned page equals my expected value.
There seem to be a lot of testing tools/libraries out there: the Django test client, Selenium, Nose. A lot of them seem to be aimed at unit testing (whereas I am not finding many bugs at this level); I am looking more toward integration testing.
It works very well for integration testing. Maybe you would have found out if you had tried? Maybe it's time for you to (re)read about the correct hacker attitude.
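For the row/column check specifically, here is a minimal sketch using the Django test client (the URL, fixture name and table layout are made-up assumptions, and BeautifulSoup is just one way to pick the cell out of the HTML; adapt all of it to your own views):

    from bs4 import BeautifulSoup
    from django.test import TestCase


    class BookListViewTest(TestCase):
        fixtures = ['books.json']  # hypothetical fixture name

        def test_third_column_of_first_row(self):
            # Fetch the rendered list view by URL.
            response = self.client.get('/books/')
            self.assertEqual(response.status_code, 200)

            # Parse the returned HTML and pull out the cell we care about.
            soup = BeautifulSoup(response.content, 'html.parser')
            rows = soup.find('table').find_all('tr')
            first_data_row = rows[1]          # rows[0] is assumed to be the header row
            cells = first_data_row.find_all('td')

            self.assertEqual(cells[2].get_text().strip(), 'expected value')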
Also, I've been having loads of fun with ghost.py.
Reading the documentation for all of these libraries is going to take a while before I find what I want.
It's your job to do research as well. Believe it or not, it took me about 5 hours yesterday to check all the solutions, decide to go with ghost.py, and get it working nicely with Django (hence the gist upload!).
But yeah, if you don't want to learn anything new, then you're stuck at "not knowing how to do integration testing". If you want to learn how to do integration testing, then you have to do the research. There's no secret, my friend :)
I am seriously having an unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and one of them violates a constraint you don't know about when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail.
This is most troublesome because the thing under test is probably fine... the real risk and pain is in the setup code for the test itself.
So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloated. It'd be nice to have this turned on automatically for code running as part of a unit test.
Integration tests run slowly. I cannot understand how one integration test that saves one object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests that talk to an actual database and do DbUnit dumps after every test to run in about the same time! This is dumb.
It is hard to run all the unit tests, but not the integration tests, in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to open the file system and hunt down the stupid HTML file. What a pain.
Then, once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you have to go back and find it yourself. ARGGH!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it a system-wide default for save() (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in tests, you can put it in the environment-specific stanza in Config.groovy. I actually like this as a project-wide behavior.
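As a rough sketch (adjust to however your Config.groovy is organised), the test-only version would look something like this:

    // grails-app/conf/Config.groovy
    environments {
        test {
            grails.gorm.failOnError = true
        }
    }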
I'm sure there's a way you could turn failOnError on/off within a defined scope, but I haven't investigated how to do it yet (might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace and get right to the issues. The latest version of IntelliJ even collapses down the majority of metaclass cruft that isn't interesting when trying to fix issues.
If you haven't done this already to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have grails regenerate your configuration:
grails integrate-with --intellij
Then open the .ipr file that gets generated.
We are thinking about moving our tests from MSTest to XUnit.
Is there any migration application that takes an MSTest suite and migrates it to xUnit?
Also, if not, what should I look out for when doing this?
Thanks.
JD.
I moved quite a few tests recently. It depends on how many and what type of tests you're converting, and you didn't exactly kill yourself giving us details. In general, I think it's safe to assume that your average MSTest-minded shop won't be massively Test Infected and thus won't have delved into every dark corner of MSTest.
All the Assert.* methods and the basic test attributes are simple find-and-replaces. For the rarer ones, I'd generally steer towards assessing each case individually. Unless you're already an xUnit.net expert, you've got lots to learn, and this will help you.
Also, usage of Assert.Fail isn't a simple transformation. The other thing is the transformation of TestClassInitialize to IUseFixture - simple to do, but hard to automate.
If people are using Test References, you won't be able to remove the reference to the MSTest assembly (and you'll still need to have VS on your build server - and it will continue to randomly fail on the Shadow task, see my questions).
The biggest manual work for me was going through the 20 lines of boilerplate in a region at the top to see whether anyone actually used any of the custom attributes before deleting them.
The main thing that would have been a lot of work, had it not been for a CodeRush template, was converting ExpectedException to Assert.Throws. If you haven't got CodeRush or ReSharper on this job, you'd be stealing money from your employer.
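To illustrate the shape of that conversion (a hedged before/after sketch, not the output of any tool; the test class and scenario here are made up), an MSTest test like this:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ParsingTests
    {
        [TestMethod]
        [ExpectedException(typeof(FormatException))]
        public void Parse_RejectsGarbage()
        {
            int.Parse("not-a-number");
        }
    }

typically becomes this in xUnit.net:

    using System;
    using Xunit;

    public class ParsingTests
    {
        [Fact]
        public void Parse_RejectsGarbage()
        {
            // Assert.Throws pins the exception to the exact call,
            // which ExpectedException cannot do.
            Assert.Throws<FormatException>(() => { int.Parse("not-a-number"); });
        }
    }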
Consider Compare MSTest and xUnit
Are there any tools that can tell me what percentage of an XSL document actually gets executed during tests?
UPDATE
I could not find anything better than Oxygen's XSL debugger and profiler, so I'm accepting Mladen's answer.
This didn't exist back when this question was asked, but now there is ONE option for finding code coverage of XSLT documents:
http://code.google.com/p/cakupan/
I'll admit that I haven't used it yet, as I'm still gathering information right now, but as far as I'm aware, this is IT.
Not sure about code coverage itself, but you can find an XML debugger and profiler from Oxygen which might help you out.
If anyone is still interested, Saxon has a performance analysis feature that gives you a breakdown of each template and the number of times it is invoked (which is great for optimisation).
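For example, a command along these lines asks Saxon for an HTML timing profile (the jar and file names are just placeholders, and the -TP option may depend on your Saxon version, so check the documentation):

    java -jar saxon9he.jar -TP:profile.html -s:input.xml -xsl:stylesheet.xsl -o:output.xml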
This is what my output looks like: