I'm confused as to how to enable code coverage for unit tests while writing a Dart application. I have a number of non-web based tests -- they test methods of my domain model used by my web app. These methods don't need to run in the browser for unit testing.
Right now, I run them in the IntelliJ IDEA Community Edition IDE by creating a run configuration for "Run all tests in /test directory".
I can also run them via the terminal like this: "pub run test".
I've read the following guide:
https://dart-lang.github.io/observatory/code-coverage.html
However, I cannot seem to get anything to work. Following the guide above, I assume I have to do this from the terminal:
$ collect_coverage --uri=http://... -o coverage.json --resume-isolates
But I can't see anywhere how to get that URI. The documentation says "where --uri specifies the Observatory URI emitted by the VM."
Okaaaay... Where do I see the URI emitted by the VM?
Reading other sections of the guide, I see that I can start standalone Dart apps in the Observatory like so:
$ dart --observe <script>.dart
I have tried selecting one of the unit test files and invoking it using that command, like this:
$ dart --observe test/die_roll_test.dart
Observatory listening on http://127.0.0.1:8181/
00:00 +0: can be constructed from a String
00:00 +1: should create normalized DieRoll
00:00 +2: can denormalize
00:00 +3: can add
00:00 +4: can subtract
00:00 +5: can multiply
00:00 +6: can divide
00:00 +7: All tests passed!
vm-service: isolate(297422250) 'die_roll_test.dart:main()' has no debugger attached and is paused at exit. Connect to Observatory at http://127.0.0.1:8181/ to debug.
Visiting URL http://127.0.0.1:8181 in my browser yields an empty page with the title 'Dart VM Observatory'.
In any case, I can try issuing the code coverage command using that URL:
$ collect_coverage --uri=http://127.0.0.1:8181 coverage.json --resume-isolates
This spits out a huge glob of text, but I don't see a coverage.json file created anywhere.
Ideally, this could be integrated with the IDE so I can choose coverage as an option when running the unit tests. But even if I have to live with command-line utilities, how do I get it to work?
If it helps, I'm running on Mac OS X 10.13 High Sierra, with Dart 2.0.0 and test 1.3.0.
Follow this issue: https://github.com/dart-lang/test/issues/36
We hope to make this easier. It is possible today, but it's not nicely aligned with our test package.
You could also try this package:
https://pub.dartlang.org/packages/test_coverage
I have no experience, but it might be worth a shot!
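If you want to try the manual route with package:coverage in the meantime, the workflow is roughly the sketch below (untested here, so double-check the flag names against the coverage package's README; the port and test file are just the ones from your question):
$ pub global activate coverage
$ dart --pause-isolates-on-exit --enable-vm-service=8181 test/die_roll_test.dart &
$ collect_coverage --uri=http://127.0.0.1:8181/ -o coverage.json --wait-paused --resume-isolates
$ format_coverage --lcov --in=coverage.json --out=lcov.info --packages=.packages --report-on=lib
--pause-isolates-on-exit keeps the VM alive after the tests finish so collect_coverage (with --wait-paused) can pull the data before the isolate exits, and --resume-isolates lets it shut down afterwards. The catch is that you have to do this per test file rather than through pub run test, which is exactly the "not nicely aligned with our test package" part; the resulting lcov.info can then be rendered with genhtml or similar tools.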
Related
I'm writing a project to learn how to use Rust and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built, I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code, as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well.]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized all the green was indicative not of success but of a failure to even run the tests. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your tests because the Rust compiler doesn't know about them. You need to add mod aggregates; to your main.rs:
mod aggregates;
fn main() {
println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, to run a specific test you need to use the name of the test, not the name of the file:
cargo test min_works
cargo test aggregates
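For reference, once the module is declared, a minimal aggregates/mod.rs with an inline test could look like the sketch below. Only the test name min_works comes from this thread; the min function itself is just an assumed placeholder for whatever your aggregates actually compute.
// aggregates/mod.rs -- placeholder sketch
pub fn min(values: &[f64]) -> Option<f64> {
    values.iter().cloned().fold(None, |acc, v| match acc {
        Some(m) if m <= v => Some(m),
        _ => Some(v),
    })
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn min_works() {
        assert_eq!(min(&[3.0, 1.0, 2.0]), Some(1.0));
    }
}
Anything you pass to cargo test (or after --) is treated as a name filter against the test paths, not as a file path, which is why cargo test -- src/formatters/mod.rs matched nothing and reported "running 0 tests".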
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run
I'm learning how to code, and I'm at a lesson where I'm building a "messaging app" called FlashChat. I've done everything according to my class, but a couple of days ago I pressed Command-B after running the app on the simulator, and the following issue keeps showing up; I haven't been able to fix it:
CodeSign /Users/XXXXXXXXXXXXX/Library/Developer/Xcode/DerivedData/Flash_Chat_iOS13-bccxjpmgzvggxgetotmpidocaviy/Build/Products/Debug-iphonesimulator/gRPC-C++/grpcpp.framework (in target 'gRPC-C++' from project 'Pods')
cd /Users/XXXXXXXXXXXXX/Desktop/Development/Flash-Chat-iOS13/Pods
export CODESIGN_ALLOCATE=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/codesign_allocate
Signing Identity: "-"
/usr/bin/codesign --force --sign - --timestamp=none /Users/XXXXXXXXXXXXX/Library/Developer/Xcode/DerivedData/Flash_Chat_iOS13-bccxjpmgzvggxgetotmpidocaviy/Build/Products/Debug-iphonesimulator/gRPC-C++/grpcpp.framework
/Users/XXXXXXXXXXXXX/Library/Developer/Xcode/DerivedData/Flash_Chat_iOS13-bccxjpmgzvggxgetotmpidocaviy/Build/Products/Debug-iphonesimulator/gRPC-C++/grpcpp.framework: resource fork, Finder information, or similar detritus not allowed
Command CodeSign failed with a nonzero exit code
I have tried everything from previous threads, from checking Keychain to verifying files, checking bundle identifiers, signing licenses, and all other suggestions related to anything written in my issue. Any other ideas?
Thanks!!!
Follow these steps:
1. Update all the pods.
2. Run this command from the terminal:
$ xattr -cr /Users/user_name/Desktop/Lab/Flash-Chat-iOS13/Pods/gRPC-C++
3. Go to ~/Library/Developer/Xcode/DerivedData and delete all the folders in it. The issue might be that Xcode is trying to run an older version of the code.
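If you'd rather do step 3 from the terminal, the equivalent is roughly this (it wipes all of DerivedData, so the next build will take longer while Xcode rebuilds its caches):
$ rm -rf ~/Library/Developer/Xcode/DerivedData/*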
I am also working through the same Flash Chat app from the Udemy class, and the above steps fixed the issue for me.
Start by trying step 3 directly. If that doesn't work, try all three steps in sequence.
I've been doing some Thunderbird development lately, and I want to write some unit tests for my XPCOM module. I notice that there is a tool called mach in Mozilla's tree which can automatically run a set of test cases at once, but when I run "mach help" on Windows, it shows "is_platform_supported - Must have a Firefox, Android or B2G build."
Since I'm using Thunderbird, does that mean I can't use mach to run unit tests? If not (which I hope), how can I change my config to use that tool?
Any replies will be appreciated!
Thanks in advance.
At the moment, mozilla/mach xpcshell-test does not work (see Bug 934170). As a workaround, use the following:
First, switch to the object directory after building Thunderbird.
To run all xpcshell tests, use: make xpcshell-tests
To run a single xpcshell test, use (e.g.): make xpcshell-tests TEST_PATH=mailnews/news/test/unit/test_server.js
To run all xpcshell tests in a given directory, use (e.g.): make xpcshell-tests TEST_PATH=mailnews/news
I am running tests through Jenkins on a windows box. In my "Execute Windows Batch command" portion of the project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created, however it is empty, and I am not getting any Test Results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes options, and got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible. Enabling them at the same time will produce exactly the result you got.
If you want to keep using nosetests, you need to execute the tests sequentially or find other means of parallelizing them, e.g. by executing multiple parallel nosetests commands, which is what I do at work; see the sketch below.
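For example, a rough sketch of that split; each invocation can run in its own Jenkins job or executor, or simply one after the other (the part1/part2 attributes and file names here are made up, the point is just to give each run its own subset and its own report file):
nosetests --nocapture --with-xunit --xunit-file=nosetests_part1.xml --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev and part1"
nosetests --nocapture --with-xunit --xunit-file=nosetests_part2.xml --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev and part2"
Then point "Publish JUnit test result report" at a pattern like trunk\automation\selenium\src\nosetests_*.xml so Jenkins picks up all the files.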
Alternatively you can use another test runner like nose2 or py.test which do not have this limitation.
Apparently the problem is indeed Windows and how it handles threads. We attempted several tests outside of our Windows Jenkins server and they do not work either. Stupid Windows.
I have about 130 lettuce tests which run fine locally, but when Travis runs them it hangs after a few tests.
Here the tests fail at the 8th scenario: https://travis-ci.org/h3/django-editlive/jobs/3945466
And when I remove the last scenario it passes: https://travis-ci.org/h3/django-editlive/builds/3945648
I tried splitting my tests in separate features files, same problem.
It doesn't seem to be caused by a specific scenario, but rather by the number of scenarios run.
According to Travis' docs, common reasons for a build hanging are:
Waiting for keyboard input or other kind of human interaction
Concurrency issues (deadlocks, livelocks and so on)
Installation of native extensions that take very long time to compile
The only possibility I can see is a concurrency issue... but how can I debug it?
My project is open source so the entire source code is available here:
https://github.com/h3/django-editlive
I have no definitive answer about the problem, but I managed to work around it.
Since I had no output whatsoever, I tried to strace my tests so I could see exactly where they hang.
But the strace output was too big and was trimmed by Travis, so I had to grep -v some lines.
Here's what it looks like in my .travis.yml file:
script:
- "strace -q python project/manage.py harvest 2>&1 | grep -v ENOENT"
ENOENT stands for "No such file or directory"; I didn't really need those lines to make sense of the strace output, and filtering them out cut enough lines to let me see where it hung.
Turns out it was hanging on a request to selenium:
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(35146), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
send(4, "POST /hub/session/e7cba641-2842-"..., 359, 0) = 359
I couldn't really replace Selenium, so I took a wild guess and replaced Firefox with Google Chrome to run my tests... et voilà. The tests ran perfectly.
It sucks that I haven't really solved the problem, but debugging remotely on Travis CI is a PITA at best. And with a waiting time of 35 minutes between iterations, I have more important things to do.
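For anyone who lands here with the same hang: the browser swap itself is typically just a one-line change in the lettuce terrain setup, hypothetically something like this (the hooks below are standard lettuce/selenium, but the real project's terrain.py may be wired differently):
# terrain.py
from lettuce import before, after, world
from selenium import webdriver

@before.all
def setup_browser():
    # was: world.browser = webdriver.Firefox()
    world.browser = webdriver.Chrome()

@after.all
def teardown_browser(total):
    world.browser.quit()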