cargo build failed with #[rstest] inside

I want to use parameterized testing and found rstest, which does this well.
Adding use rstest::rstest; to main.rs and putting the #[rstest] tests there as well runs fine under cargo test,
but if I build the program with cargo build I get this error:
| use rstest::rstest;
| ^^^^^^ use of undeclared crate or module `rstest`
So the question is: how do I have to organize my code so that I can use #[rstest] and still be able to build/run the program?

Depending on whether you want the code using rstest to be part of the non-test build or not, you either have to add rstest to your Cargo.toml:
[dependencies]
rstest = "*"
or you have to exclude the code using rstest from non-test builds:
#[cfg(test)]
use rstest::rstest;
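The second option is the usual one: declare rstest as a dev-dependency so it is only compiled for tests, and keep the parameterized tests in a #[cfg(test)] module. A minimal sketch (the function and test names are illustrative):

# Cargo.toml
[dev-dependencies]
rstest = "*"

// main.rs
fn square(x: u32) -> u32 {
    x * x
}

fn main() {
    println!("{}", square(3));
}

// The whole module, including the rstest import, only exists in test builds.
#[cfg(test)]
mod tests {
    use super::square;
    use rstest::rstest;

    // Each #[case] becomes its own test invocation.
    #[rstest]
    #[case(2, 4)]
    #[case(3, 9)]
    fn squares(#[case] input: u32, #[case] expected: u32) {
        assert_eq!(square(input), expected);
    }
}

With this layout cargo build never sees rstest, while cargo test compiles and runs the parameterized cases.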

Related

How can I make running a Cargo build script optional?

I have a Rust project that generates a dynamic (cdylib) library. The project uses a cbindgen build script to create an according C header file, matching the exported functions of the library. Cargo.toml looks like this:
[package]
name = "example"
version = "0.1.0"
authors = ["Me <me@foo.bar>"]
build = "build.rs"

[lib]
name = "example"
crate-type = ["cdylib"]

[dependencies]

[build-dependencies]
cbindgen = "0.6.2"
Unfortunately RLS (Rust Language Server) doesn't work very well when the build script is active which makes editing in VS Code rather unpleasant. Is there a way to make running the build script optional, having it disabled by default and only manually enable it when requested on the command-line (i.e. something like cargo build --release --enable-build-scripts)?
You can't conditionally disable build scripts or pass variables to them via cargo build, but you can make use of environment variables instead.
Inside your build.rs:
use std::env;

fn main() {
    let build_enabled = env::var("BUILD_ENABLED")
        .map(|v| v == "1")
        .unwrap_or(true); // run by default

    if build_enabled {
        // do your build
    }
}
Build with your build script:
BUILD_ENABLED=1 cargo build
Build without your build script:
BUILD_ENABLED=0 cargo build
To extend the answer from @PeterHall: one can use a Cargo "features" section to pass information on to the build script.
Insert the following lines into Cargo.toml:
[features]
headers = []
Then check for environment variable CARGO_FEATURE_HEADERS in build.rs:
use std::env;

fn write_headers() {
    // call cbindgen ...
}

fn main() {
    let headers_enabled = env::var_os("CARGO_FEATURE_HEADERS").is_some();

    if headers_enabled {
        write_headers();
    }
}
To make a release build, run cargo build --features=headers --release.
Now this solution still compiles the build script and all cbindgen dependencies when RLS updates its status or when manually running cargo test. But cbindgen run-time errors do not hamper RLS anymore.

Generate test results using xunit in VSO build task for asp.net core app

I have this build set up (build definition screenshot omitted). It works fine. The only issue is that the test results get overwritten, so I actually end up with the test results for the last test project executed only.
This is executed by the build engine:
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Services.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Web.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
Two things could help:
1) Having "dotnet test" generate the XML output file itself - I did not find a way to do that.
2) Using a variable for the -xml output file in the build task. That variable could be a random string/number, or just the name of the project being tested - i.e. what the build engine feeds to "dotnet.exe test". I found no way to do that either.
Any ideas? Thanks.
I think that, because the .NET Core (Preview) task doesn't have a working-directory setting, running the task against all of the projects in one go generates the test results at the solution root (or similar) for each project in turn, so each run overwrites the last.
I set mine up using simple command line tasks...
Tool: dotnet
Arguments: test -xml testresults.xml
Working folder: {insert the folder for the project to test here}
These work fine but I have one set up for each project. You could try creating a task for each library and adding the full path to the test results argument (or name them appropriately as starain suggested).
This feels like a minor bug to me.
Based on my test, it doesn’t recognize the date variable as Build Number.
To deal with this issue, you can add a separate .NET Core (Test) step per test project and have each one write a different result file.
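For example, reusing the two commands from the question but giving each its own output name (the file names here are illustrative):

dotnet test test/Services.UnitTests/project.json --configuration release -xml TEST-services.xml
dotnet test test/Web.UnitTests/project.json --configuration release -xml TEST-web.xml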

Adding unit tests to a F# project in VSCode

I'm using VSCode and the Ionide suite of packages to create a console application in F#. I need to add unit tests to the application so that when I ctrl+shift+p FAKE: Build the project, the tests are run during the build process.
I've created a dummy project in Github as an example.
Initially, the test dir was not there. I created the test dir and into that folder created a second project TestProj.Test (in hindsight, I should have used more descriptive names) for testing purposes. I added the .fsproj file from TestProj to this project so that I could reference the SimpleFunctions.fs. NUnit.Framework and FsUnit are added to the TestProj.Test. Test.fs contains two simple tests.
I intentionally created the TestProj.Test as an F# library because I read on SO that the testing project needed to be a library rather than a console app.
I added lines 9, 31-37, and 47 to the default build.fsx file that comes with Ionide. However, when I build the whole project (i.e., TestProj), the build fails and I get the following error:
1) System.Exception: NUnit: cannot run tests (the assembly list is empty).
at Fake.NUnitSequential.NUnit(FSharpFunc`2 setParams, IEnumerable`1 assemblies) in C:\code\fake\src\app\FakeLib\UnitTest\NUnit\Sequential.fs:line 22
at FSI_0005.Build.clo#31-3.Invoke(Unit _arg3)
at Fake.TargetHelper.runSingleTarget(TargetTemplate`1 target) in C:\code\fake\src\app\FakeLib\TargetHelper.fs:line 492
Line 22 of the Sequential.fs suggests that assemblies is empty.
What am I doing wrong? How should I set up the build.fsx file so that the tests in TestProj.Test run successfully? Alternatively, is there something wrong with the Tests.fs file in TestProj.Test? This seems particularly difficult; is there an easier way to include tests that run automatically with VS Code, Ionide, and F#?
There are a few issues in your project:
trying to test before build: "Clean" ==> "Test" ==> "Build" ==> "Deploy"
=> change the target dependencies to "Clean" ==> "Build" ==> "Test" ==> "Deploy" (see the sketch after this list)
separate paket configuration for test (paket.dependencies, paket.lock in test subfolder) which leads to inconsistent versions of referenced dependencies
=> remove paket.dependencies and paket.lock from test
poisonous mix of NUnit versions
=> remove explicit references to NUnit.Framework from paket.dependencies and run paket.exe install
invalid type extension in test project
=> change to type Test() or delete useless file
building creates output of all projects (and not just src/app) in ./build but tests look for DLLs in ./test
=> change test file pattern to buildDir + "**/*.Test.dll"
if you want to use NUnit3
=> open Fake.Testing and use NUnit3 instead of NUnit
finally, you should commit paket.bootstrapper.exe
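A rough sketch of the relevant build.fsx pieces after those fixes, assuming the legacy FAKE 4 API that the Ionide template uses (the Clean/Build/Deploy targets are defined elsewhere in the script):

#r "packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Testing   // brings the NUnit3 runner into scope

let buildDir = "./build/"   // all projects build into here

Target "Test" (fun _ ->
    // pick up only the test assemblies from the build output
    !! (buildDir + "**/*.Test.dll")
    |> NUnit3 id
)

// test after build, not before
"Clean" ==> "Build" ==> "Test" ==> "Deploy"

RunTargetOrDefault "Deploy"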
I recommend you either use a predefined template or start small, making sure you understand each step and checking that it works as expected. Once you've gone past the point of a non-working solution, it is extremely hard to get back on track.

CMake and CTest's default "test" command skips a specially named test

I'm using CTest with CMake to run some tests. I use the enable_testing() command which provides me with a default command for make test. All of the tests in my subdirectory are accounted for (by doing an add_test command) and make test works great, except one problem.
There is a certain test, which I've named skip_test, that I do NOT want being run when I do make test. I would like to add a custom target so I can run make skip_test and it will run that test.
I can do this by doing add_custom_target(skip_test ...) and providing CTest with the -R flag and telling it to look for files containing "skip_test" in their name. This also seems to work. My problem now is: how can I get the make test command to ignore skip_test?
If I try commenting out enable_testing and adding my own add_custom_target(test ....), I get "No tests found!!!" now for either make test or make skip_test. I also tried making a Custom CTest file and adding set(CTEST_CUSTOM_TESTS_IGNORE skip_test). This worked so that now make test ignored "skip_test", but now running make skip_test responds with "no tests found!!!".
Any suggestions would be appreciated!
I actually used a different solution. Here is what I did. For the tests that I wanted to exclude, I used the following command when adding them:
add_test( ..... CONFIGURATIONS ignore_flag)
where ignore_flag is whatever phrase you want. Then, in my CMakeLists.txt, I define a custom target
add_custom_target( ignore_tests ...)
and give it ctest .... -C ignore_flag
Now make test WILL skip these tests! make ignore_tests will run the ignored tests plus the un-ignored tests, which I'm okay with.
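Put together, the pieces look roughly like this (the test, executable, and target names are made up, filling in the ..... placeholders above):

# runs only when ctest is invoked with -C ignore_flag
add_test(NAME skip_test COMMAND skip_test_exe CONFIGURATIONS ignore_flag)

# "make ignore_tests" invokes ctest with that configuration
add_custom_target(ignore_tests
    COMMAND ${CMAKE_CTEST_COMMAND} -C ignore_flag
)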
I'm not sure of a way to do this entirely via CTest, but since you've tagged this question with "googletest", I assume you're using that as your test framework. So, you could perhaps make use of Gtest's ability to disable tests and also to run disabled tests.
By changing the test(s) in question to have a leading DISABLED_ in their name(s), these won't be run by default when you do make test.
You can then add your custom target which will invoke your test executable with the appropriate Gtest flags to run only the disabled tests:
add_custom_target(skip_test
    COMMAND MyTestBinary --gtest_filter=*DISABLED_* --gtest_also_run_disabled_tests
    VERBATIM)
It's a bit of an abuse of the Gtest functionality - it's really meant to be used to temporarily disable tests while you refactor whatever to get the test passing again. This beats just commenting out the test since it continues to compile it, and it gives a nagging reminder after running the suite that you have disabled tests.
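For reference, disabling a test in googletest is purely a naming convention - a sketch with made-up names:

#include <gtest/gtest.h>

// The DISABLED_ prefix keeps this test out of normal runs;
// it still compiles, and gtest nags about it after each run.
TEST(SkipSuite, DISABLED_ExpensiveCheck) {
    EXPECT_EQ(2 + 2, 4);
}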

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have it automatically run these. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code:
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
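You will typically also need to link the test program against Check itself - presumably via the CHECK_LIBS variable that the PKG_CHECK_MODULES call above defines:
check_foo_LDADD = @CHECK_LIBS@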
and write the test code in ./tests/check_foo.c:
START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST

// There also needs to be some tcase_xxx code to register and run this test
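A minimal sketch of that registration boilerplate, assuming it lives in the same check_foo.c as the test above:

#include <check.h>
#include <stdlib.h>

// Collect test_foo into a suite the runner can execute.
Suite *foo_suite(void)
{
    Suite *s = suite_create("foo");
    TCase *tc_core = tcase_create("core");
    tcase_add_test(tc_core, test_foo);
    suite_add_tcase(s, tc_core);
    return s;
}

int main(void)
{
    SRunner *sr = srunner_create(foo_suite());
    srunner_run_all(sr, CK_NORMAL);
    int number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    // a non-zero exit code marks the test as failed for the harness
    return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}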
With Check you can also use test timeouts and check for raised signals; it is very helpful.
You seem to be asking 2 questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are for both validating that the environment necessary to build your application exists (dependent libraries and tools) and adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional though - putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test (or make check) and finally make install.
For the second issue, not wanting cppunit to be a dependency, why not just distribute it with your C++ application? You can put it right into whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code. I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted and summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, nothing from src/test/* will be in the tarball: test code is not in the dist, only the source will be.
When you do a make distcheck it will run make check and run your tests.
You can use Automake's TESTS variable to run programs generated with check_PROGRAMS, but this assumes that you are using the default log driver and log compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite via a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite