Is it possible to enable a Rust feature only in test?

Say I have crates A and B, and I want to share a test helper function helper() from crate A with crate B, so I gate it behind a feature test-utils:
#[cfg(feature = "test-utils")]
pub fn helper() {
}
The problem is that helper() modifies sensitive data in A, so I don't want it compiled into a production build, for example one made with cargo build --all-features.
Is there a way to enable this feature only in test and disable it in production?

This is a requested feature of Cargo and is only possible using the version 2 resolver. If crate A has the function you mentioned, then crate B's Cargo.toml may contain
[package]
name = "B"
resolver = "2"

[features]
test-utils = []

[dependencies]
A = "*"

[dev-dependencies]
A = { version = "*", features = ["test-utils"] }
B = { path = ".", features = ["test-utils"] }
This would ensure that both crates are built with the test-utils feature only when testing them. I usually run
cargo build --release -vv
cargo test --release -vv
and make sure I really get the features I asked for in both cases.
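If crate A also wants to call helper() from its own unit tests without enabling the feature, a common complementary pattern (a sketch, not part of the answer above) is to gate the item on either cfg(test) or the feature:
#[cfg(any(test, feature = "test-utils"))]
pub fn helper() {
    // Compiled for A's own unit tests, or whenever a downstream
    // crate turns on the `test-utils` feature as a dev-dependency.
}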


Is there a programmatic approach to check for unused unit test files in a crate?

The documentation for Rust tests indicates that unit tests are defined in a module that is compiled only when the test configuration flag is active:
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
In a large project, however, it is common to move the test module into a separate file, which the parent module then points to explicitly:
#[cfg(test)]
#[path = "unit_tests/tests.rs"]
mod tests;
With this setup, it may happen that someone deletes the reference to src/unit_tests/tests.rs (i.e. the code block above) without deleting the test file itself. This usually indicates a bug:
either the removal was intentional, and the file src/unit_tests/tests.rs should not remain in the repository,
or it wasn't, in which case the tests in src/unit_tests/tests.rs are meant to be run and aren't, and CI should be vocal about that.
I'd like to add a verification that there are no such unmoored test files in my crate, to the CI of my project. Is there any tooling in the rust ecosystem that could help me detect those unlinked test files programmatically?
(Integration tests are automatically compiled if found in the tests/ directory of the crate, but IIUC that does not apply to unit tests.)
There is an issue asking for cargo to do this check, cargo (publish) should complain about unused source files, and from it I found that the cargo-modules crate can do this. For example:
//! src/lib.rs
#[cfg(test)]
mod tests;

//! src/tests.rs
#[test]
fn it_works() {
    let result = 2 + 2;
    assert_eq!(result, 4);
}
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: pub(crate) #[cfg(test)]
However, if I remove the #[cfg(test)] mod tests; declaration from src/lib.rs, the output looks like this:
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: orphan
You would be able to grep for "orphan" and fail your check based on that.
I will say though that this tool seemed very slow in my small tests. The discussion in the linked issue indicates that a better approach would be needed if this were implemented in cargo itself.
You will also have to keep in mind that other conditional compilation may yield false positives (i.e. you may legitimately include or exclude certain modules based on architecture, OS, or features). There is an --all-features flag that could help with some of that.
There is a very old rustc issue on the subject, which remains open.
However, one of the commenters provides a hack:
One possible way to implement this (including as a third-party tool) is to parse the .d files rustc emits for cargo, and look for any .rs files that aren't mentioned.
rg --no-filename '^[^/].*\.rs:$' target/debug/deps/*.d | sed 's/:$//' | sort -u | diff - <(fd '\.rs$' | sort -u)
Apparently the .d files are
makefile-compatible dependency lists
so I assume they list every Rust file that went into the build. That is why, if a file is missing from them, it was not seen by cargo / rustc.
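One could also implement that hack directly in Rust; here is a minimal sketch using only the standard library (it assumes the default target/debug/deps layout, that all sources live under src/, and mirrors the relative-path filter from the rg command above):
use std::collections::HashSet;
use std::fs;
use std::path::{Path, PathBuf};

/// Recursively collect every `.rs` file under `dir`.
fn rs_files(dir: &Path, out: &mut HashSet<PathBuf>) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            rs_files(&path, out)?;
        } else if path.extension().map_or(false, |e| e == "rs") {
            out.insert(path);
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Collect every relative `*.rs:` target line from the .d dep-info
    // files; those are the sources rustc actually saw.
    let mut seen = HashSet::new();
    for entry in fs::read_dir("target/debug/deps")? {
        let path = entry?.path();
        if path.extension().map_or(false, |e| e == "d") {
            for line in fs::read_to_string(&path)?.lines() {
                if line.ends_with(".rs:") && !line.starts_with('/') {
                    seen.insert(PathBuf::from(line.trim_end_matches(':')));
                }
            }
        }
    }

    // Any .rs file on disk that rustc never mentioned is a candidate orphan.
    let mut on_disk = HashSet::new();
    rs_files(Path::new("src"), &mut on_disk)?;
    for orphan in on_disk.difference(&seen) {
        println!("possibly unlinked: {}", orphan.display());
    }
    Ok(())
}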

How to make a test for a function defined under some configuration?

How do I write a unit test for a function that is only defined under some configuration, like the following?
struct I32Add;

impl I32Add {
    #[cfg(unstable)]
    fn add(x: i32, y: i32) -> i32 { x + y }
}

#[test]
fn add_test() {
    assert_eq!(I32Add::add(1, 2), 3)
}
Of course, the test doesn't compile. How do I make it work?
You can add #[cfg(unstable)] to your test just as you've done for your function. So the test is only compiled if that function is compiled:
#[cfg(unstable)]
#[test]
fn add_test() {
    assert_eq!(I32Add::add(1, 2), 3)
}
To get your function and the test to compile and run, you have to enable the unstable config option:
RUSTFLAGS="--cfg unstable" cargo test
However, I would recommend using a cargo feature instead of a custom config option for conditionally enabling portions of your code-base:
struct I32Add;

impl I32Add {
    #[cfg(feature = "unstable")]
    fn add(x: i32, y: i32) -> i32 { x + y }
}

#[cfg(feature = "unstable")]
#[test]
fn add_test() {
    assert_eq!(I32Add::add(1, 2), 3)
}
with this in your Cargo.toml:
[features]
unstable = []
And then run it like:
cargo test --features=unstable
See:
How to set cfg options to compile conditionally?
How do I use conditional compilation with `cfg` and Cargo?
Is it possible to write a test in Rust so it does not run on a specific operating system?

Passing custom command-line arguments to a Rust test

I have a Rust test which delegates to a C++ test suite using doctest and wants to pass command-line parameters to it. My first attempt was
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;

#[test]
fn run_cpp_test_suite() {
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        panic!("C++ test suite reported errors");
    }
}
Because cargo test --help shows
USAGE:
    cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite receive and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case', and cargo test -- --help shows that the test binary accepts only a fixed set of options:
Usage: --help [OPTIONS] [FILTER]

Options:
        --include-ignored   Run ignored and not ignored tests
        --ignored           Run only ignored tests
    ...
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of the Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
    lib.rs
    ...
tests/
    cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
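A minimal sketch of what that main might look like, reusing the ffi::run_tests wrapper from the question (the crate name mycrate and the exact wrapper signature are assumptions here):
// tests/cpp_test.rs -- built with harness = false, so we provide main()
use std::env;
use std::process::exit;

fn main() {
    // With the libtest harness disabled, everything after `--` in
    // `cargo test -- --test-case="X"` arrives here untouched.
    let mut cli_args: Vec<String> = env::args().collect();
    if !mycrate::ffi::run_tests(cli_args.as_mut_slice()) {
        // A non-zero exit code marks the test target as failed.
        exit(1);
    }
}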
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env;
use std::process::Command;

let args = env::var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");

Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");
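In the question's setup, where the arguments go to ffi::run_tests rather than to a subprocess, the same split could feed the FFI call instead; a sketch along the lines of the question's own code:
let args = std::env::var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let mut cli_args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");
if !ffi::run_tests(cli_args.as_mut_slice()) {
    panic!("C++ test suite reported errors");
}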

How to test kotlin function declared `internal` from within tests when java-test-fixtures module is in use

I've tried to make use of test fixtures in my Kotlin project. Unfortunately I have encountered a strange problem, probably a bug, in some component of the toolchain (I am not sure which one).
Normally a Kotlin function declared internal in main can be accessed from a unit test in the same package. There are a number of sources supporting this claim, particularly Kotlin: Make an internal function visible for unit tests.
Indeed, if we have src/main/kotlin/main.kt:
@file:JvmName("Main")

package org.example.broken_test_fixtures

internal fun sayHello(who: String): String = "Hello, $who!"

fun main() {
    println(sayHello("world"))
}
and src/test/kotlin/SayHelloTest.kt:
package org.example.broken_test_fixtures

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class SayHelloTest {
    @Test
    fun testSayHello() {
        val expected = "Hello, world!"
        val actual = sayHello("world")
        assertEquals(actual, expected)
    }
}
the test passes with a regular build.gradle.kts:
plugins {
    kotlin("jvm") version "1.3.61"
}

group = "org.example"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    implementation(kotlin("stdlib-jdk8"))
    testImplementation("org.junit.jupiter:junit-jupiter-api:5.5.2")
    testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.5.2")
}

tasks {
    compileKotlin {
        kotlinOptions.jvmTarget = "1.8"
    }
    compileTestKotlin {
        kotlinOptions.jvmTarget = "1.8"
    }
}

tasks.withType<Test> {
    useJUnitPlatform()
}
So far so good.
But with the addition of a single line to the plugins list:
id("java-test-fixtures")
the test build breaks with the following error:
e: /home/work/users/alex/broken-test-fixtures/src/test/kotlin/SayHelloTest.kt: (10, 22): Cannot access 'sayHello': it is internal in 'org.example.broken_test_fixtures'
I've discovered that similar problems were mentioned in the Gradle and Kotlin Gradle plugin bug trackers. Unfortunately I couldn't extract a solution from those issues; perhaps it's a different problem.
For the reader's convenience I have prepared a tiny repo on GitHub demonstrating the problem.

Testing Node.js, mock out and test a module that has been required?

I am struggling to write high-quality tests around my node modules. The problem is the require module system: I want to be able to check that a certain required module has a method, or that its state has changed. There seem to be two relatively small libraries that can be used here: node-gently and mockery. However, their low profile makes me think that either people don't test this, or there is another way of doing it that I am not aware of.
What is the best way to mock out and test a module that has been required?
----------- UPDATE ---------------
node-sandbox works on the same principles as stated below but is wrapped up in a nice module. I am finding it very nice to work with.
--------------- detailed answer ---------------
After much trial I have found that the best way to test node modules in isolation, while mocking things out, is to use the method by Vojta Jina: running each module inside a vm with a new context, as explained here.
With this testing helper built on the vm module:
var vm = require('vm');
var fs = require('fs');
var path = require('path');

/**
 * Helper for unit testing:
 * - load module with mocked dependencies
 * - allow accessing private state of the module
 *
 * @param {string} filePath Absolute path to module (file to load)
 * @param {Object=} mocks Hash of mocked dependencies
 */
exports.loadModule = function(filePath, mocks) {
    mocks = mocks || {};

    // this is necessary to allow relative path modules within loaded file
    // i.e. requiring ./some inside file /a/b.js needs to be resolved to /a/some
    var resolveModule = function(module) {
        if (module.charAt(0) !== '.') return module;
        return path.resolve(path.dirname(filePath), module);
    };

    var exports = {};
    var context = {
        require: function(name) {
            return mocks[name] || require(resolveModule(name));
        },
        console: console,
        exports: exports,
        module: {
            exports: exports
        }
    };

    vm.runInNewContext(fs.readFileSync(filePath), context);
    return context;
};
it is possible to test each module in its own context and easily stub out all of its external dependencies:
fsMock = mocks.createFs();
mockRequest = mocks.createRequest();
mockResponse = mocks.createResponse();
// load the module with mock fs instead of real fs
// publish all the private state as an object
module = loadModule('./web-server.js', {fs: fsMock});
I highly recommend this way for writing effective tests in isolation. Only acceptance tests should hit the entire stack. Unit and integration tests should test isolated parts of the system.
I think the mockery pattern is a fine one. That said, I usually opt to send in dependencies as parameters to a function (similar to passing dependencies in the constructor).
// foo.js
module.exports = function(dep1, dep2) {
    return {
        bar: function() {
            // A function doing stuff with dep1 and dep2
        }
    };
};
When testing, I can send in mocks, empty objects, or whatever seems appropriate. Note that I don't do this for all dependencies, basically only IO -- I don't feel the need to test that my code calls path.join or whatever.
I think the "low profile" that is making you nervous is due to a couple of things:
Some people structure their code similar to mine
Some people have their own helper fulfilling the same objective as mockery et al (it's a very simple module)
Some people don't unit test such things, instead spinning up an instance of their app (and db, etc) and testing against that. Cleaner tests, and the server is so fast it doesn't affect test performance.
In short, if you think mockery is right for you, go for it!
You can easily mock require by using "a": https://npmjs.org/package/a
// Example: faking require('./foo') in a unit test
var fakeFoo = {};
var expectRequire = require('a').expectRequire;
expectRequire('./foo').return(fakeFoo);

// in sut:
var foo = require('./foo'); // returns fakeFoo