I'm trying to get started with TDD in Rust, and I need to write a macro that returns the number of variants in an enum. My implementation is similar to this one:
extern crate proc_macro;
extern crate syn;
#[macro_use]
extern crate quote;
use proc_macro::TokenStream;
#[proc_macro_derive(EnumVariantCount)]
pub fn derive_enum_variant_count(input: TokenStream) -> TokenStream {
let syn_item: syn::DeriveInput = syn::parse(input).unwrap();
let len = match syn_item.data {
syn::Data::Enum(enum_item) => enum_item.variants.len(),
_ => panic!("EnumVariantCount only works on Enums"),
};
let expanded = quote! {
const LENGTH: usize = #len;
};
expanded.into()
}
So first I want to write a test to check that this macro only works on an enum. How would this even work? Can I somehow check if a file compiles in a unit test? Is there some documentation on testing Rust macros that I overlooked?
The trybuild crate has been created specifically for this: it compiles a test file and then checks for expected compile-time errors.
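For instance, a trybuild test could look roughly like this (a minimal sketch; the tests/ui/… paths and the my_macros crate name are placeholders for your own):
// tests/compile_fail.rs -- with trybuild added under [dev-dependencies]
#[test]
fn ui() {
    let t = trybuild::TestCases::new();
    // Each file listed here must fail to compile; the compiler output is
    // compared against a neighbouring .stderr snapshot file.
    t.compile_fail("tests/ui/not_an_enum.rs");
    // Files that must keep compiling can be asserted as well.
    t.pass("tests/ui/enum_works.rs");
}
And tests/ui/not_an_enum.rs would exercise the non-enum case of your derive:
// tests/ui/not_an_enum.rs (hypothetical): deriving on a struct should be
// rejected by the panic!("EnumVariantCount only works on Enums") branch.
use my_macros::EnumVariantCount;

#[derive(EnumVariantCount)]
struct NotAnEnum;

fn main() {}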
I'm trying to test a struct I have that looks something like this
struct CANProxy {
socket: CANSocket
// other stuff .......
}
impl CANProxy {
pub fn new(can_device: &str) -> Self {
let socket = CANSocket::open(can_device).unwrap();
// other stuff .......
Self { socket }
}
}
What I want to test is that the proper messages are being sent across the socket, but I don't want to actually initialize a new CAN device while running my tests. I wanted to make a dummy CANSocket (which is from the socketcan crate) that uses the same functions and whatnot.
I tried creating a trait and extending the socketcan::CANSocket but it is super tedious and very redundant. I've looked at the mockall crate but I'm not sure if this would help in this situation. Is there an elegant way to accomplish what I want?
trait CANInterface {
fn open(name: &str) -> Result<Self, SomeError>;
// ... all the functions that are a part of the socketcan::CANSocket
// which is a lot of repetition
}
///////////// Proxy code
struct CANProxy<T: CANInterface> {
socket: T
// other stuff .......
}
impl<T: CANInterface> CANProxy<T> {
pub fn open(can_device: &str) -> Result<Self, SomeError> {
let socket = T::open(can_device)?;
// other stuff .......
Ok(Self { socket })
}
}
////////////// Stubbed CANInterfaces
struct FakeCANSocket;
impl CANInterface for FakeCANSocket {
// ..... implementing the trait here
}
// extension trait over here
impl CANInterface for socketcan::CANSocket {
// this is a lot of repetition and is kind of silly
// because I'm just calling the same things
fn open(can_device: &str) -> Result<Self, SomeError> {
CANSocket::open(can_device).map_err(Into::into)
}
/// ..............
/// ..............
/// ..............
}
So, first of all, there are indeed mock-targeted helper tools and crates such as ::mockall to help with these patterns, but only when you already have a trait-based API. If you don't, that part can be quite tedious.
For what it's worth, know that there are also other helper crates that help write those boilerplate-y, redundantly-delegating trait impls, such as your open -> open situation. One such example is the ::delegate crate.
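As a rough illustration of the kind of boilerplate it removes (a sketch only, with made-up names, delegating a trait impl to a wrapped field rather than your exact open -> open case):
use delegate::delegate;

trait CanInterface {
    fn write_frame(&self, frame: u32) -> bool;
}

struct RealSocket;

impl RealSocket {
    fn write_frame(&self, frame: u32) -> bool {
        // pretend this talks to the hardware
        frame > 0
    }
}

struct ProxySocket {
    inner: RealSocket,
}

impl CanInterface for ProxySocket {
    delegate! {
        to self.inner {
            // expands to a body that simply calls self.inner.write_frame(frame)
            fn write_frame(&self, frame: u32) -> bool;
        }
    }
}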
Mocking it with a test-target Cargo feature
With all that being said, my personal take for your very specific situation (the objective is to override a genuine impl with a mock one, but just for testing purposes) would be to forgo the structured but heavyweight approach of generics and traits, and instead embrace "duck-typed" APIs, much as is often done for code with per-platform implementations. In other words, the following suggestion can conceptually be interpreted as your test environment being one such special "platform".
You'd then #[cfg(…)]-feature-gate the usage of the real impl, that is, the CANSocket type, in one case, and #[cfg(not(…))]-feature-gate a mock definition of your own CANSocket type in the other, provided you manage to copy / mock all of the genuine type's API that you yourself are using.
Add a mock-socket Cargo feature to your project:
[features]
mock-socket = []
Remark: some of you may be thinking of using cfg(test) rather than cfg(feature = "…"), but that approach only works for unit tests (src/… files with #[cfg(test)] mod tests, the cargo test --lib invocation); it doesn't work for integration tests (tests/….rs files, cargo test --tests) or doctests (cargo test --doc), since in those cases the library itself is compiled without cfg(test).
Then you can feature-gate the Rust code using it:
#[cfg(not(feature = "mock-socket"))]
use …path::to::genuine::CANSocket;
#[cfg(feature("mock-socket"))]
use my_own_mock_socket::CANSocket;
You can then define that my_own_mock_socket module (e.g., in a my_own_mock_socket.rs file with a mod my_own_mock_socket; declaration), provided you don't forget to feature-gate the module itself, so that the compiler doesn't waste time and effort compiling it when the mocked CANSocket isn't being used (which would otherwise yield dead_code warnings and so on):
#[cfg(feature = "mock-socket")]
mod my_own_mock_socket {
//! It is important that you mimic the names and APIs of the genuine type!
pub struct CANSocket…
impl CANSocket { // <- no traits!
pub fn open(can_device: &str) -> Result<Self, SomeError> {
/* your mock logic */
}
…
}
}
That way, you can use:
either cargo test
or cargo test --features mock-socket
to pick the implementation of your choice when running your tests.
(Optional) if you know you will never want to run the tests against the real implementation, only the mock one, then you may want to have that feature be enabled by default when running tests. While there is no direct way to achieve this, there is a creative way to work around it: explicitly spelling out the self-as-a-lib dev-dependency that test code has (this dependency is always present implicitly, for what it's worth). By making it explicit, we can then use the classic features key in Cargo.toml to enable features for that dev-dependency:
[dev-dependencies]
your_crate_name = { path = ".", features = ["mock-socket"] }
Bonus: not having to define an extra module for the mock code.
When the mock impls in question are short enough, it can be tempting to just inline their definitions and impl blocks. The issue then is that every item defined this way has to carry that #[cfg…] attribute, which is annoying. That's when helper macros such as that of https://docs.rs/cfg-if can be useful, although adding a dependency for such a simple macro may seem a bit overkill (and, very personally, I find cfg_if!'s syntax too sigil-heavy).
You can, instead, reimplement it yourself in less than a dozen lines of code:
macro_rules! cfg_match {
( _ => { $($tt:tt)* } $(,)? ) => ( $($tt)* );
( $cfg:meta => $expansion:tt $(, $($($rest:tt)+)?)? ) => (
#[cfg($cfg)]
cfg_match! { _ => $expansion }
$($(
#[cfg(not($cfg))]
cfg_match! { $($rest)+ }
)?)?
);
}
use cfg_match;
With it, you can rewrite the feature-gated use declarations and the mock module from above as:
cfg_match! {
feature = "mock-socket" => {
/// Mock implementation
struct CANSocket …
impl CANSocket { // <- no traits!
pub fn open(can_device: &str) -> Result<Self, SomeError> {
/* your mock logic */
}
…
}
},
_ => {
use …path::to::genuine::CANSocket;
},
}
You can avoid a lot of the boilerplate by using a macro to create the wrapper trait and implement it for the base struct. Simplified example:
macro_rules! make_wrapper {
($s:ty : $t:ident { $(fn $f:ident ($($p:ident $(: $pt:ty)?),*) -> $r:ty;)* }) => {
trait $t {
$(fn $f ($($p $(: $pt)?),*) -> $r;)*
}
impl $t for $s {
$(fn $f ($($p $(: $pt)?),*) -> $r { <$s>::$f ($($p),*) })*
}
}
}
struct TestStruct {}
impl TestStruct {
fn foo (self) {}
}
make_wrapper!{
TestStruct: TestTrait {
fn foo (self) -> ();
}
}
This will need to be extended to handle references (at least &self arguments), but you get the idea. You can refer to The Little Book of Rust Macros for more information on writing the macro.
Then you can use a crate like mockall to create your mock implementation of TestTrait or roll your own.
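For example, with mockall's #[automock] the mock could look roughly like this (a sketch with made-up names, assuming mockall is listed under [dev-dependencies]):
use mockall::automock;

// #[automock] generates a MockCanBus struct that implements this trait.
#[automock]
trait CanBus {
    fn send_frame(&self, frame: u32) -> bool;
}

#[test]
fn proxy_sends_a_frame() {
    let mut bus = MockCanBus::new();
    // Expect exactly one call to send_frame and make it report success.
    bus.expect_send_frame()
        .times(1)
        .returning(|_| true);
    assert!(bus.send_frame(42));
}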
I have a Rust app (a simple interpreter) that needs some setup (initialize a repo) before the environment is usable.
I understand that Rust runs its tests (via cargo test) in a multithreaded manner, so I need to initialize the repo before any tests run. I also need to do this only once per run, not before each test.
In Java's JUnit this would be done with a #BeforeClass (or #BeforeAll in JUnit 5) method. How can I achieve the same thing in Rust?
There's nothing built-in that would do this, but this should help (you will need to call initialize() at the beginning of every test):
use std::sync::Once;
static INIT: Once = Once::new();
pub fn initialize() {
INIT.call_once(|| {
// initialization code here
});
}
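A sketch of how each test would then call it:
#[cfg(test)]
mod tests {
    use super::initialize;

    #[test]
    fn first_test() {
        initialize();
        // ... test body; the initialization code has run exactly once
    }

    #[test]
    fn second_test() {
        initialize();
        // even with tests running on multiple threads, Once guarantees the
        // closure above was only ever executed one time
    }
}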
If you use the ctor crate, you can take advantage of a global constructor function that will run before any of your tests are run.
Here's an example initialising the popular env_logger crate (assuming you have added ctor to your [dev-dependencies] section in your Cargo.toml file):
#[cfg(test)]
#[ctor::ctor]
fn init() {
env_logger::init();
}
The function name is unimportant and you may name it anything.
Just to give people more ideas (for example, how to avoid calling setup explicitly in every test), one additional thing you could do is to write a helper like this:
use std::panic;

fn run_test<T>(test: T) -> ()
where T: FnOnce() -> () + panic::UnwindSafe
{
setup();
let result = panic::catch_unwind(|| {
test()
});
teardown();
assert!(result.is_ok())
}
Then, in your own tests you would use it like this:
#[test]
fn test() {
run_test(|| {
let ret_value = function_under_test();
assert!(ret_value);
})
}
You can read more about the UnwindSafe trait and catch_unwind here: https://doc.rust-lang.org/std/panic/fn.catch_unwind.html
I found the original idea for this test helper in a Medium article by Eric Opines.
Also, there is the rstest crate, which has pytest-like fixtures that you can use as setup code (combined with Jussi Kukkonen's answer):
use std::sync::Once;
use rstest::rstest;
static INIT: Once = Once::new();
pub fn setup() -> () {
INIT.call_once(|| {
// initialization code here
});
}
#[rstest]
fn should_success(setup: ()) {
// do your test
}
Maybe one day rstest will gain scopes support and Once won't be needed anymore.
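For reference, a version using rstest's #[fixture] attribute might look like this (a sketch assuming a reasonably recent rstest release):
use rstest::{fixture, rstest};
use std::sync::Once;

static INIT: Once = Once::new();

// The fixture is resolved by the matching parameter name in the test below.
#[fixture]
fn setup() {
    INIT.call_once(|| {
        // initialization code here
    });
}

#[rstest]
fn should_success(setup: ()) {
    // do your test; the setup fixture has already run at this point
    let _ = setup;
}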
I have a struct which implements Deserialize and uses the serde(deserialize_with) on a field:
#[derive(Debug, Deserialize)]
struct Record {
name: String,
#[serde(deserialize_with = "deserialize_numeric_bool")]
is_active: bool,
}
The implementation of deserialize_numeric_bool deserializes a string "0" or "1" to the corresponding boolean value:
use std::fmt;

use serde::de::{self, Deserializer, Visitor};

pub fn deserialize_numeric_bool<'de, D>(deserializer: D) -> Result<bool, D::Error>
where D: Deserializer<'de>
{
struct NumericBoolVisitor;
impl<'de> Visitor<'de> for NumericBoolVisitor {
type Value = bool;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
formatter.write_str("either 0 or 1")
}
fn visit_u64<E>(self, value: u64) -> Result<bool, E>
where E: de::Error
{
match value {
0 => Ok(false),
1 => Ok(true),
_ => Err(E::custom(format!("invalid bool: {}", value))),
}
}
}
deserializer.deserialize_u64(NumericBoolVisitor)
}
(I appreciate comments about code improvements)
I'd like to write unit tests for deserialization functions like deserialize_numeric_bool. Of course, my friendly search box revealed the serde_test crate and a documentation page about unit-testing.
But these resources couldn't help me in my case, as the crate tests a structure directly implementing Deserialize.
One idea I had was to create a newtype that only contains the output of my deserialize function and test with that. But this looks like an unnecessary indirection to me.
#[derive(Deserialize)]
struct NumericBool {
    #[serde(deserialize_with = "deserialize_numeric_bool")]
    value: bool,
}
How do I write idiomatic tests for it?
My current solution uses only structures already provided by serde.
In my use case, I only wanted to test that a given input will deserialize successfully into a bool or fail with a certain error. The serde::de::value module provides simple deserializers for fundamental data types, for example U64Deserializer, which holds a u64. It also has an Error struct that provides a minimal implementation of the error traits, ready to be used for mocking errors.
My tests currently look like this: I mock the input with a deserializer and pass it to my function under test. I like that I don't need an indirection there and that I have no additional dependencies. It is not as nice as the assert_tokens* helpers provided by serde_test, as it needs the error struct and feels less polished. But for my case, where only a single value is deserialized, it fulfills my needs.
use serde::de::IntoDeserializer;
use serde::de::value::{U64Deserializer, StrDeserializer, Error as ValueError};

#[test]
fn test_numeric_true() {
    let deserializer: U64Deserializer<ValueError> = 1u64.into_deserializer();
    assert_eq!(deserialize_numeric_bool(deserializer), Ok(true));
}

#[test]
fn test_numeric_false() {
    let deserializer: U64Deserializer<ValueError> = 0u64.into_deserializer();
    assert_eq!(deserialize_numeric_bool(deserializer), Ok(false));
}

#[test]
fn test_numeric_invalid_number() {
    let deserializer: U64Deserializer<ValueError> = 2u64.into_deserializer();
    let error = deserialize_numeric_bool(deserializer).unwrap_err();
    assert_eq!(error.to_string(), "invalid bool: 2");
}

#[test]
fn test_numeric_empty() {
    let deserializer: StrDeserializer<ValueError> = "".into_deserializer();
    let error = deserialize_numeric_bool(deserializer).unwrap_err();
    assert_eq!(error.to_string(), "invalid type: string \"\", expected either 0 or 1");
}
I hope this helps other folks too, or inspires someone to come up with a more polished version.
I've come across this question several times while trying to solve a similar problem recently. For future readers, pixunil's answer is nice, straightforward, and works well. However, I'd like to provide a solution using serde_test as the unit testing documentation mentions.
I researched how serde_test is used across a few crates that I found via its reverse dependencies on lib.rs. Several of them define small structs or enums for testing deserialization or serialization as you mentioned in your original post. I suppose doing so is idiomatic when testing would be too verbose otherwise.
Here's a few examples; this is a non-exhaustive list:
Example from time
Another example from time
Example from slab (tokio)
Example from bitcoin_hashes
Example from uuid
Example from euclid
Anyway, let's say I have a function to deserialize a bool from a u8 and another function that serializes a bool to a u8.
use serde::{
de::{Error as DeError, Unexpected},
Deserialize, Deserializer, Serialize, Serializer,
};
fn bool_from_int<'de, D>(deserializer: D) -> Result<bool, D::Error>
where
D: Deserializer<'de>,
{
match u8::deserialize(deserializer)? {
0 => Ok(false),
1 => Ok(true),
wrong => Err(DeError::invalid_value(
Unexpected::Unsigned(wrong.into()),
&"zero or one",
)),
}
}
#[inline]
fn bool_to_int<S>(a_bool: &bool, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
if *a_bool {
serializer.serialize_u8(1)
} else {
serializer.serialize_u8(0)
}
}
I can test those functions by defining a struct in my test module. This allows constraining the tests to those functions specifically instead of ser/deserializing a larger object.
#[cfg(test)]
mod tests {
use super::{bool_from_int, bool_to_int};
use serde::{Deserialize, Serialize};
use serde_test::{assert_de_tokens_error, assert_tokens, Token};
#[derive(Debug, PartialEq, Deserialize, Serialize)]
#[serde(transparent)]
struct BoolTest {
#[serde(deserialize_with = "bool_from_int", serialize_with = "bool_to_int")]
a_bool: bool,
}
const TEST_TRUE: BoolTest = BoolTest { a_bool: true };
const TEST_FALSE: BoolTest = BoolTest { a_bool: false };
#[test]
fn test_true() {
assert_tokens(&TEST_TRUE, &[Token::U8(1)])
}
#[test]
fn test_false() {
assert_tokens(&TEST_FALSE, &[Token::U8(0)])
}
#[test]
fn test_de_error() {
assert_de_tokens_error::<BoolTest>(
&[Token::U8(14)],
"invalid value: integer `14`, expected zero or one",
)
}
}
BoolTest is within the tests module which is gated by #[cfg(test)] as per usual. This means that BoolTest is only compiled for tests rather than adding bloat. I'm not a Rust expert, but I think this is a good alternative if a programmer wishes to use serde_test as a harness.
I’m working on a Rust library that provides access to some hardware devices. There are two device types, 1 and 2, and the functionality for type 2 is a superset of the functionality for type 1.
I want to provide different test suites for different circumstances:
tests with no connected device (basic sanity checks, e. g. for CI servers)
tests for the shared functionality (requires a device of type 1 or 2)
tests for the type 2 exclusive functionality (requires a device of type 2)
I’m using features to represent this behavior: a default feature test-no-device and optional features test-type-one and test-type-two. Then I use the cfg_attr attribute to ignore the tests based on the selected features:
#[test]
#[cfg_attr(not(feature = "test-type-two"), ignore)]
fn test_exclusive() {
// ...
}
#[test]
#[cfg_attr(not(any(feature = "test-type-two", feature = "test-type-one")), ignore)]
fn test_shared() {
// ...
}
This is rather cumbersome as I have to duplicate this condition for every test and the conditions are hard to read and maintain.
Is there any simpler way to manage the test suites?
I tried to set the ignore attribute when declaring the module, but apparently it can only be set for each test function. I think I could disable compilation of the excluded tests by using cfg on the module, but as the tests should always compile, I would like to avoid that.
Is there a simple way to conditionally enable or ignore entire test suites in Rust?
The easiest option is to not even compile the tests:
#[cfg(test)]
mod test {
    #[test]
    fn no_device_needed() {}

    #[cfg(feature = "test1")]
    mod test1 {
        #[test]
        fn device_one_needed() {}
    }

    #[cfg(feature = "test2")]
    mod test2 {
        #[test]
        fn device_two_needed() {}
    }
}
I have to duplicate this condition for every test and the conditions are hard to read and maintain.
Can you represent the desired functionality in pure Rust? yes
Is the existing syntax overly verbose? yes
This is a candidate for a macro.
macro_rules! device_test {
(no-device, $name:ident, {$($body:tt)+}) => (
#[test]
fn $name() {
$($body)+
}
);
(device1, $name:ident, {$($body:tt)+}) => (
#[test]
#[cfg_attr(not(feature = "test-type-one"), ignore)]
fn $name() {
$($body)+
}
);
(device2, $name:ident, {$($body:tt)+}) => (
#[test]
#[cfg_attr(not(feature = "test-type-two"), ignore)]
fn $name() {
$($body)+
}
);
}
device_test!(no-device, one, {
assert_eq!(2, 1+1)
});
device_test!(device1, two, {
assert_eq!(3, 1+1)
});
the functionality for type 2 is a superset of the functionality for type 1
Reflect that in your feature definitions to simplify the code:
[features]
test1 = []
test2 = ["test1"]
If you do this, you shouldn't need to have any or all in your config attributes.
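For instance, with test2 implying test1 as above, each test only checks the single feature it actually needs (a sketch using the shortened feature names):
#[test]
#[cfg_attr(not(feature = "test2"), ignore)]
fn test_exclusive() {
    // only runs when a type 2 device is available
}

#[test]
#[cfg_attr(not(feature = "test1"), ignore)]
fn test_shared() {
    // runs for either device type, since test2 also enables test1
}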
a default feature test-no-device
This doesn't seem useful; instead use normal tests guarded by the normal test config:
#[cfg(test)]
mod test {
#[test]
fn no_device_needed() {}
}
If you follow this, you can remove this case from the macro.
I think if you follow both suggestions, you don't even need the macro.
How do you mock a Kotlin extension function using Mockito or PowerMock in tests? Since they are resolved statically, should they be tested as static method calls or as non-static?
I think MockK can help you.
It supports mocking extension functions too.
You can use it to mock object-wide extensions:
data class Obj(val value: Int)
class Ext {
fun Obj.extensionFunc() = value + 5
}
with(mockk<Ext>()) {
every {
Obj(5).extensionFunc()
} returns 11
assertEquals(11, Obj(5).extensionFunc())
verify {
Obj(5).extensionFunc()
}
}
If your extension is module-wide, meaning that it is declared directly in a file (not inside a class), you should mock it this way:
data class Obj(val value: Int)
// declared in File.kt ("pkg" package)
fun Obj.extensionFunc() = value + 5
mockkStatic("pkg.FileKt")
every {
Obj(5).extensionFunc()
} returns 11
assertEquals(11, Obj(5).extensionFunc())
verify {
Obj(5).extensionFunc()
}
This works by adding the mockkStatic("pkg.FileKt") line with the name of the package and file where the extension is declared (File.kt in the pkg package in the example).
More info can be found on the MockK website and GitHub.
First of all, Mockito knows nothing about Kotlin-specific language constructs. In the end, Mockito looks at the byte code; it is only able to understand what it finds there and what looks like a Java language construct.
Meaning: to be really sure, you might want to use javap to disassemble the compiled class files to identify the exact names/signatures of the methods you want to mock.
And obviously: when a method is static, you have to use PowerMock or JMockit; if not, you should prefer to go with Mockito.
From a Java point of view, you simply avoid mocking static stuff; but of course, things get really interesting now that different languages with different ideas/concepts come together.
Instance extension functions can be stubbed and verified like this with the help of mockito-kotlin:
data class Bar(val thing: Int)
class Foo {
fun Bar.bla(anotherThing: Int): Int { ... }
}
val bar = Bar(thing = 1)
val foo = mock<Foo>()
with(foo) {
whenever(any<Bar>().bla(any())).doReturn(3)
}
verify(foo).apply {
bar.bla(anotherThing = 2)
}
I use the mockk library.
For the extension file, declare a Java class name, like this:
@file:JvmName(name = "ExtensionUtils")
package myproject.extension
...
And for fast coding, I created a file with the different extension mocks:
object FastMock {
fun extension() = mockkStatic("myproject.extension.ExtensionUtils")
fun listExtension() = mockkStatic("myproject.extension.ListExtensionUtils")
}
In the test, call this:
FastMock.listExtension()
every { itemList.move(from, to) } returns Unit