How to handle abort() in unit tests - unit-testing

I want to test one of my functions that should panic, but on GitHub Actions the allocation abort()s instead: even though my code asks for "only" 2**31 bytes (in my real code the limit is libc::c_int::MAX), which my PC has, the GitHub Actions runner doesn't have that much memory :p.
#[test]
#[should_panic]
fn reserve_with_max() {
    Vec::<u8>::with_capacity(usize::MAX);
}
But this fails with:
test tests::bad_fd ... ok
test tests::create_true ... ok
test tests::create_false ... ok
test tests::create_with_one ... ok
memory allocation of 25769803764 bytes failed
And this stops the test run and reports an error, even though this is what I expected (either panic or abort).
I didn't find much about this problem:
https://github.com/rust-lang/rust/issues/67650
https://doc.rust-lang.org/std/alloc/fn.set_alloc_error_hook.html (but it's nightly-only)
I would expect there to be a #[should_abort] attribute. How can I handle this?
For now my obvious solution is to ignore the test:
#[test]
#[should_panic]
#[ignore = "Asks for too much memory"]
fn reserve_with_max() {
    Vec::<u8>::with_capacity(usize::MAX);
}

You could fork the test, but there are a lot of cons:
it needs nix (though maybe there is an OS-agnostic fork crate somewhere...)
the test code becomes complex
it confuses code coverage tools
it asks for more resources
it's hacky
#[test]
#[ignore = "Still ignored, tarpaulin doesn't like it either"]
fn create_with_max() {
    use nix::{
        sys::{
            signal::Signal,
            wait::{waitpid, WaitStatus},
        },
        unistd::{fork, ForkResult},
    };
    use std::panic;
    use std::process::abort;

    match unsafe { fork() } {
        // Parent: wait for the child and check it was killed by SIGABRT.
        Ok(ForkResult::Parent { child }) => match waitpid(child, None) {
            Ok(WaitStatus::Signaled(_, s, _)) => {
                if s != Signal::SIGABRT {
                    panic!("Didn't abort")
                }
            }
            o => panic!("Didn't expect: {:?}", o),
        },
        // Child: trigger the allocation; abort() explicitly if it only panics.
        Ok(ForkResult::Child) => {
            let result = panic::catch_unwind(|| {
                Vec::<u8>::with_capacity(usize::MAX);
            });
            if result.is_err() {
                abort();
            }
        }
        Err(_) => panic!("Fork failed"),
    }
}
I don't think I would advise this.
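If you still want a child process but without nix, a rough OS-agnostic sketch of the same idea is to re-run the test binary itself as a subprocess and assert on its exit status. This is only a sketch: RUN_ABORT_TEST is an arbitrary marker variable introduced here, and the filter arguments assume the default libtest harness.

#[test]
fn create_with_max_subprocess() {
    use std::{env, process::Command};

    if env::var("RUN_ABORT_TEST").is_ok() {
        // Child mode: trigger the failing allocation.
        Vec::<u8>::with_capacity(usize::MAX);
        return;
    }

    // Parent mode: re-run only this test in a child process and
    // check that the child did not exit cleanly.
    let status = Command::new(env::current_exe().unwrap())
        .args(["create_with_max_subprocess", "--exact"])
        .env("RUN_ABORT_TEST", "1")
        .status()
        .unwrap();
    assert!(!status.success(), "child should have panicked or aborted");
}

It still costs an extra process per test, but it avoids the platform-specific fork machinery.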

There is RFC 2116, which would allow configuring Rust to panic on out-of-memory instead of calling abort():
[profile.dev]
oom = "panic"
[profile.release]
oom = "panic"
Cons:
Not implemented, even on nightly (#43596)
It would work for your use case because it's an OOM abort, but it's not a general solution for handling abort() in tests.

Related

Unit testing suspend coroutine

I'm a bit new to Kotlin and to testing in it... I am trying to test a DAO object wrapper that uses a suspend method, which in turn uses awaitFirst() on an SQL return object. However, when I wrote the unit test for it, it just gets stuck in a loop, and I would think it is because the awaitFirst() is not in the same scope as the test.
Implementation:
suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): OrderDomain {
    var result: Map<String, Any>?
    try {
        result = querySpec.fetch().first().awaitFirst()
    } catch (e: Exception) {
        if (e is DataAccessResourceFailureException)
            throw CommunicationException(
                "Cannot connect to " + DatabaseConstants.DB_NAME +
                    DatabaseConstants.ORDERS_TABLE + " when executing querySelect",
                "querySelect",
                e
            )
        throw InternalException("Encountered R2dbcException when executing SQL querySelect", e)
    }
    if (result == null)
        throw ResourceNotFoundException("Resource not found in Aurora DB")
    try {
        return OrderDomain(result)
    } catch (e: Exception) {
        throw InternalException("Exception when parsing to OrderDomain entity", e)
    } finally {
        logger.info("querySelect;stage=end")
    }
}
Unit Test:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(monoMapMock)
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123") // orderId
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}
I noticed this issue at https://github.com/Kotlin/kotlinx.coroutines/issues/1204 but none of the workarounds have worked for me...
Using runBlocking within the unit test just causes my tests to never complete. Using runBlockingTest explicitly throws an error saying "Job never completed"... Does anyone have any idea? Any hack at this point?
Also, I mostly understand the point that you should not be using suspend with a block, because that kind of defeats the purpose of suspend: it releases the thread to continue later, whereas blocking forces the thread to wait for a result... But then how does this work?
private suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): Map<String, Any>? {
    val result: Map<String, Any>? = withContext(Dispatchers.Default) {
        querySpec.fetch().first().block()
    }
    return result
}
Does this mean withContext will utilize a new thread and re-use the old thread elsewhere? That doesn't really optimize anything, since I will still have one thread that is being blocked, regardless of spawning a new context?
Found the solution.
The monoMapMock is a mock value from Mockito. It seems the kotlinx coroutines test machinery can't intercept an async call that returns a mocked Mono. So I forced the method that I could mock to return a real Mono value instead of a mocked one. To do so, as suggested by Louis, I stopped mocking it and returned a real value:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(Mono.empty())
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123") // orderId
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}

Weird behavior when creating, reading and deleting files in tests in Rust

I'm trying to write a test for a module; the module itself is not important.
The test part looks like this. In these two tests I'm trying to do exactly the same thing: create a JSON file, read it, and delete it (these are not real tests, but these actions need to be repeated from test to test).
src/archive.rs
#[cfg(test)]
mod tests {
    use super::*;
    use std::{fs, path};
    use crate::tests;

    #[test]
    fn test_archive_tar_archive_test1() {
        tests::create_custom_settings(
            r#"{
"tmp_path": "/tmp1",
"archive_tmp_path": "/tmp1"
}"#,
        );
        println!("reading {}", fs::read_to_string(tests::CUSTOM_SETTINGS_PATH).unwrap());
        tests::delete_custom_settings();
    }

    #[test]
    fn test_archive_tar_archive_test2() {
        tests::create_custom_settings(
            r#"{
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}"#,
        );
        println!("reading {}", fs::read_to_string(tests::CUSTOM_SETTINGS_PATH).unwrap());
        tests::delete_custom_settings();
    }
}
The second file is as basic as the first one; these are common parts used in multiple modules:
src/tests.rs
use std::fs;

pub const CUSTOM_SETTINGS_PATH: &str = "/tmp/hvtools_custom_settings.json";

pub fn create_custom_settings(file_data: &str) {
    println!("writing {}", file_data);
    match fs::write(CUSTOM_SETTINGS_PATH, file_data) {
        Ok(_) => {}
        Err(e) => panic!(
            "Could not create custom settings file under '{}': {}",
            CUSTOM_SETTINGS_PATH, e
        ),
    };
}

pub fn delete_custom_settings() {
    match fs::remove_file(CUSTOM_SETTINGS_PATH) {
        Ok(_) => {}
        Err(e) => panic!(
            "Could not delete custom settings file under '{}': {}",
            CUSTOM_SETTINGS_PATH, e
        ),
    };
}
Running these tests gives me roughly the same output each time, with slight differences:
running 2 tests
writing {
"tmp_path": "/tmp1",
"archive_tmp_path": "/tmp1"
}
writing {
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}
reading {
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}
reading {
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}
thread 'archive::tests::test_archive_tar_archive_test2' panicked at 'Could not delete custom settings file under '/tmp/hvtools_custom_settings.json': No such file or directory (os error 2)', src/tests.rs:19:19
As we can see:
reading the file in the first test returns the value written by the second test
trying to delete the JSON file in the second test fails (while reading works)
Sometimes both tests read the contents written by the first test, and sometimes the first read attempt returns an empty string (the results vary without code changes):
running 2 tests
writing {
"tmp_path": "/tmp1",
"archive_tmp_path": "/tmp1"
}
writing {
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}
reading
reading {
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}
It's a physical filesystem (no NFS share or anything like that). Also, as a side note, if I move the contents of the second test into the first one like this:
fn test_archive_tar_archive_test1() {
    tests::create_custom_settings(
        r#"{
"tmp_path": "/tmp1",
"archive_tmp_path": "/tmp1"
}"#,
    );
    println!(
        "reading {}",
        fs::read_to_string(tests::CUSTOM_SETTINGS_PATH).unwrap()
    );
    tests::delete_custom_settings();
    tests::create_custom_settings(
        r#"{
"tmp_path": "/tmp2",
"archive_tmp_path": "/tmp2"
}"#,
    );
    println!(
        "reading {}",
        fs::read_to_string(tests::CUSTOM_SETTINGS_PATH).unwrap()
    );
    tests::delete_custom_settings();
}
everything works as expected. I tried adding thread::sleep; it doesn't seem to change the outcome.
What am I doing wrong?
Tests run in parallel, so it is likely that two tests that access the same path will interfere with each other. Use a different path in each test to prevent this (a sketch follows below).
Alternatively, you can force the tests to run sequentially, in a single thread:
cargo test -- --test-threads=1
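For example, a minimal sketch of the different-paths approach (the settings_path helper is introduced here for illustration; the real create/delete helpers would take the path as a parameter):

use std::{env, fs, path::PathBuf};

// Derive a unique file per test instead of sharing one constant path.
fn settings_path(test_name: &str) -> PathBuf {
    env::temp_dir().join(format!("hvtools_custom_settings_{}.json", test_name))
}

#[test]
fn test_archive_tar_archive_test1() {
    let path = settings_path("test1");
    fs::write(&path, r#"{"tmp_path": "/tmp1", "archive_tmp_path": "/tmp1"}"#).unwrap();
    println!("reading {}", fs::read_to_string(&path).unwrap());
    fs::remove_file(&path).unwrap();
}

Crates like tempfile can also generate (and clean up) such unique paths for you.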

Running Some Tests Sequentially While Others in Parallel [duplicate]

I have a collection of tests. There are a few tests that need to access a shared resource (external library/API/hardware device). If any of these tests run in parallel, they fail.
I know I could run everything using --test-threads=1 but I find that inconvenient just for a couple of special tests.
Is there any way to keep running all tests in parallel and have an exception for a few? Ideally, I would like to say do not run X, Y, Z at the same time.
Use the serial_test crate. With this crate added, you put
#[serial]
in front of any test you want to run sequentially.
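For example, a minimal sketch (assuming serial_test is added to your dev-dependencies):

use serial_test::serial;

#[test]
#[serial]
fn uses_shared_resource_one() {
    // Tests marked #[serial] never run at the same time as each other...
}

#[test]
#[serial]
fn uses_shared_resource_two() {
    // ...while unmarked tests keep running in parallel as usual.
}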
As mcarton mentions in the comments, you can use a Mutex to prevent multiple pieces of code from running at the same time:
use once_cell::sync::Lazy; // 1.4.0
use std::{sync::Mutex, thread::sleep, time::Duration};

static THE_RESOURCE: Lazy<Mutex<()>> = Lazy::new(Mutex::default);

type TestResult<T = (), E = Box<dyn std::error::Error>> = std::result::Result<T, E>;

#[test]
fn one() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test one");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test one");
    Ok(())
}

#[test]
fn two() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test two");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test two");
    Ok(())
}
If you run with cargo test -- --nocapture, you can see the difference in behavior:
No lock
running 2 tests
Starting test one
Starting test two
Finishing test two
Finishing test one
test one ... ok
test two ... ok
With lock
running 2 tests
Starting test one
Finishing test one
Starting test two
test one ... ok
Finishing test two
test two ... ok
Ideally, you'd put the external resource itself in the Mutex to make the code represent the fact that it's a singleton and remove the need to remember to lock the otherwise-unused Mutex.
This does have the massive downside that a panic in a test (a.k.a. an assert! failure) will cause the Mutex to become poisoned. This will then cause subsequent tests to fail to acquire the lock. If you need to avoid that and you know the locked resource is in a good state (and () should be fine...), you can handle the poisoning:
let _shared = THE_RESOURCE.lock().unwrap_or_else(|e| e.into_inner());
If you need the ability to run a limited set of threads in parallel, you can use a semaphore. Here, I've built a poor one using Condvar with a Mutex:
use std::{
    sync::{Condvar, Mutex},
    thread::sleep,
    time::Duration,
};

#[derive(Debug)]
struct Semaphore {
    mutex: Mutex<usize>,
    condvar: Condvar,
}

impl Semaphore {
    fn new(count: usize) -> Self {
        Semaphore {
            mutex: Mutex::new(count),
            condvar: Condvar::new(),
        }
    }

    fn wait(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        while *count == 0 {
            count = self.condvar.wait(count).map_err(|_| "unable to lock")?;
        }
        *count -= 1;
        Ok(())
    }

    fn signal(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        *count += 1;
        self.condvar.notify_one();
        Ok(())
    }

    fn guarded(&self, f: impl FnOnce() -> TestResult) -> TestResult {
        // Not panic-safe!
        self.wait()?;
        let x = f();
        self.signal()?;
        x
    }
}
lazy_static! {
    static ref THE_COUNT: Semaphore = Semaphore::new(4);
}

// Inside each test (`id` here identifies the test):
THE_COUNT.guarded(|| {
    eprintln!("Starting test {}", id);
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test {}", id);
    Ok(())
})
See also:
How to limit the number of test threads in Cargo.toml?
You can always provide your own test harness. You can do that by adding a [[test]] entry to Cargo.toml:
[[test]]
name = "my_test"
# If your test file is not `tests/my_test.rs`, add this key:
#path = "path/to/my_test.rs"
harness = false
In that case, cargo test will compile my_test.rs as a normal executable file. That means you have to provide a main function and add all the "run tests" logic yourself. Yes, this is some work, but at least you can decide everything about running tests yourself.
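A minimal sketch of such a harness (the test names and the runner logic here are made up for illustration):

// tests/my_test.rs -- with `harness = false`, this is an ordinary binary.
fn test_one() { assert_eq!(1 + 1, 2); }
fn test_two() { assert!("abc".contains('b')); }

fn main() {
    // Run the tests sequentially, in whatever order we choose.
    let tests: &[(&str, fn())] = &[("test_one", test_one), ("test_two", test_two)];
    for (name, test) in tests {
        print!("running {} ... ", name);
        test();
        println!("ok");
    }
}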
You can also create two test files:
tests/
- sequential.rs
- parallel.rs
You would then need to run cargo test --test sequential -- --test-threads=1 and cargo test --test parallel. So it doesn't work with a single cargo test, but you don't need to write your own test-harness logic.

Unit testing Cloud Functions for Firebase: what's the "right way" to test/mock `transaction`s with sinon.js

Man, this Firebase unit testing is really kicking my butt.
I've gone through the documentation and read through the examples that they provide, and have gotten some of my more basic Firebase functions unit tested, but I keep running into problems where I'm not sure how to verify that the transactionUpdated function passed along to the ref's .transaction is correctly updating the current object.
My struggle is probably best illustrated by their child-count sample code and a poor attempt I made at writing a unit test for it.
Let's say my function that I want to unit test does the following (taken straight from that above link):
// count.js
exports.countlikechange = functions.database.ref('/posts/{postid}/likes/{likeid}').onWrite(event => {
    const collectionRef = event.data.ref.parent;
    const countRef = collectionRef.parent.child('likes_count');
    // ANNOTATION: I want to verify the `current` value is incremented
    return countRef.transaction(current => {
        if (event.data.exists() && !event.data.previous.exists()) {
            return (current || 0) + 1;
        }
        else if (!event.data.exists() && event.data.previous.exists()) {
            return (current || 0) - 1;
        }
    }).then(() => {
        console.log('Counter updated.');
    });
});
Unit Test Code:
const chai = require('chai');
const chaiAsPromised = require("chai-as-promised");
chai.use(chaiAsPromised);
const assert = chai.assert;
const sinon = require('sinon');

describe('Cloud Functions', () => {
    let myFunctions, functions;

    before(() => {
        functions = require('firebase-functions');
        myFunctions = require('../count.js');
    });

    describe('countlikechange', () => {
        it('should increase /posts/{postid}/likes/likes_count', () => {
            const event = {
                // DeltaSnapshot(app: firebase.app.App, adminApp: firebase.app.App, data: any, delta: any, path?: string);
                data: new functions.database.DeltaSnapshot(null, null, null, true)
            }
            const startingValue = 11
            const expectedValue = 12
            // Below is the misunderstood piece. How do I pass `startingValue` to the callback param of transaction
            // in the `countlikechange` function, and spy on the return value to assert that it is equal to `expectedValue`?
            // `yield` is almost definitely not the right thing to do, but I'm not quite sure where to go.
            // How can I go about "spying" on the result of a stub,
            // since the stub replaces the original function?
            // I suspect that `sinon.spy()` has something to do with the answer, but when I try to pass `sinon.spy()` as the yields arg, I get errors and `spy.firstCall` is always null.
            const transactionStub = sinon.stub().yields(startingValue).returns(Promise.resolve(true))
            const childStub = sinon.stub().withArgs('likes_count').returns({
                transaction: transactionStub
            })
            const refStub = sinon.stub().returns({ parent: { child: childStub }})
            Object.defineProperty(event.data, 'ref', { get: refStub })

            assert.eventually.equals(myFunctions.countlikechange(event), true)
        })
    })
})
I annotated the source code above with my question, but I'll reiterate it here.
How can I verify that the transactionUpdate callback, passed to the transaction stub, will take my startingValue and mutate it to expectedValue, and then allow me to observe that change and assert that it happened?
This is probably a very simple problem with an obvious solution, but I'm very new to testing JS code where everything has to be stubbed, so it's a bit of a learning curve... Any help is appreciated.
I agree that unit testing in the Firebase ecosystem isn't as easy as we'd like it to be. The team is aware of it, and we're working to make things better! Fortunately, there are some good ways forward for you right now!
I suggest taking a look at this Cloud Functions demo that we've just published. In that example we use TypeScript, but this'll all work in JavaScript too.
In the src directory you'll notice we've split out the logic into three files: index.ts has the entry logic, saythat.ts has our main business logic, and db.ts is a thin abstraction layer around the Firebase Realtime Database. We unit-test only saythat.ts; we've intentionally kept index.ts and db.ts really simple.
In the spec directory we have the unit tests; take a look at index.spec.ts. The trick that you're looking for: we use mock-require to mock out the entire src/db.ts file and replace it with spec/fake-db.ts. Instead of writing to the real database, we now store our performed operations in-memory, where our unit test can check that they look correct. A concrete example is our score field, which is updated in a transaction. By mocking the database, our unit test to check that that's done correctly is a single line of code.
I hope that helps you do your testing!

How can I test Rust methods that depend on environment variables?

I am building a library that interrogates its running environment to return values to the calling program. Sometimes it's as simple as
pub fn func_name() -> Option<String> {
    match env::var("ENVIRONMENT_VARIABLE") {
        Ok(s) => Some(s),
        Err(_) => None,
    }
}
but sometimes it's a good bit more complicated, or even produces a result composed of several environment variables. How can I test that these methods are functioning as expected?
"How do I test X" is almost always answered with "by controlling X". In this case, you need to control the environment variables:
use std::env;

fn env_is_set() -> bool {
    match env::var("ENVIRONMENT_VARIABLE") {
        Ok(s) => s == "yes",
        _ => false,
    }
}

#[test]
fn when_set_yes() {
    env::set_var("ENVIRONMENT_VARIABLE", "yes");
    assert!(env_is_set());
}

#[test]
fn when_set_no() {
    env::set_var("ENVIRONMENT_VARIABLE", "no");
    assert!(!env_is_set());
}

#[test]
fn when_unset() {
    env::remove_var("ENVIRONMENT_VARIABLE");
    assert!(!env_is_set());
}
However, you need to be aware that environment variables are a shared resource. From the docs for set_var, emphasis mine:
Sets the environment variable k to the value v for the currently running process.
You may also need to be aware that the Rust test runner runs tests in parallel by default, so it's possible to have one test clobber another.
Additionally, you may wish to "reset" your environment variables to a known good state after the test.
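A minimal sketch of such a reset, reusing env_is_set from above (manual and not panic-safe; the with_env_vars helper further down this page handles panics properly):

#[test]
fn when_set_yes_restoring() {
    use std::env;

    // Save whatever value was there before the test.
    let old = env::var("ENVIRONMENT_VARIABLE").ok();

    env::set_var("ENVIRONMENT_VARIABLE", "yes");
    assert!(env_is_set());

    // Restore the previous state.
    match old {
        Some(v) => env::set_var("ENVIRONMENT_VARIABLE", v),
        None => env::remove_var("ENVIRONMENT_VARIABLE"),
    }
}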
Your other option (if you don't want to mess around with actually setting environment variables) is to abstract the call away. I am only just learning Rust and so I am not sure if this is "the Rust way(tm)" to do it... but this is certainly how I would do it in another language/environment:
use std::env;

pub trait QueryEnvironment {
    fn get_var(&self, var: &str) -> Result<String, std::env::VarError>;
}

struct MockQuery;
struct ActualQuery;

impl QueryEnvironment for MockQuery {
    fn get_var(&self, _var: &str) -> Result<String, std::env::VarError> {
        Ok("Some Mocked Result".to_string()) // Returns a mocked response
    }
}

impl QueryEnvironment for ActualQuery {
    fn get_var(&self, var: &str) -> Result<String, std::env::VarError> {
        env::var(var) // Returns an actual response
    }
}

fn main() {
    env::set_var("ENVIRONMENT_VARIABLE", "user"); // Just to make the program work for the ActualQuery type

    let mocked_query = MockQuery;
    let actual_query = ActualQuery;

    println!("The mocked environment value is: {}", func_name(mocked_query).unwrap());
    println!("The actual environment value is: {}", func_name(actual_query).unwrap());
}

pub fn func_name<T: QueryEnvironment>(query: T) -> Option<String> {
    match query.get_var("ENVIRONMENT_VARIABLE") {
        Ok(s) => Some(s),
        Err(_) => None,
    }
}
Example on the Rust playground
Notice how the actual call panics. This is the implementation you would use in actual code. For your tests, you would use the mocked ones.
EDIT:
The test helpers below are now available in a dedicated crate
Disclaimer: I'm a co-author
I had the same need and implemented some small test helpers which take care of the caveats mentioned by @Shepmaster.
These test helpers enable testing like so:
#[test]
fn test_default_log_level_is_info() {
    with_env_vars(
        vec![
            ("LOGLEVEL", None),
            ("SOME_OTHER_VAR", Some("foo")),
        ],
        || {
            let actual = Config::new();
            assert_eq!("INFO", actual.log_level);
        },
    );
}
with_env_vars will take care of:
Avoiding side effects when running tests in parallel
Resetting the env variables to their original values when the test closure completes
Support for unsetting environment variables during the test closure
All of the above, even when the test closure panics.
The helper:
use lazy_static::lazy_static;
use std::env::VarError;
use std::panic::{RefUnwindSafe, UnwindSafe};
use std::sync::Mutex;
use std::{env, panic};

lazy_static! {
    static ref SERIAL_TEST: Mutex<()> = Default::default();
}

/// Sets environment variables to the given value for the duration of the closure.
/// Restores the previous values when the closure completes or panics, before unwinding the panic.
pub fn with_env_vars<F>(kvs: Vec<(&str, Option<&str>)>, closure: F)
where
    F: Fn() + UnwindSafe + RefUnwindSafe,
{
    let guard = SERIAL_TEST.lock().unwrap();
    let mut old_kvs: Vec<(&str, Result<String, VarError>)> = Vec::new();

    // Remember the old values, then apply the requested ones.
    for (k, v) in kvs {
        let old_v = env::var(k);
        old_kvs.push((k, old_v));
        match v {
            None => env::remove_var(k),
            Some(v) => env::set_var(k, v),
        }
    }

    match panic::catch_unwind(|| {
        closure();
    }) {
        Ok(_) => {
            for (k, v) in old_kvs {
                reset_env(k, v);
            }
        }
        Err(err) => {
            for (k, v) in old_kvs {
                reset_env(k, v);
            }
            drop(guard);
            panic::resume_unwind(err);
        }
    };
}

fn reset_env(k: &str, old: Result<String, VarError>) {
    if let Ok(v) = old {
        env::set_var(k, v);
    } else {
        env::remove_var(k);
    }
}
A third option, and one I think is better, is to pass in an existing type rather than creating a new abstraction that everyone would have to coerce to.
pub fn new<I>(vars: I)
where
    I: Iterator<Item = (String, String)>,
{
    for (x, y) in vars {
        println!("{}: {}", x, y)
    }
}

#[test]
fn trivial_call() {
    let vars = [("fred".to_string(), "jones".to_string())];
    new(vars.iter().cloned());
}
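In production code you can then pass the real environment straight in, for example (a sketch):

fn main() {
    // std::env::vars() yields (String, String) items, matching the bound above.
    new(std::env::vars());
}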
Thanks to qrlpz on #rust for helping me get this sorted for my program, just sharing the result to help others :)