How to test a system that requires the Time resource? - unit-testing

I want to write tests for a system that moves entities and detects collisions; the system uses Res<Time> to ensure that entities move at a constant speed. I was trying to write a test following this example, but it doesn't use Time. My test creates a world, inserts two entities, and tries to run my system.
let mut world = World::default();
/* here insert entities into world */
let mut update_stage = SystemStage::parallel();
update_stage.add_system(move_system);
It doesn't insert the Time resource, so unsurprisingly running it ends with
panicked at 'Resource requested by io_project::move_system::move_system does not exist: bevy_core::time::time::Time'
World has an insert_resource() method, but I wasn't able to figure out whether I can use it to insert the Time resource.
I was also wondering whether it would be possible to use some kind of "fake" time, meaning that instead of checking how much time really passed I would call something like time.pass_seconds(1.). This would be useful for simulating a very high or low framerate and would give more consistent results.
I am using bevy 0.6.1

In the source code (https://github.com/bevyengine/bevy/blob/main/crates/bevy_core/src/time/time.rs) there's an example that covers adding the Time resource in tests, as well as simulating the passage of time.
I've included it below for quick reference:
use bevy_core::prelude::*;
use bevy_ecs::prelude::*;
use bevy_utils::Duration;

struct Health {
    // Health value between 0.0 and 1.0
    health_value: f32,
}

fn health_system(time: Res<Time>, mut health: ResMut<Health>) {
    // Increase health value by 0.1 per second, independent of frame rate, but not beyond 1.0
    health.health_value = (health.health_value + 0.1 * time.delta_seconds()).min(1.0);
}

// Mock time in tests
fn test_health_system() {
    let mut world = World::default();
    let mut time = Time::default();
    time.update();
    world.insert_resource(time);
    world.insert_resource(Health { health_value: 0.2 });

    let mut update_stage = SystemStage::single_threaded();
    update_stage.add_system(health_system);

    // Simulate that 30 ms have passed
    let mut time = world.resource_mut::<Time>();
    let last_update = time.last_update().unwrap();
    time.update_with_instant(last_update + Duration::from_millis(30));

    // Run system
    update_stage.run(&mut world);

    // Check that 0.003 has been added to the health value
    let expected_health_value = 0.2 + 0.1 * 0.03;
    let actual_health_value = world.resource::<Health>().health_value;
    assert_eq!(expected_health_value, actual_health_value);
}

fn main() {
    test_health_system();
}
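Applied to the snippet from the question, the minimal change is to insert a Time resource before running the stage; advancing it with update_with_instant then gives exactly the kind of "fake" time asked about. A rough sketch, reusing the imports and Time API from the example above (which is taken from the main branch, so details may differ slightly on 0.6.1), with the OP's entity setup and move_system left as they are:
let mut world = World::default();
/* here insert entities into world */

// Insert the missing Time resource so the system's Res<Time> parameter can be satisfied.
let mut time = Time::default();
time.update();
world.insert_resource(time);

let mut update_stage = SystemStage::parallel();
update_stage.add_system(move_system);

// "Fake" time: pretend exactly one second has passed, regardless of real elapsed time.
let mut time = world.get_resource_mut::<Time>().unwrap();
let last_update = time.last_update().unwrap();
time.update_with_instant(last_update + Duration::from_secs(1));

update_stage.run(&mut world);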

Related

How to define testable timer loop in kotlin (android)?

I want to have a periodic timer loop (e.g. 1 second intervals). There are many ways to do that, but I haven't found a solution that would be suitable for unit testing.
Timer should be precise
Unit test should be able to skip the waiting
The closest I came to a solution was to use coroutines: a simple loop with delay, runBlockingTest and advanceTimeBy.
coScope.launch {
    while (isActive) {
        // do stuff
        delay(1000L)
    }
}
and
@Test
fun timer_test() = coScope.runBlockingTest {
    ... // start job
    advanceTimeBy(9_000L)
    ... // cancel job
}
It works to some degree, but the timer is not precise, as it does not account for the execution time.
I haven't found a way to query the internal timer used in a coroutine scope, or the remaining timeout value inside withTimeoutOrNull:
coScope.launch {
    withTimeoutOrNull(999_000_000L) { // max allowed looping time
        while (isActive) {
            // do stuff
            val timeoutLeft // How to get that value ???
            delay(timeoutLeft.mod(1000L))
        }
    }
}
My next idea was to use ticker:
coScope.launch {
    val tickerChannel = ticker(1000L, 0L, coroutineContext)
    var referenceTimer = 0L
    for (event in tickerChannel) {
        // do stuff
        println(referenceTimer)
        referenceTimer += 1000L
    }
}
However, the connection between TestCoroutineDispatcher() and ticker does not produce the right results:
private val coDispatcher = TestCoroutineDispatcher()

@Test
fun timerTest() = runBlockingTest(coDispatcher) {
    myTimer.launchPeriodicJob()
    advanceTimeBy(20_000L) // or delay(20_000L)
    myTimer.cancelPeriodicJob()
    println("end_of_test")
}
rather consistently results in:
0
1000
2000
3000
4000
5000
6000
end_of_test
I am also open to any alternative approaches that satisfy the two points above.

What is the idiomatic way to write Rust microservice with shared db connections and caches?

I'm writing my first Rust microservice with hyper. After years of development in C++ and Go I tend to use a controller for processing requests (like here - https://github.com/raycad/go-microservices/blob/master/src/user-microservice/controllers/user.go), where the controller stores shared data such as a db connection pool and various caches.
I know, with hyper, I can write it this way:
use hyper::{Body, Request, Response};

pub struct Controller {
    // pub cache: Cache,
    // pub db: DbConnectionPool
}

impl Controller {
    pub fn echo(&mut self, req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
        // extensively using db and cache here...
        let mut response = Response::new(Body::empty());
        *response.body_mut() = req.into_body();
        Ok(response)
    }
}
and then use it:
use hyper::{Server, Request, Response, Body, Error};
use hyper::service::{make_service_fn, service_fn};
use std::{convert::Infallible, net::SocketAddr, sync::Arc, sync::Mutex};

async fn route(controller: Arc<Mutex<Controller>>, req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let mut c = controller.lock().unwrap();
    c.echo(req)
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    let controller = Arc::new(Mutex::new(Controller {}));
    let make_svc = make_service_fn(move |_conn| {
        let controller = Arc::clone(&controller);
        async move {
            Ok::<_, Infallible>(service_fn(move |req| {
                let c = Arc::clone(&controller);
                route(c, req)
            }))
        }
    });
    let server = Server::bind(&addr).serve(make_svc);
    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}
Since the compiler doesn't let me share a mutable structure between threads, I have to use the Arc<Mutex<T>> idiom. But I'm afraid the let mut c = controller.lock().unwrap(); part would block the entire controller while processing a single request, i.e. there would be no concurrency here.
What is the idiomatic way to address this problem?
&mut always acquires a (compile-time or runtime) exclusive lock on the value, so only acquire a &mut at the exact scope you want locked. If a value owned by the locked value needs separate locking management, wrap it in a Mutex.
Assuming your DbConnectionPool is structured like this:
struct DbConnectionPool {
    conns: HashMap<ConnId, Conn>,
}
We need to &mut the HashMap when we add or remove items in it, but we don't need to &mut the value in Conn. So Arc allows us to separate the mutability boundary from its parent, and Mutex allows us to add its own interior mutability. Moreover, our echo method doesn't want to be &mut, so another layer of interior mutability needs to be added on the HashMap.
So we change this to
struct DbConnectionPool {
    conns: Mutex<HashMap<ConnId, Arc<Mutex<Conn>>>>,
}
Then when you want to get a connection,
fn get(&self, id: ConnId) -> Arc<Mutex<Conn>> {
    let mut pool = self.db.conns.lock().unwrap(); // not handling lock poisoning (if another thread panicked)
    if let Some(conn) = pool.get(&id) {
        Arc::clone(conn)
    } else {
        // here we will utilize the interior mutability of `pool`
        let arc = Arc::new(Mutex::new(new_conn()));
        pool.insert(id, Arc::clone(&arc));
        arc
    }
}
(the ConnId param and the if-exists-else logic are used to simplify the code; you can change the logic)
On the returned value you can do
self.get(id).lock().unwrap().query(...)
For convenience of illustration I changed the logic so that the user supplies the ID. In reality, you should be able to find a Conn that has not been acquired and return it. You can then return an RAII guard for the Conn, similar to how MutexGuard works, to automatically free the connection when the user stops using it.
Also consider using RwLock instead of Mutex if that might result in a performance boost.
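To make the locking boundary concrete without the hyper plumbing, here is a small self-contained sketch of this pattern; Conn is just a hypothetical stand-in for a real database connection, created lazily per ID:
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

type ConnId = u32;

#[derive(Debug)]
struct Conn {
    id: ConnId,
}

struct DbConnectionPool {
    // The Mutex guards only the map itself; each Conn carries its own lock.
    conns: Mutex<HashMap<ConnId, Arc<Mutex<Conn>>>>,
}

struct Controller {
    db: DbConnectionPool,
}

impl Controller {
    // Note &self, not &mut self: the interior mutability lives inside the pool.
    fn get(&self, id: ConnId) -> Arc<Mutex<Conn>> {
        let mut pool = self.db.conns.lock().unwrap();
        pool.entry(id)
            .or_insert_with(|| Arc::new(Mutex::new(Conn { id })))
            .clone()
    }
}

fn main() {
    let controller = Arc::new(Controller {
        db: DbConnectionPool {
            conns: Mutex::new(HashMap::new()),
        },
    });

    // Several threads can call `get` concurrently; only the brief map access is serialized.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let c = Arc::clone(&controller);
            thread::spawn(move || {
                let conn = c.get(i % 2);
                println!("{:?}", *conn.lock().unwrap());
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}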

Running Some Tests Sequentially While Others in Parallel [duplicate]

I have a collection of tests. There are a few tests that need to access a shared resource (external library/API/hardware device). If any of these tests run in parallel, they fail.
I know I could run everything using --test-threads=1 but I find that inconvenient just for a couple of special tests.
Is there any way to keep running all tests in parallel and have an exception for a few? Ideally, I would like to say do not run X, Y, Z at the same time.
Use the serial_test crate. With this crate added, you put #[serial] in front of any test you want to run sequentially.
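For example, a minimal sketch of how that might look, assuming serial_test is already listed under [dev-dependencies] (the test names here are made up):
use serial_test::serial;

#[test]
#[serial]
fn talks_to_shared_device() {
    // touches the shared resource; waits for any other #[serial] test to finish first
}

#[test]
#[serial]
fn also_talks_to_shared_device() {
    // serialized with the test above
}

#[test]
fn unrelated() {
    // no #[serial] attribute, so this still runs in parallel with everything else
}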
As mcarton mentions in the comments, you can use a Mutex to prevent multiple pieces of code from running at the same time:
use once_cell::sync::Lazy; // 1.4.0
use std::{sync::Mutex, thread::sleep, time::Duration};

static THE_RESOURCE: Lazy<Mutex<()>> = Lazy::new(Mutex::default);

type TestResult<T = (), E = Box<dyn std::error::Error>> = std::result::Result<T, E>;

#[test]
fn one() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test one");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test one");
    Ok(())
}

#[test]
fn two() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test two");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test two");
    Ok(())
}
If you run with cargo test -- --nocapture, you can see the difference in behavior:
No lock
running 2 tests
Starting test one
Starting test two
Finishing test two
Finishing test one
test one ... ok
test two ... ok
With lock
running 2 tests
Starting test one
Finishing test one
Starting test two
test one ... ok
Finishing test two
test two ... ok
Ideally, you'd put the external resource itself in the Mutex to make the code represent the fact that it's a singleton and remove the need to remember to lock the otherwise-unused Mutex.
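For instance, a rough sketch of that variant, with a made-up FakeDevice standing in for the external resource:
use once_cell::sync::Lazy; // 1.4.0
use std::sync::Mutex;

// Hypothetical stand-in for the shared external library / API / hardware device.
struct FakeDevice {
    value: u32,
}

impl FakeDevice {
    fn poke(&mut self) -> u32 {
        self.value += 1;
        self.value
    }
}

// The resource itself lives inside the Mutex, so tests cannot touch it without locking.
static THE_DEVICE: Lazy<Mutex<FakeDevice>> = Lazy::new(|| Mutex::new(FakeDevice { value: 0 }));

#[test]
fn exercises_the_device() {
    let mut device = THE_DEVICE.lock().unwrap();
    assert!(device.poke() > 0);
}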
This does have the massive downside that a panic in a test (a.k.a. an assert! failure) will cause the Mutex to become poisoned, which will then cause subsequent tests to fail to acquire the lock. If you need to avoid that and you know the locked resource is in a good state (and () should be fine...), you can handle the poisoning:
let _shared = THE_RESOURCE.lock().unwrap_or_else(|e| e.into_inner());
If you need the ability to run a limited set of threads in parallel, you can use a semaphore. Here, I've built a poor one using Condvar with a Mutex:
use lazy_static::lazy_static; // 1.4.0
use std::{
    sync::{Condvar, Mutex},
    thread::sleep,
    time::Duration,
};

#[derive(Debug)]
struct Semaphore {
    mutex: Mutex<usize>,
    condvar: Condvar,
}

impl Semaphore {
    fn new(count: usize) -> Self {
        Semaphore {
            mutex: Mutex::new(count),
            condvar: Condvar::new(),
        }
    }

    fn wait(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        while *count == 0 {
            count = self.condvar.wait(count).map_err(|_| "unable to lock")?;
        }
        *count -= 1;
        Ok(())
    }

    fn signal(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        *count += 1;
        self.condvar.notify_one();
        Ok(())
    }

    fn guarded(&self, f: impl FnOnce() -> TestResult) -> TestResult {
        // Not panic-safe!
        self.wait()?;
        let x = f();
        self.signal()?;
        x
    }
}

lazy_static! {
    static ref THE_COUNT: Semaphore = Semaphore::new(4);
}

// Inside each test body:
THE_COUNT.guarded(|| {
    eprintln!("Starting test {}", id);
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test {}", id);
    Ok(())
})
See also:
How to limit the number of test threads in Cargo.toml?
You can always provide your own test harness. You can do that by adding a [[test]] entry to Cargo.toml:
[[test]]
name = "my_test"
# If your test file is not `tests/my_test.rs`, add this key:
#path = "path/to/my_test.rs"
harness = false
In that case, cargo test will compile my_test.rs as a normal executable file. That means you have to provide a main function and add all the "run tests" logic yourself. Yes, this is some work, but at least you can decide everything about running tests yourself.
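As a rough sketch of what such a harness might look like (the test functions and the sequential runner here are made up for illustration):
// tests/my_test.rs -- compiled as a plain binary because of `harness = false`
fn test_shared_device_first() {
    assert_eq!(2 + 2, 4);
}

fn test_shared_device_second() {
    assert!(1 < 2);
}

fn main() {
    // We decide the ordering ourselves -- here, strictly sequential.
    let tests: &[(&str, fn())] = &[
        ("test_shared_device_first", test_shared_device_first),
        ("test_shared_device_second", test_shared_device_second),
    ];
    for (name, test) in tests {
        print!("running {} ... ", name);
        test();
        println!("ok");
    }
}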
You can also create two test files:
tests/
- sequential.rs
- parallel.rs
You would then need to run cargo test --test sequential -- --test-threads=1 and cargo test --test parallel. So it doesn't work with a single cargo test invocation, but you don't need to write your own test harness logic.

Delaying actions using Decentraland's ECS

How do I make an action occur with a delay, or after a timeout?
The setTimeout() function doesn’t work in Decentraland scenes, so is there an alternative?
For example, I want an entity to wait 300 milliseconds after it’s clicked before I remove it from the engine.
To implement this you'll have to create:
A custom component to keep track of time
A component group to keep track of all the entities with a delay in the scene
A system that updates the timers on all these components on each frame
It sounds rather complicated, but once you've created one delay, implementing another one only takes a line.
The component:
@Component("timerDelay")
export class Delay implements ITimerComponent {
    elapsedTime: number;
    targetTime: number;
    onTargetTimeReached: (ownerEntity: IEntity) => void;
    private onTimeReachedCallback?: () => void

    /**
     * @param millisecs amount of time in milliseconds
     * @param onTimeReachedCallback callback for when time is reached
     */
    constructor(millisecs: number, onTimeReachedCallback?: () => void) {
        this.elapsedTime = 0
        this.targetTime = millisecs / 1000
        this.onTimeReachedCallback = onTimeReachedCallback
        this.onTargetTimeReached = (entity) => {
            if (this.onTimeReachedCallback) this.onTimeReachedCallback()
            entity.removeComponent(this)
        }
    }
}
The component group:
export const delayedEntities = engine.getComponentGroup(Delay)
The system:
// define system
class TimerSystem implements ISystem {
update(dt: number){
for (let entity of delayedEntities.entities) {
let timerComponent = entity.getComponent(component)
timerComponent.elapsedTime += dt
if (timerComponent.elapsedTime >= timerComponent.targetTime){
timerComponent.onTargetTimeReached(entity)
}
})
}
}
// instance system
engine.addSystem(new TimerSystem())
Once all these parts are in place, you can simply do the following to delay an execution in your scene:
const myEntity = new Entity()
myEntity.addComponent(new Delay(1000, () => {
    log("time ran out")
}))
engine.addEntity(myEntity)
A few years late, but the OP's selected answer is somewhat deprecated now, because you can accomplish a delay by doing:
import { Delay } from "node_modules/decentraland-ecs-utils/timer/component/delay"

const ent = new Entity()
ent.addComponent(new Delay(3 * 1000, () => {
    // this code will run when time is up
}))
Read the docs.
Use the utils.Delay() function from the utils library.
This function takes the delay time in milliseconds and the function you want to execute.
Here's the full documentation, explaining how to add the library and how to use this function, including example code:
https://www.npmjs.com/package/decentraland-ecs-utils

d3 transition in unit-testing

I have a chart built with d3 that appears with transitions, and I need to test the chart once all transitions have ended. I use Jasmine for unit testing. How can I do this?
I found the method d3.timer.flush(), but it skips only the first frame; I want to skip all animations, see the final result right away, and make some assertions on it.
You can execute transitions synchronously, directly to their final state, with one call to D3's timer flush, if you mock out its timestamp determination during the flush like so:
With D3.js v4, do:
function flushAllD3Transitions() {
    var now = performance.now;
    performance.now = function() { return Infinity; };
    d3.timerFlush();
    performance.now = now;
}
With D3.js v3 and previous, do:
function flushAllD3Transitions() {
    var now = Date.now;
    Date.now = function() { return Infinity; };
    d3.timer.flush();
    Date.now = now;
}
Mocking out the transition altogether (to avoid the calculation overhead) yielded mixed results for me; for example, if your final state is created with an attrTween, the tween still needs to be executed.
See also our discussion in d3 issue 1789 and SO 14443724.