I have a background in RxJava and now I am getting started using Akka Streams.
When I need to compose a stream with the result of the first stream in RxJava, I usually do the following:
val fooBarObservable = fooObservable.flatMap { foo ->
    barObservable.map { bar -> someOperation(foo, bar) }
}
In this example, fooObservable emits the Foo type, barObservable emits the Bar type, and fooBarObservable emits the FooBar type.
Notes:
Observable is very similar to Source in Akka Streams.
barObservable would be a Flow in Akka Streams.
So what's the easy way to compose a stream like that in Akka?
You can use flatMapConcat, as follows:
final case class Foo(a: String)
final case class Bar(a: String)
val fooSource = Source(List(Foo("q-foo"), Foo("b-foo"), Foo("d-foo")))
val barSource = Source(List(Bar("q-bar"), Bar("b-bar"), Bar("d-bar")))
private val value: Source[String, NotUsed] = fooSource.flatMapConcat(foo => {
  barSource.map(bar => {
    foo.a + bar.a
  })
})
Running this: value.runWith(Sink.foreach(println)) will yield:
q-foo q-bar
q-foo b-bar
q-foo d-bar
b-foo q-bar
b-foo b-bar
b-foo d-bar
d-foo q-bar
d-foo b-bar
d-foo d-bar
Caveat: I'm not that familiar with RxJava, so I may misunderstand the semantics you are after.
Since a Flow is not terminated, you can't really combine its output with a source in the style of fooSource.operation(barFlow), as that wouldn't give you a reasonable graph of stages back; you'd first need to provide a barSource and combine the barFlow with that.
You could compose it the other way around though, for example like this:
val barFlow: Flow[Bar, Bar, NotUsed] = ???
val fooSource: Source[Foo, NotUsed] = ???
val fooToBarFooPairsFlow: Flow[Bar, (Bar, Foo), NotUsed] =
  barFlow.zip(fooSource)
This would tuple up pairs of bar and foo values and let you later run it with barSource.via(fooToBarFooPairsFlow).map { case (bar, foo) => op(bar, foo) }.
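For completeness, here is a minimal runnable sketch of that approach (my own, reusing the Foo/Bar case classes from the other answer); the identity barFlow just stands in for a real flow, and the materializer line is only needed on Akka versions before 2.6:

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

object ZipExample extends App {
  final case class Foo(a: String)
  final case class Bar(a: String)

  implicit val system = ActorSystem("zip-example")
  implicit val materializer = ActorMaterializer() // not needed on Akka 2.6+

  val fooSource: Source[Foo, NotUsed] = Source(List(Foo("q-foo"), Foo("b-foo")))
  val barSource: Source[Bar, NotUsed] = Source(List(Bar("q-bar"), Bar("b-bar")))
  val barFlow: Flow[Bar, Bar, NotUsed] = Flow[Bar] // identity, stands in for real processing

  val fooToBarFooPairsFlow: Flow[Bar, (Bar, Foo), NotUsed] = barFlow.zip(fooSource)

  barSource
    .via(fooToBarFooPairsFlow)
    .map { case (bar, foo) => op(bar, foo) }
    .runWith(Sink.foreach(println))

  def op(bar: Bar, foo: Foo): String = foo.a + " " + bar.a
}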
I currently have the following function:
fun createMask(mask: String) {
    val ssnField: mywidgets.SSNField = findViewById(R.id.editTextText)
    ssnField.hint = mask
}
To unit test this I want to wrap the untestable code within createMask into a closure. (The untestable code is the view layer logic that's difficult to instantiate and execute in a unit test.) Here is what I want to do in pseudo code:
createMask(closure, mask: String) {
    closure = mask // closure function returns pointer to property (depending on closure return type, might need to use setter: closure.set(mask))
}
With the above, the caller then does:
fun caller() {
    createMask({
        val ssnField: mywidgets.SSNField = findViewById(R.id.editTextText)
        return ssnField.hint
    }, "xxx-xx-xxx")
}
How do I make what is expressed in the pseudocode work in Kotlin?
You can return a reference to the property if you make createMask accept a parameter of type () -> KMutableProperty0<String>. Then you can call its set method:
fun createMask(mask: String, block: () -> KMutableProperty0<String>) {
    block().set(mask)
}

// caller
createMask("xxx-xx-xxx") {
    val ssnField = ...
    ssnField::hint
}
Alternatively, have the block return a (String) -> Unit ("any function that takes a string"), if you want to allow callers to pass any function that has the "form" of a setter.
fun createMask(mask: String, block: () -> (String) -> Unit) {
    block()(mask)
}

// caller
createMask("xxx-xx-xxx") {
    val ssnField = ...
    ssnField::hint.setter
}
Note that this method involves reflection, which may not be desirable. Alternatively, you can accept a closure that takes the string to be set, and let the caller set it in the closure:
fun createMask(mask: String, block: (String) -> Unit) {
    block(mask)
}

// caller
createMask("xxx-xx-xxx") {
    val ssnField = ...
    // note that rather than being responsible for returning a property,
    // the caller is responsible for assigning "it" to the property
    ssnField.hint = it
}
(I'm assuming createMask does more than just setting a property. Otherwise it is quite pointless...)
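To tie it back to testing: here is a minimal sketch (my own, not from the question) of exercising that last variant in a plain JVM test, with a local variable standing in for the untestable ssnField.hint property:

fun createMask(mask: String, block: (String) -> Unit) {
    block(mask)
}

fun main() {
    var capturedHint: String? = null               // stands in for ssnField.hint
    createMask("xxx-xx-xxx") { capturedHint = it } // the closure performs the assignment
    check(capturedHint == "xxx-xx-xxx")            // would be an assertEquals in a real test
}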
I have a struct which implements Deserialize and uses the serde(deserialize_with) on a field:
#[derive(Debug, Deserialize)]
struct Record {
    name: String,
    #[serde(deserialize_with = "deserialize_numeric_bool")]
    is_active: bool,
}
The implementation of deserialize_numeric_bool deserializes a numeric 0 or 1 to the corresponding boolean value:
use std::fmt;

use serde::de::{Deserializer, Error as DeserializeError, Visitor};

pub fn deserialize_numeric_bool<'de, D>(deserializer: D) -> Result<bool, D::Error>
    where D: Deserializer<'de>
{
    struct NumericBoolVisitor;

    impl<'de> Visitor<'de> for NumericBoolVisitor {
        type Value = bool;

        fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
            formatter.write_str("either 0 or 1")
        }

        fn visit_u64<E>(self, value: u64) -> Result<bool, E>
            where E: DeserializeError
        {
            match value {
                0 => Ok(false),
                1 => Ok(true),
                _ => Err(E::custom(format!("invalid bool: {}", value))),
            }
        }
    }

    deserializer.deserialize_u64(NumericBoolVisitor)
}
(I appreciate comments about code improvements)
I'd like to write unit tests for deserialization functions like deserialize_numeric_bool. Of course, my friendly search box revealed the serde_test crate and a documentation page about unit-testing.
But these resources couldn't help me in my case, as the crate tests a structure directly implementing Deserialize.
One idea I had was to create a newtype which only contains the output of my deserialize function and to test with that. But this looks like an unnecessary indirection to me.
#[derive(Deserialize)]
struct NumericBool {
    #[serde(deserialize_with = "deserialize_numeric_bool")]
    value: bool,
}
How do I write idiomatic tests for it?
My current solution uses only structures already provided by serde.
In my use case, I only wanted to test that a given input deserializes successfully into a bool or produces a certain error. The serde::de::value module provides simple deserializers for fundamental data types, for example U64Deserializer, which holds a u64. It also has an Error struct which provides a minimal implementation of the error traits, ready to be used for mocking errors.
My tests currently look like this: I mock the input with a deserializer and pass it to my function under test. I like that I don't need an indirection there and that I have no additional dependencies. It is not as nice as the assert_tokens* helpers provided by serde_test, as it needs the error struct and feels less polished. But for my case, where only a single value is deserialized, it fulfills my needs.
use std::error::Error;

use serde::de::IntoDeserializer;
use serde::de::value::{Error as ValueError, StrDeserializer, U64Deserializer};

#[test]
fn test_numeric_true() {
    let deserializer: U64Deserializer<ValueError> = 1u64.into_deserializer();
    assert_eq!(deserialize_numeric_bool(deserializer), Ok(true));
}

#[test]
fn test_numeric_false() {
    let deserializer: U64Deserializer<ValueError> = 0u64.into_deserializer();
    assert_eq!(deserialize_numeric_bool(deserializer), Ok(false));
}

#[test]
fn test_numeric_invalid_number() {
    let deserializer: U64Deserializer<ValueError> = 2u64.into_deserializer();
    let error = deserialize_numeric_bool(deserializer).unwrap_err();
    assert_eq!(error.description(), "invalid bool: 2");
}

#[test]
fn test_numeric_empty() {
    let deserializer: StrDeserializer<ValueError> = "".into_deserializer();
    let error = deserialize_numeric_bool(deserializer).unwrap_err();
    assert_eq!(error.description(), "invalid type: string \"\", expected either 0 or 1");
}
I hope it helps other folks too, or inspires someone to find a more polished version.
I've come across this question several times while trying to solve a similar problem recently. For future readers, pixunil's answer is nice, straightforward, and works well. However, I'd like to provide a solution using serde_test as the unit testing documentation mentions.
I researched how serde_test is used across a few crates that I found via its reverse dependencies on lib.rs. Several of them define small structs or enums for testing deserialization or serialization as you mentioned in your original post. I suppose doing so is idiomatic when testing would be too verbose otherwise.
Here are a few examples; this is a non-exhaustive list:
Example from time
Another example from time
Example from slab (tokio)
Example from bitcoin_hashes
Example from uuid
Example from euclid
Anyway, let's say I have a function to deserialize a bool from a u8 and another function that serializes a bool to a u8.
use serde::{
    de::{Error as DeError, Unexpected},
    Deserialize, Deserializer, Serialize, Serializer,
};

fn bool_from_int<'de, D>(deserializer: D) -> Result<bool, D::Error>
where
    D: Deserializer<'de>,
{
    match u8::deserialize(deserializer)? {
        0 => Ok(false),
        1 => Ok(true),
        wrong => Err(DeError::invalid_value(
            Unexpected::Unsigned(wrong.into()),
            &"zero or one",
        )),
    }
}

#[inline]
fn bool_to_int<S>(a_bool: &bool, serializer: S) -> Result<S::Ok, S::Error>
where
    S: Serializer,
{
    if *a_bool {
        serializer.serialize_u8(1)
    } else {
        serializer.serialize_u8(0)
    }
}
I can test those functions by defining a struct in my test module. This allows constraining the tests to those functions specifically instead of ser/deserializing a larger object.
#[cfg(test)]
mod tests {
    use super::{bool_from_int, bool_to_int};
    use serde::{Deserialize, Serialize};
    use serde_test::{assert_de_tokens_error, assert_tokens, Token};

    #[derive(Debug, PartialEq, Deserialize, Serialize)]
    #[serde(transparent)]
    struct BoolTest {
        #[serde(deserialize_with = "bool_from_int", serialize_with = "bool_to_int")]
        a_bool: bool,
    }

    const TEST_TRUE: BoolTest = BoolTest { a_bool: true };
    const TEST_FALSE: BoolTest = BoolTest { a_bool: false };

    #[test]
    fn test_true() {
        assert_tokens(&TEST_TRUE, &[Token::U8(1)])
    }

    #[test]
    fn test_false() {
        assert_tokens(&TEST_FALSE, &[Token::U8(0)])
    }

    #[test]
    fn test_de_error() {
        assert_de_tokens_error::<BoolTest>(
            &[Token::U8(14)],
            "invalid value: integer `14`, expected zero or one",
        )
    }
}
BoolTest is within the tests module which is gated by #[cfg(test)] as per usual. This means that BoolTest is only compiled for tests rather than adding bloat. I'm not a Rust expert, but I think this is a good alternative if a programmer wishes to use serde_test as a harness.
Is there a way in Rust to create a std::env::Args from a Vec<String> in order to use it in a #[test] function?
I wish to test a function that gets a std::env::Args as an argument, but I don't know how to create such an object with a list of arguments I supply for the test.
I wasn't able to figure this one out from the docs, the source nor from Google searches.
The fields of std::env::Args are not documented, and there doesn't appear to be a public function to create one with custom fields. So, you're outta luck there.
But since it's just "An iterator over the arguments of a process, yielding a String value for each argument" your functions can take a String iterator or Vec without any loss of functionality or type safety. Since it's just a list of Strings, it doesn't make much sense to arbitrarily limit your functions to strings which happen to come from the command line.
Looking through Rust's own tests, that's just what they do. There's a lot of let args: Vec<String> = env::args().collect();
There's even an example in rustbuild where they strip off the name of the program and just feed the list of arguments.
use std::env;

use bootstrap::{Config, Build};

fn main() {
    let args = env::args().skip(1).collect::<Vec<_>>();
    let config = Config::parse(&args);
    Build::new(config).build();
}
And bootstrap::Config::parse() looks like so:
impl Config {
    pub fn parse(args: &[String]) -> Config {
        let flags = Flags::parse(&args);
        ...
I'm not a Rust expert, but that seems to be how the Rust folks handle the problem.
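As a self-contained sketch of the same idea (the names here are mine, not from rustbuild): the function takes a slice of Strings, main feeds it from env::args(), and the test builds the Vec by hand:

use std::env;

// Works on any list of Strings, regardless of where it came from.
fn count_args(args: &[String]) -> usize {
    args.len()
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    println!("got {} arguments", count_args(&args));
}

#[cfg(test)]
mod tests {
    use super::count_args;

    #[test]
    fn counts_handmade_args() {
        let args = vec!["--verbose".to_string(), "input.txt".to_string()];
        assert_eq!(count_args(&args), 2);
    }
}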
@Schwern's answer is good, and it led me to this simpler version. Since std::env::Args implements Iterator with Item = String, you can do this:
use std::env;
fn parse<T>(args: T)
where
    T: Iterator<Item = String>,
{
    for arg in args {
        // arg: String
        print!("{}", arg);
    }
}

fn main() {
    parse(env::args());
}
To test, you provide parse with an iterator over String:
#[test]
fn test_parse() {
    let args = ["arg1", "arg2"].iter().map(|s| s.to_string());
    parse(args);
}
I've written a little macro to make this easier, based on @Rossman's answer (and therefore also on @Schwern's answer; thanks to both):
macro_rules! make_string_iter {
    ($($element: expr), *) => {
        {
            let mut v = Vec::new();
            $( v.push(String::from($element)); )*
            v.into_iter()
        }
    };
}
It can be used like this:
macro_rules! make_string_iter {
    ($($element: expr), *) => {
        {
            let mut v = Vec::new();
            $( v.push(String::from($element)); )*
            v.into_iter()
        }
    };
}

// We're using this function to test our macro
fn print_args<T: Iterator<Item = String>>(args: T) {
    for item in args {
        println!("{}", item);
    }
}

fn main() {
    // Prints a, b and c
    print_args(make_string_iter!("a", "b", "c"))
}
Or try it out on the Rust Playground.
I'm not (yet) an expert in Rust; any suggestions are highly welcome :)
I used the Graph DSL to create some stream processing jobs based on some example code I saw. Everything runs great; I am just having trouble understanding the notation (updated for 2.4):
def elements: Source[Foos] = ...
def logEveryNSink(n: Int) = ... // a sink that logs
def cleaner: Flow[Foos, Bars, Unit] = ...
def boolChecker(bar: Bar)(implicit ex: ExecutionContext): Future[Boolean] = ...

val mySink = Sink.foreach[Boolean](println(_))
val lastly = Flow[Bars].mapAsync(2)(x => boolChecker(x)).toMat(mySink)(Keep.right)

val materialized = RunnableGraph.fromGraph(
  GraphDSL.create(lastly) { implicit builder =>
    baz => {
      import GraphDSL.Implicits._
      val broadcast1 = builder.add(Broadcast[Foos](2))
      val broadcast2 = builder.add(Broadcast[Bars](2))
      elements ~> broadcast1 ~> logEveryNSink(1)
                  broadcast1 ~> cleaner ~> broadcast2 ~> baz
                                           broadcast2 ~> logEveryNSink(1)
      ClosedShape
    }
  }
).run()
I understand the implicit builder that is included, but I'm uncertain what baz represents in { implicit builder => baz => { ... }. Is it just a name for the entire shape?
The GraphDSL.create method is heavily overloaded to take varying numbers of input shapes (including zero). If you pass in no initial shapes, then the signature of the buildBlock function argument (the body where you actually define how the graph is to be built) is as follows:
(Builder[NotUsed]) => S
So this is simply a Function1[Builder[NotUsed], S], that is, a function that takes an instance of a Builder[NotUsed] and returns a Shape instance which is the final graph. The NotUsed here is synonymous with Unit: by not passing in any input shapes, you are saying that you do not care about the materialized value of the graph being produced.
If you do decide to pass in input shapes, then the signature of that buildBlock function changes a bit to accommodate them. In your case, you are passing in one input shape, so the signature of buildBlock changes to:
(Builder[Mat]) => Graph.Shape => S
Now, this is essentially a Function1[Builder[Mat], Function1[Graph.Shape, S]], or a function that takes a Builder[Mat] (where Mat is the materialized value type of the input shape) and returns a function that takes a Graph.Shape and returns an instance of S (which is a Shape).
Long story short, if you pass in shapes, then you also need to declare them as bound params on the graph building block function but as a second input function (hence the additional =>).
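To make the two overloads concrete, here is a small sketch of my own (assuming Akka Streams 2.4+, not taken from the question's code). With no imported shapes the builder function has a single parameter list; with one imported shape (a Sink here) a second parameter binds that shape, playing the same role as baz above:

import akka.stream.ClosedShape
import akka.stream.scaladsl.{GraphDSL, RunnableGraph, Sink, Source}

// No imported shapes: Builder[NotUsed] => Shape
val noShapes = RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._
  Source(1 to 3) ~> Sink.foreach[Int](println)
  ClosedShape
})

// One imported shape: Builder[Mat] => SinkShape[Int] => Shape
val sink = Sink.foreach[Int](println)
val oneShape = RunnableGraph.fromGraph(GraphDSL.create(sink) { implicit builder => s =>
  import GraphDSL.Implicits._
  Source(1 to 3) ~> s // `s` is the bound parameter, like `baz` in the question
  ClosedShape
})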
I want to create a function object, which also has some properties held on it. For example in JavaScript I would do:
var f = function() { }
f.someValue = 3;
Now in TypeScript I can describe the type of this as:
var f: { (): any; someValue: number; };
However I can't actually build it, without requiring a cast. Such as:
var f: { (): any; someValue: number; } =
    <{ (): any; someValue: number; }>(
        function() { }
    );
f.someValue = 3;
How would you build this without a cast?
Update: This answer was the best solution in earlier versions of TypeScript, but there are better options available in newer versions (see other answers).
The accepted answer works and might be required in some situations, but it has the downside of providing no type safety while building up the object. This technique will at least throw a type error if you attempt to add an undefined property.
interface F { (): any; someValue: number; }
var f = <F>function () { }
f.someValue = 3
// type error
f.notDeclared = 3
This is easily achievable now (TypeScript 2.x) with Object.assign(target, source), for example:
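(a minimal sketch; the someValue name comes from the question, everything else is illustrative)

const f = Object.assign(
    () => { console.log("called"); }, // the function part
    { someValue: 3 }                  // the extra properties
);

f();                      // ok
console.log(f.someValue); // ok: f is inferred as (() => void) & { someValue: number }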
The magic here is that Object.assign<T, U>(t: T, u: U) is typed to return the intersection T & U.
Enforcing that this resolves to a known interface is also straightforward. For example:
interface Foo {
    (a: number, b: string): string[];
    foo: string;
}

let method: Foo = Object.assign(
    (a: number, b: string) => { return a * a; },
    { foo: 10 }
);
which errors due to incompatible typing:
Error: foo:number not assignable to foo:string
Error: number not assignable to string[] (return type)
caveat: you may need to polyfill Object.assign if targeting older browsers.
TypeScript is designed to handle this case through declaration merging:
you may also be familiar with the JavaScript practice of creating a function and then extending the function further by adding properties onto the function. TypeScript uses declaration merging to build up definitions like this in a type-safe way.
Declaration merging lets us say that something is both a function and a namespace (internal module):
function f() { }
namespace f {
    export var someValue = 3;
}
This preserves typing and lets us write both f() and f.someValue. When writing a .d.ts file for existing JavaScript code, use declare:
declare function f(): void;
declare namespace f {
    export var someValue: number;
}
Adding properties to functions is often a confusing or unexpected pattern in TypeScript, so try to avoid it, but it can be necessary when using or converting older JS code. This is one of the only times it would be appropriate to mix internal modules (namespaces) with external.
So if the requirement is to simply build and assign that function to "f" without a cast, here is a possible solution:
var f: { (): any; someValue: number; };

f = (() => {
    var _f: any = function () { };
    _f.someValue = 3;
    return _f;
})();
Essentially, it uses a self executing function literal to "construct" an object that will match that signature before the assignment is done. The only weirdness is that the inner declaration of the function needs to be of type 'any', otherwise the compiler cries that you're assigning to a property which does not exist on the object yet.
EDIT: Simplified the code a bit.
Old question, but for versions of TypeScript starting with 3.1, you can simply do the property assignment as you would in plain JS, as long as you use a function declaration or the const keyword for your variable:
function f () {}
f.someValue = 3; // fine
const g = function () {};
g.someValue = 3; // also fine
var h = function () {};
h.someValue = 3; // Error: "Property 'someValue' does not exist on type '() => void'"
Reference and online example.
As a shortcut, you can dynamically assign the object value using the ['property'] accessor:
var f = function() { }
f['someValue'] = 3;
This bypasses the type checking. However, it is pretty safe because you have to intentionally access the property the same way:
var val = f.someValue; // This won't work
var val = f['someValue']; // Yeah, I meant to do that
However, if you really want the type checking for the property value, this won't work.
I can't say that it's very straightforward but it's definitely possible:
interface Optional {
    <T>(value?: T): OptionalMonad<T>;
    empty(): OptionalMonad<any>;
}

const Optional = (<T>(value?: T) => OptionalCreator(value)) as Optional;
Optional.empty = () => OptionalCreator();
If you're curious, this is from a gist of mine with the TypeScript/JavaScript version of Optional.
An updated answer: since the addition of intersection types via &, it is possible to "merge" two inferred types on the fly.
Here's a general helper that reads the properties of some object from and copies them over an object onto. It returns the same object onto but with a new type that includes both sets of properties, so correctly describing the runtime behaviour:
function merge<T1, T2>(onto: T1, from: T2): T1 & T2 {
    Object.keys(from).forEach(key => onto[key] = from[key]);
    return onto as T1 & T2;
}
This low-level helper does still perform a type-assertion, but it is type-safe by design. With this helper in place, we have an operator that we can use to solve the OP's problem with full type safety:
interface Foo {
    (message: string): void;
    bar(count: number): void;
}

const foo: Foo = merge(
    (message: string) => console.log(`message is ${message}`), {
        bar(count: number) {
            console.log(`bar was passed ${count}`)
        }
    }
);
Click here to try it out in the TypeScript Playground. Note that we have constrained foo to be of type Foo, so the result of merge has to be a complete Foo. So if you rename bar to bad then you get a type error.
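For illustration, calling the merged value then looks like this (a quick sketch):

foo("hello"); // the call signature declared in Foo
foo.bar(3);   // the merged-in method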
NB There is still one type hole here, however. TypeScript doesn't provide a way to constrain a type parameter to be "not a function". So you could get confused and pass your function as the second argument to merge, and that wouldn't work. So until this can be declared, we have to catch it at runtime:
function merge<T1, T2>(onto: T1, from: T2): T1 & T2 {
    if (typeof from !== "object" || from instanceof Array) {
        throw new Error("merge: 'from' must be an ordinary object");
    }
    Object.keys(from).forEach(key => onto[key] = from[key]);
    return onto as T1 & T2;
}
This departs from strong typing, but you can do
var f: any = function() { }
f.someValue = 3;
if you are trying to get around oppressive strong typing like I was when I found this question. Sadly this is a case TypeScript fails on perfectly valid JavaScript, so you have to tell TypeScript to back off.
"You JavaScript is perfectly valid TypeScript" evaluates to false. (Note: using 0.95)