Can crypto-browserify pbkdf2Sync replace scrypt-async?

I use an npm library called "scrypt-async". It exposes a method like:
scryptAsync("mypassword", "mysalt", {
  N: 16384,
  r: 8,
  p: 1,
  dkLen: 64,
  encoding: 'hex'
}, function (derivedKey: string) {
  res(derivedKey);
});
And it works well. I want to replace this module with the more generic "crypto-browserify", using:
crypto.pbkdf2Sync("mypassword", "mysalt", 16384, 64, "sha512").toString('hex')
But no matter which options I change, I cannot get the same output.

What does the ** (Double splat) in Crystal lang do?

What does the ** prefix do in this method call using Crystal-lang? This is from the Shrine file-upload package. Can you explain how I would use a double splat?
class FileImport::AssetUploader < Shrine
  def generate_location(io : IO | UploadedFile, metadata, context, **options) # HERE
    name = super(io, metadata, **options)
    File.join("imports", context[:model].id.to_s, name)
  end
end

FileImport::AssetUploader.upload(file, "store", context: { model: YOUR_ORM_MODEL })
According to the official docs:
A double splat (**) captures named arguments that were not matched by
other parameters. The type of the parameter is a NamedTuple.
def foo(x, **other)
  # Return the captured named arguments as a NamedTuple
  return other
end

foo 1, y: 2, z: 3 # => {y: 2, z: 3}
foo y: 2, x: 1, z: 3 # => {y: 2, z: 3}
The usefulness of the double splat is that it captures all named arguments. For example, you may create a function that handles any number of keyword arguments.
def print_any_tuple_with_any_keys(**named_tuple)
  named_tuple.each { |k, v| puts "Options #{k}: #{v}" }
end
print_any_tuple_with_any_keys(api: "localhost")
print_any_tuple_with_any_keys(fruit: "banana", color: "yellow")
print_any_tuple_with_any_keys(hash: "123", power: "2", cypher: "AES")
This will output:
Options api: localhost
Options fruit: banana
Options color: yellow
Options hash: 123
Options power: 2
Options cypher: AES
In the code you provided, all the named arguments passed to generate_location that do not match io, metadata, or context will be passed down via super to the parent class, in this case the Shrine class.
For Shrine specifically, the benefit is that it provides a generic upload function for different storage engines, and any extra arguments may or may not be used down the call tree. In the case of AWS S3 storage, for example, a metadata argument may add metadata to the uploaded file.

Flattening data with same performance for recursive $query->with

I am using Laravel 5.5.13.
Thanks to awesome help from an awesome member of SO, I currently get nested (and repeated) data by doing this:
public function show(Entity $entity)
{
    return $entity->with([
        'comments' => function ($query) {
            $query->with([
                'displayname',
                'helpfuls' => function ($query) {
                    $query->with('displayname');
                }
            ]);
        },
        'thumbs' => function ($query) {
            $query->with('displayname');
        }
    ])->firstOrFail();
}
This gives me example data like this: https://gist.githubusercontent.com/blagoh/ee5e70dfe35aa5c68b2d445c63887aaa/raw/a0612fb770a27eaacfbb1e87987aa4fd8902a8a3/nested.json
However I want to flatten it to this: https://gist.github.com/blagoh/7076be06c400d04941a0593267e11e81 - look at the version diff we see the changes:
https://gist.github.com/blagoh/7076be06c400d04941a0593267e11e81/revisions#diff-cb567797700e4d4b63b106653162c671R15
We see that line 15 is now "helpful_ids": [] and holds just an array of ids, and all the displaynames and helpfuls were moved to the top of the array on lines 45 and 78.
Is it possible to flatten this data, while keeping same query performance (or better)?

RethinkDB Multiple emits in Map

I've been trying out RethinkDB for a while and I still don't know how to do something like this MongoDB example:
https://docs.mongodb.com/manual/tutorial/map-reduce-examples/#calculate-order-and-total-quantity-with-average-quantity-per-item
In Mongo, in the map function, I could iterate over an array field of one document and emit multiple values.
In RethinkDB, I don't know how to set the key to emit in map, or how to return more than one value per document in the map function.
For example, i would like to get from this:
{
  'num': 1,
  'lets': ['a', 'b', 'c']
}
to
[
  { 'num': 1, 'let': 'a' },
  { 'num': 1, 'let': 'b' },
  { 'num': 1, 'let': 'c' }
]
I'm not sure if I should think this differently in RethinkDB or use something different from map-reduce.
Thanks.
I'm not familiar with Mongo; addressing your transformation example directly:
r.expr({
  'num': 1,
  'lets': ['a', 'b', 'c']
})
.do(function(row) {
  return row('lets').map(function(l) {
    return r.object(
      'num', row('num'),
      'let', l
    );
  });
});
You can, of course, use map() instead of do() if you're working with a sequence of documents rather than a single object.

How can I create parameterized tests in Rust?

I want to write test cases that depend on parameters. My test case should be executed for each parameter and I want to see whether it succeeds or fails for each parameter.
I'm used to writing things like that in Java:
@RunWith(Parameterized.class)
public class FibonacciTest {
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 5 }, { 6, 8 }
        });
    }

    private int fInput;
    private int fExpected;

    public FibonacciTest(int input, int expected) {
        fInput = input;
        fExpected = expected;
    }

    @Test
    public void test() {
        assertEquals(fExpected, Fibonacci.compute(fInput));
    }
}
How can I achieve something similar with Rust? Simple test cases are working fine, but there are cases where they are not enough.
#[test]
fn it_works() {
    assert!(true);
}
Note: I want the parameters as flexible as possible, for example: Read them from a file, or use all files from a certain directory as input, etc. So a hardcoded macro might not be enough.
The built-in test framework does not support this; the most common approach used is to generate a test for each case using macros, like this:
macro_rules! fib_tests {
    ($($name:ident: $value:expr,)*) => {
        $(
            #[test]
            fn $name() {
                let (input, expected) = $value;
                assert_eq!(expected, fib(input));
            }
        )*
    }
}
fib_tests! {
    fib_0: (0, 0),
    fib_1: (1, 1),
    fib_2: (2, 1),
    fib_3: (3, 2),
    fib_4: (4, 3),
    fib_5: (5, 5),
    fib_6: (6, 8),
}
This produces individual tests with names fib_0, fib_1, &c.
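The generated tests call a fib function that is assumed to be in scope; a minimal sketch of one (iterative, so it stays linear rather than exponentially recursive):

```rust
// A minimal Fibonacci the generated fib_0..fib_6 tests can call.
fn fib(n: u32) -> u32 {
    let (mut a, mut b) = (0u32, 1u32);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    // Spot-check the same pairs the macro invocation lists.
    for &(input, expected) in &[(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5), (6, 8)] {
        assert_eq!(expected, fib(input));
    }
    println!("all cases pass");
}
```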
My rstest crate mimics pytest syntax and provides a lot of flexibility. A Fibonacci example can be very neat:
use rstest::rstest;

#[rstest]
#[case(0, 0)]
#[case(1, 1)]
#[case(2, 1)]
#[case(3, 2)]
#[case(4, 3)]
#[case(5, 5)]
#[case(6, 8)]
fn fibonacci_test(#[case] input: u32, #[case] expected: u32) {
    assert_eq!(expected, fibonacci(input))
}
pub fn fibonacci(input: u32) -> u32 {
    match input {
        0 => 0,
        1 => 1,
        n => fibonacci(n - 2) + fibonacci(n - 1)
    }
}
Output:
/home/michele/.cargo/bin/cargo test
Compiling fib_test v0.1.0 (file:///home/michele/learning/rust/fib_test)
Finished dev [unoptimized + debuginfo] target(s) in 0.92s
Running target/debug/deps/fib_test-56ca7b46190fda35
running 7 tests
test fibonacci_test::case_1 ... ok
test fibonacci_test::case_2 ... ok
test fibonacci_test::case_3 ... ok
test fibonacci_test::case_5 ... ok
test fibonacci_test::case_6 ... ok
test fibonacci_test::case_4 ... ok
test fibonacci_test::case_7 ... ok
test result: ok. 7 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Every case is run as a single test case.
The syntax is simple and neat, and if you need to, you can use any Rust expression as the value in the case argument.
rstest also supports generics and pytest-like fixtures.
Don't forget to add rstest to dev-dependencies in Cargo.toml.
Probably not quite what you've asked for, but by using TestResult::discard with quickcheck you can test a function with a subset of a randomly generated input.
extern crate quickcheck;

use quickcheck::{quickcheck, TestResult};

fn fib(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fib(n - 1) + fib(n - 2),
    }
}

fn main() {
    fn prop(n: u32) -> TestResult {
        if n > 6 {
            TestResult::discard()
        } else {
            let x = fib(n);
            let y = fib(n + 1);
            let z = fib(n + 2);
            let ow_is_ow = n != 0 || x == 0;
            let one_is_one = n != 1 || x == 1;
            TestResult::from_bool(x + y == z && ow_is_ow && one_is_one)
        }
    }
    quickcheck(prop as fn(u32) -> TestResult);
}
I took the Fibonacci test from this Quickcheck tutorial.
P.S. And of course, even without macros and quickcheck you still can include the parameters in the test. "Keep it simple".
#[test]
fn test_fib() {
    for &(x, y) in [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5), (6, 8)].iter() {
        assert_eq!(fib(x), y);
    }
}
It's possible to construct tests based on arbitrarily complex parameters and any information known at build time (including anything you can load from a file) with a build script.
We tell Cargo where the build script is:
Cargo.toml
[package]
name = "test"
version = "0.1.0"
build = "build.rs"
In the build script, we generate our test logic and place it in a file using the environment variable OUT_DIR:
build.rs
fn main() {
    let out_dir = std::env::var("OUT_DIR").unwrap();
    let destination = std::path::Path::new(&out_dir).join("test.rs");
    let mut f = std::fs::File::create(&destination).unwrap();

    let params = &["abc", "fooboo"];
    for p in params {
        use std::io::Write;
        write!(
            f,
            "
#[test]
fn {name}() {{
    assert!(true);
}}",
            name = p
        ).unwrap();
    }
}
Finally, we create a file in our tests directory that includes the code of the generated file.
tests/generated_test.rs
include!(concat!(env!("OUT_DIR"), "/test.rs"));
That's it. Let's verify that the tests are run:
$ cargo test
Compiling test v0.1.0 (...)
Finished debug [unoptimized + debuginfo] target(s) in 0.26 secs
Running target/debug/deps/generated_test-ce82d068f4ceb10d
running 2 tests
test abc ... ok
test fooboo ... ok
Without using any additional packages, you can do it like this, since tests can return a Result type:
#[cfg(test)]
mod tests {
    fn test_add_case(a: i32, b: i32, expected: i32) -> Result<(), String> {
        let result = a + b;
        if result != expected {
            Err(format!(
                "{} + {} result: {}, expected: {}",
                a, b, result, expected
            ))
        } else {
            Ok(())
        }
    }

    #[test]
    fn test_add() -> Result<(), String> {
        [(2, 2, 4), (1, 4, 5), (1, -1, 0), (4, 2, 0)]
            .iter()
            .try_for_each(|(a, b, expected)| test_add_case(*a, *b, *expected))?;
        Ok(())
    }
}
You will even get a nice error message:
---- tests::test_add stdout ----
Error: "4 + 2 result: 6, expected: 0"
thread 'tests::test_add' panicked at 'assertion failed: `(left == right)`
left: `1`,
right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/test/src/lib.rs:194:5
Use the test-case crate: https://github.com/frondeus/test-case
Example:
#[test_case("some")]
#[test_case("other")]
fn works_correctly(arg: &str) {
    assert!(arg.len() > 0)
}
EDIT: This is now on crates.io as parameterized_test::create!{...} - Add parameterized_test = "0.2.0" to your Cargo.toml file.
Building off Chris Morgan’s answer, here's a recursive macro to create parameterized tests (playground):
macro_rules! parameterized_test {
    ($name:ident, $args:pat, $body:tt) => {
        with_dollar_sign! {
            ($d:tt) => {
                macro_rules! $name {
                    ($d($d pname:ident: $d values:expr,)*) => {
                        mod $name {
                            use super::*;
                            $d(
                                #[test]
                                fn $d pname() {
                                    let $args = $d values;
                                    $body
                                }
                            )*
                        }
                    }
                }
            }
        }
    }
}
You can use it like so:
parameterized_test!{ even, n, { assert_eq!(n % 2, 0); } }
even! {
one: 1,
two: 2,
}
parameterized_test! defines a new macro (even!) that will create parameterized tests taking one argument (n) and invoking assert_eq!(n % 2, 0);.
even! then works essentially like Chris' fib_tests!, though it groups the tests into a module so they can share a prefix (suggested here). This example results in two tests functions, even::one and even::two.
This same syntax works for multiple parameters:
parameterized_test!{ equal, (actual, expected), {
    assert_eq!(actual, expected);
}}

equal! {
    same: (1, 1),
    different: (2, 3),
}
The with_dollar_sign! macro used above, which essentially escapes the dollar signs in the inner macro, comes from @durka:
macro_rules! with_dollar_sign {
    ($($body:tt)*) => {
        macro_rules! __with_dollar_sign { $($body)* }
        __with_dollar_sign!($);
    }
}
I've not written many Rust macros before, so feedback and suggestions are very welcome.
Riffing off that great answer by Chris Morgan above, I offer my use of it here. Apart from minor refactoring, this extension allows for an evaluator function which gathers the "actual" value from the system under test. The output is pretty nice: my VS Code setup automatically expands the macro invocation into a list of test functions that may be individually invoked within the editor. In any event, since label becomes the corresponding test function name, cargo test allows easy test selection, as in cargo test length_
macro_rules! test_cases {
    ($($label:ident: $evaluator:ident $case:expr,)*) => {
        $(
            #[test]
            fn $label() {
                let (expected, input) = $case;
                assert_eq!(expected, $evaluator(input));
            }
        )*
    }
}
fn get_len(s: &str) -> usize {
    s.len()
}
test_cases! {
    length_0: get_len (0, ""), // comments are permitted
    length_1: get_len (2, "AB"),
    length_2: get_len (9, "123456789"),
    length_3: get_len (14, "not 14 long"),
}
Output...
running 4 tests
test length_0 ... ok
test length_1 ... ok
test length_2 ... ok
test length_3 ... FAILED
failures:
---- length_3 stdout ----
thread 'length_3' panicked at 'assertion failed: `(left == right)`
left: `14`,
right: `11`', src/lib.rs:17:1
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
length_3
test result: FAILED. 3 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Couchdb: relational database capabilities

Let's assume that I have a list of 239800 documents like the following:
{
  name: somename,
  data: { age: someage, income: somevalue, height: someheight, dumplings_consumed: somenumber }
}
I know that I can index the docs by doc.data.age, doc.data.income, height, and dumplings_consumed, and get a list of docs within a given range for each parameter. But how can I get a result for a query like the following:
List of the docs where age is between 25 and 30, income is less than $10, and height is more than 7 ft?
Is there a way to get multiple indexes working?
Assuming all three of your example query parameters need to remain dynamic, you would not be able to do such a join with a single CouchDB query. The simplest strategy would be to emit an index that lets you narrow down the "biggest" aspect/dimension of your data, and then filter the rest out in your app's code or a _list function.
Now, for filtering on two aspects of numeric data, GeoCouch could potentially be used — it provides a generic 2-dimensional index, not just limited to latitude and longitude! So you would emit points that contain (say) "age" and "income" mapped to x and y. You'd then query a bbox with first two "between" parameters, and then you'd only have to filter out height on the app side.
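The first strategy can be sketched without a server. The view and field names below mirror the question and are otherwise hypothetical; emit is simulated here so the filtering logic can be shown end to end. The view keys on age (the "biggest" dimension), the range query narrows on that key, and the app filters income and height:

```javascript
// Simulated CouchDB view machinery: `emit` collects (key, value) rows.
const rows = [];
function emit(key, value) { rows.push({ key, value }); }

// The view's map function, keyed on age:
function byAge(doc) {
  if (doc.data && typeof doc.data.age === 'number') {
    emit(doc.data.age, { name: doc.name, income: doc.data.income, height: doc.data.height });
  }
}

// Two sample docs shaped like the question's:
const docs = [
  { name: 'joe',  data: { age: 6,  income: 10, height: 20 } },
  { name: 'mike', data: { age: 27, income: 9,  height: 78 } },
];
docs.forEach(byAge);

// In CouchDB the key range would be startkey=25&endkey=30;
// the remaining two dimensions are filtered app-side.
const matches = rows
  .filter(r => r.key >= 25 && r.key <= 30)
  .filter(r => r.value.income < 10 && r.value.height > 7)
  .map(r => r.value.name);

console.log(matches); // logs [ 'mike' ]
```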
Let's have a look at:
http://guide.couchdb.org/draft/views.html
You can search with any expression you want (javascript code) and index documents with it.
For example, by means of Futon, you can create a test database and add the two following documents based on your question:
{
  "_id": "36fef0472fb7eec035c87e4f4b0381bf",
  "_rev": "12-4ef9014a3670a7e6acd58ad92d26fc1e",
  "data": { "age": 6, "income": 10, "height": 20, "dumplings_consumed": 5 },
  "name": "joe"
}

{
  "_id": "36fef0472fb7eec035c87e4f4b038ffa",
  "_rev": "8-f0a0a51b830bf3d4bc3ec5697440792f",
  "name": "mike",
  "data": { "age": 27, "income": 9, "height": 78, "dumplings_consumed": 256 }
}
Then, still with Futon, go to your database and create a temporary view with the following map function:
function(doc) {
  if (doc.name && doc.data && doc.data.age && doc.data.income && doc.data.height) {
    if (doc.data.age > 25 && doc.data.age < 30 && doc.data.income < 10 && doc.data.height > 7) {
      emit(doc.name, doc.data);
    }
  }
}
Just run it and you get the result.
With a permanent view, the first time the request is executed the internal B-tree is built, which takes time. Subsequent executions should be very fast, even if documents are added to the database (as long as their number is a fraction of the total).