What does the ** (double splat) in Crystal lang do?

What does the ** prefix do in this method call in Crystal? The code is from the Shrine file upload package. Can you explain how I would use a double splat?
class FileImport::AssetUploader < Shrine
  def generate_location(io : IO | UploadedFile, metadata, context, **options) # HERE
    name = super(io, metadata, **options)
    File.join("imports", context[:model].id.to_s, name)
  end
end

FileImport::AssetUploader.upload(file, "store", context: { model: YOUR_ORM_MODEL })

According to the official docs:
A double splat (**) captures named arguments that were not matched by
other parameters. The type of the parameter is a NamedTuple.
def foo(x, **other)
  # Return the captured named arguments as a NamedTuple
  return other
end

foo 1, y: 2, z: 3    # => {y: 2, z: 3}
foo y: 2, x: 1, z: 3 # => {y: 2, z: 3}
The usefulness of the double splat is that it captures all unmatched named arguments, so you can, for example, write a method that handles any number of named arguments.
def print_any_tuple_with_any_keys(**named_tuple)
  named_tuple.each { |k, v| puts "Options #{k}: #{v}" }
end
print_any_tuple_with_any_keys(api: "localhost")
print_any_tuple_with_any_keys(fruit: "banana", color: "yellow")
print_any_tuple_with_any_keys(hash: "123", power: "2", cypher: "AES")
This will output:
Options api: localhost
Options fruit: banana
Options color: yellow
Options hash: 123
Options power: 2
Options cypher: AES
In the code you provided, any named arguments passed to generate_location that do not match io, metadata, or context are captured by **options and forwarded to super, that is, to the parent class's implementation, in this case the Shrine class.
For Shrine specifically this is useful because it provides a generic upload method for different storage engines: any extra named arguments may or may not be consumed further down the call tree. With AWS S3 storage, for example, a metadata argument can attach metadata to the uploaded file.
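Crystal's double splat is closely analogous to Python's **kwargs. As a cross-language sketch of the same forwarding pattern (the class and argument names here are hypothetical, chosen only to mirror the Shrine snippet above):

```python
class Base:
    def generate_location(self, io, metadata, **options):
        # Stand-in for the parent implementation: build a name from
        # whatever extra named arguments were forwarded to it
        return "file-" + "-".join(f"{k}={v}" for k, v in sorted(options.items()))

class AssetUploader(Base):
    def generate_location(self, io, metadata, **options):
        # **options captures named arguments not matched by io/metadata
        # and forwards them, unchanged, to the parent implementation
        name = super().generate_location(io, metadata, **options)
        return "imports/" + name

u = AssetUploader()
print(u.generate_location(None, {}, version=2, ext="png"))
# imports/file-ext=png-version=2
```

The subclass does not need to know which extra options the parent understands; it just passes everything through, exactly as the Crystal version does with super(io, metadata, **options).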

Related

Replacing string values in a FeatureCollection with numbers in google earth engine

I have a FeatureCollection with a column named Dominance which classifies regions by stakeholder dominance. In this case, Dominance contains string values: 'Small', 'Medium', 'Large' and 'Others'.
I want to replace these strings with 1, 2, 3 and 4. For that, I use the code below:
var Shape = ee.FeatureCollection('XYZ');
var Shape_custom = Shape.select(['Dominance']);
var conditional = function(feat) {
  return ee.Algorithms.If(feat.get('Dominance').eq('Small'),
                          feat.set({class: 1}),
                          feat);
};
var test = Shape_custom.map(conditional);
// This I plan to repeat for all classes
However, I am not able to change the values. The error I am getting is feat.get(...).eq is not a function.
What am I doing wrong here?
The error occurs because feat.get() returns a generic ee.ComputedObject, which has no eq() method; you would have to cast the value first (e.g. with ee.String()). The simplest way to do this kind of mapping, though, is with a dictionary. That way you do not need more code for each additional class.
var mapping = ee.Dictionary({
  'Small': 1,
  'Medium': 2,
  'Large': 3,
  'Others': 4
});
var mapped = Shape
  .select(['Dominance'])
  .map(function (feature) {
    return feature.set('class', mapping.get(feature.get('Dominance')));
  });
https://code.earthengine.google.com/8c58d9d24e6bfeca04e2a92b76d623a2
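The dictionary-lookup idea is language-independent. As a plain-Python sketch of the same value-to-code mapping (a list of dicts standing in for the FeatureCollection; not Earth Engine code):

```python
mapping = {'Small': 1, 'Medium': 2, 'Large': 3, 'Others': 4}

# Toy stand-ins for features with a 'Dominance' property
features = [{'Dominance': 'Small'}, {'Dominance': 'Large'}, {'Dominance': 'Others'}]

# One dictionary lookup replaces a chain of per-class conditionals
mapped = [dict(f, **{'class': mapping[f['Dominance']]}) for f in features]

print([f['class'] for f in mapped])  # [1, 3, 4]
```

Adding a fifth class means adding one dictionary entry, not another If branch.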

Can crypto-browserify's pbkdf2Sync replace scrypt-async?

I use an npm library called "scrypt-async". It exposes a method like:
scryptAsync("mypassword", "mysalt", {
  N: 16384,
  r: 8,
  p: 1,
  dkLen: 64,
  encoding: 'hex'
}, function (derivedKey: string) {
  res(derivedKey);
});
And it works well. I want to replace this module with the more generic "crypto-browserify", using its method:
crypto.pbkdf2Sync("mypassword", "mysalt", 16384, 64, "sha512").toString('hex')
But no matter which options I change, I cannot get the same output.
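That is expected: scrypt and PBKDF2 are different key-derivation functions, so no choice of pbkdf2Sync parameters will reproduce scrypt output. A quick sketch using Python's standard hashlib module (not the npm packages above, but the same two algorithms) shows the two deriving different keys from identical inputs:

```python
import hashlib

password = b"mypassword"
salt = b"mysalt"

# scrypt with the same cost parameters as the scrypt-async call
# (maxmem raised because N=16384, r=8 needs ~16 MiB of work memory)
scrypt_key = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1,
                            maxmem=2**26, dklen=64)

# PBKDF2-HMAC-SHA512 with 16384 iterations, as in the pbkdf2Sync call
pbkdf2_key = hashlib.pbkdf2_hmac("sha512", password, salt, 16384, dklen=64)

# Different KDFs: the derived keys do not match
print(scrypt_key == pbkdf2_key)  # False
```

If you need to verify keys previously derived with scrypt-async, you have to keep using scrypt (or re-derive and store new PBKDF2 keys); swapping the KDF silently invalidates every existing derived key.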

RethinkDB Multiple emits in Map

I've been trying out RethinkDB for a while and I still don't know how to do something like this MongoDB example:
https://docs.mongodb.com/manual/tutorial/map-reduce-examples/#calculate-order-and-total-quantity-with-average-quantity-per-item
In Mongo's map function, I could iterate over an array field of a single document and emit multiple values.
In RethinkDB, I don't know how to set the key to emit in map, or how to return more than one value per document from the map function.
For example, i would like to get from this:
{
  'num': 1,
  'lets': ['a', 'b', 'c']
}
to
[
  {'num': 1, 'let': 'a'},
  {'num': 1, 'let': 'b'},
  {'num': 1, 'let': 'c'}
]
I'm not sure if I should think this differently in RethinkDB or use something different from map-reduce.
Thanks.
I'm not familiar with Mongo; addressing your transformation example directly:
r.expr({
  'num': 1,
  'lets': ['a', 'b', 'c']
})
.do(function(row) {
  return row('lets').map(function(l) {
    return r.object(
      'num', row('num'),
      'let', l
    );
  });
})
You can, of course, use map() instead of do() if you are working with a sequence of documents rather than a single object.
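Conceptually this is a flat-map: each input document expands into one output document per element of its lets array. The same reshaping in plain Python, as a language-neutral sketch of what the ReQL above computes:

```python
doc = {'num': 1, 'lets': ['a', 'b', 'c']}

# One output row per element of 'lets', copying the scalar field alongside it
rows = [{'num': doc['num'], 'let': l} for l in doc['lets']]

print(rows)
# [{'num': 1, 'let': 'a'}, {'num': 1, 'let': 'b'}, {'num': 1, 'let': 'c'}]
```

Over a whole table you would flatten the per-document lists as well, which is exactly what chaining map() with concatMap() does in ReQL.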

Evaluate mixed types with eval()

I have two date/time variables, each containing a list of date/time values, and another variable containing a list of operators to apply between them. The format can be expressed as follows:
column1 = np.array([date1, date2, ..., dateN])
column2 = np.array([date1, date2, ..., dateN])
Both of the above variables are of date/time type. Then I have the following variable operator, which has the same length as column1 and column2:
operator = np.array(['>=','<=','==','=!',......])
I am getting "Invalid Token" with the following operation:
np.array([eval('{}{}{}'.format(v1,op,v2)) for v1,op,v2 in zip(column1,operator,column2)])
Any hint to get around this issue ?
-------------------EDIT----------------------
With some sample data and without eval I get the following output:
np.array(['{} {} {}'.format(v1,op,v2) for v1,op,v2 in zip(datelist1,operator,datelist2)])
array(['2017-03-30 10:30:22.928000 <= 2012-05-23 00:00:00',
'2011-01-07 00:00:00 == 2017-03-30 10:31:14.477000'],
dtype='|S49')
Once I bring in eval(), I get the following error:
eval('2011-01-07 00:00:00 == 2017-03-30 10:31:14.477000')
File "<string>", line 1
2011-01-07 00:00:00 == 2017-03-30 10:31:14.477000
^
SyntaxError: invalid syntax
----------------------EDIT & CORRECTIONS ----------------------------
The date/time variables that I mentioned before are actually of numpy datetime64 type, and I am now getting the following issue while trying to compare two dates with eval:
np.array([(repr(d1)+op+repr(d2)) for d1,op,d2 in zip(${Column Name1},${Operator},${Column Name2})])
The above snippet is tried over a table with three columns where ${Column Name1} and ${Column Name2} is of numpy.datetime64 type and ${Operator} is of string type. The result is as follows for one of the rows:
numpy.datetime64('2014-08-13T02:00:00.000000+0200')>=numpy.datetime64('2014-08-13T02:00:00.000000+0200')
Now I want to evaluate the above expression with function eval as follows:
np.array([eval(repr(d1)+op+repr(d2)) for d1,op,d2 in zip(${Column Name1},${Operator},${Column Name2})])
Eventually I get the following error:
NameError: name 'numpy' is not defined
I can guess the problem: the open-source tool I am using imports numpy as np, whereas repr() produces expressions that reference numpy, a name the tool does not recognize. If this is the problem, how do I fix it?
datetime objects can be compared:
In [506]: datetime.datetime.today()
Out[506]: datetime.datetime(2017, 3, 30, 10, 43, 18, 363747)
In [507]: t1=datetime.datetime.today()
In [508]: t2=datetime.datetime.today()
In [509]: t1 < t2
Out[509]: True
In [510]: t1 == t2
Out[510]: False
Numpy's own version of datetime objects can also be compared:
In [516]: nt1 = np.datetime64('2017-03-30 10:30:22.928000')
In [517]: nt2 = np.datetime64('2017-03-30 10:31:14.477000')
In [518]: nt1 < nt2
Out[518]: True
In [519]: nt3 = np.datetime64('2012-05-23 00:00:00')
In [520]: [nt1 <= nt2, nt2==nt3]
Out[520]: [True, False]
Using the repr string version of a datetime object works:
In [524]: repr(t1)+'<'+repr(t2)
Out[524]: 'datetime.datetime(2017, 3, 30, 10, 47, 29, 69324)<datetime.datetime(2017, 3, 30, 10, 47, 33, 669494)'
In [525]: eval(repr(t1)+'<'+repr(t2))
Out[525]: True
Not that I recommend that sort of construction. I like the dictionary mapping to an operator better.
You might want to use the Python operator module for this:
# import the operator functions used
from operator import ge, eq, le, ne

import numpy as np

# build a lookup table from string to operator function
ops = {">=": ge, "==": eq, "<=": le, "!=": ne}

# numbers used to simplify the example; this works on datetimes as well
a = np.array([1, 3, 5, 3])
b = np.array([2, 3, 2, 1])
operators = np.array(['>=', '<=', '==', '!='])

# evaluate each operation
[ops[op](x, y) for op, x, y in zip(operators, a, b)]
# [False, True, False, True]
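The same lookup-table approach works directly on datetime values and avoids eval() entirely, including the NameError from repr() output. A small sketch using numpy datetime64 values taken from the question (the pairings of dates and operators are chosen for illustration):

```python
import numpy as np
from operator import ge, eq, le, ne

# map operator strings to operator functions
ops = {">=": ge, "==": eq, "<=": le, "!=": ne}

d1 = np.array(['2017-03-30T10:30:22', '2011-01-07T00:00:00'], dtype='datetime64[s]')
d2 = np.array(['2012-05-23T00:00:00', '2017-03-30T10:31:14'], dtype='datetime64[s]')
operators = np.array(['<=', '=='])

# elementwise: apply each row's operator to that row's pair of dates
result = [bool(ops[op](x, y)) for op, x, y in zip(operators, d1, d2)]
print(result)  # [False, False]
```

No string is ever built or parsed, so neither the "invalid syntax" nor the "numpy is not defined" problem can arise.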

How to set a default value for a method argument

def my_method(options = {})
  # ...
end
# => Syntax error in ./src/auto_harvest.cr:17: for empty hashes use '{} of KeyType => ValueType'
While this is valid Ruby, it seems not to be valid Crystal; my suspicion is that this is because of typing. How do I tell the compiler that I want to default to an empty hash?
Use a default argument (like in Ruby):
def my_method(x = 1, y = 2)
  x + y
end

my_method x: 10, y: 20 # => 30
my_method x: 10        # => 12
my_method y: 20        # => 21
Using hashes for default/named arguments is strongly discouraged in Crystal.
(edited to include the sample instead of linking to the docs)
It seems the error had all the information I needed: I have to specify the types of the hash's keys and values.
def my_method(options = {} of Symbol => String)
  # ...
end
This is quite clearly in the docs too.
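As a cross-language aside: in dynamically typed Python the same signature needs no type annotation, but a literal {} default has its own well-known pitfall there, since the one dict is shared across calls. The usual idiom is a None sentinel (a generic sketch, unrelated to Crystal's type requirement):

```python
def my_method(options=None):
    # A fresh dict per call; a literal {} default would be created once
    # and mutated by every call that touches it
    if options is None:
        options = {}
    options.setdefault("seen", 0)
    options["seen"] += 1
    return options

print(my_method())  # {'seen': 1}
print(my_method())  # {'seen': 1}  (not 2: each call gets its own dict)
```

Crystal sidesteps this particular trap because default values are typed and evaluated per call, but the compiler still needs the key/value types spelled out, as the accepted answer shows.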