Why doesn't Crystal's type inference work on classes as expected? - crystal-lang

Why can I define a method like this in Crystal:
require "json"

def foo(bar) : String
  bar.to_json
end

foo({"x" => 1, "y" => 2})
but that kind of type inference doesn't work with classes:
class Foo
  def initialize(bar)
    @bar = bar
  end

  def foo : String
    @bar.to_json
  end
end

Foo.new({"x" => 1, "y" => 2}).foo
and it ends up with
Error: can't infer the type of instance variable '@bar' of Foo
What am I missing about Crystal's type inference and what is the workaround for this?

The equivalent class-based approach is to make the class generic:
require "json"

class Foo(T)
  def initialize(@bar : T)
  end

  def foo
    @bar.to_json
  end
end

puts Foo.new({"x" => 1, "y" => 2}).foo
Instance variables need their type set in one way or another, because running type flow analysis on them across the whole program is much harder and thus slower. Also, classes form the base of your program, so typing them as narrowly as possible not only makes the compiler's job easier, it also makes them easier to use. Overly broad type restrictions on instance variables can lead to quite long and confusing error messages.
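The other way to set the type is an explicit annotation on the instance variable itself. A minimal sketch, assuming the argument is always a Hash(String, Int32) as in the example above:

require "json"

class Foo
  # Explicit instance variable type instead of a generic type parameter.
  @bar : Hash(String, Int32)

  def initialize(@bar)
  end

  def foo
    @bar.to_json
  end
end

puts Foo.new({"x" => 1, "y" => 2}).foo

The generic version stays open to any argument type that responds to to_json, while the annotated version pins the class to one concrete hash type.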
You can read more at the original proposal introducing the change to require type annotations on instance variables: https://github.com/crystal-lang/crystal/issues/2390

Related

How do minElement's template arguments work?

In the function below, if I change a to kv:
void main()
{
    import std.algorithm.searching : minElement;
    import std.stdio : writeln;
    import std.array : byPair;

    long[string] aa = [
        "foo": 5,
        "bar": 10,
        "baz": 2000
    ];

    writeln(aa.byPair().minElement!"a.value"().value);
}
the compiler throws the following error message:
/dlang/dmd/linux/bin64/../../src/phobos/std/functional.d-mixin-215(215): Error: undefined identifier kv
/dlang/dmd/linux/bin64/../../src/phobos/std/algorithm/searching.d(1351): Error: template instance std.functional.binaryFun!("kv.value", "a", "b").binaryFun!(Tuple!(string, "key", long, "value"), Tuple!(string, "key", long, "value")) error instantiating
/dlang/dmd/linux/bin64/../../src/phobos/std/algorithm/searching.d(1314): instantiated from here: extremum!(__lambda2, "kv.value", MapResult!(__lambda2, Result), Tuple!(string, "key", long, "value"))
/dlang/dmd/linux/bin64/../../src/phobos/std/algorithm/searching.d(1398): instantiated from here: extremum!((a) => a, "kv.value", MapResult!(__lambda2, Result))
/dlang/dmd/linux/bin64/../../src/phobos/std/algorithm/searching.d(3550): instantiated from here: extremum!("kv.value", MapResult!(__lambda2, Result))
onlineapp.d(12): instantiated from here: minElement!("kv.value", MapResult!(__lambda2, Result))
But it compiles fine with just the "a.value" argument. What does this a mean?
minElement uses unaryFun to turn the passed string into a function. However, to do this it uses string mixins. The downside to this is the generated function doesn't have access to the context in which the string is created, and thus can't access the variables there.
As unaryFun's documentation says, the parameter name in the string must be a. This explains why kv fails.
Of course, as Adam D. Ruppe says, you should instead use the newer lambda syntax kv => kv.value. This allows you to use whatever parameter names you want, and allows access to the context, letting you do things like minElement!(kv => kv.value + aa["foo"]), which is simply impossible with the string functions.
Lastly, one of the possibly best reasons not to use the string functions is, as you've noticed, the error messages. Since the conversion from string to functions happens deep inside a stack of templates, you get a list of unrelated locations when the actual error is in your own code, while a lambda would show you exactly what's wrong in an easy-to-grok error message.
String parameters as functions appear as examples everywhere in std documentation, but when and how they work isn't documented very well. As you have noticed, std templates that take a function alias parameter can receive a string instead of an actual function.
This string is then converted to a "real" function using unaryFun or binaryFun which use mixin or some other magic. They name the parameters a and b, which you can use.
As Adam D. Ruppe has noted, you can also pass "normal" functions/delegates like minElement!(a => a.value)() or minElement!((a){ return a.value; }); the parameter names are then, of course, up to you.
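To make that concrete, here is the question's program rewritten with a lambda instead of a string predicate (a sketch along the lines suggested above):

void main()
{
    import std.algorithm.searching : minElement;
    import std.stdio : writeln;
    import std.array : byPair;

    long[string] aa = [
        "foo": 5,
        "bar": 10,
        "baz": 2000
    ];

    // A lambda instead of a string: any parameter name works,
    // and the enclosing scope (e.g. aa) remains accessible.
    writeln(aa.byPair().minElement!(kv => kv.value)().value);
}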

How do I access an object's eigenclass in Crystal?

In Ruby, it's possible to access the eigenclass (or "singleton class") of an object by reopening it. This is particularly useful for defining "private class methods":
class Foo
  class << self
    private

    def declarative_method_name
    end
  end

  declarative_method_name
end
# Foo.declarative_method_name => ERROR!
However, in Crystal this is not valid syntax:
Syntax error in ./test.cr:2: expecting token 'CONST', not '<<'
class << self
^
Is there another (or indeed any) way to achieve this in Crystal currently?
There's no eigenclass (more commonly called a singleton class in Ruby these days, given there's Object#singleton_class) in Crystal.
However, defining class methods and calling them at the class level is supported:
class Foo
  private def self.declarative_method_name
    puts "hey"
  end

  declarative_method_name
end
https://carc.in/#/r/1316
The def self. construct here is specialized by the compiler and there's no more general concept beneath it, yet.
How would you make a superclass's new method private while still allowing its subclasses' new to be public?
class Foo
  private def self.new; end
end

class Bar < Foo
end
Bar.new #=> error: private method 'new' called for Foo:Class
It's also worth noting here that unlike in Ruby, class variables don't transcend inheritance. In Ruby the following code has a strange side effect...
class Foo
  @@var = 'foo'

  def var
    @@var
  end
end

class Bar < Foo
  @@var = 'bar'
end

puts Foo.new.var
It'll print 'bar' despite the fact that we only modified the class variable on Bar. In Crystal it prints 'foo', meaning that another reason to reach for the eigenclass in Ruby, storing and reading class-level state safely, isn't necessary in Crystal: we can just use class variables.
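For comparison, a direct Crystal translation of that snippet (a sketch; per the behaviour described above it prints foo, because Bar gets its own class variable):

class Foo
  @@var = "foo"

  def var
    @@var
  end
end

class Bar < Foo
  @@var = "bar"
end

puts Foo.new.var # => foo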

Use RSpec to test that class methods are calling scopes

I have created rspec tests for my scopes (scope1, scope2 and scope3) and they pass as expected but I would also like to add some tests for a class method that I have which is what is actually called from my controller (the controller calls the scopes indirectly via this class method):
def self.my_class_method(arg1, arg2)
  scoped = self.all

  if arg1.present?
    scoped = scoped.scope1(arg1)
  end

  if arg2.present?
    scoped = scoped.scope2(arg2)
  elsif arg1.present?
    scoped = scoped.scope3(arg1)
  end

  scoped
end
It seems a bit redundant to run the same scope tests for each scenario in this class method when I know they already pass, so I assume I really only need to ensure that the different scopes are called/applied depending on the args being passed into this class method.
Can someone advise on what this RSpec test would look like?
I thought it might be something along the lines of
expect_any_instance_of(MyModel.my_class_method(arg1, nil)).to receive(:scope1).with(arg1, nil)
but that doesn't work.
Confirmation that this is all that's necessary to test in this situation, given I've already tested the scopes anyway, would also be reassuring.
The RSpec code you wrote is really testing the internal implementation of your method. You should test that the method returns what you want it to return given the arguments, not that it does so in a certain way. That way, your tests will be less brittle: for example, if you rename scope1, you won't have to rewrite your my_class_method tests.
I would do that by creating a number of instances of the class, then calling the method with various arguments and checking that the results are what you expect.
I don't know what scope1 and scope2 do, so I made an example where the arguments are a name attribute for your model and the scope methods simply retrieve all models except those with that name. Obviously, you should put whatever your real arguments and scope methods do in your tests, and modify the expected results accordingly.
I used the to_ary method for the expected results since the self.all call actually returns an ActiveRecord relation and therefore wouldn't otherwise match the expected array. You could probably use the include matcher (and its negation) instead of eq, but perhaps you care about the order or something.
describe MyModel do
  describe ".my_class_method" do
    # Could be helpful to use FactoryGirl here
    # Also note the bang (!) version of let
    let!(:my_model_1) { MyModel.create(name: "alex") }
    let!(:my_model_2) { MyModel.create(name: "bob") }
    let!(:my_model_3) { MyModel.create(name: "chris") }

    context "with nil arguments" do
      let(:arg1) { nil }
      let(:arg2) { nil }

      it "returns all" do
        expected = [my_model_1, my_model_2, my_model_3]
        expect_my_class_method_to_return expected
      end
    end

    context "with a first argument equal to a model's name" do
      let(:arg1) { my_model_1.name }
      let(:arg2) { nil }

      it "returns all except models with name matching the argument" do
        expected = [my_model_2, my_model_3]
        expect_my_class_method_to_return expected
      end

      context "with a second argument equal to another model's name" do
        let(:arg1) { my_model_1.name }
        let(:arg2) { my_model_2.name }

        it "returns all except models with name matching either argument" do
          expected = [my_model_3]
          expect_my_class_method_to_return expected
        end
      end
    end
  end

  private

  def expect_my_class_method_to_return(expected)
    actual = described_class.my_class_method(arg1, arg2).to_ary
    expect(actual).to eq expected
  end
end

scope not working on Mongoid (undefined method `to_criteria')

I invoke ReleaseSchedule.next_release in another controller and get the following error:
NoMethodError (undefined method `to_criteria' for #<ReleaseSchedule:0x007f9cfafbfe70>):
app/controllers/weekly_query_controller.rb:15:in `next_release'
release_schedule.rb
class ReleaseSchedule
  scope :next_release, ->(){ ReleaseSchedule.where(:release_date.gte => Time.now).without(:_id, :created_at, :updated_at).first }
end
That's not really a scope at all, that's just a class method wrapped up to look like a scope. There are two problems:
1. You're saying ReleaseSchedule.where(...) so you can't chain the "scope" (i.e. ReleaseSchedule.where(...).next_release won't do what it is supposed to do).
2. Your "scope" ends in first so it won't return a query, it just returns a single instance.
Number 2 is probably where your NoMethodError comes from.
If you really want it to be a scope for some reason then you'd say:
# No `first` or explicit class reference in here.
scope :next_release, -> { where(:release_date.gte => Time.now).without(:_id, :created_at, :updated_at) }
and use it as:
# The `first` goes here instead.
r = ReleaseSchedule.next_release.first
But really, you just want a class method:
def self.next_release
  where(:release_date.gte => Time.now).without(:_id, :created_at, :updated_at).first
end
The scope macro is, after all, just a fancy way to build class methods. The only reason we have scope is to express an intent (i.e. to build queries piece by piece) and what you're doing doesn't match that intent.

How to call function from hashmap in Scala

I'm pretty new to Scala and basically I want to have a couple of functions keyed by a string in a HashMap.
However, I get an error at subscribers.get(e.key)(e.EventArgs); stating Option[EventArgs => Unit] does not take parameters...
Example code:
object Monitor {
  val subscribers = HashMap.empty[String, (EventArgs) => Unit]

  def trigger(e: Event) {
    subscribers.get(e.key)(e.EventArgs);
  }

  def subscribe(key: String, e: (EventArgs) => Unit) {
    subscribers += key -> e;
  }
}
The get method of a Map gives you an Option of the value, not the value itself. Thus, if the key is found in the map, you get Some(value); if not, you get None. So you need to first "unroll" that option to make sure there actually is a function value you can invoke (call apply on):
def trigger(e: Event): Unit =
  subscribers.get(e.key).foreach(_.apply(e.EventArgs))
or
def trigger(e: Event): Unit =
  subscribers.get(e.key) match {
    case Some(value) => value(e.EventArgs)
    case None =>
  }
There are many posts around explaining Scala's Option type. For example this one or this one.
Also note Luigi's remark about using an immutable map (the default Map) with a var instead.
Since the get method returns an Option, you can use map on that:
subscribers.get(e.key).map(f => f(e.EventArgs))
or even shorter:
subscribers.get(e.key) map (_(e.EventArgs))
get only takes one argument. So subscribers.get(e.key) returns an Option, and you're trying to feed (e.EventArgs) to that Option's apply method (which doesn't exist).
Also, try making the subscribers a var (or choosing a mutable collection type). At the moment you have an immutable collection and an immutable variable, so your map cannot change. A more idiomatic way to declare it would be
var subscribers = Map[String, EventArgs => Unit]()
HashMap.get() in Scala works a bit differently than in Java. Instead of returning the value itself, get() returns an Option. Option is a special type that can have 2 values: Some(x) and None. In the first case it tells you "there's some value with such a key in the map". In the second case it tells you "nope, there's nothing (none) for this key in the map". This is done to force programmers to check whether the map actually has a value or not, and to avoid the NullPointerException that appears so frequently in Java code.
So you need something like this:
def trigger(e: Event) {
  val value = subscribers.get(e.key)
  value match {
    case None => throw new Exception("Oops, no such subscriber...")
    case Some(f) => f(e.EventArgs)
  }
}
You can find more info about Option type and pattern matching in Scala here.