Scope variables/guards - crystal-lang

Is it possible to have a variable that is guaranteed to be finalized on scope exit? Specifically, I want a guard: something that calls a specific function on initialization and another specific function on scope exit.

This is best done explicitly:
class Guard
  def initialize
    # Init things here
  end

  def close
    # Clean up things here
  end
end

def my_method
  guard = Guard.new
  # do things here
ensure
  guard.close
end
Then a common pattern is to provide a nicer interface using a yielding method. You'll see this a lot in the standard library when working with external resources:
class Guard
  def self.obtain
    guard = new
    yield guard
  ensure
    guard.close
  end

  # ...
end

Guard.obtain do |guard|
  # do things
end
Of course yielding the actual resource object is optional if the consumer should never interact with it.
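For readers coming from Python, the same acquire/release shape is what `contextlib.contextmanager` provides. A minimal sketch (the `Guard` class and `obtain` function here are illustrative, not part of any library):

```python
from contextlib import contextmanager

class Guard:
    def __init__(self):
        self.closed = False  # init things here

    def close(self):
        self.closed = True   # clean up things here

@contextmanager
def obtain():
    guard = Guard()
    try:
        yield guard
    finally:
        guard.close()  # runs on normal exit and on exceptions

with obtain() as g:
    pass  # do things

print(g.closed)  # True: the guard was closed on scope exit
```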

Related

Python class with class method

I have a Python class such as the following:
def function1():
    return 1 + 1

class Collection:
    ...

    @classmethod
    def get_allobjects(cls):
        # ..logic here..
        retval = function1()
function1 is defined outside the class but needs to be available to get_allobjects. What would be the best way to define this function? Should it be a class method, or can it be left as a stand-alone function in the same Python file (defined before the class)? Any advice or pointers on style would be appreciated.
Thanks,
Jeff
It really depends on the context. Your options:
a global function
As in your example code. This would make the function visible to all who import your module, just as the class itself is.
def function1():
    return 1 + 1

function1()
a global, but 'hidden' function
Use a name that starts with an underscore (_) to indicate that it's not meant to be used externally. People still can use it, but linters and style checkers may flag it.
def _function1():
    return 1 + 1

_function1()
a class method
This really only makes sense if the method needs access to something on the class, which is why it is passed a reference to the class.
@classmethod
def function1(cls):
    return 1 + cls.some_class_attribute

self.function1()
a static method
This may be what you're looking for, as it limits access to the function to whoever has access to the class (or an instance), but it does not give the function access to the class or instance itself.
@staticmethod
def function1():
    return 1 + 1

self.function1()
and finally, you may want to hide the class or static method
For the same reason as before, perhaps you only want methods in the class itself to have access, not everyone who has a hold of the class or an instance.
@staticmethod
def _function1():
    return 1 + 1

self._function1()
It all depends on how visible you want the function to be, and whether its logic is really independent of the class or only makes sense in the context of the class.
There are more options still, for example defining the function as a sub-function of the method itself, but that's only really sensible if the construction of the function somehow depends on the state of the class/object when it is constructed.

Examine method metadata (i.e. arity, arg types, etc)

In Crystal, is it possible to view metadata on a type's method at compile time? For example, to determine the number of arguments the method accepts, what the type restrictions on the arguments are, etc.
Looking through the API, the compiler's Def and Arg macros have methods which supposedly return this meta information, but I can't see a way of accessing them. I suspect the meta information is only accessible by the compiler.
I found out how to do it. I was looking in the wrong place in the API. Crystal::Macros::TypeNode has a methods macro which returns an array of method Defs (which is how you access them). It looks like the TypeNode class is the entry point to a lot of good macros.
Usage example
class Person
  def name(a)
    "John"
  end

  def test
    {{ @type.methods.map(&.name).join(", ") }}
  end
end
Or
{{ @type.methods.first.args.first.name }}
Simply returning the argument name poses an interesting problem, because, after the macro interpreter pastes it into the program, the compiler interprets the name as a variable (which makes sense).
But the real value is in being able to see the type restrictions of the method arguments:
class Public < Person
  def name(a : String)
    a
  end

  def max
    {{ @type.methods.first.args.first.restriction }}
  end
end

Public.new.max # => String
I suspect the meta information is only accessible by the compiler.
Exactly. Crystal does not have runtime reflection. You can do a lot with macros at compile time, but once it is compiled, type and method information is no longer available.
But since everything in a program is known at compile time, you shouldn't really need runtime reflection.
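For contrast, a language with runtime reflection can recover this metadata even after the program is built. In Python, for example, the standard inspect module reports arity and annotations at runtime (the Person class here just mirrors the Crystal example):

```python
import inspect

class Person:
    def name(self, a: str) -> str:
        return "John"

# Signature of the unbound function includes self.
sig = inspect.signature(Person.name)
print(len(sig.parameters))                    # 2: self and a
print(sig.parameters["a"].annotation is str)  # True
```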

How to test a class with a very complex constructor?

I have inherited some code which has some very complicated initialization in the constructor that depends on a very complex environment.
I only want to test some functionality and therefore just need an empty object, for example one which would have been generated by the default constructor; however, the default constructor has been overridden by some very complex stuff.
I do not have the ability to touch the source code, so I just need an empty object on which I can call methods for testing.
How would I do this? I've looked at mocking but I can't seem to get the actual functionality of the class into the mock object.
UPDATE #1: Example to try to clarify what I'm asking
class Foo(object):
    def __init__(self, lots_of_stuff):
        lotsofthingsbeingdone()

class Bar(Foo):
    def core_functionality(self, something):
        val = does_something_important(something)
        return val
I want to test core_functionality(). I want to feed it "something" and ensure that val meets my expectation of what it should be.
Use this wisely. Don't make the legacy mess bigger:
# No constructors executed.
empty_object = object.__new__(YourClass)
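A runnable sketch of that trick, with a stand-in for the complex constructor (the class and method bodies here are illustrative):

```python
class Foo:
    def __init__(self, lots_of_stuff):
        raise RuntimeError("needs a very complex environment")

class Bar(Foo):
    def core_functionality(self, something):
        return something * 2

# Allocate the object without running any __init__.
bar = object.__new__(Bar)
print(bar.core_functionality(21))  # 42
```

Note that any method relying on attributes set up in `__init__` will still fail on the empty object; this only helps when the functionality under test is self-contained.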

Why use Abstract Base Class instead of a regular class with NotImplemented methods?

I've used ABC's in Python to enforce coding to a particular interface. However, it seems that I can achieve essentially the same thing by just creating a class whose methods are not implemented and then inheriting and overriding in actual implementations. Is there a more Pythonic reason for why ABC's were added, or was this just to make this coding pattern more prominent in the language?
Below is a code snippet I wrote that implements my "NotImplemented" scheme to define an abstract "optimizer" interface:
class AbstractOptimizer(object):
    '''
    Abstract class for building specialized optimizer objects for each use case.
    The optimizer object will need to internally track previous results and other data so that it can
    determine the truth value of stopConditionMet().
    The constructor will require a reference argument to the useCaseObject, which will allow the optimizer
    to set itself up internally using fields from the use case as needed. There will need to be a close
    coupling between the optimizer code and the use case code, so it is recommended to place the optimizer
    class definition in the same module as the use case class definition.
    The interface includes a constructor and five public methods:
    0) AbstractOptimizer(useCaseObject) returns an instance of the optimizer
    1) getNextPoint() returns the current point to be evaluated
    2) evaluatePoint(point) returns the modeling outputs "modelOutputs" evaluated at "point"
    3) scorePoint(evaluationResults, metric) returns a scalar "score" for the results output by
       evaluatePoint according to metric. NOTE: Optimization is set up as a MINIMIZATION problem,
       so adjust metrics accordingly.
    4) stopConditionMet(score) returns True or False based on whatever past results are needed for this
       decision and the current score. If score is None, it is assumed to be the start of the optimization.
    5) getCurrentOptimum() returns the currently optimal point along with its iterationID and score
    The calling workflow will typically be:
    getNextPoint() -> evaluatePoint() -> scorePoint() -> stopConditionMet() -> repeat or stop
    '''

    def __init__(self, useCaseObject):
        '''
        useCaseObject is a reference to the use case instance associated with the optimizer.
        This optimizer will be "coupled" to this use case object.
        '''
        return NotImplemented

    def stopConditionMet(self, score=None):
        '''
        Returns True or False based on whether the optimization should continue.
        '''
        return NotImplemented

    def getNextPoint(self):
        '''
        If stopConditionMet() evaluates to False, then getNextPoint() should return a point;
        otherwise, it should return None.
        '''
        return NotImplemented

    def evaluatePoint(self, point):
        '''
        Returns the modeling outputs "modelOutputs" evaluated at "point".
        Will utilize the linked
        '''
        return NotImplemented

    def scorePoint(self, evaluationResults, metric):
        '''
        Returns a scalar "score" for the results evaluated at the current point (from evaluatePoint)
        based on the function "metric".
        Note that "metric" will likely need to be a closure with the evaluation results as bound objects.
        '''
        return NotImplemented

    def getCurrentOptimum(self):
        '''
        Returns the currently optimal point and its score as a dictionary:
        optimalPoint = {"point": point, "score": score}
        '''
        return NotImplemented
PEP 3119 discusses the rationale behind all of this:
...In addition, the ABCs define a minimal set of methods that establish the characteristic behavior of the type. Code that discriminates objects based on their ABC type can trust that those methods will always be present. Each of these methods is accompanied by a generalized abstract semantic definition that is described in the documentation for the ABC. These standard semantic definitions are not enforced, but are strongly recommended...
The statement above appears to be a key piece of information: an abstract base class provides guarantees that cannot be made by a standard object. These guarantees are paramount to paradigms in OOP-style development, particularly inspection:
On the other hand, one of the criticisms of inspection by classic OOP theorists is the lack of formalisms and the ad hoc nature of what is being inspected. In a language such as Python, in which almost any aspect of an object can be reflected and directly accessed by external code, there are many different ways to test whether an object conforms to a particular protocol or not. For example, if asking 'is this object a mutable sequence container?', one can look for a base class of 'list', or one can look for a method named '__getitem__'. But note that although these tests may seem obvious, neither of them are correct, as one generates false negatives, and the other false positives.
The generally agreed-upon remedy is to standardize the tests, and group them into a formal arrangement. This is most easily done by associating with each class a set of standard testable properties, either via the inheritance mechanism or some other means. Each test carries with it a set of promises: it contains a promise about the general behavior of the class, and a promise as to what other class methods will be available.
This PEP proposes a particular strategy for organizing these tests known as Abstract Base Classes, or ABC. ABCs are simply Python classes that are added into an object's inheritance tree to signal certain features of that object to an external inspector. Tests are done using isinstance() , and the presence of a particular ABC means that the test has passed.
So while, yes, you can use a plain object and mimic the behavior of an ABCMeta class, the point of ABCMeta is to reduce boilerplate, false positives, and false negatives, and to provide guarantees about the code that would otherwise not be possible without heavy lifting by you, the developer.
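A small sketch of the practical difference: the NotImplemented scheme lets an incomplete class be instantiated and fails only if the sentinel value ever leaks out, while abc refuses instantiation up front (the optimizer class names here are illustrative):

```python
from abc import ABC, abstractmethod

class SentinelOptimizer:
    def getNextPoint(self):
        return NotImplemented  # a sentinel value, not an error

class AbcOptimizer(ABC):
    @abstractmethod
    def getNextPoint(self):
        ...

# The sentinel scheme instantiates fine; the mistake surfaces later, if at all.
print(SentinelOptimizer().getNextPoint())  # NotImplemented

# The ABC refuses to instantiate until the abstract method is overridden.
try:
    AbcOptimizer()
except TypeError as exc:
    print("refused:", exc)

class ConcreteOptimizer(AbcOptimizer):
    def getNextPoint(self):
        return (0.0, 0.0)

print(ConcreteOptimizer().getNextPoint())  # (0.0, 0.0)
```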

TDD - Test if one method calls another

If method A's single role is to call method B, should I write a test that verifies method B is called when I call method A? Or is this a waste?
EDIT: I am editing to add some context. Here is my class:
module PaidGigs
  class UserValue
    def initialize(user)
      @user = user
    end

    def default_bid(multiplier = 3.5)
      PaidGigs::UserValue.cpm_value(@user.instagram_follower_count, multiplier)
    end

    def bid_value_including_markup(user_bid, multiplier = 3)
      user_bid + PaidGigs::UserValue.cpm_value(@user.instagram_follower_count, multiplier, 0)
    end

    def self.cpm_value(base_count, multiplier, rounder = -1)
      ((base_count.to_f / 1000) * multiplier).round(rounder)
    end
  end
end
Should I write a test that verifies '#default_bid' calls '.cpm_value' with the proper arguments? Is this a waste of time, or is there value in this?
Don't test that one method within a method calls another. This will lead to a more fragile test, and create a barrier (although a small one) to refactoring. Your test should not care about the internals that produce the result, only that it's correct.
The answer changes if the method is delegating to another class - then you absolutely need to test, in a unit test, that the delegation occurs correctly by mocking the delegate.
There's value in methods that call others within the class: the name of the method communicates its purpose (if it's named well), and that's good for the human readers of your class. Well-named methods (and everything else) are hugely powerful, and largely underutilized. The names of low-level methods within a class can create a mini-DSL (Domain Specific Language) within the class, helping the reader quickly understand what a high-level method is doing without taking the time to dig into the details.
What you're asking is whether it makes sense to explicitly test everything your object does, or whether testing it implicitly is enough.
This is mostly a matter of opinion.
Personally, I see no value in writing that test, because eventually you should be writing a test that mocks out the return value of the method and verifies that the function you're testing actually uses the mocked value. That way you implicitly test that you're calling the function: if you weren't using the method to get your end result, the value would not match your expected result.
Edit: A code example in something a bit more readable:
Calculator.cs
public interface IAddingService {
    int Add(int valueOne, int valueTwo);
}

public class Calculator {
    private IAddingService _addingService;

    public Calculator(IAddingService addingService) {
        _addingService = addingService;
    }

    public int AddNumbers(int valueOne, int valueTwo) {
        return _addingService.Add(valueOne, valueTwo);
    }
}
CalculatorTests.cs
public class CalculatorTests {
    public void test_adding_numbers() {
        var addingService = new Mock<IAddingService>();
        addingService.Setup(service => service.Add(1, 2)).Returns(2);

        var calculator = new Calculator(addingService.Object);
        var result = calculator.AddNumbers(1, 2);

        Assert.That(result, Is.EqualTo(2));
    }
}
In this example, I've implicitly tested that adding service is the way that we add things because there's no action that allows calculator to determine this on its own.
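The same implicit-verification idea can be sketched in Python with unittest.mock, mirroring the C# example (the class and method names are illustrative):

```python
from unittest.mock import Mock

class Calculator:
    def __init__(self, adding_service):
        self._adding_service = adding_service

    def add_numbers(self, value_one, value_two):
        return self._adding_service.add(value_one, value_two)

# Stub the collaborator with a value the real sum could never be, so the
# assertion can only pass if the calculator actually delegated to it.
adding_service = Mock()
adding_service.add.return_value = 2

calculator = Calculator(adding_service)
result = calculator.add_numbers(1, 2)

assert result == 2                                # came from the stub, not real addition
adding_service.add.assert_called_once_with(1, 2)  # delegation happened exactly once
print("delegation verified")
```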
I don't see why you wouldn't test it. It's easy to do, and it partially protects this method from being broken by further refactoring.