So "idempotence" can be defined as:
An action that, if performed N times, has the same effect as performing the action only once.
Got it, easy enough.
My question is about a subtlety of this definition: is an action considered idempotent by itself, or must you also consider the data being passed into the action?
Let me clarify with an example:
Suppose I have a PUT method that updates some resource, we'll call it f(x)
Obviously, f(3) is idempotent, as long as I supply 3 as the input. And equally obvious, f(5) will change the value of the resource (i.e., it will no longer be 3 or whatever value was there previously)
So when we talk about idempotence, are we referring to the generalized action/function (i.e., f(x)), or to the action/function plus the data being passed into it (i.e., f(3))?
Suppose I have a PUT method that updates some resource, we'll call it f(x)
Obviously, f(3) is idempotent, as long as I supply 3 as the input. And equally obvious, f(5) will change the value of the resource (i.e., it will no longer be 3 or whatever value was there previously).
This is only obvious if the server implementation is such that PUT respects this idempotent property. In the context of HTTP, RFC 2616 says:
Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.
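To make the "server implementation" point concrete, here is a small made-up sketch in Clojure (an atom stands in for the server-side store; the handler names are hypothetical, not from any framework):

(def resources (atom {}))

;; Idempotent: repeating the same PUT with the same value leaves the same
;; state, no matter how many times it runs.
(defn put-resource! [id value]
  (swap! resources assoc id value))

;; Not idempotent: each repeat of the "same" request changes the state again.
(defn append-to-resource! [id value]
  (swap! resources update id (fnil conj []) value))

Only the first handler gives PUT the idempotence the RFC describes; whether it does is entirely up to the implementation.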
Going a bit off topic...
In a distributed system like the web, you may also want to consider commutativity and concurrent requests. For example, N+1 of the same PUT(x1) request should have the same effect as a single one, but you don't know whether another client made a different PUT(x2) request in between yours; so while nPUT(x1) = PUT(x1) and mPUT(x2) = PUT(x2), the two sets of requests could be interleaved, and the final state depends on which request arrives last.
Idempotence requires that the property hold for all values over the function's domain, i.e., f(f(x)) = f(x) for all x. Another way to think about it is that an operation is idempotent if the composition of the operation with itself is just that operation.
You're assuming idempotence means that the state of the server will be changed at most once by a series of invocations. Most of the time, people use this term to mean that the state on the server won't be changed at all by any number of invocations. Under these circumstances, the distinction between your two cases is immaterial.
This is not quite the definition of idempotence. A function is idempotent if for any item x, f(f(x)) == f(x).
PUT is a side effect of your f() function here, not the result of it.
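As a made-up illustration of the f(f(x)) = f(x) reading (not from the answers above): clamping a value into the 0-100 range is idempotent in this sense, while incrementing is not.

(defn clamp [x] (-> x (max 0) (min 100)))   ; clamp x into [0, 100]

(= (clamp (clamp 250)) (clamp 250))   ;=> true, clamp is idempotent
(= (inc (inc 3)) (inc 3))             ;=> false, inc is not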
I've come across the completing function on clojuredocs, but there is no doc at the moment.
Could you provide some examples?
completing is used to augment a binary reducing function, one that may not have a unary overload, with a unary "completion" arity.
The official transducers reference page hosted at clojure.org explains the purpose of the nullary, unary and binary overloads of transducing functions and includes a good example of when the unary "completion" arity is useful in the section called "Creating Transducers" (the example used is partition-all, which uses completion to produce the final block of output).
In short, the completion arity is used after all input is consumed and gives the transducing function the opportunity to perform any work to flush any buffers that the transducing process might maintain (as in the case of partition-all), apply any final transformations to the output (see below for an example of that) etc. Here by "transducing function" I mean the transducing function actually passed to transduce (or eduction or any similar function that sets up a transducing process) together with all the wrapping transducers.
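As a minimal made-up illustration of that completion step: completing can bolt a final transformation onto an ordinary reducing function such as +, and transduce will invoke it once after the last input.

;; + has nothing useful to do at completion time by default; completing
;; supplies a unary arity that runs once, after all input is consumed.
(transduce (map inc)
           (completing + (fn [total] {:total total}))
           0
           [1 2 3])
;; => {:total 9}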
For an interesting example of completing used with a non-trivial completion function, have a look at Christophe Grand's xforms: net.cgrand.xforms.rfs/str is a transduce-friendly version of clojure.core/str that will build up a string in linear time when used in a transduce call. (In contrast, clojure.core/str, if used in reduce/transduce, would create a new string at each step, and consequently run in O(n²) time.1) It uses completing to convert the StringBuilder that it uses under the hood to a string once it consumes all input. Here's a stable link to its definition as of the current tip of the master branch.
1 Note, however, that clojure.core/str runs in linear time if used with apply – it uses a StringBuilder under the hood just like net.cgrand.xforms.rfs/str. It's still convenient from time to time to have a transduce-friendly version (for use with transducers, or in higher-order contexts, or possibly for performance reasons when dealing with a large collection that can be reduced more efficiently than via the first/next looping that str uses).
I would like to check for an arbitrary fact and do something if it is in the knowledge base and something else if it is not, but without the ( I -> T ; E ) syntax.
I have some facts in my knowledge base:
unexplored(1,1).
unexplored(2,1).
safe(1,1).
Given an incomplete rule:
foo:- safe(A,B),
% do something if unexplored(A,B) is in the knowledge base
% do something else if unexplored(A,B) is not in the knowledge base
What is the correct way to handle this, without doing it like this?
foo:-
safe(A,B),
( unexplored(A,B) -> something ; something_else ).
Not an answer but too long for a comment.
"Flow control" is by definition not declarative. Changing the predicate database (the defined rules and facts) at run time is also not declarative: it introduces state to your program.
You should really consider very carefully if your "data" belongs to the database, or if you can keep it a data structure. But your question doesn't provide enough detail to be able to suggest anything.
You can, however, see this example of finding paths through a maze. In this solution, the database contains information about the problem that does not change. The search itself uses the simplest data structure, a list. The "flow control", if you want to call it that, is implicit: it is just a side effect of Prolog looking for a proof. More importantly, you can reason about the program and what it does without taking into consideration the exact control flow (but you do take into consideration Prolog's resolution strategy).
The fundamental problem with this requirement is that it is non-monotonic:
Things that hold without this fact may suddenly fail to hold after adding such a fact.
This inherently runs counter to the important and desirable declarative property of monotonicity.
Declaratively, from adding facts, we expect to obtain at most an increase, never a decrease of the things that hold.
For this reason, your requirement is inherently linked to non-monotonic constructs like if-then-else, !/0 and setof/3.
A declarative way to reason about this is to entirely avoid checking properties of the knowledge base. Instead, focus on a clear description of the things that hold, using Prolog clauses to encode the knowledge.
In your case, it looks like you need to reason about states of some search problem. A declarative way to solve such tasks is to represent the state as a Prolog term, and write pure monotonic rules involving the state.
For example, let us say that a state S0 is related to state S if we explore a certain position Pos that was previously not explored:
state0_state(S0, S) :-
    select(Pos-unexplored, S0, S1),
    S = [Pos-explored|S1].
or shorter:
state0_state(S0, [Pos-explored|S1]) :-
    select(Pos-unexplored, S0, S1).
I leave figuring out the state representation I am using here as an easy exercise. Notice the convenient naming convention of using S0, S1, ..., S to chain the different states.
This way, you encode explicit relations about Prolog terms that represent the state. Pure, monotonic, and works in all directions.
I'm trying to figure out if there is a macro similar to Clojure's delay that I can use to get a lazy expression/variable that can be evaluated later.
The use case is a default value for Map.get/3, since the default value comes from a database call, I'd prefer it to be called only when it's needed.
Elixir's macros can be used to write a simple wrapper function for conditional evaluation. I've put one gist in the following, though there may be a better/smarter way.
https://gist.github.com/parroty/98a68f2e8a735434bd60
"Generic" laziness is a bit of a tough nut to crack because it's a fairly broad question. Streams allow laziness for enumerables but I'm not sure what laziness for an expression would mean. For example what would a lazy form of x = 1 + 2 be? When would it be evaluated?
The thought that comes to mind for a lazy form of an expression is a procedure expression:
def x, do: 1 + 2
Because the value of x wouldn't be calculated until the expression is actually invoked (as far as I know). I'm sure others will correct me if I'm wrong on that point. But I don't think that's what you want.
Maybe you want to rephrase your question--leaving out streams and lazy evaluation of enumerated values.
One way to do this would be using processes. For example, the map could be wrapped in a process like a GenServer or an Agent, where the default value will be evaluated lazily.
The default value can be a function which makes the expensive call. If Map.get/3 isn't being used to return functions you can check if the value is a function and invoke it if it is returned. Like so:
def default_value() do
  expensive_db_call()
end

def get_something(dict, key) do
  # pass the function itself (not its result) as the default
  case Map.get(dict, key, &default_value/0) do
    value when is_function(value) ->
      value.() # invoke the default function and return the result of the call
    value ->
      value # key must have existed, return value
  end
end
Of course if the map contains functions this type of solution probably won't work.
Also check Elixir's Stream module. While I don't know that it would help solve your particular problem it does allow for lazy evaluation. From the documentation:
Streams are composable, lazy enumerables. Any enumerable that generates items one by one during enumeration is called a stream. For example, Elixir’s Range is a stream:
More information is available in the Stream documentation.
Map.get_lazy and Keyword.get_lazy hold off on generating the default until it is needed; the documentation is linked below:
https://hexdocs.pm/elixir/Map.html#get_lazy/3
https://hexdocs.pm/elixir/Keyword.html#get_lazy/3
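For the use case in the question, that boils down to something like the following sketch (map and :key are placeholders, and expensive_db_call/0 stands in for the real database call):

value = Map.get_lazy(map, :key, fn -> expensive_db_call() end)
# the anonymous function only runs when :key is missing from map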
You can wrap it in an anonymous function, then it will be evaluated when the function is called:
iex()> lazy = fn -> :os.list_env_vars() end
#Function<45.79398840/0 in :erl_eval.expr/5>
iex()> lazy.()
Is there a function in clojure that checks whether data contains some lazy part?
Background:
I'm building a small server in clojure. Each connection has a state, an input-stream and an output-stream
The server reads a byte from an input-stream, and based on the value calls one of several functions (with the state and the input and output stream as parameters). The functions can decide to read more from the input-stream, write a reply to the output stream, and return a state. This part loops.
This will all work fine, as long as the state doesn't contain any lazy parts. If there is some lazy part in the state, that may, when it gets evaluated (later, during another function), start reading from the input-stream and writing to the output-stream.
So basically I want to add a post-condition to all of these functions, stating that no part of the returned state may be lazy. Is there any function that checks for lazy sequences? I think it would be easy to check whether the state itself is a lazy sequence, but I want to check, for instance, whether the state has a vector that contains a hash-map, one of whose values is lazy.
It's easier to ensure that the state is not lazy by forcing evaluation with doall.
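A rough sketch of that idea, assuming the state is built from ordinary Clojure collections: a bare doall only realizes the top-level seq, so the sketch instead walks the whole value with clojure.walk/postwalk, which rebuilds nested seqs eagerly.

(require '[clojure.walk :as walk])

(defn force-state
  "Return state with every nested lazy seq realized, so nothing is left
  to read from the streams later."
  [state]
  (walk/postwalk identity state))

Each handler can then return (force-state new-state), which enforces (rather than merely checks) the "no lazy parts" condition.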
I had this problem in a stream-processing crypto app a couple of years back and tried several ways, until I finally accepted my lazy side and wrapped the input streams in a lazy sequence that closed the input streams when no more data was available, effectively separating the concern of closing the streams from the concern over what the streams contained. The state you are tracking sounds a little more sophisticated than open vs. closed, though you may be able to separate it in a similar manner.
You could certainly force evaluation with doall as Arther wisely suggests.
However I would recommend instead refactoring to solve the real problem, which is that your handler function has side effects (reading from input, writing to output).
You could instead turn this into a pure function if you did the following:
Wrap the input stream as a lazy sequence
use [input-sequence state] as input to your handler function
use [list-of-writes new-state rest-of-input-sequence] as output, where list of writes is whatever needs to be subsequently written to the output stream
If you do this, your handler function is pure, you just need to run it in a simple loop (sending the list-of-writes to the output stream on each iteration) until all input is consumed and/or some other termination condition is reached.
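A skeletal version of that loop (handle, write-all!, and the argument/return shapes are assumptions for the sketch, not an existing API):

(defn run-connection
  "Drive a pure handler over a lazy input sequence.
  handle takes the remaining input sequence and the current state and returns
  [list-of-writes new-state rest-of-input]; write-all! is the only
  side-effecting step and sends the writes to the output stream."
  [handle write-all! input-seq initial-state]
  (loop [input input-seq
         state initial-state]
    (if (empty? input)
      state
      (let [[writes new-state rest-input] (handle input state)]
        (write-all! writes)
        (recur rest-input new-state)))))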
I am trying to use TDD in my coding practice. I would like to ask: should I test with data that should not happen in a function, but that may possibly break the program?
Here is an easy example to illustrate what I am asking:
A ROBOT function that has one INT parameter. In this function I know that the valid range is only 0-100. If -1 or 101 is used, the function will break.
function ROBOT (int num){
...
...
...
return result;
}
So I decided on some automated test cases for this function...
1. function ROBOT with input argument 0
2. function ROBOT with input argument 1
3. function ROBOT with input argument 10
4. function ROBOT with input argument 100
But should I write test cases with input argument -1 or 101 for this ROBOT function if I already guard against those values in the other functions that call ROBOT?
5. function ROBOT with input argument -1
6. function ROBOT with input argument 101
I don't know if it is necessary, because I think it is redundant to test -1 and 101. And if it really is necessary to cover all the cases, I have to write more code to guard against -1 and 101.
So, in common TDD practice, would you write test cases for -1 and 101 as well?
Yes, you should test those invalid inputs. BUT, if your language has accessibility modifiers and ROBOT() is private you shouldn't be testing it; you should only test public functions/methods.
The functional testing technique is called Boundary Value Analysis.
If your range is 0-100, your boundary values are 0 and 100. You should test, at least:
below the boundary value
the boundary value
above the boundary value
In this case:
-1, 0, 1,
99, 100, 101
You assume everything from -1 down to negative infinity behaves the same, everything between 1 and 99 behaves the same, and everything above 101 behaves the same. This is called Equivalence Partitioning. The ranges outside and between the boundary values are called partitions, and you assume that they will have equivalent behaviour.
You should always consider using -1 as a test case to make sure nothing funny happens with negative numbers, and a text string if the parameter is not strongly typed.
If the expected outcome is that an exception is thrown with invalid input values, then a test that the exceptions get properly thrown would be appropriate.
Edit:
As I noted in my comment below, if these cases will break your application, you should throw an exception. If it really is logically impossible for these cases to occur, then I would say no, you don't need to throw an exception, and you don't need test cases to cover it.
Note that if your system is well componentized, and this function is one component, the fact that it is logically impossible now doesn't mean it will always be logically impossible. It may be used differently down the road.
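For concreteness, here is what those exception tests might look like in Clojure with clojure.test; the language, the robot stub, and the choice of IllegalArgumentException are all assumptions made for the sketch, not part of the question.

(require '[clojure.test :refer [deftest is]])

(defn robot
  "Stub standing in for the real ROBOT: valid input is 0-100."
  [num]
  (if (<= 0 num 100)
    (* num num)                                   ; placeholder result
    (throw (IllegalArgumentException. "num must be within 0-100"))))

(deftest robot-boundary-values
  (doseq [n [0 1 99 100]]                         ; on and just inside the boundaries
    (is (number? (robot n))))
  (doseq [n [-1 101]]                             ; just outside the boundaries
    (is (thrown? IllegalArgumentException (robot n)))))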
In short, if it can break, then you should test it. Also validate data at the earliest point possible.
The answer depends on whether you control the inputs passed to Robot. If Robot is an internal class (C#) and values only flow in from RobotClientX, which is a public type, then I'd put the guard checks in RobotClientX and write tests for it. I'd not write tests for Robot, because invalid values cannot materialize in between.
E.g., if I put my validations in the GUI such that all invalid values are filtered out at the source, then I don't check for invalid values in all the classes below the GUI (unless I've also exposed a public API which bypasses the GUI).
On the other hand, if Robot is publicly visible, i.e. anyone can call Robot with any value they please, then I need tests that document its behavior given specific kinds of input, invalid being one of them: e.g. if you pass an out-of-range value, it'd throw an ArgumentException.
You said your method will raise an exception if the argument is not valid.
So, yes you should, because you should test that the exception gets raised.
If other code guards against calling that method incorrectly, and no one else will be writing code to call that method, then I don't see a reason to test with invalid values. To me, it would seem a waste of time.
The programming by contract style of design and implementation draws attention to the fact that a single function (method) should be responsible for only some things, not for everything. The other functions that it calls (delegates to) and which call it also have responsibilities. This partition of responsibilities is at the heart of dividing the task of programming into smaller tasks that can be performed separately. The contract part of programming by contract is that the specification of a function says what a function must do if and only if the caller of the function fulfills the responsibilities placed on the caller by that specification. The requirement that the input integer is within the range [0,100] is that kind of requirement.
Now, unit tests should not test implementation details. They should test that the function conforms to its specification. This enables the implementation to change without the tests breaking. It makes refactoring possible.
Combining those two ideas, how can we write a test for a function that is given some particular invalid input? We should check that the function behaves according to the specification. But the specification does not say what the function must do in this case. So we can not write any checks of the program state after the invalid function call; the behaviour is undefined. So we can not write such a test at all.
My answer is that, no, you don't want exceptions, you don't want to have to have ROBOT() check for out of range input. The clients should be so well behaved that they don't pass garbage values in.
You might want to document this: just say that clients must be careful about the values they pass in.
Besides, where are you going to get invalid values from? Well, user input, or by converting strings to numbers. But in those cases it should be the conversion routines that perform the checks and give feedback about whether the values are valid or not. The values should be guaranteed to be valid long before they get anywhere near ROBOT()!