Monad: Why does identity matter, and what happens if there's no such special member in a set?

I'm trying to learn the concept of a monad, and I'm watching this excellent video in which Brian Beckman tries to explain what a monad is.
When he talks about a monoid, it's a collection of types with a rule of composition, and this composition has to obey two rules:
associative: x # (y # z) = (x # y) # z
a special member in the collection: x # id = x and id # x = x
I'm using the # symbol to represent composition; id means the special member.
The second point is what I'm trying to understand. Why does this matter? What if there's no such special member?
When I learn a new concept, I always try to relate the abstract concept to some other concrete thing, so that I can fully understand it and learn it by heart.
So what I'm trying to relate monad and monoid to is Lego. All the building blocks in a Lego set form a collection, and the composition rule is combining them into new shapes of building blocks. It's obvious that this composition obeys the first rule: associativity. But there's no special building block that you can combine with another block and get that same block back, so Lego fails to obey the second rule.
But Lego is still highly composable. What is missing or lost when Lego fails to obey the second rule? What is the consequence?
Or, to put it another way: compared with other monoids that obey all those rules, what feature do those monoids have that Lego doesn't?

A monoid without an identity element is called a semigroup, and it's still a fine and useful construct. It just gives us something different. Consider, for example, a fold over a list. We can do this by mapping every element of the list to a monoid and then composing them all. But if you only have a semigroup, you can't fold a possibly empty list.
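To make the difference concrete, here is a small Haskell sketch (the function names are mine): with a Monoid, foldMap handles the empty list via mempty, while with only a Semigroup, the standard sconcat demands a non-empty list up front.
import Data.Monoid (Sum(..))
import Data.Semigroup (sconcat)
import Data.List.NonEmpty (NonEmpty(..))

-- With a monoid we can fold a possibly empty list: mempty covers the [] case.
total :: [Int] -> Int
total = getSum . foldMap Sum        -- total []      == 0
                                    -- total [1,2,3] == 6

-- With only a semigroup we must demand a non-empty list up front.
total1 :: NonEmpty Int -> Int
total1 = getSum . sconcat . fmap Sum    -- there is simply no empty input to worry about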
Consider another example -- the integers greater than zero versus the integers greater than or equal to zero. In the latter case we have a monoid, since zero is literally our zero element. So I can solve, for example, the equation "5 + x = 5". In the former case, with a semigroup, I can't solve that equation. Or I can say "you have no apples, I then give you five apples, how many do you have?" In a world without zero, we have to assume everyone starts with some apples to begin with! So, for the same reasons that having a zero lying around is important with numbers, it is handy to have a "generalized zero" hanging around with more abstract algebraic structures.
(Note this doesn't mean one or the other is "better" -- just that they are different, and the extra structure, when available, can come in handy. Also note that there is a universal way to turn a semigroup into a monoid by adjoining a zero element, so since all semigroup results lift into the 'completed' results on monoids, it tends to be more convenient, typically, to just treat things in terms of the latter.)
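In Haskell, that universal construction is essentially what wrapping a Semigroup in Maybe gives you: Nothing plays the role of the adjoined zero. A sketch, assuming a base version where Semigroup a => Monoid (Maybe a) holds:
import Data.Semigroup (First(..))

-- Data.Semigroup.First is a semigroup with no identity: (<>) keeps its left
-- argument, and no value can act as "nothing seen yet". Wrapping it in Maybe
-- adjoins exactly that missing identity, so foldMap works on empty lists too.
firstElem :: [a] -> Maybe a
firstElem = fmap getFirst . foldMap (Just . First)
-- firstElem ""    == Nothing
-- firstElem "abc" == Just 'a'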

The empty Lego could be considered the id, but then you would have to accept that empty space is Lego. But yes, if you don't want an id, then as sclv wrote, it would be a semigroup.

Related

Haskell - Why is Alternative implemented for List

I have read some of this post Meaning of Alternative (it's long)
What led me to that post was learning about Alternative in general. The post gives a good answer to why it is implemented the way it is for List.
My question is:
Why is Alternative implemented for List at all?
Is there perhaps an algorithm that uses Alternative and might be passed a List, so the instance is defined for the sake of generality?
I thought that because Alternative defines some and many by default, that may be part of it, but What are some and many useful for contains the comment:
To clarify, the definitions of some and many for the most basic types such as [] and Maybe just loop. So although the definition of some and many for them is valid, it has no meaning.
In the "What are some and many useful for" link above, Will gives an answer to the OP that may contain the answer to my question, but at this point in my Haskelling, the forest is a bit thick to see the trees.
Thanks
There's something of a convention in the Haskell library ecosystem that if a thing can be an instance of a class, then it should be an instance of the class. I suspect the honest answer to "why is [] an Alternative?" is "because it can be".
...okay, but why does that convention exist? The short answer there is that instances are sort of the one part of Haskell that succumbs only to whole-program analysis. They are global, and if two parts of the program both try to define a particular class/type pairing, that conflict prevents the program from working right. To deal with that, there's a rule of thumb that any instance you write should live in the same module as either the class it's associated with or the type it's associated with.
Since instances are expected to live in specific modules, it's polite to define those instances whenever you can -- since it's not really reasonable for another library to try to fix up the fact that you haven't provided the instance.
Alternative is useful when viewing [] as the nondeterminism monad. In that case, <|> represents a choice between two programs and empty represents "no valid choice". This is the same interpretation as for e.g. parsers.
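For example, with lists read as nondeterministic computations, <|> collects the results of either branch and empty contributes nothing:
Prelude Control.Applicative> [1,2] <|> [3]
[1,2,3]
Prelude Control.Applicative> empty <|> [3] :: [Int]
[3]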
some and many do indeed not make sense for lists, since they try to iterate through all possible lists of elements from the given options greedily, starting from the infinite list of just the first option. The list monad isn't lazy enough to do even that, since it might always need to abort if it was given an empty list. There is, however, one case in which both terminate: when given an empty list.
Prelude Control.Applicative> many []
[[]]
Prelude Control.Applicative> some []
[]
If some and many were defined lazily (in the regex sense), meaning they prefer short lists, you would get results out, but not very useful ones, since it starts by generating the infinitely many lists containing just the first option:
Prelude Control.Applicative> some' v = liftA2 (:) v (many' v); many' v = pure [] <|> some' v
Prelude Control.Applicative> take 100 . show $ (some' [1,2])
"[[1],[1,1],[1,1,1],[1,1,1,1],[1,1,1,1,1],[1,1,1,1,1,1],[1,1,1,1,1,1,1],[1,1,1,1,1,1,1,1],[1,1,1,1,1,"
Edit: I believe the some and many functions correspond to a star semiring, while <|> and empty correspond to plus and zero in a semiring. So mathematically (I think) it would make sense to split those operations out into a separate typeclass, but it would also be kind of silly, since they can be implemented in terms of the other operators in Alternative.
Consider a function like this:
fallback :: Alternative f => a -> (a -> f b) -> (a -> f e) -> f (Either e b)
fallback x f g = (Right <$> f x) <|> (Left <$> g x)
Not spectacularly meaningful, but you can imagine it being used in, say, a parser: try one thing, falling back to another if that doesn't work.
Does this function have a meaning when f ~ []? Sure, why not. If you think of a list's "effects" as being a search through some space, this function seems to represent some kind of biased choice, where you prefer the first option to the second, and while you're willing to try either, you also tag which way you went.
Could a function like this be part of some algorithm which is polymorphic in the Alternative it computes in? Again I don't see why not. It doesn't seem unreasonable for [] to have an Alternative instance, since there is an implementation that satisfies the Alternative laws.
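For instance, instantiating fallback at f ~ [] with some made-up inputs:
Prelude Control.Applicative> fallback x f g = (Right <$> f x) <|> (Left <$> g x)
Prelude Control.Applicative> fallback 3 (\x -> [x+1, x+2]) (\x -> [x*10])
[Right 4,Right 5,Left 30]
Both branches are searched, but the preferred one comes first and each result is tagged with the branch that produced it.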
As to the answer linked to by Will Ness that you pointed out: it covers that some and many don't "just loop" for lists. They loop for non-empty lists. For empty lists, they immediately return a value. How useful is this? Probably not very, I must admit. But that functionality comes along with (<|>) and empty, which can be useful.

What is the name of the data structure for and-or-lists (or and-or-trees) and where can I read about it?

I recently needed to make a data structure which was a nested list of and/or questions. Since most every interesting thing has been discovered by someone else previously, I'm looking for the name of this data structure. It looks something like this:
'((a b c) (b d e) (c (a b) (f a)))
The interpretation is that I want to find abc or bde or caf or caa or cbf or cba, and the list encapsulates that. At the top level each item is or'ed together, sub-lists of the top level are and'ed together, sub-lists of sub-lists are or'ed again, sub-lists of those are and'ed, and sub-lists of those or'ed, ad infinitum. Note that in my example all the lists are the same length; in my real application the lists vary in length.
The code to walk such a “tree” is relatively simple, but I’m assuming that there is a name for that type of tree and there is stuff I can read about it.
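For what it's worth, here is one way such an alternating or/and tree could be encoded and walked (a Haskell sketch; the type and the names are mine, not from any standard library):
data Node a = Atom a | Sub [Node a]

-- At an "or" level, each child contributes its own alternatives.
orAlts :: [Node a] -> [[a]]
orAlts = concatMap child
  where child (Atom x) = [[x]]
        child (Sub ns) = andAlts ns

-- At an "and" level, the children's alternatives are combined in sequence.
andAlts :: [Node a] -> [[a]]
andAlts = foldr (\n rest -> [x ++ y | x <- child n, y <- rest]) [[]]
  where child (Atom x) = [[x]]
        child (Sub ns) = orAlts ns

-- The example '((a b c) (b d e) (c (a b) (f a))) from above:
example :: [Node Char]
example = [ Sub [Atom 'a', Atom 'b', Atom 'c']
          , Sub [Atom 'b', Atom 'd', Atom 'e']
          , Sub [Atom 'c', Sub [Atom 'a', Atom 'b'], Sub [Atom 'f', Atom 'a']] ]

-- orAlts example == ["abc","bde","caf","caa","cbf","cba"]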
These lists are equivalent to fixed-length regular expressions (which I've seen referred to as "network expressions"), but I am particularly interested in this data structure and representations thereof.
In general (at a very high level of abstraction) it is a:
Context free grammar -Wiki
If you allow it to be infinitely nested, then it is not a regular expression, because of the presence of parentheses (left and right have to match).
If you consider that expressions inside parentheses are ordered -- I mean that a and b and c is equivalent to (a and b) and c -- then you get a Binary expression tree -Wiki
But for your particular case, it is probably: Disjunctive normal form -Wiki
I am not sure, but my intuition says that it is a regular expression again, because you have only 2 levels of nesting (the 1st for the 'or-ed' parts and the 2nd for the 'and-ed' parts).
The trees are also a subset of DAWGS - directed acyclic word graphs and one could construct them the same way.
In my case, I have a very small set that I have built by hand, and I don't worry about getting the minimal set; I just want something that I can easily write down but that deals with the kinds of simple variations I see. Basically, I have different ways of finding where I keep my .el files based upon the different directory structures of the various OSes I use. (E.g. when I was working at Google, the /usr/local/emacs/site-lisp directory was actually more like /usr/local/Google/emacs/site-lisp.)
I don't need a full regex, but there are about a dozen variations, some having quite long lists of nested sub-directories (c:\users\cfclark\appData\roaming\emacs.emacs.d or some other awful thing) that I wanted to write down (and then have emacs make an automated search to find the one that is appropriate to this particular installation). And every time I go to a new job, I can simply add to the list a description of where they are in that setup.
Anyway, as that code has evolved, I found that I was writing nested or's and and's, and realized that the structure generalized to the alternating or/and/or/and/... case. So my assumption is that someone must have discovered this before. I had hints of it myself several years ago, but didn't sit down to implement it. The Disjunctive Normal Form link mpasko256 gave is also particularly relevant. I don't normalize to that level -- I still keep nested and's and or's rather than flattening to 2 levels -- but I do have a distinct structure: or's at the top, then and's, then or's....

Why would I ever want to use Maybe instead of a List?

Seeing as the Maybe type is isomorphic to the set of null and singleton lists, why would anyone ever want to use the Maybe type when I could just use lists to accommodate absence?
Because if you match a list against the patterns [] and [x] that's not an exhaustive match and you'll get a warning about that, forcing you to either add another case that'll never get called or to ignore the warning.
Matching a Maybe against Nothing and Just x however is exhaustive. So you'll only get a warning if you fail to match one of those cases.
If you choose your types such that they can only represent values that you may actually produce, you can rely on non-exhaustiveness warnings to tell you about bugs in your code where you forget to check for a given case. If you choose more "permissive" types, you'll always have to think about whether a warning represents an actual bug or just an impossible case.
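A small illustration (the function names are made up): the Maybe version is an exhaustive match, while the list version forces either a warning or a clause that should "never happen".
-- Exhaustive: these two patterns cover every possible Maybe.
describeM :: Maybe Int -> String
describeM Nothing  = "no value"
describeM (Just x) = "one value: " ++ show x

-- Not exhaustive: [] and [x] miss lists of two or more elements, so the
-- compiler warns, or we add a clause that is "impossible" only by convention.
describeL :: [Int] -> String
describeL []  = "no value"
describeL [x] = "one value: " ++ show x
describeL _   = error "should never happen"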
You should strive to have accurate types. Maybe expresses that there is exactly one value or that there is none. Many imperative languages represent the "none" case by the value null.
If you chose a list instead of Maybe, all your functions would be faced with the possibility that they get a list with more than one member. Probably many of them would only be defined for one value, and would have to fail on a pattern match. By using Maybe, you avoid a class of runtime errors entirely.
Building on existing (and correct) answers, I'll mention a typeclass based answer.
Different types convey different intentions - returning a Maybe a represents a computation with the possibility of failing, while [a] could represent non-determinism (or, in simpler terms, multiple possible return values).
This plays into the fact that different types have different instances for typeclasses - and these instances cater to the underlying essence the type conveys. Take Alternative and its operator (<|>), which represents what it means to combine (or choose) between the arguments given.
Maybe a: combining computations that can fail just means taking the first one that is not Nothing.
[a]: combining two computations that each had multiple return values just means concatenating together all possible values.
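Concretely, in GHCi:
Prelude Control.Applicative> Just 1 <|> Just 2
Just 1
Prelude Control.Applicative> Nothing <|> Just 2
Just 2
Prelude Control.Applicative> [1,2] <|> [3,4]
[1,2,3,4]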
Then, depending on which types your functions use, (<|>) would behave differently. Of course, you could argue that you don't need (<|>) or anything like that, but then you are missing out on one of Haskell's main strengths: its many high-level combinator libraries.
As a general rule, we like our types to be as snug fitting and intuitive as possible. That way, we are not fighting the standard libraries and our code is more readable.
Lisp, Scheme, Python, Ruby, JavaScript, etc., manage to get along with just one type each, which you could represent in Haskell with a big sum type. Every function handling a JavaScript (or whatever) value must be prepared to receive a number, a string, a function, a piece of the document object model, etc., and throw an exception if it gets something unexpected. People who program in typed languages like Haskell prefer to limit the number of unexpected things that can occur. They also like to express ideas using types, making types useful (and machine-checked) documentation. The closer the types come to representing the intended meaning, the more useful they are.
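As a rough illustration (a hypothetical sketch, not any particular language's actual value type), that "one type" could be written in Haskell as a single big sum type, and every consumer has to be ready for all of its constructors:
import qualified Data.Map as Map

-- A rough model of the single value type a dynamically typed language uses.
data Value
  = VNumber Double
  | VString String
  | VBool   Bool
  | VNull
  | VArray  [Value]
  | VObject (Map.Map String Value)

-- Every function must handle every constructor, or fail at run time.
asNumber :: Value -> Double
asNumber (VNumber n) = n
asNumber _           = error "type error: expected a number"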
Because there are infinitely many possible list shapes, but only two possible shapes for a Maybe value. It perfectly represents one thing or the absence of something, without any other possibility.
Several answers have mentioned exhaustiveness as a factor here. I think it is a factor, but not the biggest one, because there is a way to consistently treat lists as if they were Maybes, which the listToMaybe function illustrates:
listToMaybe :: [a] -> Maybe a
listToMaybe [] = Nothing
listToMaybe (a:_) = Just a
That's an exhaustive pattern match, which rules out any straightforward errors.
The factor I'd highlight as bigger is that by using the type that more precisely models the behavior of your code, you eliminate potential behaviors that would be possible if you used a more general alternative. Say for example you have some context in your code where you use a type of the form a -> [b], though the only correct alternatives (given your program's specification) are empty or singleton lists. Try as hard as you may to enforce the convention that this context should obey that rule, it's still possible that you'll mess up and:
Somehow a function used in that context will produce a list of two or more items;
And somehow a function that uses the results produced in that context will observe whether the lists have two or more items, and behave incorrectly in that case.
Example: some code that expects there to be no more than one value will blindly print the contents of the list and thus print multiple items when only one was supposed to be there.
But if you use Maybe, then there really must be either one value or none, and the compiler enforces this.
Even though the types are isomorphic, there are practical differences; e.g. QuickCheck will run slower with lists because of the increase in search space.

Proper flow control in Prolog without using the non-declarative if-then-else syntax

I would like to check for an arbitrary fact and do something if it is in the knowledge base, and something else if it is not, but without the ( I -> T ; E ) syntax.
I have some facts in my knowledge base:
unexplored(1,1).
unexplored(2,1).
safe(1,1).
Given an incomplete rule:
foo :-
    safe(A,B),
    % do something if unexplored(A,B) is in the knowledge base
    % do something else if unexplored(A,B) is not in the knowledge base
What is the correct way to handle this, without doing it like this?
foo :-
    safe(A,B),
    ( unexplored(A,B) -> something ; something_else ).
Not an answer but too long for a comment.
"Flow control" is by definition not declarative. Changing the predicate database (the defined rules and facts) at run time is also not declarative: it introduces state to your program.
You should really consider very carefully if your "data" belongs to the database, or if you can keep it a data structure. But your question doesn't provide enough detail to be able to suggest anything.
You can however see this example of finding paths through a maze. In this solution, the database contains information about the problem that does not change. The search itself uses the simplest data structure, a list. The "flow control" if you want to call it this is implicit: it is just a side effect of Prolog looking for a proof. More importantly, you can argue about the program and what it does without taking into consideration the exact control flow (but you do take into consideration Prolog's resolution strategy).
The fundamental problem with this requirement is that it is non-monotonic:
Things that hold without this fact may suddenly fail to hold after adding such a fact.
This inherently runs counter to the important and desirable declarative property of monotonicity.
Declaratively, from adding facts, we expect to obtain at most an increase, never a decrease of the things that hold.
For this reason, your requirement is inherently linked to non-monotonic constructs like if-then-else, !/0 and setof/3.
A declarative way to reason about this is to entirely avoid checking properties of the knowledge base. Instead, focus on a clear description of the things that hold, using Prolog clauses to encode the knowledge.
In your case, it looks like you need to reason about states of some search problem. A declarative way to solve such tasks is to represent the state as a Prolog term, and write pure monotonic rules involving the state.
For example, let us say that a state S0 is related to state S if we explore a certain position Pos that was previously not explored:
state0_state(S0, S) :-
    select(Pos-unexplored, S0, S1),
    S = [Pos-explored|S1].
or shorter:
state0_state(S0, [Pos-explored|S1]) :-
    select(Pos-unexplored, S0, S1).
I leave figuring out the state representation I am using here as an easy exercise. Notice the convenient naming convention of using S0, S1, ..., S to chain the different states.
This way, you encode explicit relations about Prolog terms that represent the state. Pure, monotonic, and works in all directions.

What are some alternative forms of if > then relationships?

Traditional if > then relationship in pseudo code:
if (x>y) {
then print "x is greater than y."
}
There are also relational databases.
Or just visual if>then tables - a visual table representation.
There are also tree or hierarchical structure if>then programming aids.
I'm looking for any and all alternatives and flavors of if>then constructs, but preferably practical ones. Since most humans are better at using and remembering visual constructs (tables vs raw code) than symbolic constructs, I'm looking for the most intuitive way to theoretically construct an if>then rule engine, graphically.
Note: I'm not trying to implement this, I'm just trying to get an idea of what could theoretically be done.
I hope I've interpreted the question correctly.
Everything eventually boils down to comparisons; it's just a matter of breaking these comparisons up into manageable chunks for humans. There are many techniques to reduce if-thens, or at least transform them into something easier to understand.
One example would be polymorphism. This frees the programmer from one instance of if/then (basically a switch statement). Another example is maps: the implementation of a map uses if/thens, but one might pre-populate the map with all the data and use one logical piece of code instead of using if/then to differentiate. This moves toward a data-driven approach.

Another example is SQL; it is just a language, a higher-level construct, that enables us to express conditions and constraints differently. How you choose to express these conditions depends on the problem domain. Some problems work well with traditional procedural programming, some with logic programming, declarative programming, etc. If there are many levels of nested if-thens, a state-machine approach might work well. Aspect-oriented programming tries to solve the problem of duplicated code that doesn't belong specifically to any one module; a concern that "cross-cuts".
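For instance, the map-based, data-driven approach mentioned above might look like this Haskell sketch (the table contents are made up):
import qualified Data.Map as Map

-- Instead of a chain of if/then-else on the key, the rules live in data.
shippingCost :: String -> Maybe Double
shippingCost country = Map.lookup country costs
  where
    costs = Map.fromList [("US", 5.0), ("CA", 7.5), ("DE", 10.0)]

-- shippingCost "CA" == Just 7.5; an unknown key simply yields Nothing.
Adding a new rule then means adding a row of data rather than another branch of code.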
I would do some reading on Programming Paradigms. Do lots of research and if you run into a recurring problem, see if another approach allows you to reduce the amount of if-thens. Most times someone else has run into the same problem and come up with a solution.
Your question is a bit broad and we could ramble from logical gates to mathematical functions. I'm going to focus on this particular bit:
"I'm looking for the most intuitive way to theoretically construct an if>then rule engine, graphically".
First, two caveats:
The best representation depends on the number of possible rules. What works for 3-4 rules probably won't work for 30-40.
I'm going to pretend that else conditions don't exist.
If "X then Y" boils down to: one condition and one instruction whose execution depends on the condition. Let's pretend X -> Y means that "If X is true then Y is executed". Let's create two sets: one is C that contains all the possible conditions. The other one is I which contains all the possible instructions.
With this in mind, X ∈ C and Y ∈ I. In your specific case, can Y ∈ C (can Y be a condition)? If so, you have nested ifs.
Nested ifs can be represented as chains of conditions joined by and operators:
if (x > 3) {
    if (y > 5) {
        # do something
    }
}
Can be written as:
if (x > 3 and y > 5) {
    # do something
}
If you're only thinking about code, then the latter can become problematic when you have many nested conditions; but when you go graphical, nesting (probably using tree-like structures) can look cluttered, while chaining usually looks like a sequence of instructions (which I think is better).
If you don't consider nesting (chaining) in your rules, then connecting elements (boxes, circles, etc.) from X -> Y is a trivial way to work. The representation of this depends on how graphical you want to get (see the links below for some examples).
If you're considering nesting then three random ideas come to my mind:
Venn Diagrams: Visually attractive, useless for more than 3-4 conditions. They have a good fit with database representations. See: http://share.mheroin.com/image/3i3l1y0S2F39
Flowcharts: Highly functional and easy to read, not too cumbersome to create. Can get out of hand with 10+ elements. See: http://share.mheroin.com/image/2g071j3U1u29
Tables: As you mentioned, tables are a decent way to represent conditionals as long as you can restrain the set of applicable rules. This is an example taken from iTunes: http://share.mheroin.com/image/390y2G18123q. The "Match [all/any] of the following rules" works as a replacement for if/else.