When to use XOR and when to use IFF?

Since ((NOT A) XOR B) and A→B ("if...then", ~A→~B) are logically the same (e.g. login cannot happen unless authentication happens), does that have any practical use, or is it just a logical tautology, so that the programmer can decide arbitrarily where to use XOR and where to use if...then?
An example rewrite: payment→delivery ("if payment() == 'complete' then delivery()") rewritten as ((NOT payment()) XOR delivery()).
EDIT: Truth table
A: payment
B: delivery
A  B  A'  (A→B)  (A' XOR B)  (A' OR B)
T  T  F   T      T           T
T  F  F   F      F           F
Here is where the failure is:
A  B  A'  (A→B)  (A' XOR B)  (A' OR B)
T  F  F   F      F           F
And a case where A is false simply means a false B; all of the rows below just mean that B is false (since NOT A).
A  B  A'  (A→B)  (A' XOR B)  (A' OR B)
F  T  T   T      F           T
F  F  T   T      T           T
or goal→score ∧ ¬goal→¬score, so that there is determinism and no other way (you can't score if you don't make a goal).

A→B is equivalent to (A' OR B). These are logically the same and can be used interchangeably.
Construct truth tables for A→B and (A' XOR B) and you'll see they are not logically equivalent.
EDIT: Here's the truth table
EDIT2: Updated truth table with (B iff A), which yes, is logically equivalent to (A' XOR B) and can be used interchangeably.
A  B  A'  (A→B)  (A' XOR B)  (A' OR B)  (B iff A)
T  T  F   T      T           T          T
T  F  F   F      F           F          F
F  T  T   T      F           T          F
F  F  T   T      T           T          T
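As a quick sanity check, here is a minimal C++ sketch (my addition, not from the thread) that enumerates all four rows; on bools, XOR is the != operator:
#include <iostream>

// Enumerate all four rows: A->B agrees with (NOT A) OR B everywhere,
// while (NOT A) XOR B instead agrees with (B iff A).
int main() {
    for (bool a : {true, false})
        for (bool b : {true, false}) {
            bool implies  = !a || b;    // A -> B, i.e. A' OR B
            bool xor_form = (!a) != b;  // A' XOR B
            bool iff      = (a == b);   // B iff A
            std::cout << a << " " << b << " : "
                      << implies << " " << xor_form << " " << iff << "\n";
        }
}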

How is "a monoid on applicative functors" different than "a monoid in the category of endofunctors"?

Perhaps neither of these statements is categorically precise, but a monad is often defined as "a monoid in the category of endofunctors"; a Haskell Alternative is defined as "a monoid on applicative functors", where an applicative functor is a "strong lax monoidal functor". Now these two definitions sound pretty similar to the ignorant (me), but work out significantly differently. The neutral element for Alternative has type f a and is thus "empty", while for Monad it has type a -> m a and thus has the sense "non-empty"; the operation for Alternative has type f a -> f a -> f a, and the operation for Monad has type (a -> f b) -> (b -> f c) -> (a -> f c). It seems to me that the really important detail is "in the category of endofunctors" versus "on applicative functors", though perhaps the "strong lax" detail in Alternative is important; but that's where I get confused, because within Haskell at least, monads end up being Alternatives, and I see that I do not yet have a precise categorical understanding of all the details here.
How can it be precisely expressed what the difference is between Alternative and Monad, such that they are both monoids relating to endofunctors, and yet the one has an "empty" neutral element and the other a "non-empty" one?
In general, a monoid is defined in a monoidal category, which is a category that defines some kind of (tensor) product of objects and a unit object.
Most importantly, the category of types is monoidal: the product of types a and b is just a type of pairs (a, b), and the unit type is ().
A monoid is then defined as an object m with two morphisms:
eta :: () -> m
mu :: (m, m) -> m
Notice that eta just picks an element of m, so it's equivalent to mempty, and curried mu becomes mappend of the usual Haskell Monoid class.
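In Haskell terms, the correspondence looks like this (a small sketch; the names eta and mu are mine, mirroring the morphisms above):
-- eta picks out the unit; mu is the uncurried monoid operation
eta :: Monoid m => () -> m
eta () = mempty

mu :: Monoid m => (m, m) -> m
mu (x, y) = mappend x y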
So that's the category of types and functions, but there is also a separate category of endofunctors and natural transformations. It's also a monoidal category: the tensor product of two functors is defined as their composition Compose f g, and the unit is the identity functor Id. A monoid in that category is a monad. As before we pick an object m, but now it's an endofunctor; and two morphisms, which are now natural transformations:
eta :: Id ~> m
mu :: Compose m m ~> m
In components, these two natural transformations become:
return :: a -> m a
join :: m (m a) -> m a
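For a concrete monad such as lists, these components are familiar functions (a sketch; the List suffix on the names is mine):
returnList :: a -> [a]
returnList x = [x]        -- eta in components: return

joinList :: [[a]] -> [a]
joinList = concat         -- mu in components: join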
An applicative functor may also be defined as a monoid in the functor category, but with a more sophisticated tensor product called Day convolution. Or, equivalently, it can be defined as a functor that (laxly) preserves monoidal structure.
Alternative is a family of monoids in the category of types (not endofunctors). This family is generated by an applicative functor f. For every type a we have a monoid whose mempty is an element of f a and whose mappend maps pairs of f a to elements of f a. These polymorphic functions are called empty and <|>.
In particular, empty must be a polymorphic value, meaning one value per every type a. This is, for instance, possible for the list functor, where an empty list is polymorphic in a, or for Maybe with the polymorphic value Nothing. Notice that these are all polymorphic data types that have a constructor that doesn't depend on the type parameter. The intuition is that, if you think of a functor as a container, this constructor creates an empty container. An empty container is automatically polymorphic.
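Here is a quick sketch of that family of monoids, using the Alternative class from base (the binding names are mine):
import Control.Applicative (Alternative(..))

noInt :: Maybe Int
noInt = empty                             -- Nothing: one value per type a

demoMaybe :: Maybe Int
demoMaybe = empty <|> Just 3 <|> Just 5   -- Just 3

demoList :: [Int]
demoList = [1, 2] <|> [3]                 -- [1, 2, 3]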
Both concepts are tied to the idea of a "monoidal category", which is a category in which you can define the concept of a monoid (and certain other kinds of algebraic structures). You can think of it this way: a category defines an abstract notion of functions of one argument; a monoidal category also defines an abstract notion of functions of zero arguments or of multiple arguments.
A monad is a monoid in the category of endofunctors; in other words, it's a monoid where the product (a function of 2 arguments) and the identity (a function of 0 arguments) use the concept of multi-argument function defined by a particular (bizarre) monoidal category (the monoidal category of endofunctors and composition).
An applicative functor is a monoidal functor. In other words, it's a functor that preserves all the structure of a monoidal category, not just the part that makes it a category. It should be obvious that this means it has mapN functions for functions of any number of arguments, not just functions of one argument (like a normal functor has).
So a monad exists within a particular monoidal category (which happens to be a category of endofunctors), while an applicative functor maps between two monoidal categories (which happen to be the same category, hence it's a kind of endofunctor).
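Here is a sketch of that mapN reading (map0, map2, map3 are made-up names):
import Control.Applicative (liftA2)

map0 :: Applicative f => a -> f a
map0 = pure                                -- zero arguments

map2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
map2 = liftA2                              -- two arguments

map3 :: Applicative f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d
map3 g fa fb fc = g <$> fa <*> fb <*> fc   -- and so on for any arity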
To supplement the other answers with some Haskell code, here is how we might represent the Day convolution monoidal structure Bartosz Milewski refers to:
data Day f g a = forall x y. Day (x -> y -> a) (f x) (g y)
With the unit object being the functor Identity.
Then we can reformulate the applicative class as a monoid object with respect to this monoidal structure:
type f ~> g = forall x. f x -> g x
class Functor f => Applicative' f where
  dappend :: Day f f ~> f
  dempty  :: Identity ~> f
You might notice how this rhymes with other familiar monoid objects, such as:
class Functor f => Monad f where
  join   :: Compose f f ~> f
  return :: Identity ~> f
or:
class Monoid m where
  mappend :: (,) m m -> m
  mempty  :: () -> m
With some squinting, you might also be able to see how dappend is just a wrapped version of liftA2, and likewise dempty of pure.
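Spelling that last remark out (a sketch that re-declares the Day type from above so it stands alone; the primed names are mine):
{-# LANGUAGE ExistentialQuantification #-}
import Control.Applicative (liftA2)
import Data.Functor.Identity (Identity(..))

data Day f g a = forall x y. Day (x -> y -> a) (f x) (g y)

dappend' :: Applicative f => Day f f a -> f a
dappend' (Day op fx fy) = liftA2 op fx fy   -- liftA2, unwrapped

dempty' :: Applicative f => Identity a -> f a
dempty' (Identity a) = pure a               -- pure, unwrapped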

Variable assignment and the comma operator [duplicate]

This question already has answers here:
How does comma operator work during assignment?
(5 answers)
Closed 6 years ago.
Can anybody explain this for me:
int a, b, c, d;
a = 2;
b = 4;
c = a, b;
d = (a, b);
Why is c == 2 and d == 4?
The two statements are effectively evaluated as
c = a;
d = b;
due to how the comma operator (which has the lowest precedence of any operator) works in C and C++.
For the first one, c = a is evaluated first (as = has higher precedence than the comma operator), then b is evaluated (which is a no-op). The entire expression has the value of b, but that value isn't assigned to anything.
For d = (a, b);, (a, b) is evaluated first due to the parentheses. This has the value of b, and that is assigned to d.
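Here is a minimal, compilable version of the snippet (a sketch of mine, not from the question) that makes both parses visible:
#include <stdio.h>

int main(void) {
    int a = 2, b = 4, c, d;

    c = a, b;    /* parsed as (c = a), b; b is evaluated and discarded */
    d = (a, b);  /* (a, b) yields b, so d gets 4 */

    printf("c = %d, d = %d\n", c, d);  /* prints: c = 2, d = 4 */
    return 0;
}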

What does float->float mean in OCaml?

I have this type, which defines an expression. I know that the * symbol lets me build pairs, but what is the -> for?
# type expression = Value of float
    | Sum of (expression * expression)
    | Subtraction of (expression * expression)
    | Fc1 of ((float -> float) * expression);;
The -> operator is for function types. a -> b means "a in, b out", so float -> float is the type of functions that take a float as their argument and produce a float as their result.
What about float -> float -> float?
-> is right-associative, so a -> b -> c is the same as a -> (b -> c): a function that takes an a and produces another function of type b -> c. Functions like this are often used to simulate multi-argument functions (you can write f x y to apply f to x and then apply the resulting function to y, which effectively calls the inner function with two arguments) as an alternative to tuples. This way of simulating multi-argument functions is called currying.
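Here is a short OCaml sketch (the bindings are mine, and it assumes the expression type from the question): Fc1 can carry any float -> float function such as sin, and a curried float -> float -> float function can be partially applied:
let e = Fc1 (sin, Sum (Value 1.0, Value 2.0))  (* sin : float -> float *)

let add x y = x +. y       (* add : float -> float -> float *)
let add_one = add 1.0      (* partial application: float -> float *)
let five = add_one 4.0     (* five = 5.0 *)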

Types of OCaml functions

I started learning functional programming (OCaml), but I don't understand one important topic about functional programming: types.
Can anyone explain this solution to me, please?
I have a test this week and can't figure out how to arrive at it.
let f a b c = a (a b c) 0;;
f: ('a -> int -> 'a) -> 'a -> int -> 'a
let f a b c = a (a b c) 0;;
Your confusion involves types and type inference: when you define a function or binding, you don't need to give explicit types for its parameters, or for the function/binding itself; OCaml will figure them out if your definition is correct.
So, let's do some manual inference ourselves. If a human can do it, the compiler can too.
1.
let x = 1
1 is an integer, so x must be an integer. So you don't need to write int x = 1 as in other languages, right?
2.
let f x = 1
If there are multiple variable names between let and =, then it must be a function definition, right? Otherwise, it won't make sense. In a Java-like language, it also makes no sense to say int x y = 1, right?
So f is a function and x must be a parameter. Since the right-hand side of = is an integer, we know f returns an integer. For x, we don't know anything yet, so x is given the polymorphic type 'a.
So f: 'a -> int = <fun>.
3.
let f a b c = a (a b c) 0;;
f is a function, with parameters a, b, c.
a must be a function, because on the right-hand side of = it is applied to arguments.
a takes two arguments: (a b c) and 0. Since 0 is an integer, the 2nd parameter of a must have type int.
Looking inside (a b c), c is the 2nd argument of a, so c must be an int as well.
We can't infer anything more about b, so b gets the polymorphic type 'a.
Since (a b c) can be the 1st argument of a, and (a b c) is itself an application of a, the 1st parameter and the return type of a must both have the same type as b, which is 'a.
Combining the information above, you get f : ('a -> int -> 'a) -> 'a -> int -> 'a.
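To see the inferred type in action, here is a small sketch (tag is a made-up helper) that instantiates 'a to string:
let f a b c = a (a b c) 0

(* tag : string -> int -> string, i.e. 'a -> int -> 'a with 'a = string *)
let tag s n = s ^ "-" ^ string_of_int n

let result = f tag "x" 3   (* tag (tag "x" 3) 0 = "x-3-0" *)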
If you want to learn it formally, https://realworldocaml.org/ is your friend.

Is partial-order, in contrast to total-order, enough to build a heap?

C++ std::priority_queue just needs a partial order. But if its implementation is a binary heap, how does it work?
For example: assume we have a partially ordered set ({a, b, c, x}, {c < b, b < a, c < a}), where x is incomparable with a, b, and c. Then a max-heap (with several copies of x) is:
layer 1: x
layer 2: b x
layer 3: x x a c
After a pop operation, done the way commonly seen in textbooks, i.e. replacing the root with the last element c and decreasing the size by 1, we need to heapify the tree starting at the root:
layer 1: c
layer 2: b x
layer 3: x x a
We will swap c and b since c < b, won't we? And then what? We still don't have a valid heap, since b < a. But b cannot "see" a.
The requirement for priority_queue (§23.6.4 of the C++ Standard) is that the comparator defines a strict weak ordering. The latter is defined in §25.4/4 as follows:
The term strict refers to the requirement of an irreflexive relation (!comp(x, x) for all x), and the term weak to requirements that are not as strong as those for a total ordering, but stronger than those for a partial ordering. If we define equiv(a, b) as !comp(a, b) && !comp(b, a), then the requirements are that comp and equiv both be transitive relations:
— comp(a, b) && comp(b, c) implies comp(a, c)
— equiv(a, b) && equiv(b, c) implies equiv(a, c) [ Note: Under these conditions, it can be shown that
i) equiv is an equivalence relation
ii) comp induces a well-defined relation on the equivalence classes determined by equiv
iii) The induced relation is a strict total ordering. — end note ]
In other words, the comparator-defined relation does not have to be total, but it must be total with respect to the equivalence classes defined by a hypothetical relation equiv, which defines all elements as equal that are not less-than or greater-than each other.
To put it in even simpler terms, any elements not covered by the comparator relation will be treated as equal.
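To connect this to code, here is a sketch (the Task type and values are made up) of a comparator that is a strict weak ordering without being total on the element's full state; tasks with equal priority form one equivalence class:
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// The comparator looks at only one field: tasks with equal priority are
// "equivalent", and the heap may return them in either relative order.
struct Task { std::string name; int priority; };

struct ByPriority {
    bool operator()(const Task& l, const Task& r) const {
        return l.priority < r.priority;   // max-heap on priority
    }
};

int main() {
    std::priority_queue<Task, std::vector<Task>, ByPriority> q;
    q.push({"a", 2});
    q.push({"b", 2});   // equivalent to "a": neither less nor greater
    q.push({"c", 5});
    while (!q.empty()) {
        std::cout << q.top().name << ' ';  // "c" first; "a"/"b" in some order
        q.pop();
    }
    std::cout << '\n';
}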