Replace scalaz ListT with semantically equivalent cats functionality - list

Cats does not provide a ListT monad transformer, so how could we rewrite the following snippet, which uses scalaz ListT in a for-comprehension, into a semantically equivalent snippet in cats?
import scalaz._
import ListT._
import scalaz.std.option._
val seeds: Option[List[String]] = Some(List("apple", "orange", "tomato"))
def grow(seed: String): Option[List[String]] = Some(List(seed.toUpperCase))
def family(seed: String, plant: String): Option[List[(String, String)]] = Some(List(seed -> plant))
(for {
  seed   <- listT(seeds)
  plant  <- listT(grow(seed))
  result <- listT(family(seed, plant))
} yield result).run
Here is my attempt utilising flatMap and flatTraverse
import cats.implicits._

seeds.flatMap {
  _.flatTraverse { seed =>
    grow(seed).flatMap {
      _.flatTraverse { plant =>
        family(seed, plant)
      }
    }
  }
}
This refactoring seems to satisfy the typechecker, however I am unsure whether a happy compiler ensures 100% semantic equivalence.
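For what it's worth, a quick sanity check (not a proof): with the sample definitions above, both versions should evaluate to the same value.
// Both the scalaz `.run` and the cats rewrite produce, on this data:
val expected: Option[List[(String, String)]] =
  Some(List("apple" -> "APPLE", "orange" -> "ORANGE", "tomato" -> "TOMATO"))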

Cats does not provide ListT because it breaks the Monad associativity law. See the Cats FAQ and the associated proof using scalaz ListT.
Still, the following ListT implementation, based on .flatTraverse as you suggest, passes all cats-core law tests (a bug?).
I have no experience with software proving, but you might find the successful tests good enough to consider the two implementations equivalent.
ListT implementation
import cats.{Monad, Traverse}
import cats.implicits._

case class ListT[M[_], A](value: M[List[A]])

implicit def listTMonad[M[_]: Monad] = new Monad[ListT[M, *]] {
  override def flatMap[A, B](fa: ListT[M, A])(f: A => ListT[M, B]): ListT[M, B] =
    ListT(
      Monad[M].flatMap[List[A], List[B]](fa.value)(
        list => Traverse[List].flatTraverse[M, A, B](list)(a => f(a).value)
      )
    )

  override def pure[A](a: A): ListT[M, A] = ListT(Monad[M].pure(List(a)))

  // unsafe impl (not stack-safe), can be ignored for this question
  override def tailRecM[A, B](a: A)(f: A => ListT[M, Either[A, B]]): ListT[M, B] =
    flatMap(f(a)) {
      case Right(b)    => pure(b)
      case Left(nextA) => tailRecM(nextA)(f)
    }
}
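With this instance in scope (plus cats.implicits._ for the flatMap/map syntax, and -Ypartial-unification as enabled in the build below), the for-comprehension from the question carries over almost verbatim; here is a sketch on the sample data, with .value playing the role of scalaz's .run:
val seeds: Option[List[String]] = Some(List("apple", "orange", "tomato"))
def grow(seed: String): Option[List[String]] = Some(List(seed.toUpperCase))
def family(seed: String, plant: String): Option[List[(String, String)]] = Some(List(seed -> plant))

// Expected: Some(List(("apple","APPLE"), ("orange","ORANGE"), ("tomato","TOMATO")))
val out: Option[List[(String, String)]] =
  (for {
    seed   <- ListT(seeds)
    plant  <- ListT(grow(seed))
    result <- ListT(family(seed, plant))
  } yield result).value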
build.sbt
name := "listT_tests"
version := "0.1"
scalaVersion := "2.11.12"
scalacOptions += "-Ypartial-unification"
libraryDependencies ++= Seq(
  "org.typelevel" %% "cats-core" % "2.0.0",
  "org.scalaz" %% "scalaz-core" % "7.2.30",
  "org.scalacheck" %% "scalacheck" % "1.14.1" % "test",
  "org.scalatest" %% "scalatest" % "2.2.6" % "test",
  "org.typelevel" %% "discipline-scalatest" % "1.0.1",
  "org.typelevel" %% "discipline-core" % "1.0.2",
  "org.typelevel" %% "cats-laws" % "2.0.0" % Test,
  "com.github.alexarchambault" %% "scalacheck-shapeless_1.14" % "1.2.3" % Test
)
addCompilerPlugin("org.typelevel" %% "kind-projector" % "0.11.0" cross CrossVersion.full)
law tests
import cats.Eq
import cats.implicits._
import cats.laws.discipline.MonadTests
import org.scalatest.funspec.AnyFunSpec
import org.scalatestplus.scalacheck.Checkers
import org.typelevel.discipline.scalatest.FunSpecDiscipline

class TreeLawTests extends AnyFunSpec with Checkers with FunSpecDiscipline {
  // Arbitrary[ListT[Option, A]] instances are derived via scalacheck-shapeless
  implicit def listTEq[M[_], A] = Eq.fromUniversalEquals[ListT[M, A]]
  checkAll("ListT Monad Laws", MonadTests[ListT[Option, *]].stackUnsafeMonad[Int, Int, String])
}
law tests results
- monad (stack-unsafe).ap consistent with product + map
- monad (stack-unsafe).applicative homomorphism
- monad (stack-unsafe).applicative identity
- monad (stack-unsafe).applicative interchange
- monad (stack-unsafe).applicative map
- monad (stack-unsafe).applicative unit
- monad (stack-unsafe).apply composition
- monad (stack-unsafe).covariant composition
- monad (stack-unsafe).covariant identity
- monad (stack-unsafe).flatMap associativity
- monad (stack-unsafe).flatMap consistent apply
- monad (stack-unsafe).flatMap from tailRecM consistency
- monad (stack-unsafe).invariant composition
- monad (stack-unsafe).invariant identity
- monad (stack-unsafe).map flatMap coherence
- monad (stack-unsafe).map2/map2Eval consistency
- monad (stack-unsafe).map2/product-map consistency
- monad (stack-unsafe).monad left identity
- monad (stack-unsafe).monad right identity
- monad (stack-unsafe).monoidal left identity
- monad (stack-unsafe).monoidal right identity
- monad (stack-unsafe).mproduct consistent flatMap
- monad (stack-unsafe).productL consistent map2
- monad (stack-unsafe).productR consistent map2
- monad (stack-unsafe).semigroupal associativity
- monad (stack-unsafe).tailRecM consistent flatMap

Related

Proving equivalence of sequence definitions from Applicative and Monad

How can I properly prove that
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA [] = pure []
sequenceA (x:xs) = pure (:) <*> x <*> sequenceA xs
is essentially the same, for monad inputs, as
sequenceA' :: Monad m => [m a] -> m [a]
sequenceA' [] = return []
sequenceA' (x:xs) = do
  x' <- x
  xs' <- sequenceA' xs
  return (x':xs')
In spite of the different constraints, Applicative versus Monad, of course.
Here's a proof sketch:
Show that
do
  x' <- x
  xs' <- sequenceA' xs
  return (x' : xs')
is equivalent to
do
  f1 <- do
    cons <- return (:)
    x' <- x
    return (cons x')
  xs' <- sequenceA' xs
  return (f1 xs')
This involves desugaring (and resugaring) do notation and applying the Monad laws.
Use the definition of ap:
ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }
to turn the above code into
do
  f1 <- return (:) `ap` x
  xs' <- sequenceA' xs
  return (f1 xs')
and then
return (:) `ap` x `ap` sequenceA' xs
Now you have
sequenceA' [] = return []
sequenceA' (x:xs) = return (:) `ap` x `ap` sequenceA' xs
Assume that pure and <*> are essentially the same as return and `ap`, respectively, and you're done.
This latter property is also stated in the Applicative documentation:
If f is also a Monad, it should satisfy
pure = return
(<*>) = ap
Since the Functor-Applicative-Monad proposal, implemented in GHC 7.10, Applicative is a superclass of Monad. So even though your two functions can't be strictly equivalent, since sequenceA's domain includes sequenceA''s domain, we can look at what happens in this common domain (the Monad typeclass).
This paper shows an interesting demonstration of desugaring do notation to applicative and functor operations (<$>, pure and <*>). If the expressions on the right hand side of your left-pointing arrows (<-) don't depend on each other, as is the case in your question, you can always use applicative operations, and therefore show that your hypothesis is correct (for the Monad domain).
Also have a look at the ApplicativeDo language extension proposal, which contains an example that's just like yours:
do
  x <- a
  y <- b
  return (f x y)
which translates to:
(\x y -> f x y) <$> a <*> b
Substituting f for (:), we get:
do
  x <- a
  y <- b
  return (x : y)
... which translates to...
(\x y -> x : y) <$> a <*> b
--And by eta reduction
(:) <$> a <*> b
--Which is equivalent to the code in your question (albeit more general):
pure (:) <*> a <*> b
Alternatively, you can make GHC's desugarer work for you by using the ApplicativeDo language extension and by following this answer to the SO question "haskell - Desugaring do-notation for Monads". I'll leave this exercise up to you (as it honestly goes beyond my capacities!).
My own two cents
There is no do notation for applicatives in Haskell. It can be seen specifically in this segment.
return and pure do exactly the same thing, but with different constraints, right? So this part, pure (:), and this part, return (x':xs'), are essentially the same.
Then, with x' <- x you are getting the value of x, and then the value of the recursion with xs' <- sequenceA' xs, to finally wrap it with return.
And that's what pure (:) <*> x <*> sequenceA xs is essentially doing.

Discussing implementation of list flattener function in scala

The flatten function is a function which takes a list of lists and returns a list which is the concatenation of all the lists. As an exercise in functional programming in Scala, we have to implement that function with linear complexity. My solution is:
def flatten[A](l: List[List[A]]): List[A] = {
  def outer(ll: List[List[A]]): List[A] = {
    ll match {
      case Nil => Nil
      case Cons(h, t) => inner(t, h)
    }
  }
  def inner(atEnd: List[List[A]], ll: List[A]): List[A] = {
    ll match {
      case Nil => outer(atEnd)
      case Cons(h, t) => Cons(h, inner(atEnd, t))
    }
  }
  outer(l)
}
It works. Now I looked at the proposed solution:
def append[A](a1: List[A], a2: List[A]): List[A] =
  a1 match {
    case Nil => a2
    case Cons(h, t) => Cons(h, append(t, a2))
  }

def flatten2[A](l: List[List[A]]): List[A] =
  foldRight(l, Nil: List[A])(append)
I am suspicious that flatten2 is really linear. At each iteration of the fold, the function append is called. This function traverses all the nodes of the accumulator. The first time the accumulator is Nil, the second time it is l.get(1), then l.get(1) + l.get(2)... So the first list in l won't be traversed only once, but l.length - 1 times before the end of the function. Am I right?
My implementation, on the other hand, really traverses each list only once. Is my implementation really faster?
Consider for example flatten2(List(List(1,2,3), List(4,5), List(6))), which expands to:
append(List(1,2,3),
  append(List(4,5),
    append(List(6),
      Nil)))
As a comment in the link says, "append takes time proportional to its first argument", and therefore "this function is linear in the total length of all lists". (Neither flatten2 nor flatten is tail-recursive, though.)
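If tail recursion matters, here is a sketch of a variant that is both linear and tail-recursive (written against the standard library List rather than the book's Cons/Nil): prepend each element onto an accumulator, then reverse once at the end.
def flattenTR[A](l: List[List[A]]): List[A] = {
  @annotation.tailrec
  def go(rest: List[List[A]], acc: List[A]): List[A] = rest match {
    case Nil           => acc.reverse
    case inner :: tail => go(tail, inner.foldLeft(acc)((a, x) => x :: a))
  }
  go(l, Nil)
}
// flattenTR(List(List(1, 2, 3), List(4, 5), List(6))) == List(1, 2, 3, 4, 5, 6)
Each element is consed onto the accumulator exactly once and the final reverse is one more linear pass, so the whole thing stays linear in the total number of elements.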

ZipList with Scalaz

Suppose I have a list of numbers and list of functions:
val xs: List[Int] = List(1, 2, 3)
val fs: List[Int => Int] = List(f1, f2, f3)
Now I would like to use an Applicative to apply f1 to 1, f2 to 2, etc.
val ys: List[Int] = xs <*> fs // expect List(f1(1), f2(2), f3(3))
How can I do it with Scalaz?
pure for zip lists repeats the value forever, so it's not possible to define a zippy applicative instance for Scala's List (or for anything like lists). Scalaz does provide a Zip tag for Stream and the appropriate zippy applicative instance, but as far as I know it's still pretty broken. For example, this won't work (but should):
import scalaz._, Scalaz._
val xs = Tags.Zip(Stream(1, 2, 3))
val fs = Tags.Zip(Stream[Int => Int](_ + 3, _ + 2, _ + 1))
xs <*> fs
You can use the applicative instance directly (as in the other answer), but it's nice to have the syntax, and it's not too hard to write a "real" (i.e. not tagged) wrapper. Here's the workaround I've used, for example:
case class ZipList[A](s: Stream[A])

import scalaz._, Scalaz._, Isomorphism._

implicit val zipListApplicative: Applicative[ZipList] =
  new IsomorphismApplicative[ZipList, ({ type L[x] = Stream[x] @@ Tags.Zip })#L] {
    val iso =
      new IsoFunctorTemplate[ZipList, ({ type L[x] = Stream[x] @@ Tags.Zip })#L] {
        def to[A](fa: ZipList[A]) = Tags.Zip(fa.s)
        def from[A](ga: Stream[A] @@ Tags.Zip) = ZipList(Tag.unwrap(ga))
      }
    val G = streamZipApplicative
  }
And then:
scala> val xs = ZipList(Stream(1, 2, 3))
xs: ZipList[Int] = ZipList(Stream(1, ?))
scala> val fs = ZipList(Stream[Int => Int](_ + 10, _ + 11, _ + 12))
fs: ZipList[Int => Int] = ZipList(Stream(<function1>, ?))
scala> xs <*> fs
res0: ZipList[Int] = ZipList(Stream(11, ?))
scala> res0.s.toList
res1: List[Int] = List(11, 13, 15)
For what it's worth, it looks like this has been broken for at least a couple of years.
I see a solution with streamZipApplicative:
import scalaz.std.stream._
import scalaz.{Tag, Tags}

val xs: List[Int] = List(1, 2, 3)
val fs: List[Int => Int] = List(f1, f2, f3)

val zippedLists = streamZipApplicative.ap(Tags.Zip(xs.toStream))(Tags.Zip(fs.toStream))
val result = Tag.unwrap(zippedLists).toList
Learning Scalaz spends a few paragraphs on this topic in their introduction to Applicatives. They quote LYAHFGG:
However, [(+3),(*2)] <*> [1,2] could also work in such a way that the first function in the left list gets applied to the first value in the right one, the second function gets applied to the second value, and so on. That would result in a list with two values, namely [4,4]. You could look at it as [1 + 3, 2 * 2].
But then adds:
This can be done in Scalaz, but not easily.
The "not easily" part uses streamZipApplicative like in #n1r3's answer:
scala> streamZipApplicative.ap(Tags.Zip(Stream(1, 2)))(Tags.Zip(Stream({(_: Int) + 3}, {(_: Int) * 2})))
res32: scala.collection.immutable.Stream[Int] with Object{type Tag = scalaz.Tags.Zip} = Stream(4, ?)
scala> res32.toList
res33: List[Int] = List(4, 4)
The "not easily" is the part that bothers me. I'd like to borrow from #Travis Brown fantastic answer. He is comparing the use of monads and applicatives (i.e. why use applicatives when you have a monad?):
Second (and relatedly), it's just a solid development practice to use the least powerful abstraction that will get the job done.
So, I would say that until a framework provides an applicative that works like your first use-case:
val ys: List[Int] = xs <*> fs
I would use zip and map here instead:
xs.zip(fs).map(p => p._2.apply(p._1))
To me, this code is much clearer and simpler than the alternatives in scalaz. This is the least powerful abstraction that gets the job done.
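A small stylistic variant of the same idea, destructuring the pair in the map instead of using ._1 and ._2 (equivalent behaviour):
val ys: List[Int] = xs.zip(fs).map { case (x, f) => f(x) }
// e.g. List(1, 2, 3) zipped with List[Int => Int](_ + 10, _ + 11, _ + 12) gives List(11, 13, 15)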

How to make a nested toSet in scala in an idiomatic way?

Is there a more idiomatic way to change a nested sequence of sequences into a nested set of sets?
def toNestedSet[T](tsss: Seq[Seq[Seq[T]]]): Set[Set[Set[T]]] =
  tsss.map(_.map(_.toSet).toSet).toSet
Is it possible to implement a function which would work with lists of any depth?
This actually isn't too bad at all (see my answer here to a similar question for some additional discussion of this approach):
trait Setsifier[I, O] { def apply(i: I): O }

object Setsifier {
  def apply[I, O](f: I => O) = new Setsifier[I, O] { def apply(i: I) = f(i) }

  implicit def base[I](implicit ev: I <:!< Seq[_]) = apply((_: Seq[I]).toSet)
  implicit def rec[I, O](implicit s: Setsifier[I, O]) =
    apply((_: Seq[I]).map(s(_)).toSet)
}
def setsify[I, O](i: I)(implicit s: Setsifier[I, O]) = s(i)
And then:
scala> println(setsify(Seq(Seq(Seq(Seq(1)), Seq(Seq(2, 3))))))
Set(Set(Set(Set(1)), Set(Set(2, 3))))
Statically typed as a Set[Set[Set[Set[Int]]]] and all.
Well, I lied a little bit. The <:!< above isn't actually in the standard library. It is in Shapeless, though, or you can very, very easily define it yourself:
trait <:!<[A, B]
implicit def nsub[A, B] : A <:!< B = new <:!<[A, B] {}
implicit def nsubAmbig1[A, B >: A] : A <:!< B = sys.error("Don't call this!")
implicit def nsubAmbig2[A, B >: A] : A <:!< B = sys.error("Don't call this!")
And that's really all.
To address the second part of your question (processing a list of arbitrary depth), something like this would work (type erasure gets in the way a bit):
def toNestedSet(ts: Seq[Any]): Set[Any] = {
  ts.foldLeft[Set[Any]](Set())((acc, b) => b match {
    case s: Seq[_] => acc + toNestedSet(s)
    case x => acc + x
  })
}
Note: quick and dirty -- it works, but fairly easy to break :)
Edit: The cast was redundant
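For example, a quick check with the definition above; note that structurally equal branches collapse once they become Sets:
toNestedSet(Seq(Seq(1, 2), Seq(2, 1), 3))
// res: Set[Any] = Set(Set(1, 2), 3)  // Seq(1, 2) and Seq(2, 1) map to the same Set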

How to replace(fill) None entries on List of Options from another List using idiomatic Scala?

I have a List[Option[MyClass]] with None in random positions and I need to 'fill' that list again, from a List[MyClass], maintaining the order.
Here are sample lists and expected result:
val listA = List(Some(3),None,Some(5),None,None)
val listB = List(7,8,9)
val expectedList = List(Some(3), Some(7), Some(5), Some(8), Some(9))
So, what would be an idiomatic Scala way to process that list?
def fillL[T](a: List[Option[T]], b: List[T]) = {
  val iterB = b.iterator
  a.map(_.orElse(Some(iterB.next)))
}
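Applied to the sample data this gives the expected result (assuming listB has at least as many elements as there are Nones in listA, otherwise the iterator is exhausted):
fillL(listA, listB)
// res: List[Option[Int]] = List(Some(3), Some(7), Some(5), Some(8), Some(9))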
The iterator solution is arguably idiomatic Scala, and is definitely concise and easy to understand, but it's not functional—any time you call next on an iterator you're firmly in the land of side effects.
A more functional approach would be to use a fold:
def fillGaps[A](gappy: List[Option[A]], filler: List[A]) =
  gappy.foldLeft((List.empty[Option[A]], filler)) {
    case ((current, fs), Some(item)) => (current :+ Some(item), fs)
    case ((current, f :: fs), None)  => (current :+ Some(f), fs)
    case ((current, Nil), None)      => (current :+ None, Nil)
  }._1
Here we move through the gappy list while maintaining two other lists: one for the items we've processed, and the other for the remaining filler elements.
This kind of solution isn't necessarily better than the other—Scala is designed to allow you to mix functional and imperative constructions in that way—but it does have potential advantages.
I'd just write it in the straightforward way, matching on the heads of the lists and handling each case appropriately:
def fill[A](l1: List[Option[A]], l2: List[A]): List[Option[A]] = (l1, l2) match {
  case (Nil, _) => Nil
  case (_, Nil) => l1
  case (Some(x) :: xs, _) => Some(x) :: fill(xs, l2)
  case (None :: xs, y :: ys) => Some(y) :: fill(xs, ys)
}
Presumably once you run out of things to fill it with, you just leave the rest of the Nones in there.
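A quick check with the sample data from the question, plus a case where the filler runs out:
fill(listA, listB)
// res: List[Option[Int]] = List(Some(3), Some(7), Some(5), Some(8), Some(9))
fill(List(None, None, Some(1)), List(42))
// res: List[Option[Int]] = List(Some(42), None, Some(1))  // remaining Nones are left as-is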