I hope this is the right place for this question.
I wrote some code to improve collision detection in a 2D simulation. Part of this algorithm associates a list of objects with the specific 2D area where they belong, i.e. a function f: (Seq[Box], Seq[T]) => Map[Box, Seq[T]].
I first implemented it with standard Scala functions like map, groupBy, etc., but then I needed it to also return Boxes with an empty Seq[T], and groupBy doesn't do that, so I wrote a recursive function that in my view should be faster than the previous one (this might be me not knowing a better way, but that is not the main question).
It turns out that my tail-recursive implementation is slower than the one using the Scala collection library: its execution time grows faster than the other's (benchmarked). I don't understand why; can someone point me to the reason?
def spreadAcrossFast[T](
nodes: Seq[Box],
objects: Seq[T]
) = {
// I create the pairs (reference box, objects that intersects the box)
// intersects() is called nodes.size * objects.size times
val assigned = for (
b ← nodes;
s ← objects if intersects( b, s )
) yield ( b, s )
// Group by Box and clean the format of the association above
assigned
// Should be O(n log n)
.groupBy( _._1 )
// Maximum nodes.size iterations
.map { x ⇒
// Can consider objects.size / nodes.size iterations
( x._1, x._2.map( _._2 ) )
}
}
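As an aside, a minimal sketch of one way to keep the empty Boxes while staying with library combinators, assuming the same intersects(Box, T): Boolean is in scope; this is only an illustration, not benchmarked:
def spreadAcrossAll[T]( nodes: Seq[Box], objects: Seq[T] ): Map[Box, Seq[T]] =
  // One entry per Box, including Boxes that intersect nothing
  nodes.map( b => b -> objects.filter( intersects( b, _ ) ) ).toMap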
def spreadAcrossSlow[T](
nodes: Seq[Box],
objects: Seq[T],
compact: Boolean = true // If compact == true, don't include in the output Boxes that
) = { // have empty content
@tailrec
def loop( boxes: Seq[Box], acc: Map[Box, Seq[T]] ): Map[Box, Seq[T]] = boxes match {
// End of the boxes, return the accumulator
case Nil ⇒ acc
// Get the objects that intersect the box and add them to the accumulator
case b +: bs ⇒
// Each call goes through objects.size items
// intersects() is called nodes.size * objects.size times
val objInBox = objects.filter( intersects( b, _ ) )
val newAcc = if ( objInBox.isEmpty && compact ) acc else acc + ( ( b, objInBox ) )
loop( bs, newAcc )
}
// nodes.size iterations
loop( nodes, Map.empty[Box, Seq[T]] )
}
Seq here is instantiated as a List.
From my point of view, spreadAcrossFast performs more iterations than spreadAcrossSlow, and the most expensive operation, intersects(), is called the same number of times in both: nodes.size * objects.size.
UPDATE: No luck even with the following (although the code is cleaner):
def spreadAcrossSlow[T: SpatialIndexable](
nodes: Seq[Box], objects: Seq[T], compact: Boolean
): Map[Box, Seq[T]] = {
val acc = scala.collection.mutable.HashMap[Box, Seq[T]]()
for ( b ← nodes ) {
val objInBox = objects.filter( intersects( b, _ ) )
if ( objInBox.nonEmpty || !compact ) acc += ( ( b, objInBox ) )
}
acc.toMap
}
Related
I would like to subtract consecutive elements in a list of numbers in Scala.
For example, I have this list:
val sortedList = List(4,5,6)
I would like to have an output list like diffList = (1, 1), where 5 - 4 = 1 and 6 - 5 = 1.
I tried the following code:
var sortedList = List[Int]()
var diffList = List[Int]()
for (i <- 0 to (sortedList.length - 1) ;j <- i + 1 to sortedList.length - 1)
{
val diff = (sortedList(j) - sortedList(i))
diffList = diffList :+ diff
}
I get the following result: diffList = (1, 2, 1), but I want diffList = (1, 1).
It's because of the for loop: it does not iterate over the two variables (i and j) in lockstep.
You do not need mutability or imperative programming to solve this problem; functional programming has you covered.
def consecutiveDifferences(data: List[Int]): List[Int] =
if (data.isEmpty) List.empty
else data.lazyZip(data.tail).map {
case (x, y) => y - x
}
As I always say, the Scaladoc is your friend.
(Also, as a piece of advice, the best way to learn functional programming is to forbid yourself from using mutability.)
You can use the sliding method, which according to the docs:
/** Groups elements in fixed size blocks by passing a "sliding window"
* over them (as opposed to partitioning them, as is done in `grouped`.)
*
* An empty collection returns an empty iterator, and a non-empty
* collection containing fewer elements than the window size returns
* an iterator that will produce the original collection as its only
* element.
* @see [[scala.collection.Iterator]], method `sliding`
*
* @param size the number of elements per group
* @return An iterator producing ${coll}s of size `size`, except for a
* non-empty collection with less than `size` elements, which
* returns an iterator that produces the source collection itself
* as its only element.
* @example `List().sliding(2) = empty iterator`
* @example `List(1).sliding(2) = Iterator(List(1))`
* @example `List(1, 2).sliding(2) = Iterator(List(1, 2))`
* @example `List(1, 2, 3).sliding(2) = Iterator(List(1, 2), List(2, 3))`
*/
Then, solving your query is pretty straightforward:
diffList = sortedList.sliding(2).collect {
case Seq(a, b) =>
b - a
}.toList
Which results in List(1,1)
Code run at Scastie.
for(i <- 0 until (sortedList.size - 1)) yield sortedList(i + 1) - sortedList(i)
yields Vector(1, 1), which can be converted to a List with toList.
That can also be achieved with the following function:
val sortedList = List(4,5,7)
import scala.annotation.tailrec

@tailrec
def findDiffs(xs: List[Int])(seed: List[Int]): List[Int] = {
if(xs.isEmpty || xs.size == 1) seed.reverse
else {
val currDiff = xs(1) - xs(0)
findDiffs(xs.tail)(currDiff :: seed)
}
}
val res = findDiffs(sortedList)(Nil)
println(res)
Or just easily with zip:
sortedList.drop(1) zip sortedList map { case (x,y) => x - y }
Sliding (see the answer by @Tomer Shetah) over a list delivers an iterator, which may prove convenient for very large collections to avoid or reduce the amount of intermediate structures in the processing. Another approach zips the list with itself shifted by one (see the answers by @Luis Miguel Mejía Suárez and @Zvi Mints); in this regard, another way to shift and then zip is to drop the first element, as in
xs.drop(1) zip xs map { case (a, b) => a - b }
This can be generalised to dropping any number n, so that each element is subtracted from the element n positions after it, and so forth.
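A hedged sketch of that generalisation (diffsBy is a made-up helper name, not from the answers above):
// Subtracts each element from the one n positions after it
def diffsBy(xs: List[Int], n: Int): List[Int] =
  xs.drop(n) zip xs map { case (later, earlier) => later - earlier }
diffsBy(List(4, 5, 6), 1) // List(1, 1)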
I've come across this operator in a Scala implementation of a graph (you can find the example here), where the operator -- appears between two Lists and also between two Maps.
abstract class GraphBase[T, U] {
case class Edge(n1: Node, n2: Node, value: U) {
def toTuple = (n1.value, n2.value, value)
}
case class Node(value: T) {
var adj: List[Edge] = Nil
// neighbors are all nodes adjacent to this node.
def neighbors: List[Node] = adj.map(edgeTarget(_, this).get)
}
var nodes: Map[T, Node] = Map()
var edges: List[Edge] = Nil
// If the edge E connects N to another node, returns the other node,
// otherwise returns None.
def edgeTarget(e: Edge, n: Node): Option[Node]
override def equals(o: Any) = o match {
case g: GraphBase[_,_] => (nodes.keys.toList -- g.nodes.keys.toList == Nil &&
edges.map(_.toTuple) -- g.edges.map(_.toTuple) == Nil)
case _ => false
}
def addNode(value: T) = {
val n = new Node(value)
nodes = Map(value -> n) ++ nodes
n
}
}
My current interpreter does not recognize it, so I am wondering where this operator comes from. Does it mean list subtraction?
Is it valid Scala code?
You can alter the implementation of the equals method to avoid the usage of --:
override def equals(o: Any) = o match {
case g: GraphBase[_,_] => (nodes.keySet == g.nodes.keySet &&
edges.map(_.toTuple).toSet == g.edges.map(_.toTuple).toSet)
case _ => false
}
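If actual list subtraction is what is wanted, the current collections API offers diff; whether that matches exactly what the old -- did is an assumption on my part, but a minimal sketch is:
val xs = List(1, 2, 2, 3)
val ys = List(2, 3)
// diff removes, from the left-hand list, one occurrence of each element of the right-hand list
xs.diff(ys) // List(1, 2)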
I want to access elements of a list within one list and check whether the elements are greater than a minimum value.
Example: List[([1,2],0.3), ([1.5,6],0.35), ([4,10],0.25), ([7,15],0.1)]
Let the minimum value be 1.
The result should be: List[([1,6],0.65), ([4,10],0.25), ([7,15],0.1)]
As 1.5 - 1 is less than the minimum value 1, it will merge the elements ([1,2], 0.3) and ([1.5,6], 0.35) into ([1,6], 0.65): it takes the first element of the first inner list and the last element of the second inner list, and adds the two second elements of the outer pairs (0.3 + 0.35). This will be done for all elements of the outer list.
The code I tried is written below:
def reduce (d1:List[(Interval, Rational)]): List[(Interval, Rational)] =
{
var z = new ListBuffer[(Interval, Rational)]()
def recurse (list: List[(Interval, Rational)]): Unit = list match {
case List(x, y, _*) if ((y._1_1 - x._1_1) < min_val) =>
val i = x._1_1; y._1_2
val w = x._2 + y._2
z += (i,w)
else
z += x
recurse(list.tail)
case Nil =>
}
z.toList
}
But this is not working. Please help me to fix this.
OK, what you've written really isn't Scala code, and I had to make a few modifications just to get a compilable example, but see if this works for you.
type Interval = (Double,Double)
type Rational = Double
def reduce (lir:List[(Interval, Rational)]): List[(Interval, Rational)] = {
val minVal = 1.0
lir.foldLeft(List.empty[(Interval, Rational)]){
case (a, b) if a.isEmpty => List(b)
case (acc, ((i2a, i2b), r2)) =>
val ((i1a, _), r1) = acc.head
if (i2a - i1a < minVal) ((i1a, i2b), r1 + r2) :: acc.tail
else ((i2a, i2b), r2) :: acc
}.reverse
}
Test case:
reduce(List( ((1.0,2.0),0.3), ((1.5,6.0),0.35), ((4.0,10.0),0.25), ((7.0,15.0),0.1) ))
// result: List[(Interval, Rational)] = List(((1.0,6.0),0.6499999999999999), ((4.0,10.0),0.25), ((7.0,15.0),0.1))
I have a List of a certain type that I want to reduce based on a condition. I have a type where the Interval is a DateTime interval with a start and an end:
case class MyType(a: Interval, value: Double)
I have got a List[MyType] of entries that I want to reduce to a List[MyType] by merging entries whose intervals are contiguous and whose values are the same. I do not want to go over the List twice, which I currently do.
Say I have:
val a = MyType(interval1, 2)
val b = MyType(interval2, 2)
val c = MyType(interval3, 1)
val d = MyType(interval4, 6)
val e = MyType(interval5, 2)
val original = List(a, b, c, d, e)
I have to now reduce the original List based on the following conditions:
1. The intervals should be contiguous; in that case, take the start of the first entry and the end of the second entry.
2. The double values should be the same.
So assuming that interval1 and interval2 are contiguous, the result should look like:
val result = Seq(MyType(new Interval(a.interval.start, b.interval.end),2), c, d, e)
Is there a much more elegant solution or an idea?
In the reduce function, check if the condition is true, and if it is, return the current accumulator instead of what you would otherwise compute.
Here's how you would sum only even numbers:
Seq(1,4,6,3).foldLeft(0)( (acc, a) =>
if (a % 2 == 0) acc + a else acc
)
res5: Int = 10
Response to the edited question: it appears you have some conditions that have to hold for consecutive elements. In that case, you can apply the .sliding method.
Seq(a, b, c, d, e).sliding(2).foldLeft(0.0) {
  case (acc, Seq(MyType(ai, av), MyType(bi, bv))) =>
    if (ai.max == bi.min) acc + av else acc
}
Buuut... you have probably guessed it would not be as performant as you would like. I hope you are not doing any premature optimization because, you know, that's the root of all evil. But if you really need performance, rewrite the code in terms of while loops (fall back to a Java-like style).
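For reference, a rough sketch of such a while-loop fallback. SimpleInterval and Entry below are simplified stand-ins (plain Long endpoints instead of the DateTime Interval from the question), so treat this as an illustration of the shape, not a drop-in replacement:
import scala.collection.mutable.ListBuffer

case class SimpleInterval(start: Long, end: Long)          // stand-in type, not from the question
case class Entry(interval: SimpleInterval, value: Double)  // stand-in for MyType

// Merges adjacent entries whose intervals touch and whose values match,
// using an explicit loop instead of folds or recursion.
def reduceLoop(original: List[Entry]): List[Entry] = {
  val buf = ListBuffer.empty[Entry]
  val it  = original.iterator
  if (it.hasNext) {
    var current = it.next()
    while (it.hasNext) {
      val next = it.next()
      if (current.interval.end == next.interval.start && current.value == next.value)
        current = Entry(SimpleInterval(current.interval.start, next.interval.end), current.value)
      else {
        buf += current
        current = next
      }
    }
    buf += current
  }
  buf.toList
}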
This should work:
def reduce(xs: List[MyType]): List[MyType] = {
xs match {
case a :: b :: tail =>
if(a.interval.end == b.interval.start && a.value == b.value)
reduce(MyType(new Interval(a.interval.start, b.interval.end), a.value) :: tail)
else
a :: reduce(b :: tail)
case _ => xs
}
}
The if condition might need minor tweaking depending on your exact needs, but the algorithm should work.
Given a list xs
If the first two items a and b can be merged into c, merge them and go back to step 1 with xs = c :: tail
If a and b cannot be merged, reduce all elements but the first, and prepend a to that result
Otherwise (list has 1 element or is empty), return xs
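For long inputs, a hedged variant of the same algorithm with the recursion in tail position might look as follows; it reuses the MyType and Interval names from the answer above and therefore carries the same assumptions about their fields:
import scala.annotation.tailrec

def reduceTR(xs: List[MyType]): List[MyType] = {
  @tailrec
  def loop(rest: List[MyType], acc: List[MyType]): List[MyType] = rest match {
    // Merge the first two entries when their intervals touch and their values match
    case a :: b :: tail if a.interval.end == b.interval.start && a.value == b.value =>
      loop(MyType(new Interval(a.interval.start, b.interval.end), a.value) :: tail, acc)
    // Otherwise keep the head and continue with the rest
    case a :: tail =>
      loop(tail, a :: acc)
    case Nil =>
      acc.reverse
  }
  loop(xs, Nil)
}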
Note that your task could result in multiple distinct solutions, which cannot be further reduced.
So as a result you will get a set of solutions: Set[Set[MyType]]
I use Set[MyType] instead of the proposed List[MyType] and Seq[MyType] because order is not important and my answer needs the ability to compare different solutions (in order to avoid duplicates).
My answer doesn't make assumptions about the order of items; any order is OK.
Besides that, in order to simplify the code, I have replaced Interval with two fields, from and to, which can be easily converted.
Here is the code for reduction:
case class MyType(from: Long, to: Long, value: Double)
object MyType {
//Returns all possible variants of the reduced source.
//If reduction is not possible, returns an empty set.
private def strictReduce(source: Set[MyType]): Set[Set[MyType]] = {
if (source.size <= 1) {Set.empty} else {
val active = source.head //get some item
val otherItems = source.tail //all other items
val reducedWithActive: Set[Set[MyType]] = otherItems.flatMap {
case after if active.to == after.from =>
//we have already found a reduction (active->after),
// so further reductions are not strictly required
reduce(otherItems - after + MyType(active.from, after.to, active.value))
case before if before.to == active.from =>
//we have already found a reduction (before->active),
// so further reductions are not strictly required
reduce(otherItems - before + MyType(before.from, active.to, active.value))
case notContinuous => Set.empty[Set[MyType]]
}
//check if we can reduce items without active
val reducedIgnoringActive = strictReduce(otherItems).
//if so, re-insert active and try to reduce it further, but not strictly anymore
flatMap (reducedOther => reduce(reducedOther + active))
reducedWithActive ++ reducedIgnoringActive
}
}
//Returns all possible variants of the reduced source.
//If reduction is not possible, returns the source as the single result.
private def reduce(source: Set[MyType]): Set[Set[MyType]] = strictReduce(source) match {
case empty if empty.isEmpty => Set(source)
case reduced => reduced
}
//Reduces source, which contains items with different values
def reduceAll(source: Set[MyType]): Set[Set[MyType]] = source.
groupBy(_.value). //divide by values, because they are not merge-able
mapValues(reduce). //reduce for every group
values.reduceLeft((solutionSetForValueA, solutionSetForValueB) =>
//merge solutions for different groups
for(subSolutionForValueA <- solutionSetForValueA;
subSolutionForValueB <- solutionSetForValueB)
yield (subSolutionForValueA ++ subSolutionForValueB) //merge subSolutions
)
}
And here is the sample, which uses it:
object Example extends App {
val source = Set(
MyType(0L, 1L, 1.0),
MyType(1L, 2L, 2.0), //different value
MyType(1L, 3L, 1.0), //competing with next
MyType(1L, 4L, 1.0), //competing with prev
MyType(3L, 5L, 1.0), //joinable with pre-prev
MyType(2L, 4L, 2.0), //joinable with second
MyType(0L, 4L, 3.0) //lonely
)
val solutions: Set[Set[MyType]] = MyType.reduceAll(source)
//here you could choose the best solution (for example by size)
//printing out
solutions.foreach(solution => println(solution.toList.sortBy(_.from).sortBy(_.value).
map(item => s"${item.from}->${item.to}(${item.value})").mkString(", ")))
}
My result is:
0->5(1.0), 1->4(1.0), 1->4(2.0), 0->4(3.0)
0->4(1.0), 1->5(1.0), 1->4(2.0), 0->4(3.0)
Here is what I came up with:
def reduce(accumulator: Seq[MyType], original: Seq[MyType]): Seq[MyType] = original match {
case Nil => accumulator
case head :: xs => {
val found = xs.find(_.timeSpan.getStart().equals(head.timeSpan.getEnd))
if (found.isDefined && found.get.value == head.value) {
reduce(
accumulator :+ (MyType(new Interval(head.timeSpan.getStart, found.get.timeSpan.getEnd), head.value)),
original.diff(Seq(found.get, head))
)
}
else
reduce(
accumulator :+ head,
xs
)
}
}
What is the most efficient way to iterate over two lists (of differing length) backwards in Scala?
So for two lists
List(a,b,c) and List(1,2)
the pairs would be
(c,2) and (b,1)
Note: I would rather not do a reverse of each list.
A simple way is:
List('a','b','c').reverse zip List(1,2).reverse
Reversing the list is O(n), however, if you're worried about efficiency.
According to List's scaladoc, using reverseIterator might be more efficient. That way you don't create a new list as with reverse, but traverse it as you keep iterating. That'd be:
val it = list1.reverseIterator zip list2.reverseIterator //returns an Iterator you can force
it.toList // List((c,2), (b,1))
Using parallel collections,
def parRevZip (a: List[String], b: List[Int]) = {
val max = Math.max(a.size, b.size)
val n = Math.abs(a.size - b.size)
if (a.size > b.size)
(max to n+1 by -1).par.map { i => (a(i-1), b(i-n-1)) }
else
(max to n+1 by -1).par.map { i => (a(i-n-1), b(i-1)) }
}
Taking into account different index values for possibly different-sized lists, this approach fetches and pairs the same number of elements starting from the end of each list.
Performance needs careful evaluation; for small lists, a plain reverse and zipping may prove much simpler and efficient; for large lists, on the contrary, this parallel approach may be of interest.
Code Refinement
def parRevZip[A,B] (a: List[A], b: List[B]) = {
val aSize = a.size
val bSize = b.size
val max = Math.max(aSize, bSize)
val n = Math.abs(aSize - bSize)
if (aSize > bSize)
(max-1 to n by -1).par.map { i => (a(i), b(i-n)) }
else
(max-1 to n by -1).par.map { i => (a(i-n), b(i)) }
}
Using non-recursive collections
Convenient collections here, where the computation of size (and indexed access) is O(1) or quasi-constant, include for instance Array (see Recursive collections in Scala such as List).
Hence,
def parRevZip[A,B] (a: Array[A], b: Array[B])
which, however, no longer follows the original requirement of processing Lists.
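A hedged sketch of that idea, pairing elements from the ends of two Arrays by index (revZip is a made-up name; constant-time length and apply are what make Array attractive here):
// Pairs the last elements of both arrays, then the second-to-last, and so on,
// without reversing or copying either array.
def revZip[A, B](a: Array[A], b: Array[B]): Iterator[(A, B)] = {
  val n = math.min(a.length, b.length)
  Iterator.tabulate(n)(i => (a(a.length - 1 - i), b(b.length - 1 - i)))
}
revZip(Array("a", "b", "c"), Array(1, 2)).toList // List((c,2), (b,1))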
I think you mean this:
val a = List(1, 2, 3)
val b = List(8, 9)
val result = a.reverse zip b.reverse
Here is my attempt at this problem. The original lists a and b are not duplicated. The operation is O(N) due to List.size:
object test extends App {
val a = List("a", "b", "c") //> a : List[String] = List(a, b, c)
val b = List(1, 2) //> b : List[Int] = List(1, 2)
val aSize = a.size //> aSize : Int = 3
val bSize = b.size //> bSize : Int = 2
// find which is smaller and which is bigger list
val (smaller, bigger) = if (aSize < bSize) (a, b) else (b, a)
//> smaller : List[Any] = List(1, 2)
//| bigger : List[Any] = List(a, b, c)
// skip the extra entries from head of bigger list
val truncated = bigger.drop(Math.abs(aSize - bSize))
//> truncated : List[Any] = List(b, c)
val result = if (a == smaller)
smaller.zip(truncated).reverse
else
truncated.zip(smaller).reverse //> result : List[(Any, Any)] = List((c,2), (b,1))
}