I am programming in Kotlin and have a MutableList from which I would like to remove the first n elements, in place, on that specific list instance. This rules out functions like MutableList.drop(n), which return a new list.
One solution would of course be to loop and call MutableList.removeFirst() n times, but this feels inefficient, being O(n). Another way would be to choose a different data type, but I would prefer not to clutter my project by implementing my own data type for this if I can avoid it.
Is there a faster way to do this with a MutableList? If not, is there another built-in data type that can achieve this in less than O(n)?
In my opinion the best way to achieve this is
abstract fun subList(fromIndex: Int, toIndex: Int): List<E>.
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-list/sub-list.html
Under the hood it creates a SubList instance (the SubList class of AbstractList), a view backed by the original list that spans the elements between the selected indexes.
Using:
val yourList = listOf<YourType>(...)
val yourNewList = yourList.subList(5, yourList.size)
// returns the list from the 6th element to the last
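Since the question asks for removal from the same MutableList instance: because subList returns a view backed by the original list, clearing that view removes the corresponding range from the original in place. A minimal sketch (assuming a standard ArrayList-backed MutableList on the JVM, where the clear is a single element shift):
val list = mutableListOf(1, 2, 3, 4, 5, 6)
list.subList(0, 3).clear()  // removes the first 3 elements from `list` itself
println(list)               // [4, 5, 6]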
One method that seems to be faster when n is sufficiently large is the following:
Store the last listSize - n elements to keep in a temporary list,
Clear the original list instance,
Add the temporary list back to the original list.
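Expressed as a small extension function, this is roughly the following (just a sketch; removeFirstN is an illustrative name, not a stdlib function):
// Remove the first n elements in place via the clear + addAll approach.
fun <T> MutableList<T>.removeFirstN(n: Int) {
    val keep = takeLast(size - n)  // 1. copy the elements to keep
    clear()                        // 2. clear the original instance
    addAll(keep)                   // 3. add them back
}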
Here is a quick benchmark for some example values that happen to fit my use case:
val numRepetitions = 15_000
val listSize = 1_000
val maxRemove = listSize
val rnd0 = Random(0)
val rnd1 = Random(0)
// 1. Store the last `listSize - n` bytes to keep in a temporary list,
// 2. Clear original list
// 3. Add temporary list to original list
var accumulatedMsClearAddAll = 0L
for (i in 0 until numRepetitions) {
    val l = Random.nextBytes(listSize).toMutableList()
    val numRemove = rnd0.nextInt(maxRemove)
    val numKeep = listSize - numRemove
    val startTime = System.currentTimeMillis()
    val expectedOutput = l.takeLast(numKeep)
    l.clear()
    l.addAll(expectedOutput)
    val endTime = System.currentTimeMillis()
    assert(l == expectedOutput)
    accumulatedMsClearAddAll += endTime - startTime
}
// Iteratively remove the first byte `n` times.
var accumulatedMsIterative = 0L
for (i in 0 until numRepetitions) {
    val numRemove = rnd1.nextInt(maxRemove)
    val l = Random.nextBytes(listSize).toMutableList()
    val expectedOutput = l.takeLast(listSize - numRemove)
    val startTime = System.currentTimeMillis()
    for (ii in 0 until numRemove) {
        l.removeFirst()
    }
    val endTime = System.currentTimeMillis()
    assert(l == expectedOutput)
    accumulatedMsIterative += endTime - startTime
}
println("clear+addAll removal: $accumulatedMsClearAddAll ms")
println("Iterative removal: $accumulatedMsIterative ms")
Output:
Clear+addAll removal: 478 ms
Iterative removal: 12683 ms
I do not know how to code and I am trying to learn Pine Script, but it really makes no sense to me, so I googled how to set a backtest range and used some code someone else wrote. It doesn't seem to actually test the range I want; it tests the entirety of the chart. I'd like to test from 1/1/2018 to the present. I'm trying to do this for multiple strategies so I can better tailor them to the current market. Here is what I have for one of them, and if you are willing to help with the others I would very much appreciate it! Feel free to DM me.
//@version=5
strategy("Bollinger Bands BACKTEST", overlay=true)
source = close
length = input.int(20, minval=1)
mult = input.float(2.0, minval=0.001, maxval=50)
basis = ta.sma(source, length)
dev = mult * ta.stdev(source, length)
upper = basis + dev
lower = basis - dev
buyEntry = ta.crossover(source, lower)
sellEntry = ta.crossunder(source, upper)
if (ta.crossover(source, lower))
    strategy.entry("BBandLE", strategy.long, stop=lower, oca_name="BollingerBands", oca_type=strategy.oca.cancel, comment="BBandLE")
else
    strategy.cancel(id="BBandLE")
if (ta.crossunder(source, upper))
    strategy.entry("BBandSE", strategy.short, stop=upper, oca_name="BollingerBands", oca_type=strategy.oca.cancel, comment="BBandSE")
else
    strategy.cancel(id="BBandSE")
//plot(strategy.equity, title="equity", color=color.red, linewidth=2, style=plot.style_areabr)
// === INPUT BACKTEST RANGE ===
fromMonth = input.int(defval = 1, title = "From Month", minval = 1, maxval = 12)
fromDay = input.int(defval = 1, title = "From Day", minval = 1, maxval = 31)
fromYear = input.int(defval = 2018, title = "From Year", minval = 1970)
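Note: as posted, these inputs only define date values and are never applied, which would explain why the whole chart is still tested. A sketch of how such inputs are commonly turned into a window condition that gates the entries (startTime and inBacktestWindow are illustrative names, not part of the original script):
startTime = timestamp(fromYear, fromMonth, fromDay, 0, 0)
inBacktestWindow = time >= startTime
// then gate each entry on the window, e.g.
// if inBacktestWindow and ta.crossover(source, lower)
//     strategy.entry("BBandLE", strategy.long, stop=lower)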
I'm trying to code the fast non-dominated sorting algorithm (NDS) of Deb, used in NSGA-II, in an immutable way using Scala.
But the problem seems more difficult than I thought, so I have simplified it here into an MWE.
Imagine a population of Seq[A], where each A element is decorated (decoratedA) with a list containing pointers to other elements of the population Seq[A].
A function evalA(a: decoratedA) takes the list of linked A it contains and decrements the value of each.
Next I take a subset list decoratedAPopulation of population A and call evalA on each. I have a problem, because between each iteration on an element of this subset list decoratedAPopulation, I need to update my population of A with the new decoratedA and the new updated linkedA it contains...
More problematic, each element of the population needs an update of its linkedA to replace a linked element if it changed...
As you can see, it seems complicated to keep all the linked lists synchronized this way. I propose another solution below, which probably needs recursion to return, after each evalA, a new population with the element replaced.
How can I do that correctly in an immutable way?
It's easy to code in a mutable way, but I can't find a good way to do it immutably. Do you have an approach or an idea for that?
object test extends App {
  case class A(value: Int) { def decrement() = A(value - 1) }
  case class decoratedA(oneAdecorated: A, listOfLinkedA: Seq[A])

  // We start the algorithm loop with A elements with value = 0
  val population = Seq(A(0), A(0), A(8), A(1))
  val decoratedApopulation = Seq(
    decoratedA(population(1), Seq(population(2), population(3))),
    decoratedA(population(2), Seq(population(1), population(3))))

  // Decrement every linked A and return the updated decoration
  def evalA(a: decoratedA) = {
    val newListOfLinked = a.listOfLinkedA.map(_.decrement())
    decoratedA(a.oneAdecorated, newListOfLinked)
  }

  def run() = {
    //decoratedApopulation.map{
    // ?
    //}
  }
}
Update 1:
About the input/output of the initial algorithm.
The first part of Deb's algorithm (Step 1 to Step 3) analyses a list of individuals and computes for each A: (a) the domination count, the number of A that dominate it (the value attribute of A), and (b) the list of A it dominates (listOfLinkedA).
So it returns a population of decoratedA fully initialized, and as the input of Step 4 (my problem) I take the first non-dominated front, i.e. the subset of decoratedA elements with A value = 0.
My problem starts here, with a list of decoratedA with A value = 0; I search for the next front within this list by processing the listOfLinkedA of each of these A.
At each iteration between Step 4 and Step 6, I need to compute a new subset list B of decoratedA with A value = 0. For each element, I first decrement the domination count attribute of each element in its listOfLinkedA, then I filter to keep the elements equal to 0. At the end of Step 6, B is saved to a List[Seq[DecoratedA]], then I restart at Step 4 with B and compute a new C, and so on.
Something like that in my code: I call explore() for each element of B, with Q equal at the end to the new subset of decoratedA with value (fitness here) = 0:
case class PopulationElement(popElement: Seq[Double]) {
  implicit def poptodouble(): Seq[Double] = popElement
}

// `fitness` needs to be a val so that other instances can read it in explore()
class SolutionElement(values: PopulationElement, val fitness: Double, dominates: Seq[SolutionElement]) {
  def decrement() = if (fitness == 0) this else new SolutionElement(values, fitness - 1, dominates)

  def explore(Q: Seq[SolutionElement]): (SolutionElement, Seq[SolutionElement]) = {
    // return all dominated elements with fitness - 1
    val newSolutionSet = dominates.map(_.decrement())
    val filteredSolution: Seq[SolutionElement] = newSolutionSet.filter(s => s.fitness == 0.0).diff(Q)
    (this, filteredSolution)
  }
}
At the end of the algorithm, I have a final list of sequences of decoratedA, List[Seq[DecoratedA]], which contains all the computed fronts.
Update 2
A sample of values extracted from this example.
I take only the Pareto front (red) and the next front {f, h, l} with dominated count = 1.
case class p(x: Double, y: Double)
case class A(XY: p, value: Int) { def decrement() = A(XY, value - 1) }
case class ARoot(node: A, children: Seq[A])

val a = A(p(3.5, 1.0), 0)
val b = A(p(3.0, 1.5), 0)
val c = A(p(2.0, 2.0), 0)
val d = A(p(1.0, 3.0), 0)
val e = A(p(0.5, 4.0), 0)
val f = A(p(0.5, 4.5), 1)
val h = A(p(1.5, 3.5), 1)
val l = A(p(4.5, 1.0), 1)

val population = Seq(
  ARoot(a, Seq(f, h, l)),
  ARoot(b, Seq(f, h, l)),
  ARoot(c, Seq(f, h, l)),
  ARoot(d, Seq(f, h, l)),
  ARoot(e, Seq(f, h, l)),
  ARoot(f, Nil),
  ARoot(h, Nil),
  ARoot(l, Nil))
The algorithm returns List(List(a, b, c, d, e), List(f, h, l)).
Update 3
After 2 hours, and some pattern matching problems (ahem...), I'm coming back with a complete example which automatically computes the dominated counter and the children of each ARoot.
But I have the same problem: my children list computation is not totally correct, because each element A may be a shared member of another ARoot's children list, so I need to think about your answer to modify it :/ At this point I only compute children lists of Seq[p], and I need lists of Seq[A].
case class p(x: Double, y: Double) {
  def toSeq(): Seq[Double] = Seq(x, y)
}
case class A(XY: p, dominatedCounter: Int) { def decrement() = new A(XY, dominatedCounter - 1) }
case class ARoot(node: A, children: Seq[A])
case class ARootRaw(node: A, children: Seq[p])

object test_stackoverflow extends App {
  val a = new p(3.5, 1.0)
  val b = new p(3.0, 1.5)
  val c = new p(2.0, 2.0)
  val d = new p(1.0, 3.0)
  val e = new p(0.5, 4.0)
  val f = new p(0.5, 4.5)
  val g = new p(1.5, 4.5)
  val h = new p(1.5, 3.5)
  val i = new p(2.0, 3.5)
  val j = new p(2.5, 3.0)
  val k = new p(3.5, 2.0)
  val l = new p(4.5, 1.0)
  val m = new p(4.5, 2.5)
  val n = new p(4.0, 4.0)
  val o = new p(3.0, 4.0)
  val p = new p(5.0, 4.5)

  def isStriclyDominated(p1: p, p2: p): Boolean = {
    (p1.toSeq zip p2.toSeq).exists { case (g1, g2) => g1 < g2 }
  }

  def sortedByRank(population: Seq[p]) = {
    def paretoRanking(values: Set[p]) = {
      // comment from @dk14: I suppose the order of values doesn't matter here, otherwise use SortedSet
      values.map { v1 =>
        val t = (values - v1).filter(isStriclyDominated(v1, _)).toSeq
        val a = new A(v1, values.size - t.size - 1)
        val root = new ARootRaw(a, t)
        println("Root value " + root)
        root
      }
    }
    val listOfARootRaw = paretoRanking(population.toSet)
    // From @dk14: here is the conversion from Seq[p] to Seq[A]
    // From @dk14: a map with the dominatedCounter for each point (.toMap turns the set of pairs into a Map)
    val dominations: Map[p, Int] = listOfARootRaw.map(a => a.node.XY -> a.node.dominatedCounter).toMap
    val listOfARoot = listOfARootRaw.map(raw =>
      ARoot(raw.node, raw.children.map(p => A(p, dominations.getOrElse(p, 0)))))
    listOfARoot.groupBy(_.node.dominatedCounter)
  }

  // Get the first front, a subset of ARoot, and start Step 4
  println(sortedByRank(Seq(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p)).head)
}
Talking about your problem with distinguishing fronts (after update 2):
val (left,right) = population.partition(_.node.value == 0)
List(left, right.map(r => r.copy(node = r.node.copy(value = r.node.value - 1))))
No need to mutate anything here. copy copies everything except the fields you specify with new values. In this code, the new copy is linked to the same list of children, but with a new value = value - 1.
P.S. I have a feeling you may actually want to do something like this:
case class A(id: String, level: Int)
val a = A("a", 1)
val b = A("b", 2)
val c = A("c", 2)
val d = A("d", 3)
clusterize(List(a,b,c,d)) === List(List(a), List(b,c), List(d))
It's simple to implement:
def clusterize(list: List[A]) =
list.groupBy(_.level).toList.sortBy(_._1).map(_._2)
Test:
scala> clusterize(List(A("a", 1), A("b", 2), A("c", 2), A("d", 3)))
res2: List[List[A]] = List(List(A(a,1)), List(A(b,2), A(c,2)), List(A(d,3)))
P.S.2. Please consider better naming conventions, like here.
Talking about "mutating" elements in some complex structure:
The idea of "immutable mutating" some shared (between parts of a structure) value is to separate your "mutation" from the structure. Or simply saying, divide and conquerror:
calculate changes in advance
apply them
The code:
case class A(v: Int)
case class AA(a: A, seq: Seq[A]) // decoratedA

def update(input: Seq[AA]) = {
  // shows how to decrement each value wherever it is:
  val stats = input.map(_.a).groupBy(identity).mapValues(_.size) // domination count for each A
  def upd(a: A) = A(a.v - stats.getOrElse(a, 0))                 // apply the decrement
  input.map(aa => aa.copy(a = upd(aa.a), seq = aa.seq.map(upd))) // traverse and "update" the original structure
}
So, I've introduced a new Map[A, Int] structure that shows how to modify the original one. This approach is based on a highly simplified version of the applicative functor concept. In the general case, it should be Map[A, A => A], or even Map[K, A => B], or even Map[K, Zipper[A] => B] as the applicative functor (input <*> map). A Zipper (see 1, 2) could actually give you information about the current element's context.
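To illustrate that generalisation, here is a minimal sketch with the count map replaced by an explicit Map[A, A => A] update plan (updateWith and plan are illustrative names, not part of the code above):
// Same shapes as above.
case class A(v: Int)
case class AA(a: A, seq: Seq[A])

// Apply a precomputed plan of per-element updates wherever each element occurs.
def updateWith(input: Seq[AA], plan: Map[A, A => A]): Seq[AA] = {
  def upd(a: A): A = plan.get(a).map(f => f(a)).getOrElse(a)
  input.map(aa => aa.copy(a = upd(aa.a), seq = aa.seq.map(upd)))
}

// Usage: decrement A(6) and A(8) wherever they appear, leave everything else untouched.
// updateWith(input, Map(A(6) -> ((x: A) => A(x.v - 1)), A(8) -> ((x: A) => A(x.v - 1))))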
Notes:
I assumed that As with the same value are the same; that's the default behaviour for case classes. Otherwise you need to provide some additional ids (or redefine hashCode/equals).
If you need more levels, like AA(AA(AA(...))), just make stats and upd recursive; if the decrement's weight depends on the nesting level, just add the nesting level as a parameter to your recursive function.
If the decrement depends on the parent node (e.g. decrement only those A(3)s that belong to an A(3)), add the parent node(s) as part of stats's key and analyse them during upd.
If the stats calculation (how much to decrement) for, say, input(1) depends on input(0), you should use foldLeft with the partial stats as accumulator: val stats = input.foldLeft(Map[A, Int]())((partialStats, elem) => partialStats ++ analyze(partialStats, elem))
Btw, it takes O(N) here (linear memory and CPU usage).
Example:
scala> val population = Seq(A(3), A(6), A(8), A(3))
population: Seq[A] = List(A(3), A(6), A(8), A(3))
scala> val input = Seq(AA(population(1),Seq(population(2),population(3))), AA(population(2),Seq(population(1),population(3))))
input: Seq[AA] = List(AA(A(6),List(A(8), A(3))), AA(A(8),List(A(6), A(3))))
scala> update(input)
res34: Seq[AA] = List(AA(A(5),List(A(7), A(3))), AA(A(7),List(A(5), A(3))))
Hi everyone, I'm a newbie in Prolog and I have such a list (actually it is the output of my predicate, not a list):
P = [1/1, 1/3] ;
P = [1/1, 2/3] ;
P = [1/3, 1/1] ;
P = [1/3, 2/1] ;
P = [2/1, 1/3] ;
P = [2/1, 2/3] ;
P = [2/3, 1/1] ;
P = [2/3, 2/1] ;
and I need to remove duplicate terms. For example, [1/1, 2/3] and [2/3, 1/1] are the same and I should remove one of them; which one is not important. How could I do that in Prolog? Thanks in advance.
NOTE: I learnt that findall should be a good way to do this, but I still don't know the answer. Please help me.
Unless you actually show us your code, it's never going to be possible to give you precise answers.
I assume you have a predicate f/1 such that:
?- f(P).
produces the interactive result you show above. A simple solution is to change your query:
?- f([X,Y]), X < Y.
This will produce the following result:
X = 1/3, Y = 1/1 ;
X = 1/3, Y = 2/1 ;
X = 2/3, Y = 1/1 ;
X = 2/3, Y = 2/1 ;
findall/3 isn't sufficient to solve this particular situation, because you've defined uniqueness in a way that ignores the position in the list. In Prolog (and everything else) [X,Y] and [Y,X] are not equal, so you'd have to find a trick to get this to give you "unique" results.
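One such trick is to canonicalize each solution before collecting the distinct ones. A sketch, assuming your predicate is f/1 and using msort/2 (available in SWI-Prolog among others; it sorts a list by the standard order of terms without removing elements):
% Sort each pair into a canonical order, then let setof/3 keep the distinct results.
unique_pairs(Pairs) :-
    setof(Canonical, P^(f(P), msort(P, Canonical)), Pairs).
% For the solutions listed above this would give one representative per unordered pair, e.g.
% ?- unique_pairs(Ps).
% Ps = [[1/1, 1/3], [1/1, 2/3], [1/3, 2/1], [2/1, 2/3]].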