So I know how reduce, accumulators, and fold work in C++, Python, etc., but for some reason in Scade Suite it's kind of confusing to me.
[Image: Scade Suite example that is confusing me]
What I'm not understanding from the example is: are the two arrays multiplied with each other without the accumulator value? How are both arrays being stepped through, how are they being multiplied by each other, and if that can happen, what's the point of having an accumulator value in the first place? Can someone break this down for me? I'm dumb.
In the way fold is defined in Scade, the output of the operator contained in the fold construct (here, Output1 of mult_scalar) from the previous iteration is fed as an input (here, Acc) to the next iteration as we progress through the array.
There is an accumulator, defined implicitly by the "+" operation in mult_scalar. Its initial value is given by the input of fold marked "a". Note that in each iteration mult_scalar adds the product of two elements from the input arrays to Acc, producing Output1.
Fold calls an operator (here, mult_scalar) iteratively, supplying it with the following arguments: the accumulator values (there may be more than one; there is just one in this case) and the respective elements of the input arrays. After each iteration the accumulator value is updated and fed into the next iteration.
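If it helps to see the same shape in a textual language, here is a minimal Haskell sketch of what the fold construct does. Acc, Output1 and mult_scalar come from the example above; the names multScalar and dotProduct, and the Haskell rendering itself, are my own illustration:

-- multScalar plays the role of the operator inside fold: it takes the
-- accumulator plus one element from each array and returns the new
-- accumulator (Output1 of one iteration becomes Acc of the next).
multScalar :: Int -> Int -> Int -> Int
multScalar acc x y = acc + x * y

-- foldl steps through both arrays in lockstep (via zip); the 0 is the
-- initial accumulator value, i.e. the "a"-marked input of fold.
dotProduct :: [Int] -> [Int] -> Int
dotProduct xs ys = foldl (\acc (x, y) -> multScalar acc x y) 0 (zip xs ys)

-- dotProduct [1,2,3] [4,5,6] == 0 + 1*4 + 2*5 + 3*6 == 32

So both arrays are stepped through together, one element from each per iteration, and the accumulator is what carries the running sum between iterations; without it, each iteration would only produce an isolated product.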
Is there a more concise way of writing this relatively common type of loop,
70 M=NTOC-N
L=0
DO 100 I=M,NTOC
L=L+1
X(L)=XI(I)
100 Y(L)=YI(I)
Without going into the definitions of the indices, what it does is copy the contents of arrays XI, YI from index M to NTOC into arrays X, Y at indices 1 to NTOC-M+1, however many are needed.
While restructuring some older code, I noticed I had a large number of loops of this kind, and while I probably didn't know better at the time, I was wondering whether there is now a more concise way of writing this to aid code legibility/readability. I know Fortran nowadays has excellent support for all kinds of array operations, so if someone knows a way they believe could be more legible, I would be very grateful for any suggestions!
Assuming n is positive, over the course of the loop i takes the values m, m+1, ..., ntoc and so the elements of xi chosen are, in order, xi(m), xi(m+1), ..., xi(ntoc). The elements of yi are similar.
In terms of an array section, xi(m:ntoc) represents the same selection of elements.
Similarly, the elements of x on the left-hand side are x(1), x(2), ..., x(ntoc-m+1) (=x(n+1)). As an array section, x(1:n+1) represents the same elements.
That means:
x(1:n+1)=xi(ntoc-n:ntoc) ! Replacing m with its value
y(1:n+1)=yi(ntoc-n:ntoc)
And if the bounds of x and y are 1 and n+1, or the arrays are allocatable, then the whole arrays x and y could be used on the left-hand sides.
For n zero or negative the array section will safely select the same elements as the loop (one or none).
If you're going to use i and l outside that fragment then you'll of course have to set those manually (and don't forget that i will take the value ntoc+1).
Finally, if you want more incentive to get rid of that loop: note that non-block do constructs like this one are deleted as of Fortran 2018 (drafted as Fortran 2015).
The problem
I'm looking for a container that is used to save partial results of n - 1 problems in order to calculate the nth one. This means that the size of the container, at the end, will always be n.
Each element, i, of the container depends on at least 2 and up to 4 previous results.
The container has to provide:
constant time insertions at either beginning or end (one of the two, not necessarily both)
constant time indexing in the middle
or alternatively (given an O(n) initialization):
constant time single element edits
constant time indexing in the middle
What is std::vector and why is it relevant
For those of you who don't know C++, std::vector is a dynamically sized array. It is a perfect fit for this problem because it is able to:
reserve space at construction
offer constant time indexing in the middle
offer constant time insertion at the end (with a reserved space)
Therefore this problem is solvable in O(n) complexity, in C++.
Why Data.Vector is not std::vector
Data.Vector, together with Data.Array, provides similar functionality to std::vector, but not quite the same. Both, of course, offer constant time indexing in the middle, but they offer neither constant time modification ((//), for example, is at least O(n)) nor constant time insertion at either beginning or end.
Conclusion
What container really mimics std::vector in Haskell? Alternatively, what is my best shot?
From reddit comes the suggestion to use Data.Vector.constructN:
O(n) Construct a vector with n elements by repeatedly applying the generator function to the already constructed part of the vector.
constructN 3 f = let a = f <> ; b = f <a> ; c = f <a,b> in <a,b,c>
For example:
λ import qualified Data.Vector as V
λ V.constructN 10 V.length
fromList [0,1,2,3,4,5,6,7,8,9]
λ V.constructN 10 $ (1+) . V.sum
fromList [1,2,4,8,16,32,64,128,256,512]
λ V.constructN 10 $ \v -> let n = V.length v in if n <= 1 then 1 else (v V.! (n - 1)) + (v V.! (n - 2))
fromList [1,1,2,3,5,8,13,21,34,55]
This certainly seems to qualify to solve the problem as you've described it above.
The first data structures that come to my mind are either Maps from Data.Map or Sequences from Data.Sequence.
Update
Data.Sequence
Sequences are persistent data structures that support most operations efficiently, while only allowing finite sequences. Their implementation is based on finger trees, if you are interested. But which qualities does it have?
O(1) calculation of the length
O(1) insert at front/back with the operators <| and |> respectively.
O(n) creation from a list with fromList
O(log(min(n1,n2))) concatenation for sequences of length n1 and n2.
O(log(min(i,n-i))) indexing for an element at position i in a sequence of length n.
Furthermore this structure supports a lot of the known and handy functions you'd expect from a list-like structure: replicate, zip, null, scans, sort, take, drop, splitAt and many more. Due to these similarities you have to either import the module qualified or hide the Prelude functions that have the same names.
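As a minimal sketch of those operations (the concrete numbers are made up for illustration):

import Data.Sequence (Seq, (<|), (|>), index)
import qualified Data.Sequence as Seq

main :: IO ()
main = do
  let s = Seq.fromList [2, 3, 4] :: Seq Int  -- O(n) construction
  print (1 <| s)        -- fromList [1,2,3,4]: O(1) insert at the front
  print (s |> 5)        -- fromList [2,3,4,5]: O(1) insert at the back
  print (s `index` 1)   -- 3: O(log(min(i,n-i))) indexing
  print (Seq.length s)  -- 3: O(1) length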
Data.Map
Maps are the standard workhorse for realizing a correspondence between "things"; what you might call a hashmap or associative array in other programming languages is called a Map in Haskell. Unlike in, say, Python, Maps are pure, so an update gives you back a new Map and does not modify the original instance.
Maps come in two flavors - strict and lazy.
Quoting from the documentation:
Strict
API of this module is strict in both the keys and the values.
Lazy
API of this module is strict in the keys, but lazy in the values.
So you need to choose what fits best for your application. You can try both versions and benchmark with criterion.
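To make the difference concrete, here is a small sketch of my own (not from the documentation) showing how the lazy API tolerates a value that the strict API would force:

import qualified Data.Map.Lazy as ML
import qualified Data.Map.Strict as MS

main :: IO ()
main = do
  -- Lazy: the bottom value is stored as an unevaluated thunk, so
  -- size works fine as long as the value itself is never demanded.
  let lazyMap = ML.insert 1 (error "boom") (ML.singleton 0 "ok")
  print (ML.size lazyMap)  -- 2
  -- Strict: insert evaluates the value, so the analogous line would
  -- throw "boom" as soon as the resulting map is demanded:
  -- print (MS.size (MS.insert 1 (error "boom") (MS.singleton 0 "ok")))
  print (MS.size (MS.singleton 0 "ok"))  -- 1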
Instead of listing the features of Data.Map, I want to pass on to
Data.IntMap.Strict
which can leverage the fact that the keys are integers to squeeze out better performance.
Quoting from the documentation we first note:
Many operations have a worst-case complexity of O(min(n,W)). This means that the operation can become linear in the number of elements with a maximum of W -- the number of bits in an Int (32 or 64).
So what are the characteristics of IntMaps?
O(min(n,W)) for (unsafe) indexing (!); unsafe in the sense that you will get an error if the key/index does not exist. This is the same behavior as Data.Sequence's index.
O(n) calculation of size
O(min(n,W)) for safe indexing lookup, which returns a Nothing if the key is not found and Just a otherwise.
O(min(n,W)) for insert, delete, adjust and update
So you see that this structure is less efficient than Sequences, but provides a bit more safety, and a big benefit if you actually don't need all entries, such as for the representation of a sparse graph where the nodes are integers.
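A minimal sketch of those operations (keys and values invented for illustration):

import qualified Data.IntMap.Strict as IM

main :: IO ()
main = do
  let m = IM.fromList [(1, 10), (3, 30)] :: IM.IntMap Int
  print (m IM.! 3)             -- 30; errors if the key is missing
  print (IM.lookup 2 m)        -- Nothing: the safe lookup
  print (IM.insert 2 20 m)     -- fromList [(1,10),(2,20),(3,30)]
  print (IM.adjust (+ 1) 1 m)  -- fromList [(1,11),(3,30)]
  print (IM.size m)            -- 2; note this is O(n), not O(1)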
For completeness I'd like to mention a package called persistent-vector, which implements Clojure-style vectors, but it seems to be abandoned, as the last upload is from 2012.
Conclusion
So for your use case I'd strongly recommend Data.Sequence or Data.Vector; unfortunately I don't have any experience with the latter, so you need to try it for yourself. From what I know it provides a powerful thing called stream fusion, which fuses multiple functions into one tight "loop" instead of running a loop for each function. A tutorial for Vector can be found here.
When looking for functional containers with particular asymptotic run times, I always pull out Edison.
Note that there's a result that in a strict language with immutable data structures, there's always a logarithmic slowdown when implementing a mutable data structure on top of them. It's an open problem whether the limited mutation hidden behind laziness can avoid that slowdown. There's also the issue of persistent vs. transient...
Okasaki is still a good read for background, but finger trees or something more complex like an RRB-tree should be available "off-the-shelf" and solve your problem.
I'm looking for a container that is used to save partial results of n - 1 problems in order to calculate the nth one.
Each element, i, of the container depends on at least 2 and up to 4 previous results.
Let's consider a very small program that calculates Fibonacci numbers.
fib 1 = 1
fib 2 = 1
fib n = fib (n-1) + fib (n-2)
This is great for small n, but horrible for n > 10. At this point, you stumble across this gem:
fib n = fibs !! n where fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
You may be tempted to exclaim that this is dark magic (infinite, self-referential list building and zipping? wth!) but it is really a great example of tying the knot, and of using laziness to ensure that values are calculated as needed.
Similarly, we can use an array to tie the knot too.
import Data.Array
fib :: Int -> Int
fib n = arr ! n
  where arr :: Array Int Int
        arr = listArray (1, n) (map fib' [1 .. n])
        fib' 1 = 1
        fib' 2 = 1
        fib' i = arr ! (i - 1) + arr ! (i - 2)
Each element of the array is a thunk that uses other elements of the array to calculate its value. In this way, we can build a single array, never having to perform concatenation, and pull values from the array at will, only paying for the calculation up to that point.
The beauty of this method is that you don't have to look only behind you; you can look in front of you as well.
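For instance, here is a sketch of suffix sums, where every thunk refers to the element after it (suffixSums is my own illustrative name, not from the answer above):

import Data.Array

-- Each element i is its input value plus the (not yet computed)
-- element i+1; laziness resolves the forward references on demand.
suffixSums :: [Int] -> Array Int Int
suffixSums xs = arr
  where n   = length xs
        arr = listArray (1, n)
                [ x + (if i < n then arr ! (i + 1) else 0)
                | (i, x) <- zip [1 .. n] xs ]

-- elems (suffixSums [1,2,3,4]) == [10,9,7,4]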
I have arrays defined in Fortran as follows:
integer,dimension(100)::a
integer,dimension(100)::partial_sum_a
I wanted to use MPI_REDUCE to sum only the values of a from indices 5 to 10 (i.e. a(5),...,a(10)) on root. How would I do that? Will the usage of:
MPI_Reduce(a(5:),partial_sum_a(5:),6,...)
be fine? Or do I have to use MPI_TYPE_VECTOR?
Yes, given that an array slice with more than one element is an array as well, the usual usage of MPI_Reduce will work. Obviously, you need to make sure that all the arguments in the MPI_Reduce call are correct, i.e. the count matching the number of elements in the send buffer, etc. Most often you can try these things yourself faster than it takes to get an answer from people on the internet.
My sincere apologies for such a naive question. I know this is simple. But nothing comes to my mind now.
I am using C++. I'm a bit concerned about efficiency since this is targeted at embedded hardware with very little processing power and RAM.
I have two integer arrays with 50 members, local to a function. I need to determine the corresponding number in the second array when an element in the first array is specified, and vice versa. I know which array the element provided for look-up belongs to, i.e. array 1 or array 2.
Ex : Array1 => 500 200 1000 300 .....
Array2 => 250 170 500 400 .....
Input 500, output will be 250
Input 400, output will be 300
Input 200, output will be 170, and so on
I think an array look-up will be least efficient. Is std::map the best option, or do I have to look for a more efficient search algorithm? If you had to do this, which option would you choose?
Any thoughts?
You can use std::map for readability, and a little efficiency as well, though in your case efficiency is of small matter:
std::map<int,int> mapping;
.... //populate
cout << mapping[200]; // 170
This is only one way (Array 1 -> Array 2), though. I'm not sure of an easier way to do the other direction than to create a second map.
To support reverse lookup, i.e. going from (Array 2 -> Array 1), Reverse map lookup suggests using Boost.Bimap.
As I see it there are two ways of doing it, both of which have already been suggested:
put both arrays in a map as key-value pairs and traverse the map to find the corresponding value or key, or
traverse the array the input belongs to and note the index, then get the value at that index in the other array.
I would go for the second solution as it is easier. Moreover, with only 50 elements in a static array you don't need to worry about performance.