Yesod: Is it possible to iterate a Haskell list in Julius?

I have a list of coordinates that I need to put on a map. Is it possible to iterate over the list in julius? Right now I am creating a hidden table in hamlet and accessing that table in julius, which does not seem to be an ideal solution.
Could someone point me to a better solution? Thanks.
Edit: Passing a JSON string for the list (which can be read by julius) seems to solve my problem.
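A minimal sketch of that JSON approach, assuming a reasonably recent yesod/shakespeare and aeson (the coordinates and the addMarker JavaScript helper are made-up illustrations, not from the original post); inside a handler's do-block it might look like:
let coords = [(51.5, -0.1), (48.9, 2.3)] :: [(Double, Double)]  -- sample data
toWidget [julius|
    // aeson's toJSON renders the list as a JSON array of [lat, lng] pairs
    var coords = #{toJSON coords};
    for (var i = 0; i < coords.length; i++) {
        addMarker(coords[i][0], coords[i][1]);  // hypothetical map helper
    }
|]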

As far as I know, you can't directly iterate over a list in julius. However, you can use the Monoid instance for the Javascript type to accomplish a similar effect. For example:
import Text.Julius
import Data.Monoid

rows :: [Int] -> t -> Javascript
rows xs = mconcat $ map row xs
  where
    row x = [julius|v[#{show x}] = #{show x};
|]
Then you can use rows xs wherever you'd normally put a julius block. For example, in ghci:
> renderJavascript $ rows [1..5] ()
"v[1] = 1;\nv[2] = 2;\nv[3] = 3;\nv[4] = 4;\nv[5] = 5;\n"

Related

Problem with a list in the form of [(key, [..]) ; ...]

I'm new to OCaml and trying to learn the language, and I've stumbled on a problem: in a function where I need to merge two of these lists, I can't find a way to check whether an element with a given key already exists, and if so, how to join the elements that come after it. I would appreciate any guidance.
For example if I have:
l1: [(k, [e]); (ka, [])]
l2: [(k, [f; g])]
How can I end up with:
fl: [(k, [e; f; g]); (ka, [])]
Basically, how can I find the key k in both lists and combine their associated elements?
There are functions in the standard OCaml library for dealing with lists of pairs where the first element of each pair is a key. You will find them described here: https://ocaml.org/releases/4.12/api/List.html under Association lists.
I will repeat what @ivg says. This is not how you want to solve your problem if you have more than just a few pairs to work with.
First of all, using lists as mappings is a bad idea. It is much better to use dedicated data structures such as maps and hash tables.
Answering your question directly, you can concatenate two lists using the (@) operator, e.g.,
# [1;2;3] @ [4;5;6];;
- : int list = [1; 2; 3; 4; 5; 6]
If you don't want repeated elements when you merge, then, and I feel like I repeat myself, it is bad to use lists for sets; it is better to use dedicated data structures such as sets and hash sets. But if you want to continue, you can merge two lists without repetitions by checking whether an element is already in the list before prepending it. This is easy to implement but hard to run, in the sense that it takes quadratic time to merge two lists this way.
If you still want to stick with the list of pairs, then you will find the List.assoc function useful, as it finds a value by key. The overall algorithm would be: given two lists, xs and ys, fold over the elements of ys using xs as the initial value acc, and for each (ky,y) in ys, if ky is already in acc, find the value x associated with ky, remove it (List.remove_assoc), merge x and y, and prepend the merged pair to acc; otherwise (if ky is not in acc) just prepend (ky,y) to acc. Note that this algorithm doesn't preserve order, so if that matters you need something more complex. Also, if your keys are sorted you can make it a little more efficient and easier to implement.
I guess you're doing this to practice with lists.
What I would do is store the already-found keys in an accumulator:
let mergePairs yourList =
  let rec aux accKeys = function
    | [] -> []
    | (k, v) :: xs ->
      if List.mem k accKeys then aux accKeys xs   (* we suppress already existing keys *)
      else
        (* collect the lists of all the other pairs with key = k in xs *)
        let rest = List.concat (List.map snd (List.filter (fun (k', _) -> k' = k) xs)) in
        (k, v @ rest) :: aux (k :: accKeys) xs
  in
  aux [] yourList;;

Different result in OCaml and ReasonML

There is a case of mapping two vectors into a single vector. I expected the result to be the same in both MLs, but unfortunately the ReasonML result is different. Please help and comment on how to fix it.
OCaml
List.map2 (fun x y -> x+y) [1;2;3] [10;20;30];;
[11; 22; 33]
ReasonML
Js.log(List.map2 ( (fun (x,y) => x+y), [1,2,3], [10,20,30]))
[11,[22,[33,0]]]
This is the same result. If you run:
Js.log([11,22,33]);
You'll get:
[11,[22,[33,0]]]
The result is the same, but you're using different methods of printing them. If instead of Js.log you use rtop or sketch.sh, you'll get the output you expect:
- : list(int) = [11, 22, 33]
Js.log prints it differently because it is a BuckleScript binding to console.log, which will print the JavaScript-representation of the value you give to it. And lists don't exist in JavaScript, only arrays do.
The way BuckleScript represents lists is pretty much the same way it is done natively. A list in OCaml and Reason is a "cons-cell", which is essentially a tuple or a 2-element array, where the first item is the value of that cell and the last item is a pointer to the next cell. The list type is essentially defined like this:
type list('a) =
  | Node('a, list('a))
  | Empty;
And with this definition the list could have been constructed with:
Node(11, Node(22, Node(33, Empty)))
which is represented in JavaScript like this:
[11,[22,[33,0]]]
 ^   ^   ^  ^
 |   |   |  The Empty list
 |   |   Third value
 |   Second value
 First value
Lists are defined this way because immutability makes this representation very efficient: we can add or remove values at the front without copying all the items of the old list into a new one. To add an item we only need to create one new "cons-cell". Using the JavaScript representation with imagined immutability:
const old = [11,[22,[33,0]]];
const next = [99, old];
And to remove an item from the front we don't have to create anything. We can just get a reference to and re-use a sub-list, because we know it won't change.
const old = [11,[22,[33,0]]];
const next = old[1];
The downside of lists is that adding and removing items at the end is relatively expensive. But in practice, if you structure your code in a functional way, using recursion, lists will be very natural to work with. And very efficient.
@Igor Kapkov, thank you for your help. Based on your comment, I found a pipeline statement in the link; here is a summary.
let a = List.map2 ( (fun (x,y) => x+y), [1,2,3], [10,20,30] )
let logl = l => l |> Array.of_list |> Js.log;
a |> logl
[11,22,33]

Haskell: Filter list for elements of second list

I am currently trying to filter a list that consists of tuples of the form (String, Double), using a second list that consists of Strings. If a tuple doesn't contain a String from the second list, it should be removed from the list of tuples. So far I came up with this:
test :: [ExamScore] -> String -> [ExamScore]
test a b = filter ((== b).fst) a
My current problem is replacing the single String that is filtered for with a list of Strings. Thanks for your help! Please go easy on me, I'm a first-year informatics student who hasn't coded before.
It's almost the same, just another filter function using elem:
test :: [ExamScore] -> [String] -> [ExamScore]
test a b = filter (\(s, _) -> elem s b) a
Or, if you're more into the composition style:
test a b = filter (flip elem b . fst) a
(It's worth noting that it's not the most efficient way, since elem is O(N) for lists, so depending on your case you may want to find a better structure for storing the keys.)
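For instance, here is a sketch of that idea using Data.Set from containers (the ExamScore alias is assumed to match the question's type):
import qualified Data.Set as Set

type ExamScore = (String, Double)  -- assumed definition

-- Build the set of wanted keys once; each membership test is then O(log n).
testSet :: [ExamScore] -> [String] -> [ExamScore]
testSet a b = filter (\(s, _) -> s `Set.member` keys) a
  where
    keys = Set.fromList b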

Haskell: Scan Through a List and Apply A Different Function for Each Element

I need to scan through a document and accumulate the output of different functions for each string in the file. The function run on any given line of the file depends on what is in that line.
I could do this very inefficiently by making a complete pass through the file for every list I wanted to collect. Example pseudo-code:
at :: B.ByteString -> Maybe Atom
at line
  | line == ATOM record = do stuff to return Just Atom
  | otherwise           = Nothing

ot :: B.ByteString -> Maybe Sheet
ot line
  | line == SHEET record = do other stuff to return Just Sheet
  | otherwise            = Nothing
Then, I would map each of these functions over the entire list of lines in the file to get a complete list of Atoms and Sheets:
mapper :: [B.ByteString] -> IO ()
mapper lines = do
  let atoms  = mapMaybe at lines
  let sheets = mapMaybe ot lines
  -- Do stuff with my atoms and sheets
However, this is inefficient because I am mapping through the entire list of strings for every list I am trying to create. Instead, I want to map through the list of line strings only once, identify each line as I am moving through it, and then apply the appropriate function and store these values in different lists.
My C mentality wants to do this (pseudo code):
mapper' :: [B.ByteString] -> IO ()
mapper' lines = do
  let atoms = []
  let sheets = []
  for line in lines:
    | line == ATOM record  = (atoms = atoms ++ at line)
    | line == SHEET record = (sheets = sheets ++ ot line)
  -- Now 'atoms' is a complete list of all the ATOM records
  -- and 'sheets' is a complete list of all the SHEET records
What is the Haskell way of doing this? I simply can't get my functional-programming mindset to come up with a solution.
First of all, I think that the answers others have supplied will work at least 95% of the time. It's always good practice to code for the problem at hand by using appropriate data types (or tuples in some cases). However, sometimes you really don't know in advance what you're looking for in the list, and in these cases trying to enumerate all possibilities is difficult/time-consuming/error-prone. Or, you're writing multiple variants of the same sort of thing (manually inlining multiple folds into one) and you'd like to capture the abstraction.
Fortunately, there are a few techniques that can help.
The framework solution
(somewhat self-evangelizing)
First, the various "iteratee/enumerator" packages often provide functions to deal with this sort of problem. I'm most familiar with iteratee, which would let you do the following:
import Data.Iteratee as I
import Data.Iteratee.Char
import Data.Maybe

-- first, you'll need some way to process the Atoms/Sheets/etc. you're getting
-- if you want to just return them as a list, you can use the built-in
-- stream2list function

-- next, create stream transformers
-- given at :: B.ByteString -> Maybe Atom
-- create a stream transformer from ByteString lines to Atoms
atIter :: Enumeratee [B.ByteString] [Atom] m a
atIter = I.mapChunks (catMaybes . map at)

otIter :: Enumeratee [B.ByteString] [Sheet] m a
otIter = I.mapChunks (catMaybes . map ot)

-- finally, combine multiple processors into one
-- if you have more than one processor, you can use zip3, zip4, etc.
procFile :: Iteratee [B.ByteString] m ([Atom],[Sheet])
procFile = I.zip (atIter =$ stream2list) (otIter =$ stream2list)

-- and run it on some data
runner :: FilePath -> IO ([Atom],[Sheet])
runner filename = do
  resultIter <- enumFile defaultBufSize filename $= enumLinesBS $ procFile
  run resultIter
One benefit this gives you is extra composability. You can create transformers as you like, and just combine them with zip. You can even run the consumers in parallel if you like (although only if you're working in the IO monad, and probably not worth it unless the consumers do a lot of work) by changing to this:
import Data.Iteratee.Parallel
parProcFile = I.zip (parI $ atIter =$ stream2list) (parI $ otIter =$ stream2list)
The result of doing so isn't the same as a single for-loop - this will still perform multiple traversals of the data. However, the traversal pattern has changed. This will load a certain amount of data at once (defaultBufSize bytes) and traverse that chunk multiple times, storing partial results as necessary. After a chunk has been entirely consumed, the next chunk is loaded and the old one can be garbage collected.
Hopefully this will demonstrate the difference:
Data.List.zip:
  x1 x2 x3 .. x_n
                  x1 x2 x3 .. x_n

Data.Iteratee.zip:
  x1 x2      x3 x4      ..      x_n-1 x_n
        x1 x2      x3 x4             x_n-1 x_n
If you're doing enough work that parallelism makes sense, this isn't a problem at all. Due to memory locality, the performance is much better than making multiple traversals over the entire input, as Data.List.zip would.
The beautiful solution
If a single-traversal solution really does make the most sense, you might be interested in Max Rabkin's Beautiful Folding post, and Conal Elliott's followup work (this too). The essential idea is that you can create data structures to represent folds and zips, and combining these lets you create a new, combined fold/zip function that only needs one traversal. It's maybe a little advanced for a Haskell beginner, but since you're thinking about the problem you may find it interesting or useful. Max's post is probably the best starting point.
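To make the idea concrete, here is a minimal sketch of that approach (my own simplified version, not Max Rabkin's actual code): a fold is packaged as a value, and two folds can be paired so that a single traversal produces both results.
{-# LANGUAGE ExistentialQuantification #-}
import Data.List (foldl')

-- A left fold as a value: step function, start value, and finishing function,
-- with the accumulator type hidden.
data Fold b c = forall a. Fold (a -> b -> a) a (a -> c)

runFold :: Fold b c -> [b] -> c
runFold (Fold step start done) = done . foldl' step start

-- Pair two folds so one traversal over the input computes both results.
bothFolds :: Fold b c -> Fold b c' -> Fold b (c, c')
bothFolds (Fold f fz fd) (Fold g gz gd) =
  Fold (\(x, y) b -> (f x b, g y b)) (fz, gz) (\(x, y) -> (fd x, gd y))

-- Example: sum and length computed in a single pass.
sumF :: Fold Int Int
sumF = Fold (+) 0 id

lengthF :: Fold b Int
lengthF = Fold (\n _ -> n + 1) 0 id

-- runFold (bothFolds sumF lengthF) [1..10]  ==  (55,10)
In the same way you could pair a fold that collects Atoms with one that collects Sheets and traverse the lines only once.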
I show a solution for two types of line, but it is easily extended to five types of line by using a five-tuple instead of a two-tuple.
import Data.Monoid

eachLine :: B.ByteString -> ([Atom], [Sheet])
eachLine bs | isAnAtom bs = ([ {- calculate an Atom -} ], [])
            | isASheet bs = ([], [ {- calculate a Sheet -} ])
            | otherwise   = error "eachLine"

allLines :: [B.ByteString] -> ([Atom], [Sheet])
allLines bss = mconcat (map eachLine bss)
The magic is done by mconcat from Data.Monoid (included with GHC).
(On a point of style: personally I would define a Line type, a parseLine :: B.ByteString -> Line function and write eachLine bs = case parseLine bs of .... But this is peripheral to your question.)
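For a quick illustration of what mconcat does here: the Monoid instance for pairs combines the components, so in GHCi, with made-up placeholder values,
> mconcat [(["a1"], []), ([], ["s1"]), (["a2"], [])]
(["a1","a2"],["s1"])
which is exactly the shape allLines produces.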
It is a good idea to introduce a new ADT, e.g. Summary, instead of tuples.
Then, since you want to accumulate the values of Summary, you can make it an instance of Data.Monoid. Then you classify each of your lines with the help of classifier functions (e.g. isAtom, isSheet, etc.) and concatenate them together using Monoid's mconcat function (as suggested by @dave4420).
Here is the code (it uses String instead of ByteString, but it is quite easy to change):
module Classifier where

import Data.List
import Data.Monoid

data Summary = Summary
  { atoms  :: [String]
  , sheets :: [String]
  , digits :: [String]
  } deriving (Show)

instance Monoid Summary where
  mempty = Summary [] [] []
  Summary as1 ss1 ds1 `mappend` Summary as2 ss2 ds2 =
    Summary (as1 `mappend` as2)
            (ss1 `mappend` ss2)
            (ds1 `mappend` ds2)

classify :: [String] -> Summary
classify = mconcat . map classifyLine

classifyLine :: String -> Summary
classifyLine line
  | isAtom line  = Summary [line] [] []   -- or "mempty { atoms = [line] }"
  | isSheet line = Summary [] [line] []
  | isDigit line = Summary [] [] [line]
  | otherwise    = mempty                 -- or "error" if you need this

isAtom, isSheet, isDigit :: String -> Bool
isAtom  = isPrefixOf "atom"
isSheet = isPrefixOf "sheet"
isDigit = isPrefixOf "digits"

input :: [String]
input = ["atom1", "sheet1", "sheet2", "digits1"]

test :: Summary
test = classify input
If you have only 2 alternatives, using Either might be a good idea. In that case combine your functions, map the list, and use lefts and rights to get the results:
import Data.Either
-- first sample function, returning String
f1 x = show $ x `div` 2
-- second sample function, returning Int
f2 x = 3*x+1
-- combined function returning Either String Int
hotpo x = if even x then Left (f1 x) else Right (f2 x)
xs = map hotpo [1..10]
-- [Right 4,Left "1",Right 10,Left "2",Right 16,Left "3",Right 22,Left "4",Right 28,Left "5"]
lefts xs
-- ["1","2","3","4","5"]
rights xs
-- [4,10,16,22,28]
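If you want both halves from a single pass, Data.Either also provides partitionEithers (shown here on the same xs as above):
partitionEithers xs
-- (["1","2","3","4","5"],[4,10,16,22,28])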

Haskell list of tuples, with unique tuples

I come from a Python and Java background, so Haskell is quite different for me. I'm trying little exercises to learn, but I am stuck on this one.
I have an ordered list of tuples, [(name, studentNumber)], and I want to filter this list so that each student and each studentNumber appears only once. Since the tuples are ordered, I want to keep the first instance of a name or studentNumber and remove any others that may show up.
I tried doing a list comprehension, but I'm not sure how to check whether a name or number has already been added to the list.
It sounds as if you'd want (as a first, inefficient approximation) something like this:
import Data.List (nubBy)
import Data.Function (on)
filt = nubBy ((==) `on` snd) . nubBy ((==) `on` fst)
The first call to nubBy will result in a list in which each name appears only once, and that will then be passed to the second, resulting in a list in which each number appears only once.
Just using nub will result in a list in which each (name,number) pair occurs only once; there might still be repetitions of names with different numbers and numbers with different names.
(Of course something custom with an accumulator would be faster.)
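For example, in GHCi (the sample data is made up, and nub comes from Data.List):
> filt [("ann", 1), ("ann", 2), ("bob", 1), ("cat", 3)]
[("ann",1),("cat",3)]
> nub [("ann", 1), ("ann", 2), ("bob", 1), ("cat", 3)]
[("ann",1),("ann",2),("bob",1),("cat",3)]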
You can spy on Data.List sources and write your extended nub function:
type Student = (Name, Number)
type Name    = String
type Number  = Int

unique :: [Student] -> [Student]
unique = go [] []
  where
    go unames unumbers (s@(name, number):ss)
      | name `elem` unames || number `elem` unumbers = go unames unumbers ss
      | otherwise = s : go (name:unames) (number:unumbers) ss
    go _ _ [] = []
Should do what you want.
To unique-ify a list there's always the nub function from Data.List; I think that should do exactly what you need!