How to generate a list quickly by iterating over a file

Coming from a C# and Python background, I feel there must be a better way to read a file and populate a classic F# list. But since I know that an F# list is immutable, the obvious alternative is to use a List<string> object and call its Add method.
So far what I have at hand:
open System.IO
open System.Collections.Generic

let ptr = new StreamReader("stop-words.txt")
let lst = new List<string>()

let ProcessLine line =
    match line with
    | null -> false
    | s ->
        lst.Add(s)
        true

while ProcessLine (ptr.ReadLine()) do ()
If I were to write similar code in Python, I'd do something like:
[x[:-1] for x in open('stop-words.txt')]

Simple solution
System.IO.File.ReadAllLines(filename) |> List.ofArray
Although you can write a recursive function:
let processline (fname: string) =
    use file = new System.IO.StreamReader(fname)
    let rec dowork() =
        match file.ReadLine() with
        | null -> []
        | t -> t :: dowork()
    dowork()
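Note that dowork above is not tail-recursive, so a very long file could in principle overflow the stack. A minimal sketch of an accumulator-based, tail-recursive variant (the name readAllLines is just illustrative):
let readAllLines (fname: string) =
    use file = new System.IO.StreamReader(fname)
    // accumulate lines in reverse so the recursive call is in tail position,
    // then reverse once at the end
    let rec loop acc =
        match file.ReadLine() with
        | null -> List.rev acc
        | line -> loop (line :: acc)
    loop []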

If you want to read all lines from a file, you can just use ReadAllLines. The method returns the data as an array, but you can easily turn that into an F# list using List.ofArray, or process it using the functions in the Seq module:
open System.IO
File.ReadAllLines("stop-words.txt")
Alternatively, if you do not want to read all the contents into memory, you can use File.ReadLines which reads the lines lazily.
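For example, a lazy pipeline that filters lines as they are read (the non-empty filter is just illustrative) might look like this:
open System.IO

// File.ReadLines returns a lazy sequence; lines are pulled from the file on demand
File.ReadLines("stop-words.txt")
|> Seq.filter (fun line -> line.Length > 0)
|> List.ofSeq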

Related

How to list files of a given extension in OCaml

I want to retrieve the list of direct files (i.e. no recursive search) of a given directory and a given extension in OCaml.
I tried the following, but:
It does not look like idiomatic OCaml ("OCaml-spirit").
It does not work (module import error).
let list_osc2 =
  let list_files = Sys.readdir "tests/osc2/expected/pp" in
  List.filter (fun x -> Str.last_chars x 4 = ".osc2") (Array.to_list list_files)
I got the error (I am using OCamlPro):
Required module `Str' is unavailable
Thanks
You can use Filename.extension instead of Str.last_chars:
let list_osc2 =
  let list_files = Sys.readdir "tests/osc2/expected/pp" in
  List.filter (fun x -> Filename.extension x = ".osc2") (Array.to_list list_files)
and then use the pipe operator to make it a bit more readable:
let list_osc2 =
  Sys.readdir "tests/osc2/expected/pp"
  |> Array.to_list
  |> List.filter (fun x -> Filename.extension x = ".osc2")
I don't know how you expect this to work in OCamlPro though, as it doesn't have a filesystem as far as I'm aware.
To use the Str module, you need to link with the str library. For example, with ocamlc you need to pass str.cma, and with ocamlopt you need to pass str.cmxa. I don't know how to do that with OCamlPro.
In any case, Str.last_chars is not particularly useful here: it fails if the file name is shorter than the suffix. And your comparison would never match anyway, because ".osc2" is 5 characters, which can never equal the 4-character result of Str.last_chars x 4.
The Filename module from the standard library has functions to extract and check a file's extension. You don't need to do any string manipulation.
I don't know what you consider not "OCaml-spirit", but apart from the mistake with the string manipulation, I don't see any problem with your code. Enumerating the matches and filtering them is perfectly idiomatic.
let list_osc2 =
  let list_files = Sys.readdir "tests/osc2/expected/pp" in
  List.filter (fun name -> Filename.check_suffix name ".osc2") (Array.to_list list_files)
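For instance, the same filter can be written with the pipe operator (using the same directory path as in the question):
let list_osc2 =
  Sys.readdir "tests/osc2/expected/pp"
  |> Array.to_list
  |> List.filter (fun name -> Filename.check_suffix name ".osc2")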

F# append to list in a loop functionally

I am looking to convert this code to use an F# list instead of the C# list implementation.
I am connecting to a database and running a query. In C# I would usually create a list of some type and keep adding to it while the data reader has values. How would I go about converting this to use an F# list?
let queryDatabase (connection: NpgsqlConnection) (queryString: string) =
    let transactions = new List<string>()
    let command = new NpgsqlCommand(queryString, connection)
    let dataReader = command.ExecuteReader()
    while dataReader.Read() do
        let json = dataReader.GetString(1)
        transactions.Add(json)
    transactions
The tricky thing here is that the input data source is inherently imperative (you have to call Read which mutates the internal state). So, you're crossing from imperative to a functional world - and so you cannot avoid all mutation.
I would probably write the code using a list comprehension, which keeps a similar familiar structure, but removes explicit mutation:
let queryDatabase (connection: NpgsqlConnection) (queryString: string) =
    [ let command = new NpgsqlCommand(queryString, connection)
      let dataReader = command.ExecuteReader()
      while dataReader.Read() do
          yield dataReader.GetString(1) ]
Tomas' answer is the solution to use in production code. But for the sake of learning F# and functional programming, I present my snippet with tail recursion and the cons operator:
let drToList (dr: System.Data.IDataReader) =
    let rec toList acc =
        if not (dr.Read()) then acc
        else toList (dr.GetString(1) :: acc)
    // note: rows are accumulated in reverse order
    toList []
This tail-recursive function is compiled into imperative-like code, so there is no risk of stack overflow and execution is fast.
I also advise you to look at this C# thread and this F# documentation to see how to properly dispose of your command. Basically, you need to use something like this:
let queryDb (conn: NpgsqlConnection) (qStr: string) =
    use cmd = new NpgsqlCommand(qStr, conn)
    cmd.ExecuteReader() |> drToList
And if we go deeper, you should also think about exception handling.
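For instance, a minimal sketch that catches database errors (the decision to log and return an empty list here is just illustrative):
let queryDbSafe (conn: NpgsqlConnection) (qStr: string) =
    try
        use cmd = new NpgsqlCommand(qStr, conn)
        cmd.ExecuteReader() |> drToList
    with
    | :? NpgsqlException as ex ->
        // illustrative handling: log the failure and return an empty result
        printfn "Query failed: %s" ex.Message
        []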

Have Trouble Understanding OCaml Code

I need to modify an OCaml function:
let removeDuplicates l =
  let rec helper (seen,rest) =
    match rest with
      [] -> seen
    | h::t ->
      let seen' = failwith "to be written" in
      let rest' = failwith "to be written" in
          helper (seen',rest')
  in
  List.rev (helper ([],l));;
The function needs to take a list l and return the list with all duplicates removed. The failwith "to be written" parts are where I'm supposed to write my code. I understand how the helper function works, but I'm having trouble understanding the part helper (seen',rest'). I'm not exactly sure how the function is supposed to flow through this part, or how it works when you chain several in expressions together. We are allowed to use List.rev, which reverses a list, and List.mem, which returns true if a certain element is in a list. Can someone please explain how the flow of the function is supposed to work, so I can start writing a solution?
That line is confusing because it's indented incorrectly, or so I would claim. The proper indentation looks like this:
      let seen' = failwith "to be written" in
      let rest' = failwith "to be written" in
      helper (seen',rest')
What it's saying is: calculate a new value for seen and a new value for rest, then call yourself recursively with the two new values.
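One possible way to fill in the two values (a sketch using List.mem, which the question says is allowed): keep h only if it has not been seen yet, and always drop it from the rest:
(* sketch: seen' adds h unless it is already a member; rest' is just the tail *)
let seen' = if List.mem h seen then seen else h :: seen in
let rest' = t in
helper (seen', rest')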

Yesod: Is it possible to iterate a Haskell list in Julius?

I have a list of coordinates that I need to put on a map. Is it possible in julius to iterate over the list? Right now I am creating a hidden table in hamlet and accessing that table in julius, which does not seem like an ideal solution.
Could someone point me to a better solution? Thanks.
edit: Passing a JSON string for the list (which can be read by julius) seems to solve my problem.
As far as I know, you can't directly iterate over a list in julius. However, you can use the Monoid instance for the Javascript type to accomplish a similar effect. For example:
import Text.Julius
import Data.Monoid
rows :: [Int] -> t -> Javascript
rows xs = mconcat $ map row xs
  where
    row x = [julius|v[#{show x}] = #{show x};
|]
Then you can use rows xs wherever you'd normally put a julius block. For example, in ghci:
> renderJavascript $ rows [1..5] ()
"v[1] = 1;\nv[2] = 2;\nv[3] = 3;\nv[4] = 4;\nv[5] = 5;\n"

Haskell: Scan Through a List and Apply A Different Function for Each Element

I need to scan through a document and accumulate the output of different functions for each string in the file. The function run on any given line of the file depends on what is in that line.
I could do this very inefficiently by making a complete pass through the file for every list I wanted to collect. Example pseudo-code:
at :: B.ByteString -> Maybe Atom
at line
  | line == ATOM record = do stuff to return Just Atom
  | otherwise = Nothing

ot :: B.ByteString -> Maybe Sheet
ot line
  | line == SHEET record = do other stuff to return Just Sheet
  | otherwise = Nothing
Then, I would map each of these functions over the entire list of lines in the file to get a complete list of Atoms and Sheets:
mapper :: [B.ByteString] -> IO ()
mapper lines = do
  let atoms = mapMaybe at lines
  let sheets = mapMaybe ot lines
  -- Do stuff with my atoms and sheets
However, this is inefficient because I am mapping over the entire list of strings for every list I am trying to create. Instead, I want to map over the list of line strings only once, identify each line as I am moving through it, and then apply the appropriate function and store these values in different lists.
My C mentality wants to do this (pseudo code):
mapper' :: [B.ByteString] -> IO ()
mapper' lines = do
  let atoms = []
  let sheets = []
  for line in lines:
    | line == ATOM record = (atoms = atoms ++ at line)
    | line == SHEET record = (sheets = sheets ++ ot line)
  -- Now 'atoms' is a complete list of all the ATOM records
  -- and 'sheets' is a complete list of all the SHEET records
What is the Haskell way of doing this? I simply can't get my functional-programming mindset to come up with a solution.
First of all, I think that the answers others have supplied will work at least 95% of the time. It's always good practice to code for the problem at hand by using appropriate data types (or tuples in some cases). However, sometimes you really don't know in advance what you're looking for in the list, and in these cases trying to enumerate all possibilities is difficult/time-consuming/error-prone. Or, you're writing multiple variants of the same sort of thing (manually inlining multiple folds into one) and you'd like to capture the abstraction.
Fortunately, there are a few techniques that can help.
The framework solution
(somewhat self-evangelizing)
First, the various "iteratee/enumerator" packages often provide functions to deal with this sort of problem. I'm most familiar with iteratee, which would let you do the following:
import Data.Iteratee as I
import Data.Iteratee.Char
import Data.Maybe
-- first, you'll need some way to process the Atoms/Sheets/etc. you're getting
-- if you want to just return them as a list, you can use the built-in
-- stream2list function
-- next, create stream transformers
-- given at :: B.ByteString -> Maybe Atom
-- create a stream transformer from ByteString lines to Atoms
atIter :: Enumeratee [B.ByteString] [Atom] m a
atIter = I.mapChunks (catMaybes . map at)
otIter :: Enumeratee [B.ByteString] [Sheet] m a
otIter = I.mapChunks (catMaybes . map ot)
-- finally, combine multiple processors into one
-- if you have more than one processor, you can use zip3, zip4, etc.
procFile :: Iteratee [B.ByteString] m ([Atom],[Sheet])
procFile = I.zip (atIter =$ stream2list) (otIter =$ stream2list)
-- and run it on some data
runner :: FilePath -> IO ([Atom],[Sheet])
runner filename = do
  resultIter <- enumFile defaultBufSize filename $= enumLinesBS $ procFile
  run resultIter
One benefit this gives you is extra composability. You can create transformers as you like, and just combine them with zip. You can even run the consumers in parallel if you like (although only if you're working in the IO monad, and probably not worth it unless the consumers do a lot of work) by changing to this:
import Data.Iteratee.Parallel
parProcFile = I.zip (parI $ atIter =$ stream2list) (parI $ otIter =$ stream2list)
The result of doing so isn't the same as a single for-loop - this will still perform multiple traversals of the data. However, the traversal pattern has changed. This will load a certain amount of data at once (defaultBufSize bytes) and traverse that chunk multiple times, storing partial results as necessary. After a chunk has been entirely consumed, the next chunk is loaded and the old one can be garbage collected.
Hopefully this will demonstrate the difference:
Data.List.zip (two full passes, one after the other):
  x1 x2 x3 .. x_n
  x1 x2 x3 .. x_n
Data.Iteratee.zip (chunk by chunk; each chunk is consumed by both iteratees before the next is loaded):
  x1 x2 x3 x4 | x5 x6 x7 x8 | .. | x_n-1 x_n
  x1 x2 x3 x4 | x5 x6 x7 x8 | .. | x_n-1 x_n
If you're doing enough work that parallelism makes sense, this isn't a problem at all. Due to memory locality, the performance is much better than the multiple traversals over the entire input that Data.List.zip would make.
The beautiful solution
If a single-traversal solution really does make the most sense, you might be interested in Max Rabkin's Beautiful Folding post, and Conal Elliott's followup work (this too). The essential idea is that you can create data structures to represent folds and zips, and combining these lets you create a new, combined fold/zip function that only needs one traversal. It's maybe a little advanced for a Haskell beginner, but since you're thinking about the problem you may find it interesting or useful. Max's post is probably the best starting point.
I show a solution for two types of line, but it is easily extended to five types of line by using a five-tuple instead of a two-tuple.
import Data.Monoid
eachLine :: B.ByteString -> ([Atom], [Sheet])
eachLine bs | isAnAtom bs = ([ {- calculate an Atom -} ], [])
            | isASheet bs = ([], [ {- calculate a Sheet -} ])
            | otherwise   = error "eachLine"
allLines :: [B.ByteString] -> ([Atom], [Sheet])
allLines bss = mconcat (map eachLine bss)
The magic is done by mconcat from Data.Monoid (included with GHC).
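mconcat works here because a pair of lists is itself a Monoid, combined componentwise. For instance, in GHCi:
> mconcat [(["a1"], []), ([], ["s1"]), (["a2"], [])]
(["a1","a2"],["s1"])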
(On a point of style: personally I would define a Line type, a parseLine :: B.ByteString -> Line function and write eachLine bs = case parseLine bs of .... But this is peripheral to your question.)
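A rough sketch of that style (the constructors and the parsing logic are placeholders, assuming the same Atom, Sheet, isAnAtom and isASheet as above):
-- sketch: parse each line into a Line value first, then pattern match on it
data Line = AtomLine Atom
          | SheetLine Sheet
          | OtherLine

parseLine :: B.ByteString -> Line
parseLine bs
  | isAnAtom bs = AtomLine (undefined {- calculate an Atom -})
  | isASheet bs = SheetLine (undefined {- calculate a Sheet -})
  | otherwise   = OtherLine

eachLine :: B.ByteString -> ([Atom], [Sheet])
eachLine bs = case parseLine bs of
  AtomLine a  -> ([a], [])
  SheetLine s -> ([], [s])
  OtherLine   -> ([], [])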
It is a good idea to introduce a new ADT, e.g. "Summary" instead of tuples.
Then, since you want to accumulate the values of Summary, you can make it an instance of Data.Monoid. You then classify each of your lines with the help of classifier functions (e.g. isAtom, isSheet, etc.) and concatenate them together using Monoid's mconcat function (as suggested by @dave4420).
Here is the code (it uses String instead of ByteString, but it is quite easy to change):
module Classifier where
import Data.List
import Data.Monoid
data Summary = Summary
  { atoms :: [String]
  , sheets :: [String]
  , digits :: [String]
  } deriving (Show)

instance Monoid Summary where
  mempty = Summary [] [] []
  Summary as1 ss1 ds1 `mappend` Summary as2 ss2 ds2 =
    Summary (as1 `mappend` as2)
            (ss1 `mappend` ss2)
            (ds1 `mappend` ds2)

classify :: [String] -> Summary
classify = mconcat . map classifyLine

classifyLine :: String -> Summary
classifyLine line
  | isAtom line = Summary [line] [] []   -- or "mempty { atoms = [line] }"
  | isSheet line = Summary [] [line] []
  | isDigit line = Summary [] [] [line]
  | otherwise = mempty                   -- or "error" if you need this
isAtom, isSheet, isDigit :: String -> Bool
isAtom = isPrefixOf "atom"
isSheet = isPrefixOf "sheet"
isDigit = isPrefixOf "digits"
input :: [String]
input = ["atom1", "sheet1", "sheet2", "digits1"]
test :: Summary
test = classify input
If you have only 2 alternatives, using Either might be a good idea. In that case combine your functions, map the list, and use lefts and rights to get the results:
import Data.Either
-- first sample function, returning String
f1 x = show $ x `div` 2
-- second sample function, returning Int
f2 x = 3*x+1
-- combined function returning Either String Int
hotpo x = if even x then Left (f1 x) else Right (f2 x)
xs = map hotpo [1..10]
-- [Right 4,Left "1",Right 10,Left "2",Right 16,Left "3",Right 22,Left "4",Right 28,Left "5"]
lefts xs
-- ["1","2","3","4","5"]
rights xs
-- [4,10,16,22,28]