Calling an IO Monad inside an Arrow - opengl

Perhaps I'm going about this the wrong way, but I'm using HXT to read in some vertex data that I'd like to use in an array in HOpenGL. Vertex arrays need to be a Ptr, which is created by calling newArray. Unfortunately newArray returns an IO Ptr, so I'm not sure how to go about using it inside an Arrow. I think I need something with a type declaration similar to IO a -> Arrow a?

The type IO a -> Arrow a doesn't make sense; Arrow is a type class, not a specific type, much like Monad or Num. Specifically, an instance of Arrow is a type constructor taking two parameters that describes things that can be composed like functions, matching types end-to-end. So, converting IO a to an arrow could perhaps be called a conceptual type error.
I'm not sure exactly what you're trying to do, but if you really want to be using IO operations as part of an Arrow, you need your Arrow instance to include that. The simplest form of that is to observe that functions with types like a -> m b for any Monad instance can be composed in the obvious way. The hxt package seems to provide a more complicated type:
newtype IOSLA s a b = IOSLA { runIOSLA :: s -> a -> IO (s, [b]) }
This is some mixture of the IO, State, and [] monads, attached to a function as above such that you can compose them going through all three Monads at each step. I haven't really used hxt much, but if these are the Arrows you're working with, it's pretty simple to lift an arbitrary IO function to serve as one--just pass the state value s through unchanged, and turn the output of the function into a singleton list. There may already be a function to do this for you, but I didn't see one at a brief glance.
Basically, you'd want something like this:
liftArrIO :: (a -> IO b) -> IOSLA s a b
liftArrIO f = IOSLA $ \s x -> fmap (\y -> (s, [y])) (f x)
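For instance, with that liftArrIO in hand you could lift newArray (from Foreign.Marshal.Array) straight into an hxt pipeline; this is only a sketch, and toVertexPtr is a made-up name:
import Foreign.Ptr (Ptr)
import Foreign.Storable (Storable)
import Foreign.Marshal.Array (newArray)

-- turn the vertex list produced upstream into a Ptr suitable for HOpenGL
toVertexPtr :: Storable v => IOSLA s [v] (Ptr v)
toVertexPtr = liftArrIO newArray
(Depending on your hxt version, the ArrowIO class may already provide an arrIO function that does essentially this.)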

Is there a function that can make a string representation of any type?

I was desperately looking for the last hour for a method in the OCaml Library which converts an 'a to a string:
'a -> string
Is there something in the library which I just haven't found? Or do I have to do it differently (write everything on my own)?
It is not possible to write a printing function show of type 'a -> string in OCaml.
Indeed, types are erased after compilation in OCaml. (They are in fact erased after typechecking, which is one of the early phases of the compilation pipeline.)
Consequently, a function of type 'a -> _ can either:
ignore its argument:
let f _ = "<something>"
peek at the memory representation of a value
let f x = if Obj.is_block x then "<block>" else "<immediate>"
Even peeking at the memory representation of a value has limited utility since many different types will share the same memory representation.
If you want to print a type, you need to create a printer for this type. You can either do this by hand using the Fmt library (or the Format module in the standard library)
type tree = Leaf of int | Node of { left:tree; right: tree }
let rec pp ppf tree = match tree with
  | Leaf d -> Fmt.pf ppf "Leaf %d" d
  | Node n -> Fmt.pf ppf "Node { left:%a; right:%a }" pp n.left pp n.right
or by using a ppx (a small preprocessing extension for OCaml) like https://github.com/ocaml-ppx/ppx_deriving.
type tree = Leaf of int | Node of { left:tree; right: tree } [@@deriving show]
If you just want a quick hacky solution, you can use dump from the Batteries library. It doesn't work for all cases, but it does work for primitives, lists, etc. It accesses the underlying raw memory representation, hence it is able to overcome (to some extent) the difficulties mentioned in the other answers.
You can use it like this (after installing it via opam install batteries):
# #require "batteries";;
# Batteries.dump 1;;
- : string = "1"
# Batteries.dump 1.2;;
- : string = "1.2"
# Batteries.dump [1;2;3];;
- : string = "[1; 2; 3]"
If you want a more "proper" solution, use ppx_deriving as recommended by @octachron. It is much more reliable/maintainable/customizable.
What you are looking for is a meaningful function of type 'a. 'a -> string, with parametric polymorphism (i.e. a single function that can operate the same way for all possible types 'a, even those that didn't exist when the function was created). This is not possible in OCaml. Here are explanations depending on your programming background.
Coming from Haskell
If you were expecting such a function because you are familiar with the Haskell function show, then notice that its type is actually show :: Show a => a -> String. It uses an instance of the typeclass Show a, which is implicitly inserted by the compiler at call sites. This is not parametric polymorphism, this is ad-hoc polymorphism (show is overloaded, if you want). There is no such feature in OCaml (yet? there are projects for the future of the language, look for “modular implicits” or “modular explicits”).
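For comparison, a minimal Haskell sketch of that ad-hoc polymorphism (the Tree type here is made up): each type supplies its own Show instance, and the compiler picks the right one at each call site.
data Tree = Leaf Int | Node Tree Tree

instance Show Tree where
  show (Leaf n)   = "Leaf " ++ show n
  show (Node l r) = "Node (" ++ show l ++ ") (" ++ show r ++ ")"

-- show (Node (Leaf 1) (Leaf 2))  ==>  "Node (Leaf 1) (Leaf 2)"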
Coming from OOP
If you were expecting such a function because you are familiar with OO languages in which every value is an object with a method toString, then this is not the case in OCaml. OCaml does not use the object model pervasively, and the run-time representation of OCaml values retains no (or very little) notion of type. I refer you to @octachron's answer.
Again, toString in OOP is not parametric polymorphism but overloading: there is not a single method toString which is defined for all possible types. Instead there are multiple — possibly very different — implementations of a method of the same name. In some OO languages, programmers try to follow the discipline of implementing a method by that name for every class they define, but it is only a coding practice. One could very well create objects that do not have such a method.
[ Actually, the notions involved in both worlds are pretty similar: Haskell requires an instance of a typeclass Show a providing a function show; OOP requires an object of a class Stringifiable (for instance) providing a method toString. Or, of course, an instance/object of a descendent typeclass/class. ]
Another possibility is to use https://github.com/ocaml-ppx/ppx_deriving, which will create a function of type Path.To.My.Super.Type.t -> string that you can then use with your value. However, you still need to track the path of the type by hand, but it is better than nothing.
Another project provides a feature similar to Batteries: https://github.com/reasonml/reason-native/blob/master/src/console/README.md (I haven't tested Batteries, so I can't compare). They have the same limitation: they introspect the runtime encoding, so you can't get something really usable. I think it was done with Windows/browsers in mind, so if cross-platform support is required I would test this one first (unless Batteries is already pulled in). And even though the source code is in Reason, you can use it with the same API from OCaml.

Converting [Integer] -> Integer

I have never programmed before and I have just recently (1 week ago) started learning! The first course is functional programming, using Haskell.
I have a school assignment that I'd like to improve by removing one or two steps, but there's one pesky bug in my way.
Basically, I create a list and I get the result with the type [Integer], whereas I'd like to convert this to Integer, if possible? I've set my test function to accept the types Integer -> Integer -> Bool (takes two values, computes them, and returns a bool). The test function puts the values into two functions and compares their results.
I could just change the expected type to [Integer] (?) but that would eliminate the option of manually putting in values.
For my test cases I've chosen a few values and put them into lists. a = [0, 2, (-3)] and b = [0, 2, 4]. What I'd like to do when I call the function is to enter a and b as the values, instead of typing in each testcase every time. Is this possible? Example:
testFunction a b
instead of something like
testFunction Integer Integer.
I hope I made sense :-) Keep in mind I am just learning!
Without making this too complicated for you, what you seem to be looking for is the ability to pass two lists of Integers to a function, which accepts Integers, to perform an element-wise operation.
For this, you can use zipWith, which has a type signature of:
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
This type signature means that zipWith takes a function that accepts two individual elements, and two lists and returns a list built upon the results of the function that you passed.
In sum, you would be executing:
zipWith myFunction [1,2,3,4] [5,6,7,8]
Now, you want to create a function that first uses zipWith on each of your two functions, and then uses zipWith yet again to compare the two resulting lists, to finally return the Booleans. If you wanted to be even more sophisticated, you could use the and function at the end, to return a single Boolean, if all of the Booleans are True.
Now, building that function is left as an exercise to the reader :)
I hope this helps.
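If you get stuck, here is one possible shape of such a solution, with made-up stand-ins f and g for the two functions whose results you compare:
testFunction :: Integer -> Integer -> Bool
testFunction x y = f x == g y
  where f n = n * 2   -- stand-in for your first function
        g n = n + n   -- stand-in for your second function

testAll :: [Integer] -> [Integer] -> Bool
testAll as bs = and (zipWith testFunction as bs)

-- usage, with your lists:  testAll a b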

Nice way to keep track of several references between functions in ST monad?

I'm writing some code (a Metropolis-Hastings MCMC sampler) that will use a random number generator, and modify an array and potentially other structures based on this.
My initial idea was to use the ST monad, so that I could use ST arrays and the mersenne-random-pure64 package, keeping the PureMT generator as part of the state.
However I want to be able to split off some of the work into separate helper functions (e.g to sample a random integer in a given range, to update the array structure, and potentially more complicated things). To do this, I think I would need to pass the references to the PureMT gen and the array to all the functions, which could quickly become very ugly if I need to store more state.
My instinct is to group all of the state into a single data type that I can access anywhere, as I would using the State monad by defining a new datatype, but I don't know if that is possible with the ST monad, or the right way to go about it.
Are there any nice patterns for doing this sort of thing? I want to keep things as general as possible because I will probably need to add extra state and build more monadic code around the existing parts.
I have tried looking for examples of ST monad code but it does not seem to be covered in Real World Haskell, and the Haskell wiki examples are very short and simple.
thanks!
My instinct is to group all of the state into a single data type that I can access anywhere, as I would using the State monad by defining a new datatype, but I don't know if that is possible with the ST monad, or the right way to go about it.
Are there any nice patterns for doing this sort of thing? I want to keep things as general as possible because I will probably need to add extra state and build more monadic code around the existing parts.
The key point to realize here is that it's completely irrelevant that you're using ST. The ST references themselves are just regular values, which you need access to in a variety of places, but you don't actually want to change them! The mutability occurs in ST, but the STRef values and whatnot are basically read-only. They're names pointing to the mutable data.
Of course, read-only access to an ambient environment is what the Reader monad is for. The ugly passing of references to all the functions is exactly what it's doing for you, but because you're already in ST, you can just bolt it on as a monad transformer. As a simple example, you can do something like this:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Reader
import Control.Monad.ST
import Data.STRef

newtype STEnv s e a = STEnv (ReaderT e (ST s) a)
  deriving (Functor, Applicative, Monad)

runEnv :: STEnv s e a -> ST s e -> ST s a
runEnv (STEnv r) e = runReaderT r =<< e

readSTEnv :: (e -> STRef s a) -> STEnv s e a
readSTEnv f = STEnv $ lift . readSTRef . f =<< ask

writeSTEnv :: (e -> STRef s a) -> a -> STEnv s e ()
writeSTEnv f x = STEnv $ lift . flip writeSTRef x . f =<< ask
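A hypothetical usage sketch (Env, counter, total, step and runStep are made-up names), assuming an environment record whose field selectors serve as the e -> STRef s a accessors, plus the Data.STRef and Control.Monad.ST imports above:
data Env s = Env { counter :: STRef s Int, total :: STRef s Double }

step :: STEnv s (Env s) ()
step = do
  n <- readSTEnv counter
  writeSTEnv counter (n + 1)
  writeSTEnv total (fromIntegral n)

runStep :: (Int, Double)
runStep = runST $ do
  env <- Env <$> newSTRef 0 <*> newSTRef 0
  runEnv step (return env)
  (,) <$> readSTRef (counter env) <*> readSTRef (total env)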
For more generality, you could abstract over the details of the reference types, and make it into a general "environment with mutable references" monad.
You can use the ST monad just like the IO monad, bearing in mind that you only get arrays and refs and no other IO goodies. Just like IO, you can layer a StateT over it if you want to thread some state transparently through your computation.
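As a rough sketch of that layering (StdGen from System.Random stands in for PureMT here just to keep the example self-contained; uniformInt and bumpCell are made-up helpers):
import Control.Monad (replicateM_)
import Control.Monad.State
import Control.Monad.ST
import Data.STRef
import System.Random (StdGen, mkStdGen, randomR)

type Sampler s a = StateT StdGen (ST s) a

-- draw an Int in a range, threading the generator through the StateT layer
uniformInt :: (Int, Int) -> Sampler s Int
uniformInt range = do
  g <- get
  let (x, g') = randomR range g
  put g'
  return x

-- helpers mutate ST refs by lifting into the underlying ST monad
bumpCell :: STRef s Int -> Sampler s ()
bumpCell ref = do
  x <- uniformInt (1, 6)
  lift $ modifySTRef ref (+ x)

demo :: Int
demo = runST $ do
  cell <- newSTRef 0
  evalStateT (replicateM_ 10 (bumpCell cell)) (mkStdGen 42)
  readSTRef cell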

Keeping type generic without η-expansion

What I'm doing: I'm writing a small interpreter system that can parse a file, turn it into a sequence of operations, and then feed thousands of data sets into that sequence to extract some final value from each. A compiled interpreter consists of a list of pure functions that take two arguments: a data set, and an execution context. Each function returns the modified execution context:
type ('data, 'context) interpreter = ('data -> 'context -> 'context) list
The compiler is essentially a tokenizer with a final token-to-instruction mapping step that uses a map description defined as follows:
type ('data, 'context) map = (string * ('data -> 'context -> 'context)) list
Typical interpreter usage looks like this:
let pocket_calc =
  let map = [ "add", (fun d c -> c # add d) ;
              "sub", (fun d c -> c # sub d) ;
              "mul", (fun d c -> c # mul d) ]
  in
  Interpreter.parse map "path/to/file.txt"
let new_context = Interpreter.run pocket_calc data old_context
The problem: I'd like my pocket_calc interpreter to work with any class that supports add, sub and mul methods, and the corresponding data type (could be integers for one context class and floating-point numbers for another).
However, pocket_calc is defined as a value and not a function, so the type system does not make its type generic: the first time it's used, the 'data and 'context types are bound to the types of whatever data and context I first provide, and the interpreter becomes forever incompatible with any other data and context types.
A viable solution is to eta-expand the definition of the interpreter to allow its type parameters to be generic:
let pocket_calc data context =
  let map = [ "add", (fun d c -> c # add d) ;
              "sub", (fun d c -> c # sub d) ;
              "mul", (fun d c -> c # mul d) ]
  in
  let interpreter = Interpreter.parse map "path/to/file.txt" in
  Interpreter.run interpreter data context
However, this solution is unacceptable for several reasons:
It re-compiles the interpreter every time it's called, which significantly degrades performance. Even the mapping step (turning a token list into an interpreter using the map list) causes a noticeable slowdown.
My design relies on all interpreters being loaded at initialization time, because the compiler issues warnings whenever a token in the loaded file does not match a line in the map list, and I want to see all those warnings when the software launches (not when individual interpreters are eventually run).
I sometimes want to reuse a given map list in several interpreters, whether on its own or by prepending additional instructions (for instance, "div").
The questions: is there any way to make the type parametric other than eta-expansion? Maybe some clever trick involving module signatures or inheritance? If that's impossible, is there any way to alleviate the three issues I have mentioned above in order to make eta-expansion an acceptable solution? Thank you!
A viable solution is to eta-expand the definition of the interpreter to allow its type parameters to be generic:
let pocket_calc data context =
  let map = [ "add", (fun d c -> c # add d) ;
              "sub", (fun d c -> c # sub d) ;
              "mul", (fun d c -> c # mul d) ]
  in
  let interpreter = Interpreter.parse map "path/to/file.txt" in
  Interpreter.run interpreter data context
However, this solution is unacceptable for several reasons:
It re-compiles the interpreter every time it's called, which significantly degrades performance. Even the mapping step (turning a token list into an interpreter using the map list) causes a noticeable slowdown.
It recompiles the interpreter every time because you are doing it wrong. The proper form is something more like this (and technically, if the partial application of Interpreter.run to interpreter can do some computation, you should move that out of the fun too).
let pocket_calc =
  let map = [ "add", (fun d c -> c # add d) ;
              "sub", (fun d c -> c # sub d) ;
              "mul", (fun d c -> c # mul d) ]
  in
  let interpreter = Interpreter.parse map "path/to/file.txt" in
  fun data context -> Interpreter.run interpreter data context
I think your problem lies in a lack of polymorphism in your operations, which you would like to have a closed parametric type (works for all data supporting the following arithmetic primitives) instead of having a type parameter representing a fixed data type.
However, it's a bit difficult to ensure it's exactly this, because your code is not self-contained enough to test it.
Assuming the given type for primitives:
type 'a primitives = <
  add : 'a -> 'a;
  mul : 'a -> 'a;
  sub : 'a -> 'a;
>
You can use the first-order polymorphism provided by structures and objects:
type op = { op : 'a . 'a -> 'a primitives -> 'a }
let map = [ "add", { op = fun d c -> c # add d } ;
            "sub", { op = fun d c -> c # sub d } ;
            "mul", { op = fun d c -> c # mul d } ];;
You get back the following data-agnostic type:
val map : (string * op) list
Edit: regarding your comment about different operation types, I'm not sure which level of flexibility you want. I don't think you could mix operations over different primitives in the same list and still benefit from the specificities of each: at best, you could only transform an "operation over add/sub/mul" into an "operation over add/sub/mul/div" (as we're contravariant in the primitives type), but certainly not much more than that.
On a more pragmatic level, it's true that, with that design, you need a different "operation" type for each primitives type. You could easily, however, build a functor parametrized by the primitives type and returning the operation type.
I don't know how one would expose a direct subtyping relation between different primitive types. The problem is that this would need a subtyping relation at the functor level, which I don't think we have in Caml. You could, however, using a simpler form of explicit subtyping (instead of casting a :> b, use a function a -> b), build a second functor, contravariant this time, that, given a map from one primitive type to the other, would build a map from one operation type to the other.
It's entirely possible that, with a different and clever representation of the types involved, a much simpler solution is possible. First-class modules in 3.12 might also come into play, but they tend to be helpful for first-class existential types, whereas here we rather need universal types.
Interpretive overhead and operation reifications
Besides your local typing problem, I'm not sure you're heading the right way. You're trying to eliminate interpretive overhead by building, "ahead of time" (before using the operations), a closure corresponding to an in-language representation of your operation.
In my experience, this approach doesn't generally get rid of interpretive overhead; it rather moves it to another layer. If you create your closures naïvely, you will have the parsing flow of control reproduced at the closure layer: the closure will call other closures, etc., just as your parsing code "interpreted" the input when creating the closure. You eliminated the cost of parsing, but the possibly suboptimal flow of control is still the same. Additionally, closures tend to be a pain to manipulate directly: you have to be very careful about things like comparison operations, serialization, etc.
I think you may be interested in the long term in an intermediate "reified" language representing your operations: a simple algebraic data type for arithmetic operations that you would build from your textual representation. You can still try to build closures "ahead of time" from it, though I'm not sure the performance is much better than directly interpreting it, if the in-memory representation is decent. Moreover, it will be much easier to plug in intermediary analyzers/transformers to optimize your operations, for example going from an "associative binary operations" model to an "n-ary operations" model, which could be evaluated more efficiently.

Is it possible to test the return value of Haskell I/O functions?

Haskell is a pure functional language, which means Haskell functions have no side effects. I/O is implemented using monads that represent chunks of I/O computation.
Is it possible to test the return value of Haskell I/O functions?
Let's say we have a simple 'hello world' program:
main :: IO ()
main = putStr "Hello world!"
Is it possible for me to create a test harness that can run main and check that the I/O monad it returns is the correct 'value'? Or does the fact that monads are supposed to be opaque blocks of computation prevent me from doing this?
Note, I'm not trying to compare the return values of I/O actions. I want to compare the return value of I/O functions - the I/O monad itself.
Since in Haskell I/O is returned rather than executed, I was hoping to examine the chunk of I/O computation returned by an I/O function and see whether or not it was correct. I thought this could allow I/O functions to be unit tested in a way they cannot in imperative languages where I/O is a side-effect.
The way I would do this would be to create my own IO monad which contained the actions that I wanted to model. Then I would run the monadic computations I want to compare within my monad and compare the effects they had.
Let's take an example. Suppose I want to model printing stuff. Then I can model my IO monad like this:
{-# LANGUAGE GADTs #-}
import Prelude hiding (IO, putChar)
import Control.Monad ((>=>))

data IO a where
  Return  :: a -> IO a
  Bind    :: IO a -> (a -> IO b) -> IO b
  PutChar :: Char -> IO ()

-- Functor/Applicative instances are required by Monad on current GHC
instance Functor IO where
  fmap f m = Bind m (Return . f)
instance Applicative IO where
  pure = Return
  mf <*> mx = Bind mf (\f -> Bind mx (Return . f))

instance Monad IO where
  return = Return
  Return a  >>= f = f a
  Bind m k  >>= f = Bind m (k >=> f)
  PutChar c >>= f = Bind (PutChar c) f

putChar :: Char -> IO ()
putChar c = PutChar c

runIO :: IO a -> (a, String)
runIO (Return a)  = (a, "")
runIO (Bind m f)  = (b, s1 ++ s2)
  where (a, s1) = runIO m
        (b, s2) = runIO (f a)
runIO (PutChar c) = ((), [c])
Here's how I would compare the effects:
compareIO :: IO a -> IO b -> Bool
compareIO ioA ioB = outA == outB
  where (_, outA) = runIO ioA
        (_, outB) = runIO ioB
There are things that this kind of model doesn't handle. Input, for instance, is tricky. But I hope that it will fit your use case. I should also mention that there are more clever and efficient ways of modelling effects in this way. I've chosen this particular way because I think it's the easiest one to understand.
For more information I can recommend the paper "Beauty in the Beast: A Functional Semantics for the Awkward Squad" which can be found on this page along with some other relevant papers.
Within the IO monad you can test the return values of IO functions. To test return values outside of the IO monad is unsafe: this means it can be done, but only at risk of breaking your program. For experts only.
It is worth noting that in the example you show, the value of main has type IO (), which means "I am an IO action which, when performed, does some I/O and then returns a value of type ()." Type () is pronounced "unit", and there are only two values of this type: the empty tuple (also written () and pronounced "unit") and "bottom", which is Haskell's name for a computation that does not terminate or otherwise goes wrong.
It is worth pointing out that testing return values of IO functions from within the IO monad is perfectly easy and normal, and that the idiomatic way to do it is by using do notation.
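A small sketch of that (doesFileExist is from System.Directory; "config.txt" is just a made-up name): the result of an IO action is bound with <- and can then be inspected like any other value.
import System.Directory (doesFileExist)

checkConfig :: IO Bool
checkConfig = do
  exists <- doesFileExist "config.txt"   -- bind the IO result
  putStrLn (if exists then "config found" else "config missing")
  return exists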
You can test some monadic code with QuickCheck 2. It's been a long time since I read the paper, so I don't remember if it applies to IO actions or to what kinds of monadic computations it can be applied. Also, it may be that you find it hard to express your unit tests as QuickCheck properties. Still, as a very satisfied user of QuickCheck, I'll say it's a lot better than doing nothing or than hacking around with unsafePerformIO.
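For what it's worth, QuickCheck 2 ships a Test.QuickCheck.Monadic module that can express properties about IO actions; here is a small, made-up example of the style:
import Data.IORef
import Test.QuickCheck
import Test.QuickCheck.Monadic (monadicIO, run, assert)

-- property: incrementing an IORef and reading it back gives the expected value
prop_iorefRoundTrip :: Int -> Property
prop_iorefRoundTrip x = monadicIO $ do
  y <- run $ do
    r <- newIORef x
    modifyIORef r (+ 1)
    readIORef r
  assert (y == x + 1)

-- main = quickCheck prop_iorefRoundTrip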
I'm sorry to tell you that you can not do this.
unsafePerformIO basically lets you accomplish this. But I would strongly prefer that you do not use it.
Foreign.unsafePerformIO :: IO a -> a
:/
I like this answer to a similar question on SO and the comments to it. Basically, IO will normally produce some change which may be noticed from the outside world; your testing will have to do with whether that change seems correct. (E.g. the correct directory structure was produced etc.)
Basically, this means 'behavioural testing', which in complex cases may be quite a pain. This is part of the reason why you should keep the IO-specific part of your code to a minimum and move as much of the logic as possible to pure (therefore super easily testable) functions.
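A minimal illustration of that split, using the hello-world example from the question: the logic lives in a pure function that tests can call directly, and main is only a thin IO shell around it.
greeting :: String -> String
greeting name = "Hello " ++ name ++ "!"

main :: IO ()
main = putStr (greeting "world")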
Then again, you could use an assert function:
actual_assert :: String -> Bool -> IO ()
actual_assert _ True = return ()
actual_assert msg False = error $ "failed assertion: " ++ msg
faux_assert :: String -> Bool -> IO ()
faux_assert _ _ = return ()
assert = if debug_on then actual_assert else faux_assert
(You might want to define debug_on in a separate module constructed just before the build by a build script. Also, this is very likely to be provided in a more polished form by a package on Hackage, if not a standard library... If someone knows of such a tool, please edit this post / comment so I can edit.)
I think GHC will be smart enough to skip any faux assertions it finds entirely, whereas actual assertions will definitely crash your programme upon failure.
This is, IMO, very unlikely to suffice -- you'll still need to do behavioural testing in complex scenarios -- but I guess it could help check that the basic assumptions the code is making are correct.