Given the below definition of runDB, will the second insert roll back upon failure of the first?

I'm using Yesod as the framework and PostgreSQL as the database, and have the following definition of runDB. Looking at the docs on the Yesod site, I have a hunch that using runDB in the following manner will cause a rollback of the first insert upon the failure of the second. Am I right? If not, how do I invoke a rollback?
instance YesodPersist App where
    <snip>
    runDB action = do
        master <- getYesod
        runSqlPool action $ appConnPool master

addKitteh :: Kitteh -> Handler (Either StoreError StoreResult)
addKitteh (Kitteh desc color size photo) = do
    data_key <- runDB $ do
        data_key <- insert (KittehDesc desc color size)
        insert (KittehPic data_key photo)
    ...
edit: Also, what happens if the first insert fails?
edit: I thought the model might be significant:
KittehDesc json
    blurb Text
    color Color
    size KittehSize
    deriving Show

KittehPic
    kittehId KittehDescId Eq
    kittehPic Base64
    UniqueKittehId kittehId

Yes, everything in runDB is wrapped in a transaction. If the first insert fails, an exception will be thrown, and the code won't reach the second insert.
I think this is documented somewhere, but I just traced the code to come to this conclusion: runDB is implemented with defaultRunDB, which calls into runPool, which calls runSqlPool, which calls into runSqlConn, which you can see is handling rollbacks when an exception occurs:
runSqlConn :: MonadBaseControl IO m => SqlPersistT m a -> SqlBackend -> m a
runSqlConn r conn = control $ \runInIO -> mask $ \restore -> do
    let getter = getStmtConn conn
    restore $ connBegin conn getter
    x <- onException
        (restore $ runInIO $ runReaderT r conn)
        (restore $ connRollback conn getter)
    restore $ connCommit conn getter
    return x
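The runSqlConn above also answers the "how do I invoke a rollback" part: any exception thrown inside the runDB block triggers connRollback, and persistent additionally exposes Database.Persist.Sql.transactionUndo for an explicit rollback. A minimal sketch (my addition, not from the original answer; checkSomeInvariant is a hypothetical helper):

import Control.Monad (when)
import Database.Persist.Sql (transactionUndo)

-- Both inserts run inside the single transaction opened by runDB,
-- so rolling back undoes them together.
addKittehGuarded :: Kitteh -> Handler ()
addKittehGuarded (Kitteh desc color size photo) =
    runDB $ do
        descKey <- insert (KittehDesc desc color size)
        _ <- insert (KittehPic descKey photo)
        bad <- checkSomeInvariant descKey  -- hypothetical validity check
        when bad transactionUndo           -- explicit rollback of both inserts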

Is there a way to execute a particular flow after every other flow without needing to plug it in explicitly?

I have multiple flows (to process messages received from a queue) to execute, and after every flow I need to check whether there was an error in the previous flow; if so, I filter out the message being processed, otherwise I continue to the next flow.
Currently I have to plug this error-handling flow in explicitly after every other flow. Is there any way this can be done with some functionality where the error flow is configured to run after every other flow? Or is there any better way to do this?
Example:
flow 1 -> validate the message; if there is an error, mark the message as an error
error flow -> check if the message is marked as an error; if yes, filter it out, otherwise continue
flow 2 -> persist the message to the DB; mark it in case of error
error flow -> check if the message is marked as an error; if yes, filter it out, otherwise continue
flow 3 -> and so on.
Or is there a way to wrap (flow 1 + error flow), (flow 2 + error flow), and so on?
I am not sure it is exactly what you asked for, but I have sort of a solution. What can be done is to create all the flows first; for instance:
val flows = Seq(
  Flow.fromFunction[Int, Int](x => { println(s"flow1: Received $x"); x * 2 }),
  Flow.fromFunction[Int, Int](x => { println(s"flow2: Received $x"); x + 1 }),
  Flow.fromFunction[Int, Int](x => { println(s"flow3: Received $x"); x * x })
)
Then we need to append the error handling to each of the existing flows. So let's define it, and attach it to each of the elements:
val errorHandling = Flow[Int].filter(_ % 2 == 0)
val errorsHandledFlows = flows.map(flow => flow.via(errorHandling))
Now we need a helper function that will connect all of our new flows:
def connectFlows(errorsHandledFlows: Seq[Flow[Int, Int, _]]): Flow[Int, Int, _] = {
  errorsHandledFlows match {
    case Seq() => Flow[Int] // Redundant in practice, but avoids a non-exhaustive match
    case Seq(singleFlow) => singleFlow
    case head +: tail => head.via(connectFlows(tail)) // +: matches any Seq; :: would only match a List
  }
}
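The same chaining can also be written without recursion, as a fold over the sequence; a small sketch of my own (not from the original answer), using the identity flow Flow[Int] as the starting point:

// Fold the flows together left to right; for an empty Seq this
// degenerates to the identity flow, matching the recursive version.
def connectFlowsFold(flows: Seq[Flow[Int, Int, _]]): Flow[Int, Int, _] =
  flows.foldLeft(Flow[Int]: Flow[Int, Int, _])((acc, flow) => acc.via(flow))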
And now we need to run everything together, for example:
Source(1 to 4).via(connectFlows(errorsHandledFlows)).to(Sink.foreach(println)).run()
Running this will produce the output:
flow1: Received 1
flow2: Received 2
flow1: Received 2
flow2: Received 4
flow1: Received 3
flow2: Received 6
flow1: Received 4
flow2: Received 8
As you can tell, we filter out the odd numbers. Therefore the first flow gets all numbers from 1 to 4. The second flow receives 2, 4, 6, 8 (the first flow multiplied the values by 2), and the last one does not receive any elements, because the second flow (adding 1) made all of the values odd, and the filter then dropped them.
You can also use Merge:
val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._
  val merge = builder.add(Merge[Int](3))
  val flow1 = ...
  val flow2 = ...
  val flow3 = ...
  flow1 ~> merge
  flow2 ~> merge
  flow3 ~> merge
  ClosedShape
})
Not sure if it meets your needs; just showing the alternative.

How can I test a response whose body is streamed from within an ecto transaction?

I am attempting to write unit tests for an endpoint that streams its response in chunks. I can verify that the contents are fully and correctly streamed when accessed through my browser, but when I access the endpoint through my test suite, the response body is empty.
Example controller action
def stream_csv(conn, _params) do
  conn =
    conn
    |> put_resp_content_type("text/csv")
    |> put_resp_header("content-disposition", "data.csv")
    |> send_chunked(200)

  Repo.transaction(fn ->
    {:ok, conn} = chunk(conn, "csv data part")
  end)

  conn
end
Example unit test
test "stream endpoint" do
body = build_conn()
|> Phoenix.ConnTest.get("/stream_endpoint")
|> Phoenix.ConnTest.response(200)
assert body =~ "csv data part"
end
This will lead to an assertion failure, where body is an empty binary "".
I feel like there should be a way to wait on all the chunks before making assertions, or that I'm probably overlooking something obvious.
EDIT
The example I initially wrote works as expected. What seems to complicate things is when chunk is called from within a callback provided to Repo.transaction. I've updated the question and examples to better reflect the problem.
I had this problem, tried several ways to get it to work, and ended up with a solution using mocks.
What is going on is that when we call Plug.Conn.chunk(conn, chunk), Plug calls conn.adapter's chunk. From there the chunk is handed to the server (e.g. Cowboy) for further handling; the conn is not aware of the chunk anymore.
To solve this, I moved the chunking into a separate function with minimal side effects that is easy to mock:
defmodule MyApp.ControllerUtils do
  use MyAppWeb, :controller

  @callback chunk_to_conn(map(), String.t()) :: map()
  def chunk_to_conn(conn, current_chunk) do
    conn |> chunk(current_chunk)
  end
end
And in the response handler:
def stream_csv(conn, _params) do
  conn =
    conn
    |> put_resp_content_type("text/csv")
    |> put_resp_header("content-disposition", "data.csv")
    |> send_chunked(200)

  Repo.transaction(fn ->
    {:ok, conn} = MyApp.ControllerUtils.chunk_to_conn(conn, "csv data part")
  end)

  conn
end
Now, in your test, you mock the chunking function to hand you each chunk, and use something like an Agent to store and join the chunks, or just assert on them as they come:
import Mox

defp chunked_response_to_state(chunk, pid) do
  current_chunk = Agent.get(pid, &Map.get(&1, :chunk_key))
  Agent.update(pid, &Map.put(&1, :chunk_key, current_chunk <> chunk))
end

setup do
  # Start an Agent to accumulate the chunks the mock receives.
  {:ok, agent_pid} = Agent.start_link(fn -> %{chunk_key: ""} end)

  MyApp.ControllerUtilsMock
  |> stub(:chunk_to_conn, fn conn, chunk ->
    chunked_response_to_state(chunk, agent_pid)
    # Return the same shape as Plug.Conn.chunk/2 so the handler's
    # {:ok, conn} match keeps working.
    {:ok, conn}
  end)

  {:ok, %{agent_pid: agent_pid}}
end

test "stream endpoint", state do
  build_conn() |> get("/stream_endpoint")

  whole_chunks = Agent.get(state.agent_pid, &Map.get(&1, :chunk_key))
  assert whole_chunks =~ "csv data part"
end
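If you use expect instead of stub, Mox can additionally verify how many times the handler chunked; a small sketch along the same lines (my addition, not from the original answer):

setup :verify_on_exit!

test "chunks exactly once" do
  MyApp.ControllerUtilsMock
  |> expect(:chunk_to_conn, fn conn, _chunk -> {:ok, conn} end)

  build_conn() |> get("/stream_endpoint")
end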

MirageOS - Http-fetch example

I'm trying to modify the MirageOS http-fetch example (https://github.com/mirage/mirage-skeleton) a bit, but I'm having some problems understanding why I can't move some of the functions executed in the config.ml file into my unikernel.ml file. The original config.ml follows (I'll copy just the interesting part):
[...]
let client =
  foreign "Unikernel.Client" (console @-> resolver @-> conduit @-> job)

let () =
  add_to_ocamlfind_libraries ["mirage-http"];
  add_to_opam_packages ["mirage-http"];
  let sv4 = stack default_console in
  let res_dns = resolver_dns sv4 in
  let conduit = conduit_direct sv4 in
  let job = [ client $ default_console $ res_dns $ conduit ] in
  register "http-fetch" job
What I'm trying to do is move these two lines:
let res_dns = resolver_dns sv4 in
let conduit = conduit_direct sv4 in
into my unikernel.ml start function. Basically I want to pass just the stack to my module and let it create a DNS resolver and a conduit itself. My start function follows:
let start c s =
  C.log_s c (sprintf "Resolving in 1s using DNS server %s" ns) >>= fun () ->
  OS.Time.sleep 1.0 >>= fun () ->
  let res_dns = resolver_dns s in
  let conduit = conduit_direct s in
  http_fetch c res_dns conduit >>= fun data ->
  Lwt.return (dump_to_db data)
Right now I'm getting this error where the parameters are passed to http_fetch:
Error: This expression has type Mirage.resolver Mirage.impl
       but an expression was expected of type Resolver_lwt.t
What I'm asking here is mostly a conceptual question, because I'm clearly missing something. I'm not an expert in OCaml/MirageOS, but this type mismatch is hard to understand considering that I'm just calling the same function from a different file.
config.ml is not ordinary application code: it runs at configuration time, and functions like resolver_dns and conduit_direct there return Mirage.impl values, i.e. descriptions of devices that the mirage tool uses to generate main.ml. At run time your start function receives the already-constructed values (e.g. a Resolver_lwt.t), which is why calling the same function from unikernel.ml produces the type mismatch. config.ml is used to generate main.ml, and you can copy the generated code from there if you want.
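To make the two phases concrete, a hedged sketch (my addition; the type annotations are inferred from the error message above, not taken from the thread):

(* config.ml: configuration time. These are descriptions of devices,
   not the devices themselves. *)
let res_dns = resolver_dns sv4   (* : Mirage.resolver Mirage.impl *)
let conduit = conduit_direct sv4 (* : Mirage.conduit Mirage.impl *)

(* unikernel.ml: run time. The generated main.ml has already built the
   devices, so start receives the runtime values directly. *)
let start c res_dns conduit =
  (* res_dns : Resolver_lwt.t here, as the error message expects *)
  http_fetch c res_dns conduit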

PGOCaml: customizing SQL queries

I am trying to write a query that simply drops a table.
let drop_table dbh table_name =
  let query = String.concat " " ["drop table"; table_name] in
  PGSQL(dbh) query
I am receiving the following error from this query:
File "save.ml", line 37, characters 10-11:
Parse error: STRING _ expected after ")" (in [expr])
File "save.ml", line 1:
Error: Preprocessor error
Why am I getting this error? It appears that this function is valid OCaml syntax.
Thanks guys!
You cannot construct the query dynamically when using PG'OCaml's syntax extension; you must provide a literal string. This is the tradeoff for getting PG'OCaml's compile-time query validation: if the query could be any OCaml expression, PG'OCaml wouldn't know how to validate it at compile time.
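For contrast, this is the shape the syntax extension does accept: a string literal it can validate at compile time against your database (a hypothetical example; it assumes a kitteh table exists when you compile):

(* Accepted: the query is a literal, so PG'OCaml can check it. *)
let delete_all_kittehs dbh = PGSQL(dbh) "delete from kitteh"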
Personally, I've stopped using the syntax extension completely. My feeling is it doesn't scale to large projects. Instead I call prepare and execute directly. For example, this function will create a new database connection (assuming the connection parameters are previously defined), run the given query, and close the connection:
let exec query =
  let db = PGOCaml.connect ~host ~user ~database ~port ~password () in
  PGOCaml.prepare db ~query ();
  let ans = PGOCaml.execute db ~params:[] () in
  PGOCaml.close db;
  ans
Of course, this isn't a robust implementation and shouldn't be used in production code. It doesn't handle errors and isn't asynchronous.
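With a helper like that, the original drop_table no longer needs the syntax extension, because the query is an ordinary string built at run time. A hypothetical usage of the exec sketch above (note that splicing identifiers into SQL this way gives up the validation the extension provided):

(* Build the statement at run time and run it through exec. *)
let drop_table table_name =
  ignore (exec ("drop table " ^ table_name))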

Can't launch ocsigen server due to failure: ("That function cannot be called here because it needs information about the request or the site.")

I want to create a service that generates its HTML according to a given parameter and a map. Given the parameter, the service searches the map for the HTML and for a function to launch on the client side.
type sample =
  (string                                      (* little text *)
   * Html5_types.html Eliom_content.Html5.elt  (* html page *)
   * (unit -> unit))                           (* demonstration function *)
Given that the function is to be launched on the client side, I insert it into the map as a client value:
{client{
  let demo_function = ignore (Ojquery.add_html
    (Ojquery.jQ "li") "<p id='test1'>new paragraph</p>")
}}

let get_samples () =
  let samples_map = Samples.empty in
  let samples_map =
    Samples.add "add_html"
      ("text",
       (Eliom_tools.F.html
          (** html stuff **)
       ),
       {unit->unit{demo_function}})
      samples_map
  in
  samples_map
And then I register the service like this:
let sample_service =
  Eliom_service.service
    ~path:["examples"]
    ~get_params:Eliom_parameter.(string "entry")
    ()

let () =
  Examples_app.register
    ~service:sample_service
    (fun entry () ->
       try
         let entry = Samples.find entry samples_map in
         let html = (function (name, html, func) -> html) entry in
         let func = (function (name, html, func) -> func) entry in
         ignore {unit{ %func () }};
         Lwt.return html
       with Not_found -> Lwt.return not_found)
The rest of the code is pretty much just the result of a classic eliom-distillery run, with the addition of the ojquery package for the client function used.
The compilation phase goes smoothly, but when I try to launch the server, I get the following error message:
ocsigenserver: main: Fatal - Error in configuration file: Error while parsing configuration file: Eliom: while loading local/lib/examples/examples.cma: Failure("That function cannot be called here because it needs information about the request or the site.")
My first guess was that this is due to the fact that I store client values outside of a service, but is there any way to store this kind of value on the server?
I tried to wrap them in regular functions:
let demo_serv_func () = {unit{demo_client_func ()}}
But the problem remained...
I found the issue. The problem was not that I stored client functions, but that I used Eliom_tools.F.html outside of a service.
It happens that Eliom_tools needs the context of the service to work, and since I was calling it outside of the service, it could not.
I solved the issue by using Eliom_tools inside the service and storing only the body of the HTML page in the map.
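A minimal sketch of that shape (my reconstruction, not the poster's actual code; the body contents stay elided as in the question):

let samples_map =
  Samples.add "add_html"
    ("text",
     (* store only the body element; plain Html5.F elements are fine
        outside a service *)
     Eliom_content.Html5.F.(body [ (** html stuff **) ]),
     {unit->unit{demo_function}})
    Samples.empty

let () =
  Examples_app.register
    ~service:sample_service
    (fun entry () ->
       try
         let (_, body, func) = Samples.find entry samples_map in
         ignore {unit{ %func () }};
         (* build the full page here, inside the service, where
            Eliom_tools has the request context it needs *)
         Lwt.return (Eliom_tools.F.html ~title:"examples" body)
       with Not_found -> Lwt.return not_found)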