How can I test a response whose body is streamed from within an Ecto transaction? - unit-testing

I am attempting to write unit tests for an endpoint that streams its response in chunks. I can verify that the contents are fully and correctly streamed when accessed through my browser. But when I access the endpoint through my test suite, the response body is empty.
Example controller action
def stream_csv(conn, _params) do
  conn =
    conn
    |> put_resp_content_type("text/csv")
    |> put_resp_header("content-disposition", "data.csv")
    |> send_chunked(200)

  Repo.transaction(fn ->
    {:ok, conn} = chunk(conn, "csv data part")
  end)

  conn
end
Example unit test
test "stream endpoint" do
  body =
    build_conn()
    |> Phoenix.ConnTest.get("/stream_endpoint")
    |> Phoenix.ConnTest.response(200)

  assert body =~ "csv data part"
end
This will lead to an assertion failure, where body is an empty binary "".
I feel like there should be a way to wait on all the chunks before making assertions, or that I'm probably overlooking something obvious.
EDIT
The example I initially wrote works as expected. What seems to complicate things is when chunk is called from within a callback provided to Repo.transaction. I've updated the question and examples to better reflect the problem.

I had this problem, tried several ways to get it to work, and ended up with the following solution using mocks.
So what is going on is that when we call Plug.Conn.chunk(conn, chunk), Plug calls conn.adapter.chunk. From there, the chunk is handed off to the server (e.g. Cowboy) for further handling; the conn is not aware of the chunk anymore.
To solve this, I moved the chunking into a separate function with minimal side effects that is easy to mock:
defmodule MyApp.ControllerUtils do
  use MyAppWeb, :controller

  @callback chunk_to_conn(map(), String.t()) :: map()
  def chunk_to_conn(conn, current_chunk) do
    conn |> chunk(current_chunk)
  end
end
And in the response handler
def stream_csv(conn, _params) do
  conn =
    conn
    |> put_resp_content_type("text/csv")
    |> put_resp_header("content-disposition", "data.csv")
    |> send_chunked(200)

  Repo.transaction(fn ->
    {:ok, conn} = MyApp.ControllerUtils.chunk_to_conn(conn, "csv data part")
  end)

  conn
end
Now in your test you mock the chunking function so it hands you each chunk, and use something like an Agent to store and join the chunks, or just assert on them as they arrive.
import Mox

defp chunked_response_to_state(chunk, pid) do
  current_chunk = Agent.get(pid, &Map.get(&1, :csv, ""))
  Agent.update(pid, &Map.put(&1, :csv, current_chunk <> chunk))
end

setup do
  {:ok, agent_pid} = Agent.start_link(fn -> %{csv: ""} end)

  MyApp.ControllerUtilsMock
  |> stub(:chunk_to_conn, fn conn, chunk ->
    chunked_response_to_state(chunk, agent_pid)
    {:ok, conn}
  end)

  {:ok, %{agent_pid: agent_pid}}
end

test "my test", state do
  build_conn() |> Phoenix.ConnTest.get("/stream_endpoint")

  whole_body = Agent.get(state.agent_pid, &Map.get(&1, :csv))
  assert whole_body =~ "csv data part"
end

Related

OCaml - Parmap executing Lwt threads hangs on the execution

This is a follow up to this question:
How to synchronously execute an Lwt thread
I am trying to run the following piece of code:
open Lwt
open Cohttp_lwt_unix

let server_content2 x =
  "in server content x" |> print_endline;
  Client.get (Uri.of_string ("http://localhost:8080/" ^ x)) >>= fun (_, body) ->
  Cohttp_lwt.Body.to_string body >|= fun sc -> sc

let reyolo () =
  List.init 10 (fun i -> server_content2 (string_of_int i))

let par () =
  let yolo = reyolo () in
  "in par" |> print_endline;
  Parmap.pariter
    ~ncores:4
    (fun p ->
      "before run" |> print_endline;
      "content:" ^ Lwt_main.run p |> print_endline;
      "after run" |> print_endline)
    (Parmap.L yolo)

let () = par ()
I expected this to perform 10 remote connections.
What I get is that, inside par, Lwt_main.run seems to get stuck before making an actual remote call.
I doubt it is of any significance, but the server that is supposed to respond is written in Python and looks like this:
import subprocess
from bottle import run, post, request, response, get, route

@route('/<path>', method='GET')
def process(path):
    print(path)
    return "yolo"

run(host='localhost', port=8080, debug=True)
The issue is that the calls to server_content2, which start the requests, occur in the parent process. The code then tries to finish them in the child processes spawned by Parmap. Lwt breaks here: it cannot, in general, keep track of I/Os across a fork.
If you store either thunks or arguments in the list yolo, and delay the calls to server_content2 so that they are done in the child processes, the requests should work. To do that, make sure the calls happen in the callback of Parmap.pariter.
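A minimal sketch of that fix, keeping only the arguments in the list and calling server_content2 inside the Parmap.pariter callback, so each request starts and finishes in the same child process:

```ocaml
(* The list now holds plain string arguments, not live Lwt threads. *)
let par () =
  Parmap.pariter
    ~ncores:4
    (fun i ->
      (* server_content2 is called here, in the child process, so Lwt
         never has to track an I/O operation across the fork *)
      print_endline ("content:" ^ Lwt_main.run (server_content2 i)))
    (Parmap.L (List.init 10 string_of_int))
```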

Why Finch using EndPoint to represent Router, Request Parameter and Request Body

In Finch, we can define a router, request parameters, and a request body like this:
case class Test(name: String, age: Int)
val router: Endpoint[Test] = post("hello") { Ok(Test("name", 30)) }
val requestBody: Endpoint[Test] = body.as[Test]
val requestParameters: Endpoint[Test] = Endpoint.derive[Test].fromParams
The benefit is that we can compose EndPoint together. For example, I can define:
The request path is hello and Parameter should have name and age. (router :: requestParameters)
However, I can still run an invalid endpoint that doesn't include any request path successfully (there is actually no compilation error):
Await.ready(Http.serve(":3000", requestParameters.toService))
The result is a 404 Not Found page, even though I would expect the error to be reported earlier, e.g. as a compilation error. Is this a design drawback, or something Finch is actually trying to fix?
Many thanks in advance
First of all, thanks a lot for asking this!
Let me give you some insight on how Finch's endpoints work. If you speak category theory, an Endpoint is an Applicative embedding StateT represented as something close to Input => Option[(Input, A)].
Simply speaking, an endpoint takes an Input that wraps an HTTP request and also captures the current path (e.g. /foo/bar/baz). When an endpoint is applied to a given request, it either matches it (returning Some) or falls over (returning None). When it matches, it changes the state of the Input, usually removing the first path segment from it (e.g. removing foo from /foo/bar/baz) so the next endpoint in the chain can work with a new Input (and a new path).
Once an endpoint is matched, Finch checks whether anything is left in the Input that wasn't matched. If something is left, the match is considered unsuccessful and your service returns 404.
scala> val e = "foo" :: "bar"
e: io.finch.Endpoint[shapeless.HNil] = foo/bar
scala> e(Input(Request("/foo/bar/baz"))).get._1.path
res1: Seq[String] = List(baz)
When it comes to endpoints matching/extracting query-string params, no path segments are touched and the state is passed to the next endpoint unchanged. So when an endpoint param("foo") is applied, the path is not affected. That simply means the only way to serve a query-string endpoint (note: an endpoint that only extracts query-string params) is to send it a request with the empty path /.
scala> val s = param("foo").toService
s: com.twitter.finagle.Service[com.twitter.finagle.http.Request,com.twitter.finagle.http.Response] = <function1>
scala> s(Request("/", "foo" -> "bar")).get
res4: com.twitter.finagle.http.Response = Response("HTTP/1.1 Status(200)")
scala> s(Request("/bar", "foo" -> "bar")).get
res5: com.twitter.finagle.http.Response = Response("HTTP/1.1 Status(404)")

Elixir: Testing GenEvent for error reports

I have a GenEvent that has been added as a handler like so: :error_logger.add_report_handler(MyApp.ErrorLogger), so that errors/exceptions are captured and forwarded to an exception monitoring service.
I have the following code in the event handler module:
defmodule MyApp.ErrorLogger do
  use GenEvent

  @bugsnag_client Application.get_env(:my_app, :bugsnag_client)

  def init(_opts), do: {:ok, self()}

  def handle_event({:error_report, _gl, {_pid, _type, [message | _]}}, state) do
    {error, stacktrace} = extract_exception(message[:error_info])
    context = extract_context(stacktrace)
    @bugsnag_client.notify(error, stacktrace, context: context, release_stage: Mix.env() |> to_string())
    {:ok, state}
  end

  def handle_event({_level, _gl, _event}, state) do
    {:ok, state}
  end

  defp extract_exception(error_info) do
    {_, exception, stacktrace} = error_info
    {exception, stacktrace}
  end

  defp extract_context(stacktrace) do
    stacktrace |> List.first() |> elem(0)
  end
end
The Client that makes the http request is mocked out using the application config.
defmodule Bugsnag.Mock do
  @behaviour Bugsnag

  def notify(_error, _stacktrace, _options \\ []), do: nil
end
It works as it should in production, but I wanted to have some test coverage.
I was thinking of testing this by crashing a GenServer or causing some other exception, then seeing if notify gets called. This doesn't feel very functional/Elixir-like, but I want to test that errors are actually captured.
I say go ahead and crash things. :erlang.exit/2 will do the trick.
OTP and supervision trees are not easy. Testing how the application behaves under error conditions is necessary if you really want to achieve Elixir's promised fault-tolerance.
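A hedged sketch of such a test (the assertion mechanism is an assumption, not part of the original answer): raising inside a spawned process makes :error_logger emit an error report, which your handler then receives.

```elixir
test "a crashing process is reported" do
  # Raising in a spawned process produces an :error_logger error report.
  spawn(fn -> raise "boom" end)

  # Reports are dispatched asynchronously; give the handler a moment.
  Process.sleep(100)

  # Assert on whatever your Bugsnag mock recorded, e.g. a message it
  # sent back to the test process, or state stashed in an Agent.
end
```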

How can I unit test Nancy modules with F#?

I'm trying to test Nancy modules with F# as described here, the thing is I can't see how to pass the second parameter in F#.
Here's what I have so far:
let should_return_status_ok_for_get() =
    let bootstrapper = new DefaultNancyBootstrapper()
    let browser = new Browser(bootstrapper, fun req -> req.Accept(new Responses.Negotiation.MediaRange("application/json")))
    let result = browser.Get("/Menu", fun req -> req.HttpRequest())
    Assert.AreEqual(HttpStatusCode.OK, result.StatusCode)
    result
In the example, I should be able to instantiate a Browser object to test a specific module:
var browser = new Browser(with => with.Module(new MySimpleModule()));
But I get a compile time error in F# when I try:
let browser = new Browser(fun req -> req.Module(new MenuModule()))
EDIT Error: No overloads match for method 'Browser'
Are there any examples of this in F#?
Also, is this the best way to go about this in F#?
This is how I run Nancy tests in F#:
I create a new bootstrapper in my test project by deriving from the DefaultNancyBootstrapper. I use this bootstrapper to register my mocks:
type Bootstrapper() =
    inherit DefaultNancyBootstrapper()

    override this.ConfigureApplicationContainer(container : TinyIoCContainer) =
        base.ConfigureApplicationContainer(container)
        container.Register<IMyClass, MyMockClass>() |> ignore
Then I write a simple test method to execute a GET request like so:
[<TestFixture>]
type ``Health Check Tests`` () =

    [<Test>]
    member test.``Given the service is healthy the health check endpoint returns a HTTP 200 response with status message "Everything is OK"`` () =
        let bootstrapper = new Bootstrapper()
        let browser = new Browser(bootstrapper)
        let result = browser.Get("/healthcheck")
        let healthCheckResponse = JsonSerializer.deserialize<HealthCheckResponse> <| result.Body.AsString()

        result.StatusCode |> should equal HttpStatusCode.OK
        healthCheckResponse.Message |> should equal "Everything is OK"
Let me know if this helps!

Guarantee order of messages posted to mailbox processor

I have a mailbox processor which receives a fixed number of messages:
let consumeThreeMessages = MailboxProcessor.Start(fun inbox ->
    async {
        let! msg1 = inbox.Receive()
        printfn "msg1: %s" msg1

        let! msg2 = inbox.Receive()
        printfn "msg2: %s" msg2

        let! msg3 = inbox.Receive()
        printfn "msg3: %s" msg3
    })

consumeThreeMessages.Post("First message")
consumeThreeMessages.Post("Second message")
consumeThreeMessages.Post("Third message")
These messages should be handled in exactly the order sent. During my testing, it prints out exactly what it should:
First message
Second message
Third message
However, since message posting is asynchronous, it sounds like posting 3 messages rapidly could result in items being processed in any order. For example, I do not want to receive messages out of order and get something like this:
Second message // <-- oh noes!
First message
Third message
Are messages guaranteed to be received and processed in the order sent? Or is it possible for messages to be received or processed out of order?
The code in your consumeThreeMessages function will always execute in order, because of the way F#'s async workflows work.
The following code:
async {
    let! msg1 = inbox.Receive()
    printfn "msg1: %s" msg1
    let! msg2 = inbox.Receive()
    printfn "msg2: %s" msg2
}
Roughly translates to:
async.Bind(
    inbox.Receive(),
    (fun msg1 ->
        printfn "msg1: %s" msg1
        async.Bind(
            inbox.Receive(),
            (fun msg2 -> printfn "msg2: %s" msg2))))
When you look at the desugared form, it is clear that the code executes in serial. The 'async' part comes into play in the implementation of async.Bind, which will start the computation asynchronously and 'wake up' when it completes to finish the execution. This way you can take advantage of asynchronous hardware operations, and not waste time on OS threads waiting for IO operations.
That doesn't mean that you can't run into concurrency issues when using F#'s async workflows, however. Imagine that you did the following:
let total = ref 0

let doTaskAsync() =
    async {
        for i = 0 to 1000 do
            incr total
    } |> Async.Start

// Start the task twice
doTaskAsync()
doTaskAsync()
The above code will have two asynchronous workflows modifying the same state at the same time.
So, to answer your question in brief: within the body of a single async block, things will always execute in order. (That is, the next line after a let! or do! doesn't execute until the async operation completes.) However, if you share state between two async tasks, then all bets are off. In that case you will need to consider locking or using the concurrent data structures that come with CLR 4.0.
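Along those lines, one idiomatic alternative to locking, as a hedged sketch: route the shared updates through a single MailboxProcessor. Its queue is processed one message at a time, so the increments can no longer interleave.

```fsharp
type CounterMsg =
    | Incr
    | Get of AsyncReplyChannel<int>

// All mutation happens inside one mailbox loop, one message at a time.
let counter = MailboxProcessor.Start(fun inbox ->
    let rec loop total = async {
        let! msg = inbox.Receive()
        match msg with
        | Incr -> return! loop (total + 1)
        | Get reply ->
            reply.Reply total
            return! loop total }
    loop 0)

// Even if many tasks post concurrently, each Incr is applied serially.
for _ in 1 .. 1000 do counter.Post Incr
printfn "total = %d" (counter.PostAndReply Get)
```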