Spray HTTP client and thousands of requests - akka

I want to configure the spray HTTP client so that it limits the maximum number of requests sent to the server at once. I need this because the server I'm sending requests to blocks me if more than 2 requests are sent at a time. I get
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on
I need to send thousands of requests, but I get blocked after receiving responses for roughly 100 of them.
I have this method:
implicit val system = ActorSystem("smallTasks")
implicit val timeout = new Timeout(15.seconds)

import system.dispatcher

def doHttpRequest(url: String): Future[HttpResponse] = {
  (IO(Http) ? HttpRequest(GET, Uri(url))).mapTo[HttpResponse]
}
And here I handle the responses and retry recursively on failure:
def getOnlineOffers(modelId: Int, count: Int = 0): Future[Any] = {
  val result = Promise[Any]()
  AkkaSys.doHttpRequest(Market.modelOffersUrl(modelId)).map(response => {
    val responseCode = response.status.intValue
    if (List(400, 404).contains(responseCode)) {
      result.success("Bad request")
    } else if (responseCode == 200) {
      Try {
        Json.parse(response.entity.asString).asOpt[JsObject]
      } match {
        case Success(Some(obj)) =>
          Try {
            (obj \\ "onlineOffers").head.as[Int]
          } match {
            case Success(offers) => result.success(offers)
            case _ => result.success("Can't find property")
          }
        case _ => result.success("Wrong body")
      }
    } else {
      result.success("Unexpected error")
    }
  }).recover { case err =>
    if (count > 5) {
      result.success("Too many tries")
    } else {
      println(err.toString)
      Thread.sleep(200)
      getOnlineOffers(modelId, count + 1).map(r => result.success(r))
    }
  }
  result.future
}
How do I do this properly? Maybe I need to configure the Akka dispatcher somehow?

You can use spray-client (http://spray.io/documentation/1.2.2/spray-client/) and write your own pipeline:
val pipeline: Future[SendReceive] =
  for (
    Http.HostConnectorInfo(connector, _) <-
      IO(Http) ? Http.HostConnectorSetup("www.spray.io", port = 80)
  ) yield sendReceive(connector)

val request = Get("/segment1/segment2/...")
val responseFuture: Future[HttpResponse] = pipeline.flatMap(_(request))
To get the HttpResponse:

import scala.concurrent.Await
import scala.concurrent.duration._

val response: HttpResponse = Await.result(responseFuture, ...)

To convert the body (this needs a spray-json format for T in scope):

import spray.json._

response.entity.asString.parseJson.convertTo[T]

To check that the body parses:

Try(response.entity.asString.parseJson).isSuccess

Your version has a lot of nested brackets; in Scala you can write it more concisely.
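If the goal is simply to cap how many requests spray keeps in flight against that host, the usual knob is spray-can's host-connector configuration rather than the Akka dispatcher. A minimal application.conf sketch, assuming spray-can 1.2.x and that the server tolerates about 2 parallel connections (tune the numbers for your case):

spray.can.host-connector {
  # open at most 2 connections to the target host
  max-connections = 2
  # retry a failed request a few times before failing the future
  max-retries = 3
  # send one request at a time per connection
  pipelining = disabled
}

Requests sent through a host connector beyond that limit are queued by spray rather than sent immediately; you may still need a generous request/ask timeout if thousands of requests pile up in the queue.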

Related

akka spray : java.lang.IllegalArgumentException: requirement failed

I am new to akka and spray :)
I have this router actor:
class ProjectResource extends HttpServiceActor with DefaultJsonFormats {
  import spray.http.MediaTypes.`application/json`

  def receive = runRoute {
    pathPrefix("rest" / "1.2") {
      path("users" / Segment / "projects" / Segment / "auth_url") {
        case (key, id) =>
          get {
            respondWithMediaType(`application/json`) {
              requestContext =>
                val responder = context actorOf Responder.props(sender(), requestContext)
                context actorOf GetTask.props(key, id, responder)
            }
          }
      }
    }
  }
}
and responder:
class Responder(replyTo: ActorRef, ctx: RequestContext) extends Actor with DefaultJsonFormats {
  implicit val authFormat = jsonFormat2(AuthDTO)

  def receive = {
    case x: AuthDTO =>
      ctx.complete(200, x)
      context stop self
  }
}
and when I try

curl -v http://localhost:8080/rest/1.2/users/123/projects/1/auth_url

I get this error (on the ctx.complete(200, x) line):
[ERROR] [07/11/2016 18:40:32.954] [default-akka.actor.default-dispatcher-3] [akka://default/user/projectResource] Error during processing of request HttpRequest(GET,http://localhost:8080/rest/1.2/users/123/projects/1/auth_url,List(User-Agent: curl/7.35.0, Host: localhost:8080),Empty,HTTP/1.1)
java.lang.IllegalArgumentException: requirement failed
What am I doing wrong? Please help!

unit testing - stubbing or mocking finagle client

Here are the few lines of code I would like to unit test for different HTTP response codes. One of them is code 201. Please advise.
val cb = ClientBuilder()
val myClient = cb.build()

val req =
  RequestBuilder()
    .url(loginUrl)
    .setHeader("Content-Type", "application/json")
    .buildPost(copiedBuffer(body.getBytes(UTF_8)))

myClient(req).map(http.Response(_)).flatMap { response =>
  response.statusCode match {
    case 201 => // parse response
    case _ =>
      println(" some response")
      Future(proxy(response))
  }
}
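One way to make this testable without hitting the network is to treat the client as a value of type Service[Request, Response] that the code under test receives as a parameter, and hand it a canned service in the test. A minimal sketch, assuming the typed finagle-http Request/Response API rather than the raw ClientBuilder types above (the names stubCreated and handleLogin are hypothetical, not from the code in the question):

import com.twitter.finagle.Service
import com.twitter.finagle.http.{Request, Response, Status}
import com.twitter.util.Future

// A canned client that always answers 201 Created.
val stubCreated: Service[Request, Response] = Service.mk { (_: Request) =>
  Future.value(Response(Status.Created))
}

// If the block above is factored into something like
//   def handleLogin(client: Service[Request, Response]): Future[Response] = ...
// then the unit test just calls handleLogin(stubCreated) and asserts on the result.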

Is it possible to create an "infinite" stream from a database table using Akka Stream

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream which uses a database table as a source, so that whenever a record is added to the table that record is materialized and pushed downstream?
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)

class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
  def id = column[Int]("id")
  def value = column[String]("value")
  def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")

try {
  val newRecStream = Source.unfold((0, List[Record]())) { n =>
    try {
      val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
      val r = Source.fromPublisher(db.stream(q.result)).collect {
        case rec => println(s"${rec.id}, ${rec.value}"); rec
      }.runFold((n._1, List[Record]())) {
        case ((id, xs), current) => (current.id, current :: xs)
      }
      val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
      Option(answer, None)
    }
    catch { case e: Exception => println(e); Option(n, e) }
  }

  Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
  system.shutdown
  db.close
}
But my problem is that when I attempt to call flatMapConcat, the element type I get is Serializable (presumably because one branch emits Option(answer, None) and the other Option(n, e), and the least upper bound of None and an Exception is Serializable).
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1

try {
  val newRecStream = Source.unfoldAsync(0) { n =>
      val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }

  Await.ready(newRecStream, Duration.Inf)
} catch {
  case ex: Throwable => println(ex)
} finally {
  system.shutdown
  db.close
}
Which works (I changed the query limit to 1 since I only have a couple of items in my database table currently), except that once it prints the last row in the table the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
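For what it's worth, the same guard can be written in one line with nothing but the standard library's lastOption:

val lastId = recs.lastOption.fold(n)(_.id)
Some(lastId, recs)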
In general an SQL database is a 'passive' construct and does not actively push changes the way you describe. You can only simulate the 'push' with periodic polling, for example:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // break the batches down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (pendingRecords.isEmpty) {
        None
      } else {
        // take one record from pendingRecords and save to newRec. Save the rest into remainingRecords.
        Some(remainingRecords, newRec)
      }
    }
  }
Updated: 2/24/2016
Pseudo code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10

try {
  val completion = Source
    .unfoldAsync(0) { lastRowId =>
      val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }

  // Block forever
  Await.ready(completion, Duration.Inf)
} catch {
  case ex: Throwable => println(ex)
} finally {
  system.shutdown
  db.close
}
It will repeatedly execute the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time and sending the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end actually blocks forever.
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object Infinite extends App {
  implicit val system = ActorSystem("Publisher")
  implicit val ec = system.dispatcher
  implicit val materializer = ActorMaterializer()

  case class Record(id: Int, value: String)

  try {
    val completion = Source
      .unfoldAsync(0) { lastRowId =>
        Future {
          val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
          Some(recs.last.id, recs)
        }
      }
      .throttle(1, 1.second, 1, ThrottleMode.Shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .runForeach { rec =>
        println(rec)
      }

    Await.ready(completion, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex)
  } finally {
    system.shutdown
  }
}
Here is working code for infinite streaming from the database. It has been tested with millions of records being inserted into a PostgreSQL database while the streaming app is running:
package infinite.streams.db

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}

case class Record(id: Int, value: String) {
  val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}

object InfiniteStreamingApp extends App {
  println("Starting app...")

  implicit val system: ActorSystem = ActorSystem("Publisher")
  implicit val ec: ExecutionContextExecutor = system.dispatcher
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  println("Initializing database configuration...")
  val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
  implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)

  import databaseConfig.profile.api._

  class Records(tag: Tag) extends Table[Record](tag, "test2") {
    def id = column[Int]("c1")
    def value = column[String]("c2")
    def * = (id, value) <> (Record.tupled, Record.unapply)
  }

  val db = databaseConfig.db
  println("Prime for streaming...")

  val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
    case (id, value) => (id, value.toUpperCase)
  }

  val fetchSize = 5
  try {
    val done = Source
      .unfoldAsync(0) { lastId =>
        println(s"Fetching next: $fetchSize records with id > $lastId")
        val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
        db.run(query.result.withPinnedSession)
          .map { recs => Some(recs.last.id, recs) }
      }
      .throttle(5, 1.second, 1, ThrottleMode.shaping)
      .flatMapConcat { recs => Source.fromIterator(() => recs.iterator) }
      .map(x => (x.id, x.content))
      .via(logic)
      .log("*******Post Transformation******")
      // .runWith(Sink.foreach(r => println("SINK: " + r._2)))
      // Use runForeach or runWith(Sink)
      .runForeach(rec => println("REC: " + rec))

    println("Waiting for result....")
    Await.ready(done, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex.getMessage)
  } finally {
    println("Streaming end successfully")
    db.close()
    system.terminate()
  }
}
application.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

# Load using SlickSession.forConfig("slick-postgres")
postgres3 {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    dataSourceClass = "slick.jdbc.DriverDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost/testdb"
      user = "postgres"
      password = "postgres"
    }
    numThreads = 2
  }
}
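For completeness: SlickSession comes from the Alpakka Slick connector, so the build presumably also needs dependencies along these lines (the versions are only examples; pick whatever matches your Akka and Slick setup):

libraryDependencies ++= Seq(
  "com.lightbend.akka" %% "akka-stream-alpakka-slick" % "1.1.2",
  "org.postgresql" % "postgresql" % "42.2.5"
)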

Akka-stream and delegating processing to an actor

I have the following case, where I'm trying to delegate processing to an actor. What I want to happen is that whenever my flow processes a message, it sends it to the actor, and the actor will uppercase it and write it to the stream as a response.
So I should be able to connect to port 8000, type in "hello", have the flow send it to the actor, and have the actor publish it back to the stream so it's echoed back to me uppercased. The actor itself is pretty basic, from the ActorPublisher example in the docs.
I know this code doesn't work; I cleaned up my experiments just enough to get it to compile. Right now it's just two separate streams. I tried to experiment with merging the sources or the sinks, to no avail.
object Sample {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("sample")
    implicit val materializer = ActorMaterializer()

    val connections: Source[IncomingConnection, Future[ServerBinding]] =
      Tcp().bind("localhost", 8000)

    val filter = Source.actorPublisher[ByteString](Props[Filter])
    val filterRef = Flow[ByteString]
      .to(Sink.ignore)
      .runWith(filter)

    connections runForeach { conn =>
      val echo = Flow[ByteString].map {
        // would like to send 'p' to the actor,
        // and have it publish to the stream
        case p: ByteString => filterRef ! p
      }
    }
  }
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString] with Actor {
  var buf = Vector.empty[ByteString]
  val delay = 0

  def receive = {
    case p: ByteString =>
      if (buf.isEmpty && totalDemand > 0)
        onNext(p)
      else {
        buf :+= ByteString(p.utf8String.toUpperCase)
        deliverBuf()
      }
    case Request(_) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  @tailrec final def deliverBuf(): Unit =
    if (totalDemand > 0) {
      if (totalDemand <= Int.MaxValue) {
        val (use, keep) = buf.splitAt(totalDemand.toInt)
        buf = keep
        use foreach onNext
      } else {
        val (use, keep) = buf.splitAt(Int.MaxValue)
        buf = keep
        use foreach onNext
        deliverBuf()
      }
    }
}
I've had this problem before too and solved it in a slightly roundabout way; hopefully you're OK with that. Essentially, it involves creating a sink that immediately forwards the messages it receives to the source actor.
Of course, you can use a direct flow (commented out below), but I guess that's not the point of this exercise :)
object Sample {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("sample")
    implicit val materializer = ActorMaterializer()

    val connections: Source[IncomingConnection, Future[ServerBinding]] =
      Tcp().bind("localhost", 8000)

    def filterProps = Props[Filter]

    connections runForeach { conn =>
      val actorRef = system.actorOf(filterProps)
      val snk = Sink.foreach[ByteString] { s => actorRef ! s }
      val src = Source.fromPublisher(ActorPublisher[ByteString](actorRef))
      conn.handleWith(Flow.fromSinkAndSource(snk, src))
      // conn.handleWith(Flow[ByteString].map(s => ByteString(s.utf8String.toUpperCase())))
    }
  }
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString] {
  import akka.stream.actor.ActorPublisherMessage._

  var buf = mutable.Queue.empty[String]
  val delay = 0

  def receive = {
    case p: ByteString =>
      buf += p.utf8String.toUpperCase
      deliverBuf()
    case Request(n) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  def deliverBuf(): Unit = {
    while (totalDemand > 0 && buf.nonEmpty) {
      val s = ByteString(buf.dequeue() + "\n")
      onNext(s)
    }
  }
}
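With this in place you can check the behaviour by hand: connect with nc localhost 8000 (or telnet), type a line such as hello, and it should come back as HELLO on its own line.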

Writing a unit test for Play websockets

I am working on a Scala + Play application utilizing websockets. I have a simple web socket defined as such:
def indexWS = WebSocket.using[String] { request =>
  val out = Enumerator("Hello!")
  val in = Iteratee.foreach[String](println).map { _ =>
    println("Disconnected")
  }

  (in, out)
}
I have verified this works using Chrome's console. The issue I'm having is trying to write a unit test for this. Currently I have this:
"send awk for websocket connection" in {
running(FakeApplication()){
val js = route(FakeRequest(GET,"/WS")).get
status(js) must equalTo (OK)
contentType(js) must beSome.which(_ == "text/javascript")
}
}
However, when running my tests in the Play console, I receive this error, where line 35 corresponds to the line val js = route(FakeRequest(GET,"/WS")).get:
NoSuchElementException: None.get (ApplicationSpec.scala:35)
I have not been able to find a good example of unit testing scala/play websockets and am confused on how to properly write this test.
Inspired by the answer from bruce-lowe, here is an alternative example using Hookup:
import java.net.URI
import io.backchat.hookup._
import org.specs2.mutable._
import play.api.test._
import scala.collection.mutable.ListBuffer

class ApplicationSpec extends Specification {

  "Application" should {

    "Test websocket" in new WithServer(port = 9000) {
      val hookupClient = new DefaultHookupClient(HookupClientConfig(URI.create("ws://localhost:9000/ws"))) {
        val messages = ListBuffer[String]()

        def receive = {
          case Connected =>
            println("Connected")
          case Disconnected(_) =>
            println("Disconnected")
          case JsonMessage(json) =>
            println("Json message = " + json)
          case TextMessage(text) =>
            messages += text
            println("Text message = " + text)
        }

        connect() onSuccess {
          case Success => send("Hello Server")
        }
      }

      hookupClient.messages.contains("Hello Client") must beTrue.eventually
    }
  }
}
The example assumes the websocket actor replies with the text "Hello Client".
To include the library, add this line to libraryDependencies in build.sbt:
"io.backchat.hookup" %% "hookup" % "0.4.2"
A bit late to answer this one, but in case it's useful, here is how I wrote a test for my websockets. It uses a library from here (https://github.com/TooTallNate/Java-WebSocket).
import org.specs2.mutable._
import play.api.test.Helpers._
import play.api.test._

class ApplicationSpec extends Specification {

  "Application" should {

    "work" in {
      running(TestServer(9000)) {
        val clientInteraction = new ClientInteraction()
        clientInteraction.client.connectBlocking()

        clientInteraction.client.send("Hello Server")

        eventually {
          clientInteraction.messages.contains("Hello Client")
        }
      }
    }
  }
}
And a little utility class to store all messages / events (I'm sure you can enhance it yourself to meet your needs)
import java.net.URI
import org.java_websocket.client.WebSocketClient
import org.java_websocket.drafts.Draft_17
import org.java_websocket.handshake.ServerHandshake
import collection.JavaConversions._
import scala.collection.mutable.ListBuffer

class ClientInteraction {

  val messages = ListBuffer[String]()

  val client = new WebSocketClient(URI.create("ws://localhost:9000/wsWithActor"),
    new Draft_17(), Map("HeaderKey1" -> "HeaderValue1"), 0) {

    def onError(p1: Exception) {
      println("onError")
    }

    def onMessage(message: String) {
      messages += message
      println("onMessage, message = " + message)
    }

    def onClose(code: Int, reason: String, remote: Boolean) {
      println("onClose")
    }

    def onOpen(handshakedata: ServerHandshake) {
      println("onOpen")
    }
  }
}
This is in my SBT file
libraryDependencies ++= Seq(
  ws,
  "org.java-websocket" % "Java-WebSocket" % "1.3.0",
  "org.specs2" %% "specs2-core" % "3.7" % "test"
)
(There is a sample program with a test here: https://github.com/BruceLowe/play-with-websockets)
I think you can check this site; it has a pretty good example of testing websockets with Spec.
This is a sample from Typesafe:
/*
* Copyright (C) 2009-2014 Typesafe Inc. <http://www.typesafe.com>
*/
package play.it.http.websocket
import play.api.test._
import play.api.Application
import scala.concurrent.{Future, Promise}
import play.api.mvc.{Handler, Results, WebSocket}
import play.api.libs.iteratee._
import java.net.URI
import org.jboss.netty.handler.codec.http.websocketx._
import org.specs2.matcher.Matcher
import akka.actor.{ActorRef, PoisonPill, Actor, Props}
import play.mvc.WebSocket.{Out, In}
import play.core.Router.HandlerDef
import java.util.concurrent.atomic.AtomicReference
import org.jboss.netty.buffer.ChannelBuffers
object WebSocketSpec extends PlaySpecification with WsTestClient {
sequential
def withServer[A](webSocket: Application => Handler)(block: => A): A = {
val currentApp = new AtomicReference[FakeApplication]
val app = FakeApplication(
withRoutes = {
case (_, _) => webSocket(currentApp.get())
}
)
currentApp.set(app)
running(TestServer(testServerPort, app))(block)
}
def runWebSocket[A](handler: (Enumerator[WebSocketFrame], Iteratee[WebSocketFrame, _]) => Future[A]): A = {
val innerResult = Promise[A]()
WebSocketClient { client =>
await(client.connect(URI.create("ws://localhost:" + testServerPort + "/stream")) { (in, out) =>
innerResult.completeWith(handler(in, out))
})
}
await(innerResult.future)
}
def textFrame(matcher: Matcher[String]): Matcher[WebSocketFrame] = beLike {
case t: TextWebSocketFrame => t.getText must matcher
}
def closeFrame(status: Int = 1000): Matcher[WebSocketFrame] = beLike {
case close: CloseWebSocketFrame => close.getStatusCode must_== status
}
def binaryBuffer(text: String) = ChannelBuffers.wrappedBuffer(text.getBytes("utf-8"))
/**
* Iteratee getChunks that invokes a callback as soon as it's done.
*/
def getChunks[A](chunks: List[A], onDone: List[A] => _): Iteratee[A, List[A]] = Cont {
case Input.El(c) => getChunks(c :: chunks, onDone)
case Input.EOF =>
val result = chunks.reverse
onDone(result)
Done(result, Input.EOF)
case Input.Empty => getChunks(chunks, onDone)
}
/*
* Shared tests
*/
def allowConsumingMessages(webSocket: Application => Promise[List[String]] => Handler) = {
val consumed = Promise[List[String]]()
withServer(app => webSocket(app)(consumed)) {
val result = runWebSocket { (in, out) =>
Enumerator(new TextWebSocketFrame("a"), new TextWebSocketFrame("b"), new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result must_== Seq("a", "b")
}
}
def allowSendingMessages(webSocket: Application => List[String] => Handler) = {
withServer(app => webSocket(app)(List("a", "b"))) {
val frames = runWebSocket { (in, out) =>
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
textFrame(be_==("a")),
textFrame(be_==("b")),
closeFrame()
).inOrder)
}
}
def cleanUpWhenClosed(webSocket: Application => Promise[Boolean] => Handler) = {
val cleanedUp = Promise[Boolean]()
withServer(app => webSocket(app)(cleanedUp)) {
runWebSocket { (in, out) =>
out.run
cleanedUp.future
} must beTrue
}
}
def closeWhenTheConsumerIsDone(webSocket: Application => Handler) = {
withServer(app => webSocket(app)) {
val frames = runWebSocket { (in, out) =>
Enumerator[WebSocketFrame](new TextWebSocketFrame("foo")) |>> out
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
closeFrame()
))
}
}
def allowRejectingTheWebSocketWithAResult(webSocket: Application => Int => Handler) = {
withServer(app => webSocket(app)(FORBIDDEN)) {
implicit val port = testServerPort
await(wsUrl("/stream").withHeaders(
"Upgrade" -> "websocket",
"Connection" -> "upgrade"
).get()).status must_== FORBIDDEN
}
}
"Plays WebSockets" should {
"allow consuming messages" in allowConsumingMessages { _ => consumed =>
WebSocket.using[String] { req =>
(getChunks[String](Nil, consumed.success _), Enumerator.empty)
}
}
"allow sending messages" in allowSendingMessages { _ => messages =>
WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.enumerate(messages) >>> Enumerator.eof)
}
}
"close when the consumer is done" in closeWhenTheConsumerIsDone { _ =>
WebSocket.using[String] { req =>
(Iteratee.head, Enumerator.empty)
}
}
"clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.empty[String].onDoneEnumerating(cleanedUp.success(true)))
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
WebSocket.tryAccept[String] { req =>
Future.successful(Left(Results.Status(statusCode)))
}
}
"allow handling a WebSocket with an actor" in {
"allow consuming messages" in allowConsumingMessages { implicit app => consumed =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
var messages = List.empty[String]
def receive = {
case msg: String =>
messages = msg :: messages
}
override def postStop() = {
consumed.success(messages.reverse)
}
})
}
}
"allow sending messages" in allowSendingMessages { implicit app => messages =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
messages.foreach { msg =>
out ! msg
}
out ! PoisonPill
def receive = PartialFunction.empty
})
}
}
"close when the consumer is done" in closeWhenTheConsumerIsDone { implicit app =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
out ! PoisonPill
def receive = PartialFunction.empty
})
}
}
"clean up when closed" in cleanUpWhenClosed { implicit app => cleanedUp =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
def receive = PartialFunction.empty
override def postStop() = {
cleanedUp.success(true)
}
})
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { implicit app => statusCode =>
WebSocket.tryAcceptWithActor[String, String] { req =>
Future.successful(Left(Results.Status(statusCode)))
}
}
"aggregate text frames" in {
val consumed = Promise[List[String]]()
withServer(app => WebSocket.using[String] { req =>
(getChunks[String](Nil, consumed.success _), Enumerator.empty)
}) {
val result = runWebSocket { (in, out) =>
Enumerator(
new TextWebSocketFrame("first"),
new TextWebSocketFrame(false, 0, "se"),
new ContinuationWebSocketFrame(false, 0, "co"),
new ContinuationWebSocketFrame(true, 0, "nd"),
new TextWebSocketFrame("third"),
new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result must_== Seq("first", "second", "third")
}
}
"aggregate binary frames" in {
val consumed = Promise[List[Array[Byte]]]()
withServer(app => WebSocket.using[Array[Byte]] { req =>
(getChunks[Array[Byte]](Nil, consumed.success _), Enumerator.empty)
}) {
val result = runWebSocket { (in, out) =>
Enumerator(
new BinaryWebSocketFrame(binaryBuffer("first")),
new BinaryWebSocketFrame(false, 0, binaryBuffer("se")),
new ContinuationWebSocketFrame(false, 0, binaryBuffer("co")),
new ContinuationWebSocketFrame(true, 0, binaryBuffer("nd")),
new BinaryWebSocketFrame(binaryBuffer("third")),
new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result.map(b => b.toSeq) must_== Seq("first".getBytes("utf-8").toSeq, "second".getBytes("utf-8").toSeq, "third".getBytes("utf-8").toSeq)
}
}
"close the websocket when the buffer limit is exceeded" in {
withServer(app => WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.empty)
}) {
val frames = runWebSocket { (in, out) =>
Enumerator[WebSocketFrame](
new TextWebSocketFrame(false, 0, "first frame"),
new ContinuationWebSocketFrame(true, 0, new String(Array.range(1, 65530).map(_ => 'a')))
) |>> out
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
closeFrame(1009)
))
}
}
}
"allow handling a WebSocket in java" in {
import play.core.Router.HandlerInvokerFactory
import play.core.Router.HandlerInvokerFactory._
import play.mvc.{ WebSocket => JWebSocket, Results => JResults }
import play.libs.F
implicit def toHandler[J <: AnyRef](javaHandler: J)(implicit factory: HandlerInvokerFactory[J]): Handler = {
val invoker = factory.createInvoker(
javaHandler,
new HandlerDef(javaHandler.getClass.getClassLoader, "package", "controller", "method", Nil, "GET", "", "/stream")
)
invoker.call(javaHandler)
}
"allow consuming messages" in allowConsumingMessages { _ => consumed =>
new JWebSocket[String] {
@volatile var messages = List.empty[String]
def onReady(in: In[String], out: Out[String]) = {
in.onMessage(new F.Callback[String] {
def invoke(msg: String) = messages = msg :: messages
})
in.onClose(new F.Callback0 {
def invoke() = consumed.success(messages.reverse)
})
}
}
}
"allow sending messages" in allowSendingMessages { _ => messages =>
new JWebSocket[String] {
def onReady(in: In[String], out: Out[String]) = {
messages.foreach { msg =>
out.write(msg)
}
out.close()
}
}
}
"clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
new JWebSocket[String] {
def onReady(in: In[String], out: Out[String]) = {
in.onClose(new F.Callback0 {
def invoke() = cleanedUp.success(true)
})
}
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
JWebSocket.reject[String](JResults.status(statusCode))
}
"allow handling a websocket with an actor" in allowSendingMessages { _ => messages =>
JWebSocket.withActor[String](new F.Function[ActorRef, Props]() {
def apply(out: ActorRef) = {
Props(new Actor() {
messages.foreach { msg =>
out ! msg
}
out ! PoisonPill
def receive = PartialFunction.empty
})
}
})
}
}
}
}