Is it possible to create an "infinite" stream from a database table using Akka Stream - slick-3.0

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream that uses a database table as a source, where any record added to the table is materialized and pushed downstream?
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)

class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
  def id = column[Int]("id")
  def value = column[String]("value")
  def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
try {
  val newRecStream = Source.unfold((0, List[Record]())) { n =>
    try {
      val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
      val r = Source.fromPublisher(db.stream(q.result)).collect {
        case rec => println(s"${rec.id}, ${rec.value}"); rec
      }.runFold((n._1, List[Record]())) {
        case ((id, xs), current) => (current.id, current :: xs)
      }
      val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
      Option(answer, None)
    }
    catch { case e: Exception => println(e); Option(n, e) }
  }

  Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
  system.shutdown
  db.close
}
But my problem is that when I attempt to call flatMapConcat, the element type I get is Serializable: the success branch returns Option(answer, None) while the catch branch returns Option(n, e), so the element type is the common supertype of None.type and Exception, which the compiler infers as Serializable.
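A minimal sketch of a version that keeps the element type concrete (my reworking of the code above; it still blocks inside unfold, and the unfoldAsync approach below is the better fix):

import akka.NotUsed

// Sketch: the state is the last seen id, the element is the batch fetched in
// this round, so the unfolded stream is a Source[List[Record], NotUsed].
val newRecStream: Source[List[Record], NotUsed] =
  Source.unfold(0) { lastId =>
    val batchF = Source
      .fromPublisher(db.stream(TableQuery[Records].filter(_.id > lastId).result))
      .runFold((lastId, List.empty[Record])) {
        case ((_, xs), rec) => (rec.id, rec :: xs)
      }
    val (newLastId, recs) = Await.result(batchF, 5.seconds)
    Some((newLastId, recs.reverse)) // the same Option[(Int, List[Record])] shape on every path
  }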
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1
try {
  val newRecStream = Source
    .unfoldAsync(0) { n =>
      val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }

  Await.ready(newRecStream, Duration.Inf)
}
catch {
  case ex: Throwable => println(ex)
}
finally {
  system.shutdown
  db.close
}
Which works (I changed the query limit to 1 since I only have a couple of items in my database table currently) - except once it prints the last row in the table, the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
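For reference, the complete polling step with the guard applied (assembled from the updates above, nothing new):

val newRecStream = Source.unfoldAsync(0) { n =>
  val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
  db.run(q.result).map { recs =>
    // Keep the old offset when the poll returns nothing, and emit an empty batch.
    val lastId = if (recs.isEmpty) n else recs.last.id
    Some(lastId, recs)
  }
}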

In general, a SQL database is a 'passive' construct and does not actively push changes the way you describe. You can only 'simulate' the 'push' with periodic polling, like:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // break each batch down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (pendingRecords.isEmpty) {
        None
      } else {
        // take one record from pendingRecords into newRec; save the rest into remainingRecords
        Some(remainingRecords, newRec)
      }
    }
  }
Updated: 2/24/2016
Pseudo code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10
try {
  val completion = Source
    .unfoldAsync(0) { lastRowId =>
      val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }

  // Block forever
  Await.ready(completion, Duration.Inf)
} catch {
  case ex: Throwable => println(ex)
} finally {
  system.shutdown
  db.close
}
It will repeatedly execute the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time, and send the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end will actually block forever.
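Since the stream never completes on its own, you may also want a handle for shutting it down cleanly. A sketch using a kill switch, assuming an Akka Streams version where KillSwitches is available (it was added shortly after 2.4.2), with the empty-result guard from the question's update folded in:

import akka.stream.KillSwitches
import akka.stream.scaladsl.{Keep, Sink, Source}

// Materialize both the switch and the completion future.
val (switch, completion) = Source
  .unfoldAsync(0) { lastRowId =>
    val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
    db.run(q.result).map { recs =>
      val lastId = if (recs.isEmpty) lastRowId else recs.last.id
      Some(lastId, recs)
    }
  }
  .throttle(1, 1.second, 1, ThrottleMode.shaping)
  .flatMapConcat(recs => Source.fromIterator(() => recs.iterator))
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.foreach(rec => println(s"${rec.id}, ${rec.value}")))(Keep.both)
  .run()

// Later, e.g. from a shutdown hook:
switch.shutdown() // stops the polling and completes `completion` normally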
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object Infinite extends App {
  implicit val system = ActorSystem("Publisher")
  implicit val ec = system.dispatcher
  implicit val materializer = ActorMaterializer()

  case class Record(id: Int, value: String)

  try {
    val completion = Source
      .unfoldAsync(0) { lastRowId =>
        Future {
          // Simulate a DB poll: fabricate the next batch of records.
          val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
          Some(recs.last.id, recs)
        }
      }
      .throttle(1, 1.second, 1, ThrottleMode.Shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .runForeach { rec =>
        println(rec)
      }
    Await.ready(completion, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex)
  } finally {
    system.shutdown
  }
}

Here is working code for infinite streaming from a database. It has been tested with millions of records being inserted into a PostgreSQL database while the streaming app was running:
package infinite.streams.db

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}

case class Record(id: Int, value: String) {
  val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}

object InfiniteStreamingApp extends App {
  println("Starting app...")

  implicit val system: ActorSystem = ActorSystem("Publisher")
  implicit val ec: ExecutionContextExecutor = system.dispatcher
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  println("Initializing database configuration...")
  val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
  implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)

  import databaseConfig.profile.api._

  class Records(tag: Tag) extends Table[Record](tag, "test2") {
    def id = column[Int]("c1")
    def value = column[String]("c2")
    def * = (id, value) <> (Record.tupled, Record.unapply)
  }

  val db = databaseConfig.db
  println("Prime for streaming...")

  val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
    case (id, value) => (id, value.toUpperCase)
  }

  val fetchSize = 5
  try {
    val done = Source
      .unfoldAsync(0) { lastId =>
        println(s"Fetching next: $fetchSize records with id > $lastId")
        val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
        db.run(query.result.withPinnedSession)
          .map { recs =>
            // Guard against an empty result set (recs.last would throw otherwise).
            val nextId = if (recs.isEmpty) lastId else recs.last.id
            Some(nextId, recs)
          }
      }
      .throttle(5, 1.second, 1, ThrottleMode.shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .map(x => (x.id, x.content))
      .via(logic)
      .log("*******Post Transformation******")
      // .runWith(Sink.foreach(r => println("SINK: " + r._2)))
      // Use runForeach or runWith(Sink)
      .runForeach(rec => println("REC: " + rec))

    println("Waiting for result....")
    Await.ready(done, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex.getMessage)
  } finally {
    println("Streaming end successfully")
    db.close()
    system.terminate()
  }
}
application.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

# Load using SlickSession.forConfig("postgres3")
postgres3 {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    dataSourceClass = "slick.jdbc.DriverDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost/testdb"
      user = "postgres"
      password = "postgres"
    }
    numThreads = 2
  }
}
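Side note: since this answer already pulls in Alpakka's SlickSession, each polling batch could also be expressed with Alpakka's Slick.source, which streams a query result directly. A rough sketch (the plain-SQL query and the row mapper are my assumptions, not part of the code above):

import akka.NotUsed
import akka.stream.alpakka.slick.scaladsl.Slick
import akka.stream.scaladsl.Source
import slick.jdbc.GetResult
import session.profile.api._ // brings the sql"..." interpolator into scope

// Assumed row mapper for the plain-SQL query below.
implicit val getRecord: GetResult[Record] = GetResult(r => Record(r.nextInt(), r.nextString()))

// One polling batch as an Alpakka source; the query mirrors the filter/take above.
def batch(lastId: Int): Source[Record, NotUsed] =
  Slick.source(sql"SELECT c1, c2 FROM test2 WHERE c1 > $lastId ORDER BY c1 LIMIT 5".as[Record])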

Related

LINQ query throws an exception when getting a count of a type

I wrote a test to filter a list.
Service.cs
public async Task<GetAllParticipantsDto> ListAllAsync(GetAllParticipantsRequest queryModel)
{
    int count = 0;
    var participantsList = participantRepository.AsNoTracking()
        .Include(a => a.ParticipantConnections)!.ThenInclude(a => a.Connection)
        .Include(a => a.ParticipantApplicationUsers)!.ThenInclude(a => a.ApplicationUser)
        .AsQueryable();
    if (participantsList != null && !String.IsNullOrEmpty(queryModel.RelatedConnection))
    {
        participantsList = participantsList.Where(x =>
            x.ParticipantConnections.Any(y => y.Connection.FirstName.ToLower().Contains(queryModel.RelatedConnection.ToLower())) ||
            x.ParticipantConnections.Any(y => y.Connection.LastName.ToLower().Contains(queryModel.RelatedConnection.ToLower())) ||
            x.ParticipantConnections.Any(y => (y.Connection.FirstName.ToLower() + y.Connection.LastName.ToLower()).Contains(queryModel.RelatedConnection.ToLower())) ||
            x.ParticipantConnections.Any(y => (y.Connection.FirstName.ToLower() + " " + y.Connection.LastName.ToLower()).Contains(queryModel.RelatedConnection.ToLower())));
        count = participantsList.Count(); // NullReferenceException thrown on this line
    }
}
unit test
public async void ListAllAsync_ShouldReturn_FilterByBy_RelatedConnection()
{
    var query = new GetAllParticipantsRequest()
    { // others null
        RelatedConnection = "Alice Doe, Bob Doe"
    };
    Func<DSMP.ApplicationCore.Entities.Participant, bool> exists = n => true; // prepare Func outside of Setup

    // 1 - create a List<T> with test items
    var participantList = ParticipantMockData.ListAllAsyncEntity();

    // 2 - build mock by extension
    var mock = participantList.AsQueryable().BuildMock();

    // 3 - setup the mock as Queryable for Moq
    _participantRepository.Setup(x => x.AsNoTracking(false)).Returns(mock);

    var sut = new ParticipantService(
        _participantRepository.Object, _participantSupportNeed.Object, _participantConnection.Object,
        _participantDisability.Object, _participantMedicalCondition.Object, _supportNeed.Object,
        _disability.Object, _medicalCondition.Object, _participantApplicationUser.Object, _appLogger.Object);

    // Act
    var result = await sut.ListAllAsync(query);

    // Assert
}
The exception
Message - System.NullReferenceException: 'Object reference not set to an instance of an object.'
StackTrace -
" at System.Linq.Enumerable.Any[TSource](IEnumerable1 source, Func2 predicate)\r\n at System.Linq.Enumerable.WhereListIterator1.MoveNext()\r\n at System.Collections.Generic.LargeArrayBuilder1.AddRange(IEnumerable1 items)\r\n at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable1 source)\r\n at System.Linq.Enumerable.ToArray[TSource](IEnumerable1 source)\r\n at System.Linq.SystemCore_EnumerableDebugView1.get_Items()"
I could not find why this error occurs.
Can anyone help me?

How to access metrics of Alpakka CommittableSource with back off?

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {
  private val settings = ConsumerSettings(
    context.system,
    new ByteArrayDeserializer,
    new StringDeserializer)
    .withProperties(...)

  override def receive: Receive = Actor.emptyBehavior

  RestartSource
    .withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
    .runForeach { handleMessage }

  private def consumer() = {
    AkkaConsumer
      .committableSource(settings, Subscriptions.topics(Set(topic)))
      .log(getClass.getSimpleName)
      .withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
  }

  private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
    ...
  }
}
How can I get access to the consumer metrics in this case?
We are using the Java prometheus client and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util

import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName

import scala.collection.JavaConverters._
import scala.collection.mutable

class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {
  val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty

  def collect: util.List[MetricFamilySamples] = {
    val server = ManagementFactory.getPlatformMBeanServer
    for {
      attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
      name <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
      attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
    } yield {
      val attrName = attrInfo.getName
      val metricLabels = attrName.split(",").map(_.split("=").toList).collect {
        case "client-id" :: (id: String) :: Nil => ("client-id", id)
      }.toList ++ labels
      val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
      val labelKeys = metricLabels.map(_._1).asJava
      val metric = metrics.getOrElseUpdate(metricName,
        if (metricName.endsWith("_total") || metricName.endsWith("_sum")) {
          new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        } else {
          new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        }: MetricFamilySamples
      )
      val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
      val labelValues = metricLabels.map(_._2).asJava
      metric match {
        case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
        case f: GaugeMetricFamily => f.addMetric(labelValues, metricValue)
        case _ =>
      }
    }
    metrics.values.toList.asJava
  }
}
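To actually expose these metrics, the collector still has to be registered with a registry. A minimal usage sketch, assuming the default Prometheus registry (the "service" label is just an example of an extra static label):

import io.prometheus.client.CollectorRegistry

// Register once at startup; the registry calls collect() on every scrape.
val kafkaCollector = new ConsumerMetricsCollector(Map("service" -> "my-consumer"))
kafkaCollector.register(CollectorRegistry.defaultRegistry)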

Akka Kafka stream supervison strategy not working

I am running an Akka Streams Kafka application and I want to incorporate the supervision strategy on the stream consumer such that if the broker goes down, and the stream consumer dies after a stop timeout, the supervisor can restart the consumer.
Here is my complete code:
UserEventStream:
import akka.actor.{Actor, PoisonPill, Props}
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}

import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import akka.pattern.ask
import akka.stream.ActorMaterializer

class UserEventStream extends Actor {
  val settings = Settings(context.system).KafkaConsumers
  implicit val timeout: Timeout = Timeout(10 seconds)
  implicit val materializer = ActorMaterializer()

  override def preStart(): Unit = {
    super.preStart()
    println("Starting UserEventStream....s")
  }

  override def receive = {
    case "start" =>
      val consumerConfig = settings.KafkaConsumerInfo
      println(s"ConsumerConfig with $consumerConfig")
      startStreamConsumer(consumerConfig("UserEventMessage" + ".c" + 1))
  }

  def startStreamConsumer(config: Map[String, String]) = {
    println(s"startStreamConsumer with config $config")
    val consumerSource = createConsumerSource(config)
    val consumerSink = createConsumerSink()
    val messageProcessor = context.actorOf(Props[MessageProcessor], "messageprocessor")
    println("START: The UserEventStream processing")
    val future =
      consumerSource
        .mapAsync(parallelism = 50) { message =>
          val m = s"${message.record.value()}"
          messageProcessor ? m
        }
        .runWith(consumerSink)
    future.onComplete {
      case Failure(ex) =>
        println("FAILURE : The UserEventStream processing, stopping the actor.")
        self ! PoisonPill
      case Success(ex) =>
    }
  }

  def createConsumerSource(config: Map[String, String]) = {
    val kafkaMBAddress = config("bootstrap-servers")
    val groupID = config("groupId")
    val topicSubscription = config("subscription-topic").split(',').toList
    println(s"Subscriptiontopics $topicSubscription")
    val consumerSettings = ConsumerSettings(context.system, new ByteArrayDeserializer, new StringDeserializer)
      .withBootstrapServers(kafkaMBAddress)
      .withGroupId(groupID)
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")
    Consumer.committableSource(consumerSettings, Subscriptions.topics(topicSubscription: _*))
  }

  def createConsumerSink() = {
    Sink.foreach(println)
  }
}
StreamProcessorSupervisor (this is the supervisor class of the UserEventStream class):
import akka.actor.{Actor, Props}
import akka.pattern.{Backoff, BackoffSupervisor}
import akka.stream.ActorMaterializer
import stream.StreamProcessorSupervisor.StartClient

import scala.concurrent.duration._

object StreamProcessorSupervisor {
  final case object StartSimulator
  final case class StartClient(id: String)

  def props(implicit materializer: ActorMaterializer) =
    Props(classOf[StreamProcessorSupervisor], materializer)
}

class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {
  override def preStart(): Unit = {
    self ! StartClient(self.path.name)
  }

  def receive: Receive = {
    case StartClient(id) =>
      println(s"startCLient with id $id")
      val childProps = Props(classOf[UserEventStream])
      val supervisor = BackoffSupervisor.props(
        Backoff.onFailure(
          childProps,
          childName = "usereventstream",
          minBackoff = 1.second,
          maxBackoff = 1.minutes,
          randomFactor = 0.2
        )
      )
      context.actorOf(supervisor, name = s"$id-backoff-supervisor")
      val userEventStrean = context.actorOf(Props(classOf[UserEventStream]), "usereventstream")
      userEventStrean ! "start"
  }
}
App (the main application class):
import akka.actor.{ActorSystem, Props}
import akka.stream.ActorMaterializer

object App extends App {
  implicit val system = ActorSystem("stream-test")
  implicit val materializer = ActorMaterializer()
  system.actorOf(StreamProcessorSupervisor.props, "StreamProcessorSupervisor")
}
application.conf:
kafka {
  consumer {
    num-consumers = "1"
    c1 {
      bootstrap-servers = "localhost:9092"
      bootstrap-servers = ${?KAFKA_CONSUMER_ENDPOINT1}
      groupId = "localakkagroup1"
      subscription-topic = "test"
      subscription-topic = ${?SUBSCRIPTION_TOPIC1}
      message-type = "UserEventMessage"
      poll-interval = 50ms
      poll-timeout = 50ms
      stop-timeout = 30s
      close-timeout = 20s
      commit-timeout = 15s
      wakeup-timeout = 10s
      max-wakeups = 10
      use-dispatcher = "akka.kafka.default-dispatcher"
      kafka-clients {
        enable.auto.commit = true
      }
    }
  }
}
After running the application, I purposely killed the Kafka broker and found that after 30 seconds the actor stops itself by sending a poison pill. But strangely it doesn't restart as configured in the BackoffSupervisor strategy.
What could be the issue here?
There are two instances of UserEventStream in your code: one is the child actor that the BackoffSupervisor internally creates with the Props that you pass to it, and the other is the val userEventStrean that is a child of StreamProcessorSupervisor. You're sending the "start" message to the latter, when you should be sending that message to the former.
You don't need val userEventStrean, because the BackoffSupervisor creates the child actor. Messages sent to the BackoffSupervisor are forwarded to the child, so to send a "start" message to the child, send it to the BackoffSupervisor:
class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {
  override def preStart(): Unit = {
    self ! StartClient(self.path.name)
  }

  def receive: Receive = {
    case StartClient(id) =>
      println(s"startCLient with id $id")
      val childProps = Props[UserEventStream]
      val supervisorProps = BackoffSupervisor.props(...)
      val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
      supervisor ! "start"
  }
}
The other issue is that when an actor receives a PoisonPill, that's not the same thing as that actor throwing an exception. Therefore, Backoff.onFailure won't be triggered when UserEventStream sends itself a PoisonPill. A PoisonPill stops the actor, so use Backoff.onStop instead:
val supervisorProps = BackoffSupervisor.props(
  Backoff.onStop( // <--- use onStop
    childProps,
    ...
  )
)
val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
supervisor ! "start"

Akka-stream and delegating processing to an actor

I have the following case, where I'm trying to delegate processing to an actor. What I want to happen is that whenever my flow processes a message, it sends it to the actor, and the actor will uppercase it and write it to the stream as a response.
So I should be able to connect to port 8000, type in "hello", have the flow send it to the actor, and have the actor publish it back to the stream so it's echoed back to me uppercased. The actor itself is pretty basic, from the ActorPublisher example in the docs.
I know this code doesn't work, I cleaned up my experiments to get it to compile. Right now, it's just two separate streams. I tried to experiment with merging the sources or the sinks, to no avail.
object Sample {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("sample")
    implicit val materializer = ActorMaterializer()

    val connections: Source[IncomingConnection, Future[ServerBinding]] =
      Tcp().bind("localhost", 8000)

    val filter = Source.actorPublisher[ByteString](Props[Filter])
    val filterRef = Flow[ByteString]
      .to(Sink.ignore)
      .runWith(filter)

    connections runForeach { conn =>
      val echo = Flow[ByteString].map {
        // would like to send 'p' to the actor,
        // and have it publish to the stream
        case p: ByteString => filterRef ! p
      }
    }
  }
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString] with Actor {
  var buf = Vector.empty[ByteString]
  val delay = 0

  def receive = {
    case p: ByteString =>
      if (buf.isEmpty && totalDemand > 0)
        onNext(p)
      else {
        buf :+= ByteString(p.utf8String.toUpperCase)
        deliverBuf()
      }
    case Request(_) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  @tailrec final def deliverBuf(): Unit =
    if (totalDemand > 0) {
      if (totalDemand <= Int.MaxValue) {
        val (use, keep) = buf.splitAt(totalDemand.toInt)
        buf = keep
        use foreach onNext
      } else {
        val (use, keep) = buf.splitAt(Int.MaxValue)
        buf = keep
        use foreach onNext
        deliverBuf()
      }
    }
}
I've had this problem before too; I solved it in a bit of a roundabout way, hopefully you're OK with it. Essentially, it involves creating a sink that immediately forwards the messages it gets to the source actor.
Of course, you could use a direct flow instead (commented out below), but I guess that's not the point of this exercise :)
object Sample {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("sample")
    implicit val materializer = ActorMaterializer()

    val connections: Source[IncomingConnection, Future[ServerBinding]] =
      Tcp().bind("localhost", 8000)

    def filterProps = Props[Filter]

    connections runForeach { conn =>
      val actorRef = system.actorOf(filterProps)
      val snk = Sink.foreach[ByteString] { s => actorRef ! s }
      val src = Source.fromPublisher(ActorPublisher[ByteString](actorRef))
      conn.handleWith(Flow.fromSinkAndSource(snk, src))
      // conn.handleWith(Flow[ByteString].map(s => ByteString(s.utf8String.toUpperCase())))
    }
  }
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString] {
  import akka.stream.actor.ActorPublisherMessage._

  var buf = mutable.Queue.empty[String]
  val delay = 0

  def receive = {
    case p: ByteString =>
      buf += p.utf8String.toUpperCase
      deliverBuf()
    case Request(n) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  def deliverBuf(): Unit = {
    while (totalDemand > 0 && buf.nonEmpty) {
      val s = ByteString(buf.dequeue() + "\n")
      onNext(s)
    }
  }
}
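As an aside, ActorPublisher was deprecated in later Akka versions. A sketch of the same echo idea using Source.queue instead, assuming a newer Akka where Source.queue and preMaterialize are available (my adaptation, not part of the original answer; it does the uppercasing inline rather than in an actor):

import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.util.ByteString

connections runForeach { conn =>
  // Pre-materialize a queue-backed source so the sink side can feed it.
  val (queue, src) = Source.queue[ByteString](64, OverflowStrategy.backpressure).preMaterialize()
  val snk = Sink.foreach[ByteString] { bs =>
    queue.offer(ByteString(bs.utf8String.toUpperCase + "\n")) // offer result ignored in this sketch
  }
  conn.handleWith(Flow.fromSinkAndSource(snk, src))
}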
}

Writing a unit test for Play websockets

I am working on a Scala + Play application utilizing websockets. I have a simple web socket defined as such:
def indexWS = WebSocket.using[String] { request =>
  val out = Enumerator("Hello!")
  val in = Iteratee.foreach[String](println).map { _ =>
    println("Disconnected")
  }
  (in, out)
}
I have verified this works using Chrome's console. The issue I'm having is trying to write a unit test for this. Currently I have this:
"send awk for websocket connection" in {
running(FakeApplication()){
val js = route(FakeRequest(GET,"/WS")).get
status(js) must equalTo (OK)
contentType(js) must beSome.which(_ == "text/javascript")
}
}
However, when running my tests in the Play console, I receive this error, where line 35 corresponds to the line 'val js = route(FakeRequest(GET,"/WS")).get':
NoSuchElementException: None.get (ApplicationSpec.scala:35)
I have not been able to find a good example of unit testing Scala/Play websockets and am confused about how to properly write this test.
Inspired by the answer from bruce-lowe, here is an alternative example with Hookup:
import java.net.URI

import io.backchat.hookup._
import org.specs2.mutable._
import play.api.test._

import scala.collection.mutable.ListBuffer

class ApplicationSpec extends Specification {
  "Application" should {
    "Test websocket" in new WithServer(port = 9000) {
      val hookupClient = new DefaultHookupClient(HookupClientConfig(URI.create("ws://localhost:9000/ws"))) {
        val messages = ListBuffer[String]()

        def receive = {
          case Connected =>
            println("Connected")
          case Disconnected(_) =>
            println("Disconnected")
          case JsonMessage(json) =>
            println("Json message = " + json)
          case TextMessage(text) =>
            messages += text
            println("Text message = " + text)
        }

        connect() onSuccess {
          case Success => send("Hello Server")
        }
      }
      hookupClient.messages.contains("Hello Client") must beTrue.eventually
    }
  }
}
The example assumes the websocket actor replies with the text "Hello Client".
To include the library, add this line to libraryDependencies in build.sbt:
"io.backchat.hookup" %% "hookup" % "0.4.2"
A bit late to answer this one, but in case it's useful, here is how I wrote a test for my websockets. It uses a library from here (https://github.com/TooTallNate/Java-WebSocket):
import org.specs2.mutable._
import play.api.test.Helpers._
import play.api.test._

class ApplicationSpec extends Specification {
  "Application" should {
    "work" in {
      running(TestServer(9000)) {
        val clientInteraction = new ClientInteraction()
        clientInteraction.client.connectBlocking()
        clientInteraction.client.send("Hello Server")
        eventually {
          clientInteraction.messages.contains("Hello Client")
        }
      }
    }
  }
}
And a little utility class to store all messages / events (I'm sure you can enhance it yourself to meet your needs)
import java.net.URI

import org.java_websocket.client.WebSocketClient
import org.java_websocket.drafts.Draft_17
import org.java_websocket.handshake.ServerHandshake

import collection.JavaConversions._
import scala.collection.mutable.ListBuffer

class ClientInteraction {
  val messages = ListBuffer[String]()

  val client = new WebSocketClient(URI.create("ws://localhost:9000/wsWithActor"),
    new Draft_17(), Map("HeaderKey1" -> "HeaderValue1"), 0) {

    def onError(p1: Exception) {
      println("onError")
    }

    def onMessage(message: String) {
      messages += message
      println("onMessage, message = " + message)
    }

    def onClose(code: Int, reason: String, remote: Boolean) {
      println("onClose")
    }

    def onOpen(handshakedata: ServerHandshake) {
      println("onOpen")
    }
  }
}
This is in my SBT file
libraryDependencies ++= Seq(
  ws,
  "org.java-websocket" % "Java-WebSocket" % "1.3.0",
  "org.specs2" %% "specs2-core" % "3.7" % "test"
)
( There is a sample program here https://github.com/BruceLowe/play-with-websockets with a test )
I think you can check this site; it has a pretty good example of testing websockets with Specs.
This is a sample from Typesafe:
/*
 * Copyright (C) 2009-2014 Typesafe Inc. <http://www.typesafe.com>
 */
package play.it.http.websocket

import play.api.test._
import play.api.Application
import scala.concurrent.{Future, Promise}
import play.api.mvc.{Handler, Results, WebSocket}
import play.api.libs.iteratee._
import java.net.URI
import org.jboss.netty.handler.codec.http.websocketx._
import org.specs2.matcher.Matcher
import akka.actor.{ActorRef, PoisonPill, Actor, Props}
import play.mvc.WebSocket.{Out, In}
import play.core.Router.HandlerDef
import java.util.concurrent.atomic.AtomicReference
import org.jboss.netty.buffer.ChannelBuffers

object WebSocketSpec extends PlaySpecification with WsTestClient {

  sequential

  def withServer[A](webSocket: Application => Handler)(block: => A): A = {
    val currentApp = new AtomicReference[FakeApplication]
    val app = FakeApplication(
      withRoutes = {
        case (_, _) => webSocket(currentApp.get())
      }
    )
    currentApp.set(app)
    running(TestServer(testServerPort, app))(block)
  }

  def runWebSocket[A](handler: (Enumerator[WebSocketFrame], Iteratee[WebSocketFrame, _]) => Future[A]): A = {
    val innerResult = Promise[A]()
    WebSocketClient { client =>
      await(client.connect(URI.create("ws://localhost:" + testServerPort + "/stream")) { (in, out) =>
        innerResult.completeWith(handler(in, out))
      })
    }
    await(innerResult.future)
  }

  def textFrame(matcher: Matcher[String]): Matcher[WebSocketFrame] = beLike {
    case t: TextWebSocketFrame => t.getText must matcher
  }

  def closeFrame(status: Int = 1000): Matcher[WebSocketFrame] = beLike {
    case close: CloseWebSocketFrame => close.getStatusCode must_== status
  }

  def binaryBuffer(text: String) = ChannelBuffers.wrappedBuffer(text.getBytes("utf-8"))

  /**
   * Iteratee getChunks that invokes a callback as soon as it's done.
   */
  def getChunks[A](chunks: List[A], onDone: List[A] => _): Iteratee[A, List[A]] = Cont {
    case Input.El(c) => getChunks(c :: chunks, onDone)
    case Input.EOF =>
      val result = chunks.reverse
      onDone(result)
      Done(result, Input.EOF)
    case Input.Empty => getChunks(chunks, onDone)
  }

  /*
   * Shared tests
   */
  def allowConsumingMessages(webSocket: Application => Promise[List[String]] => Handler) = {
    val consumed = Promise[List[String]]()
    withServer(app => webSocket(app)(consumed)) {
      val result = runWebSocket { (in, out) =>
        Enumerator(new TextWebSocketFrame("a"), new TextWebSocketFrame("b"), new CloseWebSocketFrame(1000, "")) |>>> out
        consumed.future
      }
      result must_== Seq("a", "b")
    }
  }

  def allowSendingMessages(webSocket: Application => List[String] => Handler) = {
    withServer(app => webSocket(app)(List("a", "b"))) {
      val frames = runWebSocket { (in, out) =>
        in |>>> Iteratee.getChunks[WebSocketFrame]
      }
      frames must contain(exactly(
        textFrame(be_==("a")),
        textFrame(be_==("b")),
        closeFrame()
      ).inOrder)
    }
  }

  def cleanUpWhenClosed(webSocket: Application => Promise[Boolean] => Handler) = {
    val cleanedUp = Promise[Boolean]()
    withServer(app => webSocket(app)(cleanedUp)) {
      runWebSocket { (in, out) =>
        out.run
        cleanedUp.future
      } must beTrue
    }
  }

  def closeWhenTheConsumerIsDone(webSocket: Application => Handler) = {
    withServer(app => webSocket(app)) {
      val frames = runWebSocket { (in, out) =>
        Enumerator[WebSocketFrame](new TextWebSocketFrame("foo")) |>> out
        in |>>> Iteratee.getChunks[WebSocketFrame]
      }
      frames must contain(exactly(
        closeFrame()
      ))
    }
  }

  def allowRejectingTheWebSocketWithAResult(webSocket: Application => Int => Handler) = {
    withServer(app => webSocket(app)(FORBIDDEN)) {
      implicit val port = testServerPort
      await(wsUrl("/stream").withHeaders(
        "Upgrade" -> "websocket",
        "Connection" -> "upgrade"
      ).get()).status must_== FORBIDDEN
    }
  }

  "Plays WebSockets" should {

    "allow consuming messages" in allowConsumingMessages { _ => consumed =>
      WebSocket.using[String] { req =>
        (getChunks[String](Nil, consumed.success _), Enumerator.empty)
      }
    }

    "allow sending messages" in allowSendingMessages { _ => messages =>
      WebSocket.using[String] { req =>
        (Iteratee.ignore, Enumerator.enumerate(messages) >>> Enumerator.eof)
      }
    }

    "close when the consumer is done" in closeWhenTheConsumerIsDone { _ =>
      WebSocket.using[String] { req =>
        (Iteratee.head, Enumerator.empty)
      }
    }

    "clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
      WebSocket.using[String] { req =>
        (Iteratee.ignore, Enumerator.empty[String].onDoneEnumerating(cleanedUp.success(true)))
      }
    }

    "allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
      WebSocket.tryAccept[String] { req =>
        Future.successful(Left(Results.Status(statusCode)))
      }
    }

    "allow handling a WebSocket with an actor" in {

      "allow consuming messages" in allowConsumingMessages { implicit app => consumed =>
        WebSocket.acceptWithActor[String, String] { req => out =>
          Props(new Actor() {
            var messages = List.empty[String]
            def receive = {
              case msg: String =>
                messages = msg :: messages
            }
            override def postStop() = {
              consumed.success(messages.reverse)
            }
          })
        }
      }

      "allow sending messages" in allowSendingMessages { implicit app => messages =>
        WebSocket.acceptWithActor[String, String] { req => out =>
          Props(new Actor() {
            messages.foreach { msg =>
              out ! msg
            }
            out ! PoisonPill
            def receive = PartialFunction.empty
          })
        }
      }

      "close when the consumer is done" in closeWhenTheConsumerIsDone { implicit app =>
        WebSocket.acceptWithActor[String, String] { req => out =>
          Props(new Actor() {
            out ! PoisonPill
            def receive = PartialFunction.empty
          })
        }
      }

      "clean up when closed" in cleanUpWhenClosed { implicit app => cleanedUp =>
        WebSocket.acceptWithActor[String, String] { req => out =>
          Props(new Actor() {
            def receive = PartialFunction.empty
            override def postStop() = {
              cleanedUp.success(true)
            }
          })
        }
      }

      "allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { implicit app => statusCode =>
        WebSocket.tryAcceptWithActor[String, String] { req =>
          Future.successful(Left(Results.Status(statusCode)))
        }
      }

      "aggregate text frames" in {
        val consumed = Promise[List[String]]()
        withServer(app => WebSocket.using[String] { req =>
          (getChunks[String](Nil, consumed.success _), Enumerator.empty)
        }) {
          val result = runWebSocket { (in, out) =>
            Enumerator(
              new TextWebSocketFrame("first"),
              new TextWebSocketFrame(false, 0, "se"),
              new ContinuationWebSocketFrame(false, 0, "co"),
              new ContinuationWebSocketFrame(true, 0, "nd"),
              new TextWebSocketFrame("third"),
              new CloseWebSocketFrame(1000, "")) |>>> out
            consumed.future
          }
          result must_== Seq("first", "second", "third")
        }
      }

      "aggregate binary frames" in {
        val consumed = Promise[List[Array[Byte]]]()
        withServer(app => WebSocket.using[Array[Byte]] { req =>
          (getChunks[Array[Byte]](Nil, consumed.success _), Enumerator.empty)
        }) {
          val result = runWebSocket { (in, out) =>
            Enumerator(
              new BinaryWebSocketFrame(binaryBuffer("first")),
              new BinaryWebSocketFrame(false, 0, binaryBuffer("se")),
              new ContinuationWebSocketFrame(false, 0, binaryBuffer("co")),
              new ContinuationWebSocketFrame(true, 0, binaryBuffer("nd")),
              new BinaryWebSocketFrame(binaryBuffer("third")),
              new CloseWebSocketFrame(1000, "")) |>>> out
            consumed.future
          }
          result.map(b => b.toSeq) must_== Seq("first".getBytes("utf-8").toSeq, "second".getBytes("utf-8").toSeq, "third".getBytes("utf-8").toSeq)
        }
      }

      "close the websocket when the buffer limit is exceeded" in {
        withServer(app => WebSocket.using[String] { req =>
          (Iteratee.ignore, Enumerator.empty)
        }) {
          val frames = runWebSocket { (in, out) =>
            Enumerator[WebSocketFrame](
              new TextWebSocketFrame(false, 0, "first frame"),
              new ContinuationWebSocketFrame(true, 0, new String(Array.range(1, 65530).map(_ => 'a')))
            ) |>> out
            in |>>> Iteratee.getChunks[WebSocketFrame]
          }
          frames must contain(exactly(
            closeFrame(1009)
          ))
        }
      }
    }

    "allow handling a WebSocket in java" in {
      import play.core.Router.HandlerInvokerFactory
      import play.core.Router.HandlerInvokerFactory._
      import play.mvc.{ WebSocket => JWebSocket, Results => JResults }
      import play.libs.F

      implicit def toHandler[J <: AnyRef](javaHandler: J)(implicit factory: HandlerInvokerFactory[J]): Handler = {
        val invoker = factory.createInvoker(
          javaHandler,
          new HandlerDef(javaHandler.getClass.getClassLoader, "package", "controller", "method", Nil, "GET", "", "/stream")
        )
        invoker.call(javaHandler)
      }

      "allow consuming messages" in allowConsumingMessages { _ => consumed =>
        new JWebSocket[String] {
          @volatile var messages = List.empty[String]
          def onReady(in: In[String], out: Out[String]) = {
            in.onMessage(new F.Callback[String] {
              def invoke(msg: String) = messages = msg :: messages
            })
            in.onClose(new F.Callback0 {
              def invoke() = consumed.success(messages.reverse)
            })
          }
        }
      }

      "allow sending messages" in allowSendingMessages { _ => messages =>
        new JWebSocket[String] {
          def onReady(in: In[String], out: Out[String]) = {
            messages.foreach { msg =>
              out.write(msg)
            }
            out.close()
          }
        }
      }

      "clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
        new JWebSocket[String] {
          def onReady(in: In[String], out: Out[String]) = {
            in.onClose(new F.Callback0 {
              def invoke() = cleanedUp.success(true)
            })
          }
        }
      }

      "allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
        JWebSocket.reject[String](JResults.status(statusCode))
      }

      "allow handling a websocket with an actor" in allowSendingMessages { _ => messages =>
        JWebSocket.withActor[String](new F.Function[ActorRef, Props]() {
          def apply(out: ActorRef) = {
            Props(new Actor() {
              messages.foreach { msg =>
                out ! msg
              }
              out ! PoisonPill
              def receive = PartialFunction.empty
            })
          }
        })
      }
    }
  }
}