Writing a unit test for Play websockets - unit-testing

I am working on a Scala + Play application utilizing websockets. I have a simple web socket defined as such:
def indexWS = WebSocket.using[String] { request =>
  val out = Enumerator("Hello!")
  val in = Iteratee.foreach[String](println).map { _ =>
    println("Disconnected")
  }
  (in, out)
}
I have verified this works using Chrome's console. The issue I'm having is trying to write a unit test for this. Currently I have this:
"send awk for websocket connection" in {
running(FakeApplication()){
val js = route(FakeRequest(GET,"/WS")).get
status(js) must equalTo (OK)
contentType(js) must beSome.which(_ == "text/javascript")
}
}
However, when running my tests in the Play console, I receive the error below, where line 35 corresponds to the line 'val js = route(FakeRequest(GET,"/WS")).get':
NoSuchElementException: None.get (ApplicationSpec.scala:35)
I have not been able to find a good example of unit testing Scala/Play WebSockets and am unsure how to write this test properly.

Inspired by the answer from BruceLowe, here is an alternative example with Hookup. (The None.get error above most likely occurs because route() only handles plain HTTP actions and returns None for a WebSocket handler, so the test needs to run against a real server instead.)
import java.net.URI
import io.backchat.hookup._
import org.specs2.mutable._
import play.api.test._
import scala.collection.mutable.ListBuffer
class ApplicationSpec extends Specification {
  "Application" should {
    "Test websocket" in new WithServer(port = 9000) {
      val hookupClient = new DefaultHookupClient(HookupClientConfig(URI.create("ws://localhost:9000/ws"))) {
        val messages = ListBuffer[String]()

        def receive = {
          case Connected =>
            println("Connected")
          case Disconnected(_) =>
            println("Disconnected")
          case JsonMessage(json) =>
            println("Json message = " + json)
          case TextMessage(text) =>
            messages += text
            println("Text message = " + text)
        }

        connect() onSuccess {
          case Success => send("Hello Server")
        }
      }
      hookupClient.messages.contains("Hello Client") must beTrue.eventually
    }
  }
}
The example assumes the WebSocket actor replies with the text "Hello Client".
To include the library, add this line to libraryDependencies in build.sbt:
"io.backchat.hookup" %% "hookup" % "0.4.2"

A bit late to answer this one, but in case it's useful, here is how I wrote a test for my WebSockets. It uses a library from here: https://github.com/TooTallNate/Java-WebSocket
import org.specs2.mutable._
import play.api.test.Helpers._
import play.api.test._
class ApplicationSpec extends Specification {
  "Application" should {
    "work" in {
      running(TestServer(9000)) {
        val clientInteraction = new ClientInteraction()
        clientInteraction.client.connectBlocking()
        clientInteraction.client.send("Hello Server")
        eventually {
          clientInteraction.messages.contains("Hello Client")
        }
      }
    }
  }
}
And here is a little utility class to store all messages and events (I'm sure you can enhance it yourself to meet your needs):
import java.net.URI
import org.java_websocket.client.WebSocketClient
import org.java_websocket.drafts.Draft_17
import org.java_websocket.handshake.ServerHandshake
import collection.JavaConversions._
import scala.collection.mutable.ListBuffer
class ClientInteraction {
  val messages = ListBuffer[String]()

  val client = new WebSocketClient(URI.create("ws://localhost:9000/wsWithActor"),
    new Draft_17(), Map("HeaderKey1" -> "HeaderValue1"), 0) {

    def onError(p1: Exception) {
      println("onError")
    }

    def onMessage(message: String) {
      messages += message
      println("onMessage, message = " + message)
    }

    def onClose(code: Int, reason: String, remote: Boolean) {
      println("onClose")
    }

    def onOpen(handshakedata: ServerHandshake) {
      println("onOpen")
    }
  }
}
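For context, the /wsWithActor endpoint under test might be implemented like this (a sketch assuming Play's actor-based WebSocket API from 2.3; the reply protocol is an assumption chosen to match the test above):
import akka.actor.{Actor, ActorRef, Props}
import play.api.Play.current
import play.api.mvc.WebSocket

class ClientHandler(out: ActorRef) extends Actor {
  def receive = {
    // Assumed protocol: answer the client's greeting
    case "Hello Server" => out ! "Hello Client"
  }
}

def wsWithActor = WebSocket.acceptWithActor[String, String] { request => out =>
  Props(new ClientHandler(out))
}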
This is in my SBT file
libraryDependencies ++= Seq(
  ws,
  "org.java-websocket" % "Java-WebSocket" % "1.3.0",
  "org.specs2" %% "specs2-core" % "3.7" % "test"
)
(There is a sample program with a test here: https://github.com/BruceLowe/play-with-websockets)

You can also check this site; it has a pretty good example of testing WebSockets with specs.
Here is a sample from Typesafe:
/*
* Copyright (C) 2009-2014 Typesafe Inc. <http://www.typesafe.com>
*/
package play.it.http.websocket
import play.api.test._
import play.api.Application
import scala.concurrent.{Future, Promise}
import play.api.mvc.{Handler, Results, WebSocket}
import play.api.libs.iteratee._
import java.net.URI
import org.jboss.netty.handler.codec.http.websocketx._
import org.specs2.matcher.Matcher
import akka.actor.{ActorRef, PoisonPill, Actor, Props}
import play.mvc.WebSocket.{Out, In}
import play.core.Router.HandlerDef
import java.util.concurrent.atomic.AtomicReference
import org.jboss.netty.buffer.ChannelBuffers
object WebSocketSpec extends PlaySpecification with WsTestClient {
sequential
def withServer[A](webSocket: Application => Handler)(block: => A): A = {
val currentApp = new AtomicReference[FakeApplication]
val app = FakeApplication(
withRoutes = {
case (_, _) => webSocket(currentApp.get())
}
)
currentApp.set(app)
running(TestServer(testServerPort, app))(block)
}
def runWebSocket[A](handler: (Enumerator[WebSocketFrame], Iteratee[WebSocketFrame, _]) => Future[A]): A = {
val innerResult = Promise[A]()
WebSocketClient { client =>
await(client.connect(URI.create("ws://localhost:" + testServerPort + "/stream")) { (in, out) =>
innerResult.completeWith(handler(in, out))
})
}
await(innerResult.future)
}
def textFrame(matcher: Matcher[String]): Matcher[WebSocketFrame] = beLike {
case t: TextWebSocketFrame => t.getText must matcher
}
def closeFrame(status: Int = 1000): Matcher[WebSocketFrame] = beLike {
case close: CloseWebSocketFrame => close.getStatusCode must_== status
}
def binaryBuffer(text: String) = ChannelBuffers.wrappedBuffer(text.getBytes("utf-8"))
/**
* Iteratee getChunks that invokes a callback as soon as it's done.
*/
def getChunks[A](chunks: List[A], onDone: List[A] => _): Iteratee[A, List[A]] = Cont {
case Input.El(c) => getChunks(c :: chunks, onDone)
case Input.EOF =>
val result = chunks.reverse
onDone(result)
Done(result, Input.EOF)
case Input.Empty => getChunks(chunks, onDone)
}
/*
* Shared tests
*/
def allowConsumingMessages(webSocket: Application => Promise[List[String]] => Handler) = {
val consumed = Promise[List[String]]()
withServer(app => webSocket(app)(consumed)) {
val result = runWebSocket { (in, out) =>
Enumerator(new TextWebSocketFrame("a"), new TextWebSocketFrame("b"), new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result must_== Seq("a", "b")
}
}
def allowSendingMessages(webSocket: Application => List[String] => Handler) = {
withServer(app => webSocket(app)(List("a", "b"))) {
val frames = runWebSocket { (in, out) =>
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
textFrame(be_==("a")),
textFrame(be_==("b")),
closeFrame()
).inOrder)
}
}
def cleanUpWhenClosed(webSocket: Application => Promise[Boolean] => Handler) = {
val cleanedUp = Promise[Boolean]()
withServer(app => webSocket(app)(cleanedUp)) {
runWebSocket { (in, out) =>
out.run
cleanedUp.future
} must beTrue
}
}
def closeWhenTheConsumerIsDone(webSocket: Application => Handler) = {
withServer(app => webSocket(app)) {
val frames = runWebSocket { (in, out) =>
Enumerator[WebSocketFrame](new TextWebSocketFrame("foo")) |>> out
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
closeFrame()
))
}
}
def allowRejectingTheWebSocketWithAResult(webSocket: Application => Int => Handler) = {
withServer(app => webSocket(app)(FORBIDDEN)) {
implicit val port = testServerPort
await(wsUrl("/stream").withHeaders(
"Upgrade" -> "websocket",
"Connection" -> "upgrade"
).get()).status must_== FORBIDDEN
}
}
"Plays WebSockets" should {
"allow consuming messages" in allowConsumingMessages { _ => consumed =>
WebSocket.using[String] { req =>
(getChunks[String](Nil, consumed.success _), Enumerator.empty)
}
}
"allow sending messages" in allowSendingMessages { _ => messages =>
WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.enumerate(messages) >>> Enumerator.eof)
}
}
"close when the consumer is done" in closeWhenTheConsumerIsDone { _ =>
WebSocket.using[String] { req =>
(Iteratee.head, Enumerator.empty)
}
}
"clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.empty[String].onDoneEnumerating(cleanedUp.success(true)))
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
WebSocket.tryAccept[String] { req =>
Future.successful(Left(Results.Status(statusCode)))
}
}
"allow handling a WebSocket with an actor" in {
"allow consuming messages" in allowConsumingMessages { implicit app => consumed =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
var messages = List.empty[String]
def receive = {
case msg: String =>
messages = msg :: messages
}
override def postStop() = {
consumed.success(messages.reverse)
}
})
}
}
"allow sending messages" in allowSendingMessages { implicit app => messages =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
messages.foreach { msg =>
out ! msg
}
out ! PoisonPill
def receive = PartialFunction.empty
})
}
}
"close when the consumer is done" in closeWhenTheConsumerIsDone { implicit app =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
out ! PoisonPill
def receive = PartialFunction.empty
})
}
}
"clean up when closed" in cleanUpWhenClosed { implicit app => cleanedUp =>
WebSocket.acceptWithActor[String, String] { req => out =>
Props(new Actor() {
def receive = PartialFunction.empty
override def postStop() = {
cleanedUp.success(true)
}
})
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { implicit app => statusCode =>
WebSocket.tryAcceptWithActor[String, String] { req =>
Future.successful(Left(Results.Status(statusCode)))
}
}
"aggregate text frames" in {
val consumed = Promise[List[String]]()
withServer(app => WebSocket.using[String] { req =>
(getChunks[String](Nil, consumed.success _), Enumerator.empty)
}) {
val result = runWebSocket { (in, out) =>
Enumerator(
new TextWebSocketFrame("first"),
new TextWebSocketFrame(false, 0, "se"),
new ContinuationWebSocketFrame(false, 0, "co"),
new ContinuationWebSocketFrame(true, 0, "nd"),
new TextWebSocketFrame("third"),
new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result must_== Seq("first", "second", "third")
}
}
"aggregate binary frames" in {
val consumed = Promise[List[Array[Byte]]]()
withServer(app => WebSocket.using[Array[Byte]] { req =>
(getChunks[Array[Byte]](Nil, consumed.success _), Enumerator.empty)
}) {
val result = runWebSocket { (in, out) =>
Enumerator(
new BinaryWebSocketFrame(binaryBuffer("first")),
new BinaryWebSocketFrame(false, 0, binaryBuffer("se")),
new ContinuationWebSocketFrame(false, 0, binaryBuffer("co")),
new ContinuationWebSocketFrame(true, 0, binaryBuffer("nd")),
new BinaryWebSocketFrame(binaryBuffer("third")),
new CloseWebSocketFrame(1000, "")) |>>> out
consumed.future
}
result.map(b => b.toSeq) must_== Seq("first".getBytes("utf-8").toSeq, "second".getBytes("utf-8").toSeq, "third".getBytes("utf-8").toSeq)
}
}
"close the websocket when the buffer limit is exceeded" in {
withServer(app => WebSocket.using[String] { req =>
(Iteratee.ignore, Enumerator.empty)
}) {
val frames = runWebSocket { (in, out) =>
Enumerator[WebSocketFrame](
new TextWebSocketFrame(false, 0, "first frame"),
new ContinuationWebSocketFrame(true, 0, new String(Array.range(1, 65530).map(_ => 'a')))
) |>> out
in |>>> Iteratee.getChunks[WebSocketFrame]
}
frames must contain(exactly(
closeFrame(1009)
))
}
}
}
"allow handling a WebSocket in java" in {
import play.core.Router.HandlerInvokerFactory
import play.core.Router.HandlerInvokerFactory._
import play.mvc.{ WebSocket => JWebSocket, Results => JResults }
import play.libs.F
implicit def toHandler[J <: AnyRef](javaHandler: J)(implicit factory: HandlerInvokerFactory[J]): Handler = {
val invoker = factory.createInvoker(
javaHandler,
new HandlerDef(javaHandler.getClass.getClassLoader, "package", "controller", "method", Nil, "GET", "", "/stream")
)
invoker.call(javaHandler)
}
"allow consuming messages" in allowConsumingMessages { _ => consumed =>
new JWebSocket[String] {
@volatile var messages = List.empty[String]
def onReady(in: In[String], out: Out[String]) = {
in.onMessage(new F.Callback[String] {
def invoke(msg: String) = messages = msg :: messages
})
in.onClose(new F.Callback0 {
def invoke() = consumed.success(messages.reverse)
})
}
}
}
"allow sending messages" in allowSendingMessages { _ => messages =>
new JWebSocket[String] {
def onReady(in: In[String], out: Out[String]) = {
messages.foreach { msg =>
out.write(msg)
}
out.close()
}
}
}
"clean up when closed" in cleanUpWhenClosed { _ => cleanedUp =>
new JWebSocket[String] {
def onReady(in: In[String], out: Out[String]) = {
in.onClose(new F.Callback0 {
def invoke() = cleanedUp.success(true)
})
}
}
}
"allow rejecting a websocket with a result" in allowRejectingTheWebSocketWithAResult { _ => statusCode =>
JWebSocket.reject[String](JResults.status(statusCode))
}
"allow handling a websocket with an actor" in allowSendingMessages { _ => messages =>
JWebSocket.withActor[String](new F.Function[ActorRef, Props]() {
def apply(out: ActorRef) = {
Props(new Actor() {
messages.foreach { msg =>
out ! msg
}
out ! PoisonPill
def receive = PartialFunction.empty
})
}
})
}
}
}
}

Related

STUN/TURN server configuration for Django channels with WebRTC

I implemented video chat and message chat using Django Channels (following a YouTuber's course).
When I first completed development, I noticed that WebRTC only worked on the local network, and I found out that a STUN/TURN server was needed.
So I created a separate EC2 instance, built the STUN/TURN server on it, and set it in the RTCPeerConnection on the web server.
The STUN/TURN server tests fine in Trickle ICE.
But my video call still works only within the local network, and even within the same network the connection is very slow.
Server overall configuration (web server with an SSL application load balancer):
Coturn config
# /etc/default/coturn
TURNSERVER_ENABLED=1
# /etc/turnserver.conf
# STUN server port is 3478 for UDP and TCP, and 5349 for TLS.
# Allow connection on the UDP port 3478
listening-port=3478
# and 5349 for TLS (secure)
tls-listening-port=5349
# Require authentication
fingerprint
lt-cred-mech
server-name=mysite.com
realm=mysite.com
# Important:
# Create a test user if you want
# You can remove this user after testing
user=myuser:userpassword
total-quota=100
stale-nonce=600
# Path to the SSL certificate and private key. In this example we will use
# the letsencrypt generated certificate files.
cert=/etc/letsencrypt/live/stun.mysite.com/cert.pem
pkey=/etc/letsencrypt/live/stun.mysite.com/privkey.pem
# Specify the allowed OpenSSL cipher list for TLS/DTLS connections
cipher-list="~~~~~-SHA512:~~~~~SHA512:~~~~~-SHA384:~~~~~SHA384:~~~-AES256-SHA384"
# Specify the process user and group
proc-user=turnserver
proc-group=turnserver
main.js on the web server:
let mapPeers = {};
let usernameInput = document.querySelector('#username');
let btnJoin = document.querySelector('#btn-join');
let username;
let webSocket;
const iceConfiguration = {
iceServers: [
{ urls:'stun:stun.mysite.com' },
{
username: 'myuser',
credential: 'userpassword',
urls: 'turn:turn.mysite.com'
},
]
}
function webSocketOnMessage(event) {
let parsedData = JSON.parse(event.data);
let peerUsername = parsedData['peer'];
let action = parsedData['action'];
if (username === peerUsername){
return;
}
let receiver_channel_name = parsedData['message']['receiver_channel_name'];
if (action === 'new-peer'){
createOfferer(peerUsername, receiver_channel_name);
return;
}
if (action === 'new-offer'){
let offer = parsedData['message']['sdp']
createAnswerer(offer, peerUsername, receiver_channel_name);
return;
}
if (action === 'new-answer'){
let answer = parsedData['message']['sdp'];
let peer = mapPeers[peerUsername][0];
peer.setRemoteDescription(answer);
return;
}
// console.log('message : ', message)
}
btnJoin.addEventListener('click', () => {
username = usernameInput.value;
console.log('username : ', username);
if (username === ''){
return;
}
usernameInput.value = '';
usernameInput.disabled = true;
usernameInput.style.visibility = 'hidden';
btnJoin.disabled = true;
btnJoin.style.visibility = 'hidden';
let labelUsername = document.querySelector('#label-username');
labelUsername.innerHTML = username;
let loc = window.location;
let wsStart = 'ws://';
if (loc.protocol === 'https:'){
wsStart = 'wss://';
}
let endpoint = wsStart + loc.host + loc.pathname + 'ws/';
console.log(loc.host)
console.log(loc.pathname)
console.log('endpoint: ', endpoint);
webSocket = new WebSocket(endpoint);
console.log('--------', webSocket)
webSocket.addEventListener('open', (e) => {
console.log('Connection opened!');
sendSignal('new-peer', {});
});
webSocket.addEventListener('message', webSocketOnMessage);
webSocket.addEventListener('close', (e) => {
console.log('Connection closed!', e)
});
})
// Media
let localStream = new MediaStream();
const constraints = {
'video': true,
'audio': true
}
const localVideo = document.querySelector('#local-video');
const btnToggleAudio = document.querySelector('#btn-toggle-audio');
const btnToggleVideo = document.querySelector('#btn-toggle-video');
let userMedia = navigator.mediaDevices.getUserMedia(constraints)
.then(stream => {
localStream = stream;
localVideo.srcObject = localStream;
localVideo.muted = true;
let audioTracks = stream.getAudioTracks();
let videoTracks = stream.getVideoTracks();
audioTracks[0].enabled = true;
videoTracks[0].enabled = true;
btnToggleAudio.addEventListener('click', () => {
audioTracks[0].enabled = !audioTracks[0].enabled;
if (audioTracks[0].enabled){
btnToggleAudio.innerHTML = 'Audio mute'
return;
}
btnToggleAudio.innerHTML = 'Audio unmute'
});
btnToggleVideo.addEventListener('click', () => {
videoTracks[0].enabled = !videoTracks[0].enabled;
if (videoTracks[0].enabled){
btnToggleVideo.innerHTML = 'Video off'
return;
}
btnToggleVideo.innerHTML = 'Video on'
});
})
.catch(error => {
console.log('Error accessing media devices', error);
});
// Message
let btnSendMsg = document.querySelector('#btn-send-msd');
let messageList = document.querySelector('#message-list');
let messageInput = document.querySelector('#msg');
btnSendMsg.addEventListener('click', sendMsgOnclick);
function sendMsgOnclick() {
let message = messageInput.value;
let li = document.createElement('li');
li.appendChild(document.createTextNode('Me: ' + message));
messageList.appendChild(li);
let dataChannels = getDataChannels();
message = username + ': ' + message;
console.log("---------console.log(dataChannels)----------")
console.log(dataChannels)
for (index in dataChannels){
console.log("---------console.log(index)----------")
console.log(index)
dataChannels[index].send(message);
}
messageInput.value = '';
}
function sendSignal(action, message){
let jsonStr = JSON.stringify({
'peer': username,
'action': action,
"message": message,
});
webSocket.send(jsonStr);
}
function createOfferer(peerUsername, receiver_channel_name) {
let peer = new RTCPeerConnection(iceConfiguration);
console.log('=================================')
console.log(peer)
console.log('=================================')
addLocalTracks(peer);
let dc = peer.createDataChannel('channel');
dc.addEventListener('open', () => {
console.log('connection opened!')
})
dc.addEventListener('message', dcOnMessage);
let remoteVideo = createVideo(peerUsername);
setOnTrack(peer, remoteVideo);
mapPeers[peerUsername] = [peer, dc];
peer.addEventListener('iceconnectionstatechange', () => {
let iceConnectionState = peer.iceConnectionState;
if (iceConnectionState === 'failed' || iceConnectionState === 'disconnected' || iceConnectionState === 'closed'){
delete mapPeers[peerUsername];
if (iceConnectionState !== 'closed'){
peer.close();
}
removeVideo(remoteVideo);
}
})
peer.addEventListener('icecandidate', (event) => {
if (event.candidate){
console.log('new ice candidate', JSON.stringify(peer.localDescription))
return;
}
sendSignal('new-offer', {
'sdp': peer.localDescription,
'receiver_channel_name': receiver_channel_name
});
});
peer.createOffer()
.then(o => peer.setLocalDescription(o))
.then(() => {
console.log('Local description set successfully!');
});
}
function createAnswerer(offer, peerUsername, receiver_channel_name) {
let peer = new RTCPeerConnection(iceConfiguration);
console.log('=================================')
console.log(peer)
console.log('=================================')
// let peer = new RTCPeerConnection(null);
addLocalTracks(peer);
let remoteVideo = createVideo(peerUsername);
setOnTrack(peer, remoteVideo);
peer.addEventListener('datachannel', e => {
peer.dc = e.channel;
peer.dc.addEventListener('open', () => {
console.log('connection opened!')
})
peer.dc.addEventListener('message', dcOnMessage);
mapPeers[peerUsername] = [peer, peer.dc];
});
peer.addEventListener('iceconnectionstatechange', () => {
let iceConnectionState = peer.iceConnectionState;
if (iceConnectionState === 'failed' || iceConnectionState === 'disconnected' || iceConnectionState === 'closed'){
delete mapPeers[peerUsername];
if (iceConnectionState !== 'closed'){
peer.close();
}
removeVideo(remoteVideo);
}
})
peer.addEventListener('icecandidate', (event) => {
if (event.candidate){
console.log('new ice candidate', JSON.stringify(peer.localDescription))
return;
}
sendSignal('new-answer', {
'sdp': peer.localDescription,
'receiver_channel_name': receiver_channel_name
});
});
peer.setRemoteDescription(offer)
.then(() => {
console.log('Remote description set successfully for %s.', peerUsername);
return peer.createAnswer();
})
.then(a => {
console.log('Answer created!')
peer.setLocalDescription(a);
})
// peer.createOffer()
// .then(o => peer.setLocalDescription(o))
// .then(() => {
// console.log('Local description set successfully!');
// });
}
function addLocalTracks(peer) {
localStream.getTracks().forEach(track => {
peer.addTrack(track, localStream);
return;
});
}
function dcOnMessage(event) {
let message = event.data;
let li = document.createElement('li')
li.appendChild(document.createTextNode(message));
messageList.appendChild(li);
}
function createVideo(peerUsername) {
let videoContainer = document.querySelector('#video-container');
let remoteVideo = document.createElement('video');
remoteVideo.id = peerUsername + '-video';
remoteVideo.autoplay = true;
remoteVideo.playsInline = true;
let videoWrapper = document.createElement('div');
videoContainer.appendChild(videoWrapper);
videoWrapper.appendChild(remoteVideo);
return remoteVideo;
}
function setOnTrack(peer, remoteVideo) {
let remoteStream = new MediaStream();
remoteVideo.srcObject = remoteStream;
peer.addEventListener('track', async (event) => {
remoteStream.addTrack(event.track, remoteStream);
});
}
function removeVideo(video) {
let videoWrapper = video.parentNode;
videoWrapper.parentNode.removeChild(videoWrapper);
}
function getDataChannels() {
let dataChannels = []
for (peerUsername in mapPeers){
let dataChannel = mapPeers[peerUsername][1];
dataChannels.push(dataChannel);
}
return dataChannels;
}
Django Channels code on the web server:
# Consumer.py
import json
from channels.generic.websocket import AsyncWebsocketConsumer


class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print('connect!!')
        self.room_group_name = 'test_room'
        await self.channel_layer.group_add(
            self.room_group_name,
            self.channel_name
        )
        print(f"room_group_name : {self.room_group_name} and channel_name : {self.channel_name}")
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(
            self.room_group_name,
            self.channel_name
        )
        print('disconnected!')

    async def receive(self, text_data):
        receive_dict = json.loads(text_data)
        print(f"receive_data : {receive_dict}")
        message = receive_dict['message']
        action = receive_dict['action']
        if (action == 'new-offer') or (action == 'new-answer'):
            receiver_channel_name = receive_dict['message']['receiver_channel_name']
            receive_dict['message']['receiver_channel_name'] = self.channel_name
            await self.channel_layer.send(
                receiver_channel_name,
                {
                    'type': 'send.sdp',
                    'receive_dict': receive_dict
                }
            )
            return
        receive_dict['message']['receiver_channel_name'] = self.channel_name
        await self.channel_layer.group_send(
            self.room_group_name,
            {
                'type': 'send.sdp',
                'receive_dict': receive_dict
            }
        )

    async def send_sdp(self, event):
        print('send_sdp!!')
        receive_dict = event['receive_dict']
        await self.send(text_data=json.dumps(receive_dict))

# asgi.py
import os
from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Project.settings')
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack
import video_app.routing

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(
        URLRouter(
            video_app.routing.websocket_urlpatterns
        )
    ),
})

# routing.py
from django.urls import re_path
from . import consumers

websocket_urlpatterns = [
    re_path(r"^ws/$", consumers.ChatConsumer.as_asgi()),
]

# settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("myredis.cache.amazonaws.com", 6379)]
        },
    },
}
What am I doing wrong?

How to access metrics of Alpakka CommittableSource with back off?

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {
private val settings = ConsumerSettings(
context.system,
new ByteArrayDeserializer,
new StringDeserializer)
.withProperties(...)
override def receive: Receive = Actor.emptyBehavior
RestartSource
.withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
.runForeach { handleMessage }
private def consumer() = {
AkkaConsumer
.committableSource(settings, Subscriptions.topics(Set(topic)))
.log(getClass.getSimpleName)
.withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
}
private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
...
}
}
How can I get access to the consumer metrics in this case?
We are using the Java Prometheus client, and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util
import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName
import scala.collection.JavaConverters._
import scala.collection.mutable
class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {
  val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty

  def collect: util.List[MetricFamilySamples] = {
    val server = ManagementFactory.getPlatformMBeanServer
    for {
      attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
      name <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
      attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
    } yield {
      val attrName = attrInfo.getName
      val metricLabels = attrName.split(",").map(_.split("=").toList).collect {
        case "client-id" :: (id: String) :: Nil => ("client-id", id)
      }.toList ++ labels
      val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
      val labelKeys = metricLabels.map(_._1).asJava
      val metric = metrics.getOrElseUpdate(metricName,
        if (metricName.endsWith("_total") || metricName.endsWith("_sum")) {
          new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        } else {
          new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        }: MetricFamilySamples
      )
      val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
      val labelValues = metricLabels.map(_._2).asJava
      metric match {
        case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
        case f: GaugeMetricFamily => f.addMetric(labelValues, metricValue)
        case _ =>
      }
    }
    metrics.values.toList.asJava
  }
}
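To expose these metrics, the collector only needs to be registered with the Prometheus registry once at startup; a minimal sketch (the label map is illustrative):
import io.prometheus.client.CollectorRegistry

// Register once; collect() is then invoked on every Prometheus scrape
val kafkaCollector = new ConsumerMetricsCollector(Map("service" -> "my-consumer"))
CollectorRegistry.defaultRegistry.register(kafkaCollector)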

Akka: Message ordering after Akka restart

The following code example (which you can copy and run) shows a MyParentActor that creates a MyChildActor.
The MyChildActor throws an exception for its first message which causes it to be restarted.
However, what I want to achieve is for "Message 1" to still be processed before "Message 2" when MyChildActor restarts.
Instead, what happens is that Message 1 is re-enqueued at the tail of the mailbox, so Message 2 is processed first.
How do I preserve the original message order when an actor restarts, without having to create my own mailbox, etc.?
object TestApp extends App {
var count = 0
val actorSystem = ActorSystem()
val parentActor = actorSystem.actorOf(Props(classOf[MyParentActor]))
parentActor ! "Message 1"
parentActor ! "Message 2"
class MyParentActor extends Actor with ActorLogging{
var childActor: ActorRef = null
@throws[Exception](classOf[Exception])
override def preStart(): Unit = {
childActor = context.actorOf(Props(classOf[MyChildActor]))
}
override def receive = {
case message: Any => {
childActor ! message
}
}
override def supervisorStrategy: SupervisorStrategy = {
OneForOneStrategy() {
case _: CustomException => Restart
case _: Exception => Restart
}
}
}
class MyChildActor extends Actor with ActorLogging{
override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
message match {
case Some(e) => self ! e
}
}
override def receive = {
case message: String => {
if (count == 0) {
count += 1
throw new CustomException("Exception occurred")
}
log.info("Received message {}", message)
}
}
}
class CustomException(message: String) extends RuntimeException(message)
}
You could mark the failing message with a special envelope and stash everything up to the receipt of that message (see the child actor implementation below). Just define a behaviour where the actor stashes every message except the specific envelope, processes its payload, and then unstashes all other messages and returns to its normal behaviour.
This gives me:
INFO TestApp$MyChildActor - Received message Message 1
INFO TestApp$MyChildActor - Received message Message 2
object TestApp extends App {
var count = 0
val actorSystem = ActorSystem()
val parentActor = actorSystem.actorOf(Props(classOf[MyParentActor]))
parentActor ! "Message 1"
parentActor ! "Message 2"
class MyParentActor extends Actor with ActorLogging{
var childActor: ActorRef = null
@throws[Exception](classOf[Exception])
override def preStart(): Unit = {
childActor = context.actorOf(Props(classOf[MyChildActor]))
}
override def receive = {
case message: Any => {
childActor ! message
}
}
override def supervisorStrategy: SupervisorStrategy = {
OneForOneStrategy() {
case e: CustomException => Restart
case _: Exception => Restart
}
}
}
class MyChildActor extends Actor with Stash with ActorLogging{
override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
message match {
case Some(e) =>
self ! Unstash(e)
}
}
override def postRestart(reason: Throwable): Unit = {
context.become(stashing)
preStart()
}
override def receive = {
case message: String => {
if (count == 0) {
count += 1
throw new CustomException("Exception occurred")
}
log.info("Received message {}", message)
}
}
private def stashing: Receive = {
case Unstash( payload ) =>
receive(payload)
unstashAll()
context.unbecome()
case m =>
stash()
}
}
case class Unstash( payload: Any )
class CustomException(message: String) extends RuntimeException(message)
}

Is it possible to create an "infinite" stream from a database table using Akka Stream

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream which uses a database table as a source, so that whenever a record is added to the table, that record is materialized and pushed downstream.
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)
class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
def id = column[Int]("id")
def value = column[String]("value")
def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
try{
val newRecStream = Source.unfold((0, List[Record]())) { n =>
try {
val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
val r = Source.fromPublisher(db.stream(q.result)).collect {
case rec => println(s"${rec.id}, ${rec.value}"); rec
}.runFold((n._1, List[Record]())) {
case ((id, xs), current) => (current.id, current :: xs)
}
val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
Option(answer, None)
}
catch { case e:Exception => println(e); Option(n, e) }
}
Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
system.shutdown
db.close
}
But my problem is that when I attempt to call flatMapConcat the type I get is Serializable.
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1
try {
val newRecStream = Source.unfoldAsync(0) { n =>
val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
db.run(q.result).map { recs =>
Some(recs.last.id, recs)
}
}
.throttle(1, 1.second, 1, ThrottleMode.shaping)
.flatMapConcat { recs =>
Source.fromIterator(() => recs.iterator)
}
.runForeach { rec =>
println(s"${rec.id}, ${rec.value}")
}
Await.ready(newRecStream, Duration.Inf)
}
catch
{
case ex: Throwable => println(ex)
}
finally {
system.shutdown
db.close
}
This works (I changed the query limit to 1 since I currently only have a couple of items in my database table) - except that once it prints the last row in the table, the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
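Putting the fix in context, the unfoldAsync step then looks like this (assembled from the code above):
val newRecStream = Source.unfoldAsync(0) { n =>
  val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
  db.run(q.result).map { recs =>
    // Keep the previous offset when the poll returns no new rows, so the
    // stream keeps polling instead of failing on recs.last
    val lastId = if (recs.isEmpty) n else recs.last.id
    Some((lastId, recs))
  }
}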
In general, a SQL database is a 'passive' construct and does not actively push changes the way you describe. You can only 'simulate' the 'push' with periodic polling, like:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // break the batch down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (pendingRecords.isEmpty) {
        None
      } else {
        // take one record from pendingRecords into newRec and save the rest into remainingRecords
        Some(remainingRecords, newRec)
      }
    }
  }
Updated: 2/24/2016
Pseudo code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10
try {
val completion = Source
.unfoldAsync(0) { lastRowId =>
val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
db.run(q.result).map { recs =>
Some(recs.last.id, recs)
}
}
.throttle(1, 1.second, 1, ThrottleMode.shaping)
.flatMapConcat { recs =>
Source.fromIterator(() => recs.iterator)
}
.runForeach { rec =>
println(s"${rec.id}, ${rec.value}")
}
// Block forever
Await.ready(completion, Duration.Inf)
} catch {
case ex: Throwable => println(ex)
} finally {
system.shutdown
db.close
}
It repeatedly executes the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time, and sends the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end actually blocks forever.
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
object Infinite extends App{
implicit val system = ActorSystem("Publisher")
implicit val ec = system.dispatcher
implicit val materializer = ActorMaterializer()
case class Record(id: Int, value: String)
try {
val completion = Source
.unfoldAsync(0) { lastRowId =>
Future {
val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
Some(recs.last.id, recs)
}
}
.throttle(1, 1.second, 1, ThrottleMode.Shaping)
.flatMapConcat { recs =>
Source.fromIterator(() => recs.iterator)
}
.runForeach { rec =>
println(rec)
}
Await.ready(completion, Duration.Inf)
} catch {
case ex: Throwable => println(ex)
} finally {
system.shutdown
}
}
Here is working code for infinite database streaming. It has been tested with millions of records being inserted into a PostgreSQL database while the streaming app was running:
package infinite.streams.db
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}
case class Record(id: Int, value: String) {
val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}
object InfiniteStreamingApp extends App {
println("Starting app...")
implicit val system: ActorSystem = ActorSystem("Publisher")
implicit val ec: ExecutionContextExecutor = system.dispatcher
implicit val materializer: ActorMaterializer = ActorMaterializer()
println("Initializing database configuration...")
val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)
import databaseConfig.profile.api._
class Records(tag: Tag) extends Table[Record](tag, "test2") {
def id = column[Int]("c1")
def value = column[String]("c2")
def * = (id, value) <> (Record.tupled, Record.unapply)
}
val db = databaseConfig.db
println("Prime for streaming...")
val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
case (id, value) => (id, value.toUpperCase)
}
val fetchSize = 5
try {
  val done = Source
    .unfoldAsync(0) { lastId =>
      println(s"Fetching next: $fetchSize records with id > $lastId")
      val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
      db.run(query.result.withPinnedSession)
        .map { recs =>
          // Keep the previous id when a poll returns no rows; recs.last
          // would throw once the stream catches up with the table
          val nextId = if (recs.isEmpty) lastId else recs.last.id
          Some(nextId, recs)
        }
    }
    .throttle(5, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .map(x => (x.id, x.content))
    .via(logic)
    .log("*******Post Transformation******")
    // .runWith(Sink.foreach(r => println("SINK: " + r._2)))
    // Use runForeach or runWith(Sink)
    .runForeach(rec => println("REC: " + rec))

  println("Waiting for result....")
  Await.ready(done, Duration.Inf)
} catch {
  case ex: Throwable => println(ex.getMessage)
} finally {
  println("Streaming end successfully")
  db.close()
  system.terminate()
}
}
application.conf
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "INFO"
}
# Load using SlickSession.forConfig("slick-postgres")
postgres3 {
profile = "slick.jdbc.PostgresProfile$"
db {
dataSourceClass = "slick.jdbc.DriverDataSource"
properties = {
driver = "org.postgresql.Driver"
url = "jdbc:postgresql://localhost/testdb"
user = "postgres"
password = "postgres"
}
numThreads = 2
}
}

Spray http client and thousands requests

I want to configure the Spray HTTP client in a way that controls the maximum number of requests sent to the server. I need this because the server I'm sending requests to blocks me if more than 2 requests are sent at once. I get:
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://smallTasks/user/IO-HTTP#151444590]] after [15000 ms]
akka.pattern.AskTimeoutException: Ask timed out on
I need to send thousands of requests, but I get blocked after receiving responses from ~100 requests.
I have this method:
implicit val system = ActorSystem("smallTasks")
implicit val timeout = new Timeout(15.seconds)
import system.dispatcher
def doHttpRequest(url: String): Future[HttpResponse] = {
  (IO(Http) ? HttpRequest(GET, Uri(url))).mapTo[HttpResponse]
}
And here I catch responses and retry (recursively) if the request fails:
def getOnlineOffers(modelId: Int, count: Int = 0): Future[Any] = {
  val result = Promise[Any]()
  AkkaSys.doHttpRequest(Market.modelOffersUrl(modelId)).map(response => {
    val responseCode = response.status.intValue
    if (List(400, 404).contains(responseCode)) {
      result.success("Bad request")
    } else if (responseCode == 200) {
      Try {
        Json.parse(response.entity.asString).asOpt[JsObject]
      } match {
        case Success(Some(obj)) =>
          Try {
            (obj \\ "onlineOffers").head.as[Int]
          } match {
            case Success(offers) => result.success(offers)
            case _ => result.success("Can't find property")
          }
        case _ => result.success("Wrong body")
      }
    } else {
      result.success("Unexpected error")
    }
  }).recover { case err =>
    if (count > 5) {
      result.success("Too many tries")
    } else {
      println(err.toString)
      Thread.sleep(200)
      getOnlineOffers(modelId, count + 1).map(r => result.success(r))
    }
  }
  result.future
}
How do I do this properly? Maybe I need to configure an Akka dispatcher somehow?
You can use spray-client (http://spray.io/documentation/1.2.2/spray-client/) and write your own pipeline:
val pipeline: Future[SendReceive] =
  for (
    Http.HostConnectorInfo(connector, _) <-
      IO(Http) ? Http.HostConnectorSetup("www.spray.io", port = 80)
  ) yield sendReceive(connector)

val request = Get("/segment1/segment2/...")
val responseFuture: Future[HttpResponse] = pipeline.flatMap(_(request))
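To actually cap concurrency at the 2 requests the server tolerates, the host connector can also be tuned in application.conf. To my knowledge the relevant spray settings are max-connections and pipelining (the values here are illustrative):
spray.can.host-connector {
  # at most 2 open connections to the host, i.e. 2 requests in flight
  max-connections = 2
  # send one request per connection at a time
  pipelining = off
}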
To get the HttpResponse:
import scala.concurrent.Await
import scala.concurrent.duration._
val response: HttpResponse = Await.result(responseFuture, ...)
To convert:
import spray.json._
response.entity.asString.parseJson.convertTo[T]
To check:
Try(response.entity.asString.parseJson).isSuccess
Your code has too many brackets; in Scala you can write it more concisely.
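For example, the retry logic above could be written without the explicit Promise by using recoverWith (a sketch assuming the same doHttpRequest, response codes, and play-json imports as the question; the 200 ms pause between retries is omitted):
def getOnlineOffers(modelId: Int, count: Int = 0): Future[Any] =
  AkkaSys.doHttpRequest(Market.modelOffersUrl(modelId)).map { response =>
    response.status.intValue match {
      case 400 | 404 => "Bad request"
      case 200 =>
        // Parse the body; fall through to "Wrong body" if it is not a JSON object
        Try(Json.parse(response.entity.asString).as[JsObject]) match {
          case Success(obj) =>
            (obj \\ "onlineOffers").headOption.flatMap(_.asOpt[Int])
              .getOrElse("Can't find property")
          case _ => "Wrong body"
        }
      case _ => "Unexpected error"
    }
  }.recoverWith {
    case _ if count <= 5 => getOnlineOffers(modelId, count + 1)
    case _               => Future.successful("Too many tries")
  }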