Given:
val system = ActorSystem("test")
val http = IO(Http)(system)
def fetch = http ! HttpRequest(GET, "http://0.0.0.0:8080/loadtest")
If I were to do:
(0 to 25).foreach(_ => fetch)
I would expect this code to fire off 25 asynchronous requests. What happens instead is that only four requests are sent; they wait for responses, and only once all four responses have come back are the next four sent, and so on until all 25 are processed.
I tried tweaking Spray's configuration to create a custom dispatcher, but this had no effect:
outbound-http-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  throughput = 250
}

spray.can {
  host-connector-dispatcher = outbound-http-dispatcher
  manager-dispatcher = outbound-http-dispatcher
}
How can I configure Akka/Spray to send off all 25 requests asynchronously?
Using: Akka 2.2.3, Spray 1.2.0
You are running into the max-connections configuration setting for the host connector in spray (it is 4 by default).
This is how you change it:
spray.can.host-connector.max-connections=25
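The same setting in application.conf block form (25 here simply matches the example; choose whatever limit your load requires):

```
spray.can {
  host-connector {
    # default is 4; raise it to allow more parallel connections per host
    max-connections = 25
  }
}
```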
Related
I want to send notifications to clients via WebSockets. These notifications are generated by actors, so I'm trying to create a stream of actor messages at server startup and subscribe WebSocket connections to this stream (sending only the notifications emitted after the subscription).
With Source.actorRef we can create a Source of actor messages.
val ref = Source.actorRef[Weather](Int.MaxValue, OverflowStrategy.fail)
  .filter(!_.raining)
  .to(Sink.foreach(println))
  .run()

ref ! Weather("02139", 32.0, true)
But how can I subscribe (Akka HTTP*) WebSocket connections to this source if it has already been materialized?
*WebSocket connections in Akka HTTP require a Flow[Message, Message, Any]
What I'm trying to do is something like
// at server startup
val notifications: Source[Notification, ActorRef] = Source.actorRef[Notification](5, OverflowStrategy.fail)
val ref = notifications.to(Sink.foreach(println)).run()
val notificationActor = system.actorOf(NotificationActor.props(ref))
// on ws connection
val notificationsWS = path("notificationsWS") {
  parameter('name) { name ⇒
    get {
      onComplete(flow(name)) {
        case Success(f) => handleWebSocketMessages(f)
        case Failure(e) => throw e
      }
    }
  }
}
def flow(name: String) = {
  // filter this client's notifications and turn them into WebSocket messages
  val messages = notifications filter { n => n.name equals name } map { n => TextMessage.Strict(n.data) }
  Flow.fromSinkAndSource(Sink.ignore, messages)
}
This doesn't work, because this `notifications` source is not the one that was materialized, so it never emits any elements.
Note: I was using Source.actorPublisher and it worked, but ktoso discourages its use, and I was also getting this error:
java.lang.IllegalStateException: onNext is not allowed when the stream has not requested elements, totalDemand was 0.
You could expose the materialised actorRef to some external router actor using mapMaterializedValue.
Flow.fromSinkAndSourceMat(Sink.ignore, notifications)(Keep.right)
  .mapMaterializedValue(srcRef => router ! srcRef)
The router can keep track of your sources' ActorRefs (death watch can help tidy things up) and forward messages to them.
NB: you're probably already aware, but note that by feeding your flow from Source.actorRef, the flow will not be backpressure-aware (with the overflow strategy you chose, it will simply crash under load).
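A minimal sketch of such a router (the class and message handling here are hypothetical; `Notification` is assumed from the question) might look like:

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Hypothetical router actor: collects the ActorRefs materialized by each
// connection's Source.actorRef and fans notifications out to all of them.
class NotificationRouter extends Actor {
  private var sources = Set.empty[ActorRef]

  def receive = {
    case srcRef: ActorRef =>
      context.watch(srcRef)   // death watch: clean up when a stream terminates
      sources += srcRef
    case Terminated(srcRef) =>
      sources -= srcRef
    case notification: Notification =>
      // each materialized source ref pushes the message into its own stream
      sources.foreach(_ ! notification)
  }
}
```

The materialized ref of Source.actorRef stops when its stream completes, so death watch is enough to prune closed connections.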
I am using DRF with Twilio's SMS sending service. I have added this code to run on object save, which happens in some of my API calls. But as far as I can see, Django waits for the Twilio code to execute (which presumably waits for the response), and it takes around 1-2 seconds to get a response from the Twilio server.
I would like to optimize my API, but I am not sure how to send the request to Twilio asynchronously. This is my code:
def send_sms_registration(sender, instance, **kwargs):
    start = int(round(time.time() * 1000))
    if not instance.ignore_sms:
        client = TwilioRestClient(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
        activation_code = instance.activation_code
        client.messages.create(
            to=instance.phone_number,
            from_=DEFAULT_SMS_NAME,
            body=SMS_REGISTRATION_TEXT + activation_code,
        )
    end = int(round(time.time() * 1000))
    print("send_sms_registration")
    print(end - start)

post_save.connect(send_sms_registration, sender=Person, dispatch_uid="send_sms_registration")
Thanks for suggestions!
The API call is not asynchronous; you need another mechanism to make SMS sending async. You can use any of the following:
django-background-tasks: Simple and doesn't require a worker
python-rq: Great for simple async tasks
celery: A more complete solution
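All three of the above run the task in a separate worker. As an illustration of the underlying idea only (not production advice: threads die with the web process and failed sends are never retried), the blocking call can also be pushed onto a background thread. `run_in_background` is a hypothetical helper, and the commented-out call reuses the names from the question:

```python
import threading

def run_in_background(func, *args, **kwargs):
    """Fire-and-forget: run a blocking callable on a daemon thread
    so the HTTP request/response cycle does not wait for it."""
    t = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t

# In send_sms_registration, instead of calling the Twilio client directly:
# run_in_background(
#     client.messages.create,
#     to=instance.phone_number,
#     from_=DEFAULT_SMS_NAME,
#     body=SMS_REGISTRATION_TEXT + activation_code,
# )
```

A proper task queue (any of the three options above) is still the better choice, since it survives process restarts and gives you retries.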
I am trying to connect an HTTP client to an HTTP service exposed by a server; the source should send a request every second. For that I have created the following partial graphs:
def httpSourceGraph() = {
  Source.fromGraph(GraphDSL.create() { implicit builder =>
    val sourceOutLet = builder.add(
      Source.tick(
        FiniteDuration(0, TimeUnit.SECONDS),
        FiniteDuration(1, TimeUnit.SECONDS),
        HttpRequest(uri = "/test", method = HttpMethods.GET))).out
    // expose outlet
    SourceShape(sourceOutLet)
  })
}

def httpConnFlow() = {
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    val httpSourceFlow = builder.add(Http(system).outgoingConnection(host = "localhost", port = 8080))
    FlowShape(httpSourceFlow.in, httpSourceFlow.out)
  })
}
The graph is composed as:
val response = httpSourceGraph().via(httpConnFlow()).runForeach(println)
If the http server (localhost:8080/test) is up and running, everything works fine; every second I can see the response coming back from the server. But I don't get any response at all if the server is down to begin with, or if it goes down later.
I think it should give me following error:
akka.stream.StreamTcpException: Tcp command [Connect(localhost/127.0.0.1:8080,None,List(),Some(10 seconds),true)] failed
This can be tested with a wrong URL as well (e.g. the domain name stackoverflow1.com and the url "/test").
Thanks for the help.
-Arun
I can propose one way to get the behavior you are seeking. I think what's at the heart of your issue is that the Flow produced by Http().outgoingConnection terminates when a failure is encountered. Once that happens, there is no more downstream demand to pull requests from the Source, and the whole stream stops. If you want something that keeps emitting elements downstream regardless of whether the connection is lost, you might try a host connection pool instead of a single connection. The pool is more resilient to failures of individual connections, and it is also set up from the get-go to send either a Success or a Failure downstream. A simplified version of your flow, using a host connection pool, could be defined as follows:
val source =
  Source.tick(
    1 second,
    5 second,
    (HttpRequest(uri = "/", method = HttpMethods.GET), 1)
  )

val connFlow = Http(system).
  newHostConnectionPool[Int](host = "www.aquto.com", port = 80)

val sink = Sink.foreach[(util.Try[HttpResponse], Int)] {
  case (util.Success(r), _) =>
    r.entity.toStrict(10 seconds) // drain the entity so the pooled connection can be reused
    println(s"Success: ${r.status}")
  case (util.Failure(ex), _) =>
    println(s"Failure: ${ex.getMessage}")
}

source.via(connFlow).to(sink).run
I tested this out, unplugging my network connection in the middle of the test, and this is the output I saw:
Success: 200 OK
Success: 200 OK
Failure: Tcp command [Connect(www.aquto.com/50.112.131.12:80,None,List(),Some(10 seconds),true)] failed
Failure: Tcp command [Connect(www.aquto.com/50.112.131.12:80,None,List(),Some(10 seconds),true)] failed
Failure: Tcp command [Connect(www.aquto.com/50.112.131.12:80,None,List(),Some(10 seconds),true)] failed
Success: 200 OK
Success: 200 OK
So I was reading this article on how to create a proxy/broker for (X)PUB/(X)SUB messaging in ZMQ. There is a nice picture of what the architecture should look like:
But when I look at the XSUB socket description, I don't see how to forward all subscriptions through it, given that its outgoing routing strategy is N/A.
So how should one implement (un)subscription forwarding in ZeroMQ, and what is the minimal user code for such a forwarding application (one that could be inserted between the simple Publisher and Subscriber samples)?
XPUB does receive messages - the only messages it receives are subscriptions from connected subscribers, and these messages should be forwarded upstream as-is via XSUB.
The very simplest way to relay messages is with zmq_proxy:
import zmq

ctx = zmq.Context()

xpub = ctx.socket(zmq.XPUB)
xpub.bind(xpub_url)
xsub = ctx.socket(zmq.XSUB)
xsub.bind(xsub_url)

pub = ctx.socket(zmq.PUB)
pub.bind(pub_url)

zmq.proxy(xpub, xsub, pub)
which will relay messages to/from xpub and xsub. Optionally, you can add a PUB socket to monitor the traffic that passes through in either direction.
If you want user code in the middle to implement extra routing logic, you can do something like the following, which re-implements the inner loop of zmq_proxy:
def broker(ctx):
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    poller = zmq.Poller()
    poller.register(xpub, zmq.POLLIN)
    poller.register(xsub, zmq.POLLIN)
    while True:
        events = dict(poller.poll(1000))
        if xpub in events:
            # (un)subscription frames arrive on XPUB; forward them upstream
            message = xpub.recv_multipart()
            print("[BROKER] subscription message: %r" % message[0])
            xsub.send_multipart(message)
        if xsub in events:
            # published messages arrive on XSUB; forward them downstream
            message = xsub.recv_multipart()
            # print("publishing message: %r" % message)
            xpub.send_multipart(message)
            # insert user code here
full working (Python) example
The code to connect to my WebService (a Lotus Notes database) is generated by Flash Builder via "Data/Connect to WebService...". Everything works fine, but I have a problem increasing the request timeout. The API says you can set the request timeout like this:
_serviceControl.requestTimeout = 300;
On iOS (iPad) everything seems to work fine. But if I run my app on the desktop or on an Android smartphone, this only works if I set the request timeout lower than ~30 seconds. If I don't set the request timeout, or set it higher than 30 seconds and my app has to wait more than 30 seconds for an answer/result, _serviceControl fires a FaultEvent with this message:
body = ""
clientId = "DirectHTTPChannel0"
correlationId = "CDED773E-34E5-56F8-D521-4FFC393D7565"
destination = ""
extendedData = (null)
faultCode = "Server.Error.Request"
faultDetail = "Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL: "http://...?OpenWebService" errorID=2032]. URL: "http://...?OpenWebService"
faultString = "HTTP request error"
headers = (Object)#1
DSStatusCode = 0
messageId = "91D11378-49D4-EDF7-CE7A-4FFCB09EBC47"
rootCause = (flash.events::IOErrorEvent)#2
bubbles = false
cancelable = false
currentTarget = (flash.net::URLLoader)#3
bytesLoaded = 0
bytesTotal = 0
data = ""
dataFormat = "text"
errorID = 2032
eventPhase = 2
target = (flash.net::URLLoader)#3
text = "Error #2032: Stream Error. URL: "http://...?OpenWebService"
type = "ioError"
timestamp = 0
timeToLive = 0
Any idea why this happens?
I had the same problem; requestTimeout didn't work.
In case someone is looking for an answer, this configuration works fine for me:
import flash.net.URLRequestDefaults;
URLRequestDefaults.idleTimeout = 120000; //note this value represents milliseconds (120 secs)
Have a look here for more details : Flex HTTPService times out anyway
Though it seems to be assumed that requestTimeout doesn't work, it actually does... the first time.
After the first request, the requestTimeout is stored in
HTTPService.channelSet.currentChannel.requestTimeout
If you need to change the timeout after that, you have to set it there, e.g. _serviceControl.channelSet.currentChannel.requestTimeout = 300;
To see the specific offending code, see AbstractOperation.getDirectChannelSet(). Even for different instances of HTTPService, it pulls from:
private static var _directChannelSet:ChannelSet;
_directChannelSet is only instantiated once, and its requestTimeout is only set at creation, so even if you later change the requestTimeout on the HTTPService, it won't be reflected in the request.