Tuning Spring JMS for AWS SQS free tier

I'm using Spring JMS to communicate with Amazon SQS queues. I set up a handful of queues and wired up the listeners, but the app isn't sending any messages through them currently. AWS allows 1 million requests per month for free, which I thought should be no problem, but after a month I got billed a small amount for going over that limit.
Is there a way to tune SQS or Spring JMS to keep the requests down?
I'm assuming a request is counted whenever my app polls the queue to check for new messages. Some queues don't need to be near real-time, so I could definitely reduce those requests. I'd appreciate any insight into how SQS and Spring JMS communicate.

"Normal" JMS clients, when polled for messages, don't poll the server - the server pushes messages to the client and the poll is just done locally.
If the SQS client polls the server, that would be unusual, to say the least, but if it's using REST calls, I can see why it would happen.
Increasing the container's receiveTimeout (default 1 second) might help, but without knowing what the client is doing under the covers, it's hard to tell.
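As a rough back-of-the-envelope check on where the free million requests goes (a sketch only; actual SQS billing counts every API call, including sends and deletes, and each consumer thread polls independently):

```python
# Rough request-count arithmetic for the scenario above: with the default
# receiveTimeout of 1 second, an idle listener container issues roughly one
# SQS ReceiveMessage call per second per consumer. Illustrative numbers,
# not an AWS billing formula.

SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # ~30-day month

def monthly_requests(poll_interval_seconds: float, queues: int = 1) -> int:
    """Approximate ReceiveMessage calls per month for idle queues,
    each polled at a fixed interval by a single consumer."""
    return int(SECONDS_PER_MONTH / poll_interval_seconds) * queues

# One queue polled every second already exceeds the 1M free requests:
# monthly_requests(1)  -> 2_592_000
# Polling every 20 seconds (SQS's maximum long-poll wait) cuts an idle
# queue to a small fraction of the free tier:
# monthly_requests(20) -> 129_600
```

With the default 1-second receiveTimeout, a single idle queue is already enough to exceed the free tier, which matches the small overage described in the question.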

Related

Latency on publishing to SNS Topic/SQS Subscription?

We are currently implementing a distributed Spring Boot microservice architecture on Amazon's AWS, where we use SNS/SQS as our messaging system:
Events are published by a Spring Boot service to an SNS FIFO topic using Spring Cloud AWS. The topic hands over the events to multiple SQS queues subscribed to the topic, and the queues are then in turn consumed by different consumer services (again Spring Boot using Spring Cloud AWS).
Everything works as intended, but we are sometimes seeing very high latency on our production services.
Our product isn't released yet (we are currently in testing), meaning we have very, very low traffic on prod, i.e., only a few messages a day.
Unfortunately, after a long period of inactivity we see very high latency before a message is delivered to its subscribers (typically up to 6 seconds, but it can be as high as 60 seconds). Things speed up considerably afterwards, with delivery times dropping below 100 ms for the next messages sent to the topic.
Turning on logging on the SNS topic in AWS revealed that most of the delay for the first message is spent at the SNS part of things, where the SNS dwellTime roughly correlates with the delays we are seeing in message delivery. Spring Cloud AWS seems fine.
Is this something expected? Is there something like a "cold startup" time for idle SNS FIFO topics (as seen when using AWS lambdas)? Will this latency simply go away once we increase the load and heat up the topic? Or is there something we missed configuring?
We are using fairly standard SQS subscriptions, btw, no subscription throttling in place. The Spring Boot services run on a Fargate ECS cluster.
It seems AWS somehow deactivates idle SNS topics. Our workaround is to send a "dummy" keep-alive message to the topic every ten minutes, which keeps the dwellTime reasonably low for us (<500 ms).
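A minimal sketch of that keep-alive workaround, assuming Python with boto3; `build_keepalive_request()` is a helper invented for this example. Note that FIFO topics do require a MessageGroupId and, unless content-based deduplication is enabled on the topic, a unique MessageDeduplicationId per send:

```python
import time
import uuid

def build_keepalive_request(topic_arn: str) -> dict:
    """Build the SNS Publish parameters for one keep-alive message."""
    return {
        "TopicArn": topic_arn,
        "Message": "keep-alive",
        "MessageGroupId": "keep-alive",               # required on FIFO topics
        "MessageDeduplicationId": str(uuid.uuid4()),  # make each send unique
    }

def run_keepalive(topic_arn: str, interval_seconds: int = 600) -> None:
    """Publish a dummy message every ten minutes so the topic never idles."""
    import boto3  # imported lazily so the pure helper works without the SDK
    sns = boto3.client("sns")
    while True:
        sns.publish(**build_keepalive_request(topic_arn))
        time.sleep(interval_seconds)
```

In practice this loop would run as a scheduled task (e.g. a CloudWatch-triggered job) rather than a sleeping thread; the point is only that one publish every ten minutes kept the dwellTime low.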

Queue is needed when I use aws api gateway/lambda as web server?

I am learning about Apache Kafka as a queue.
As I understand it, a queue is needed when running a web server so that burst traffic isn't dropped: the queue buffers data during rush hours. Without a queue, the only alternative is to add enough servers to handle peak traffic.
Is that right?
If so, assume I use AWS API Gateway + Lambda as my web server. Lambda can auto-scale, so my web server never drops burst traffic. Does that mean a queue such as Kafka is not needed in this case?
Of course, if I need a pub/sub architecture, Kafka is still useful.
Is my thinking correct?
API Gateway is typically used for cases where you care about the result of the API call and want to do something with the response. In this case, you need to wait for the Lambda function to finish and return the result so it can be passed back to the client. You don't need a queue because Lambda will scale out and add processes for each request. The limit would be the 10,000 requests per second of API Gateway, or the capacity of any downstream systems like a database.
Kafka is designed for real-time data streaming cases: things where you want to process data immediately, such as transcribing video. It is different from pub/sub. Consumers request data from Kafka. If the process requires merging data from multiple input sources on an ongoing basis, then Kafka is a good fit. To put it another way, if the size of the input has no upper bound, stream processing is a good choice. A similar service available on AWS is Amazon Kinesis.
Pub/sub (such as Amazon SNS, which can easily trigger Lambda functions) is better for use cases where the size of the input, or the size of a useful batch, can be easily defined, but where data should still be processed near real-time. In a pub/sub system, events are published to subscribers rather than subscribers requesting them.
Another option is a queue like Amazon SQS, which can be useful if there is a bottleneck somewhere else in the system, such as database write capacity, or a Lambda concurrency limit. In this architecture, consumers request items from the queue when they are ready to process them, so it is better for use-cases where results are not immediately required.
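The pull model described above can be sketched as follows, assuming Python with boto3. The queue URL and handler are placeholders, and the client is passed in so the loop can also be exercised against a stub; in real use `sqs` would be `boto3.client("sqs")`:

```python
from typing import Optional

def drain_queue(sqs, queue_url: str, handle_message,
                max_batches: Optional[int] = None) -> int:
    """Pull-based consumer: ask SQS for work only when ready to process it.
    `sqs` is a boto3 SQS client (or any stub with the same receive_message /
    delete_message methods). Returns the number of messages processed;
    max_batches=None means poll forever.
    """
    processed = 0
    batches = 0
    while max_batches is None or batches < max_batches:
        batches += 1
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # take a batch when work is available
            WaitTimeSeconds=20,      # long poll; empty queues stay cheap
        )
        for msg in resp.get("Messages", []):
            handle_message(msg["Body"])  # process at the consumer's own pace
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
            processed += 1
    return processed
```

The consumer controls its own intake rate here, which is exactly why a queue suits the bottlenecked-downstream cases mentioned above.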

What is the simplest way to process WebSockets messages on AWS?

What is a good way to deploy a WebSockets client on AWS?
I'm building an app on AWS which needs to subscribe to several WebSockets and several REST sources and process incoming messages (WebSockets) or make periodic requests (REST). I'm trying to go server-less and maximize use of AWS platform services, to eliminate the need to manage VMs, OS patches, etc. (and hopefully reduce cost).
My idea so far is to trigger a Lambda function every time a message arrives. The function can then transform/normalize the message and push it to an SQS queue for further processing by other subsystems.
There would be two types of such Lambda clients, one that subscribes to WebSockets messages and another that makes HTTP request periodically when invoked by a CloudWatch schedule. It would look something like this:
This approach seems reasonable for my REST clients, but I haven't been able to determine whether it's possible to subscribe to WebSockets messages using Lambda. Lambdas can be triggered by IoT, and IoT apparently supports WebSockets now, though only as a transport for the MQTT protocol:
AWS IoT Now Supports WebSockets, Custom Keepalive Intervals, and Enhanced Console
What is the best/easiest/cheapest way to deploy a WebSockets client without deploying an entire EC2 or Docker instance?

Build a Firebase / fanout.io-like service on Amazon Web Services (AWS)

I am using Firebase to notify web browsers (JavaScript clients) about changes on specific topics. I am very happy with it. However, I would really like to use (only) AWS services.
Unfortunately, I have not been able to determine whether it is possible to build such a service on AWS. I am not talking about running Firebase / fanout.io alternatives on EC2 instances; I am talking about utilizing services such as Lambda, DynamoDB Streams, SNS, and SQS.
Are there any socket notification services available or is it possible to achieve something similar by using the provided services?
I looked into this very recently with the same idea, but eventually I came down on just using fanout. AWS does not provide server-push HTTP notification services out of the box.
Lambda functions are billed per 100 ms, so any long-polling against lambda will end up billing for the entirety of the time the client is connected.
SNS does not provide long polling to browsers; the available clients are geared towards mobile, email, HTTP/S, and other Amazon products like Lambda and SQS.
SQS would require a dedicated queue per client as it does not support broadcast.
Now, if the lambda pricing doesn't bother you, you could possibly do this:
Write a lambda function that is called via the API service that opens up a connection to SQS and waits for a message. The key is to start the lambda call from HTTP, but within the function wait on the queue (using Boto, for example, if you are writing this in Python). This code would need to create a queue dedicated to servicing one particular client, uniquely keyed by something like a GUID that is passed in by the client.
Link to the lambda function using the Amazon API service.
Call the lambda function via the API from the browser and wait for it to either receive a message on the dedicated SQS queue or timeout, probably using long-polling both in the API connection and the SQS connection. Fully draining the queue (or at least taking as many messages in a batch up to some limit) would be advisable here as well in order to reduce the number of calls to the API.
Publish your event to the dedicated SQS queue associated with the client. This will require the publisher to know the client's unique key.
Return the event read from SQS as the result of the lambda call.
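A hypothetical sketch of the steps above, in Python since the answer mentions Boto. `client_queue_name()` and the "browser-push-" prefix are invented for illustration; a real handler would also create the queue on first use and handle the timeout case:

```python
def client_queue_name(client_guid: str) -> str:
    """One dedicated queue per client, keyed by the GUID the client passes in."""
    return "browser-push-" + client_guid

def poll_handler(event, context):
    """Lambda entry point, invoked via the API with {"client": "<guid>"}."""
    import boto3  # lazy import so the naming helper works without the SDK
    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(
        QueueName=client_queue_name(event["client"]))["QueueUrl"]
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,  # drain a batch per call, as suggested above
        WaitTimeSeconds=20,      # long poll inside the Lambda invocation
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
    return {"events": [m["Body"] for m in messages]}
```

Note that the 20-second wait inside the function is exactly where the Lambda billing concern above comes from: the function is billed for the whole time it sits waiting on the queue.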
Some problems with this approach:
Lambda pricing - not terribly expensive, but something like fanout is basically free
You would need a dedicated SQS queue per client; cleanup might become a problem
SQS bills on number of calls, which includes checking for a message. Long-polling SQS will alleviate some of this
You would need to write the JavaScript client to call the lambda API endpoint repeatedly in a long-polling fashion
Lambda is currently limited as to the number of concurrently running functions it supports (100 right now but you can contact support to bump that up)
Some benefits with this approach:
SQS queues are persistent, so unless a message is processed successfully it will go back on the queue after the visibility timeout
You can set up CloudWatch to monitor all of the API, Lambda, and SQS events
Other Notes
You could call the SQS APIs directly from the browser by using Lambda to issue temporary security credentials via STS. Receiving a message in JavaScript is documented here: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-examples.html#Receiving_a_message I do not, however, know off the top of my head if you would run into cross-domain issues.
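The temporary-credentials idea could be sketched like this (helper names are hypothetical; the policy scopes the credentials to a single queue, and 900 seconds is the minimum duration GetFederationToken accepts):

```python
import json

def build_receive_policy(queue_arn: str) -> str:
    """IAM policy (as JSON) allowing receive/delete on a single queue only."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": queue_arn,
        }],
    })

def issue_credentials(queue_arn: str) -> dict:
    """Lambda-side: hand the browser short-lived, narrowly scoped credentials."""
    import boto3  # lazy import; the policy builder alone needs no AWS SDK
    sts = boto3.client("sts")
    resp = sts.get_federation_token(
        Name="browser-client",
        Policy=build_receive_policy(queue_arn),
        DurationSeconds=900,  # shortest allowed lifetime
    )
    return resp["Credentials"]  # AccessKeyId / SecretAccessKey / SessionToken
```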
Your only other option, if it must be all AWS, is to use load-balanced EC2 instances running something like fanout as you mentioned.
Using fanout is very little work: it's both extremely affordable and already built and tested.

AWS Sqs ReceiveMessage is not as fast as SendMessage

I have a first web service that sends messages into AWS SQS; it is deployed on its own EC2 instance and runs under IIS 8. It handles 500 requests per second from each of two machines, i.e., 1000 requests per second in total, and could handle more.
I have a second web service deployed on another EC2 instance of the same configuration. It processes the messages stored in SQS. For testing purposes, it currently just receives each message from SQS and deletes it.
An SNS topic notifies the second web service that a message has arrived in SQS, so it can go and receive that message for processing.
But the second web service is not as fast as the first: every time I run the test, messages are left in SQS, when ideally none should remain.
Please guide me on the possible reasons for this and where I should focus.
Thanks in advance.
The receiver has double the work to do since it both receives and deletes the message which is done in two separate calls. You may need double the instances to process the sent messages if you have high volume.
How many messages are you receiving at once? I highly recommend setting MaxNumberOfMessages to 10 and then using DeleteMessageBatch with batches of 10. Not only will this greatly increase throughput, it will cut your SQS bill by about 60%.
Also, I'm confused about the SNS topic. There is no need to have an SNS topic tell the other web service that a message exists; if every message generates a publish to that topic, you are adding a lot of extra work and expense. Instead, you should use long polling, set WaitTimeSeconds to 20, and just always be calling SQS. Even if you get 0 messages for a month, 2 servers constantly long polling will be well within the free tier. If you are above the free tier, the total cost of 2 servers constantly long polling an SQS queue is about $0.13/month.
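A sketch of that batching advice, assuming Python with boto3 (the queue URL is a placeholder; the entry-building helper is pure so it can be shown, and checked, without an AWS account):

```python
def delete_batch_entries(messages: list) -> list:
    """Turn received messages into DeleteMessageBatch entries. Id only has to
    be unique within the batch, so the list index is enough."""
    return [{"Id": str(i), "ReceiptHandle": m["ReceiptHandle"]}
            for i, m in enumerate(messages)]

def receive_and_delete(queue_url: str) -> list:
    """One long-polled batch receive plus one batch delete: 2 SQS calls for
    up to 10 messages, instead of up to 20 individual calls."""
    import boto3  # lazy import so the helper above stays dependency-free
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,  # batch receive, as recommended above
        WaitTimeSeconds=20,      # long poll while the queue is idle
    )
    messages = resp.get("Messages", [])
    if messages:
        sqs.delete_message_batch(QueueUrl=queue_url,
                                 Entries=delete_batch_entries(messages))
    return messages
```

A production version should also check the `Failed` list in the DeleteMessageBatch response, since individual deletes within a batch can fail.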