Scaling AWS AppSync beyond 1000 rps - amazon-web-services

AWS throttles AppSync to 1,000 rps per API. What can be done if the expected request rate is 50,000 rps?

According to Amazon's documentation, you can't manually increase requests per second on your own. Depending on your account, you may be able to request an increase by selecting "Service limit increase" in the AWS Support Center.
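If your account does qualify, the same increase can also be requested programmatically through the Service Quotas API rather than the Support Center form. A minimal boto3 sketch, assuming credentials are already configured; the quota code below is a placeholder you would need to look up with list_service_quotas first:

```python
import boto3

# Sketch only: file an AppSync quota increase via the Service Quotas API.
quotas = boto3.client("service-quotas")

# List AppSync quotas to find the request-rate quota code for your region.
for quota in quotas.list_service_quotas(ServiceCode="appsync")["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Submit the increase request (L-XXXXXXXX is a placeholder, not a real code).
response = quotas.request_service_quota_increase(
    ServiceCode="appsync",
    QuotaCode="L-XXXXXXXX",
    DesiredValue=50000.0,
)
print(response["RequestedQuota"]["Status"])
```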

Related

AWS CloudWatch Logs Data limitation

Is there any data limit on AWS CloudWatch Logs for sending logs? In my case my application produces about 6 million log records every 3 days. Will CloudWatch Logs be able to handle that much data?
Check out the AWS quotas page. Not sure what you mean by "60lac" (presumably 60 lakh, i.e. 6 million records), but the limits on CloudWatch are more than adequate for the majority of use cases.
There is no published limit on the overall data volume held. There'll be a practical limit somewhere, but it won't be hit by a single AWS customer. If you're using the PutLogEvents API you could be constrained by the limit of 5 requests per second per log stream, in which case consider using more streams or larger batches of events (up to 1 MB).
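For illustration, a rough boto3 sketch of that batching approach; the log group, log stream and the produce_records() generator are placeholders for your own application (note that older API versions also required passing a sequenceToken between calls):

```python
import time
import boto3

logs = boto3.client("logs")
LOG_GROUP = "/my-app/records"      # placeholder log group
LOG_STREAM = "ingest-worker-1"     # placeholder log stream

def send_batch(messages):
    # One PutLogEvents call per batch (up to ~1 MB / 10,000 events) instead
    # of one call per record, to stay under the per-stream request rate.
    events = [{"timestamp": int(time.time() * 1000), "message": m} for m in messages]
    logs.put_log_events(logGroupName=LOG_GROUP, logStreamName=LOG_STREAM, logEvents=events)

buffer = []
for record in produce_records():   # placeholder for your app's record source
    buffer.append(record)
    if len(buffer) >= 5000:
        send_batch(buffer)
        buffer.clear()
if buffer:
    send_batch(buffer)
```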

Does AWS CloudFront stop working after I breach Free Tier limits?

I am using AWS CloudFront and I'm currently on the free tier.
I am close to going over the 2,000,000 HTTP/S requests that are the maximum allowed for the free tier.
Will AWS automatically charge me for the traffic that goes over those 2,000,000 requests (at on-demand prices), or will CloudFront become unavailable and stop receiving requests? How do I upgrade to on-demand?
CloudFront will still be available; however, you will be charged for the requests over the free tier maximum.
Yes, AWS will automatically charge for any usage over the free tier quota. The resources you created in your account will not be affected and will keep working without interruption. You do not need to do any upgrade on your side to cater to the increased requests.
If you are concerned about budget, you can create an AWS Budget with an alert so AWS notifies you before you reach a pre-defined spend. From there you can also define what happens when the budget is reached; for example, you can shut down EC2 instances if they consume more than the allocated budget.
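As a rough sketch of the alerting part with boto3 (the account ID, amount and email address are placeholders; automated actions such as stopping EC2 instances are configured separately as budget actions):

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                   # placeholder account ID
    Budget={
        "BudgetName": "monthly-cost-cap",       # placeholder budget name
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,              # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}  # placeholder
            ],
        }
    ],
)
```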

AWS Lambda to Confluent Cloud latency issues

I am currently using a Basic cluster on Confluent Cloud and I only have one topic with 9 partitions. I have a REST API set up using AWS Lambda which publishes messages to Kafka.
Currently I am stress testing the pipeline with 5k-10k requests per second, and I found that latency shoots up to 20-30 seconds to publish a record of size 1 KB, which is normally around 300 ms for a single request.
I added producer configurations like linger.ms = 500 ms and batch.size = 100 KB. I see some improvement (15-20 seconds per request), but I feel it's still too high.
Is there anything I am missing, or is it something with the Basic cluster on Confluent Cloud? All of the cluster configurations were the defaults.
Identified that the issue was with the API requests getting throttled. As mentioned by Chris Chen, the exponential back-off strategy in the AWS SDK makes the average time shoot up. I have requested AWS to increase the concurrent executions limit; I am sure it should solve the issue.
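For reference, a sketch of the producer tuning mentioned in the question, written against the confluent-kafka Python client; the bootstrap servers, API key/secret and topic name are placeholders for your Confluent Cloud cluster:

```python
from confluent_kafka import Producer

# Create the producer once, outside the handler, so warm Lambda invocations
# reuse it and its batching buffers.
producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",       # placeholder
    "sasl.password": "<API_SECRET>",    # placeholder
    "linger.ms": 500,                   # wait up to 500 ms to fill a batch
    "batch.size": 100_000,              # ~100 KB batches, as tried above
    "compression.type": "lz4",          # optional: smaller batches on the wire
})

def delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

def handler(event, context):
    # Assumes an API Gateway proxy event with the payload in event["body"].
    producer.produce("my-topic", value=event["body"].encode(), callback=delivery)
    # Flush before returning so buffered records are not lost when the
    # Lambda sandbox is frozen between invocations.
    producer.flush()
    return {"statusCode": 200}
```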

AWS API Gateway + Lambda - how to handle 1 million requests per second

We would like to create a serverless architecture for our startup, and we would like to support up to 1 million requests per second and 50 million active users. How can we handle this use case with an AWS architecture?
According to the AWS documentation, API Gateway can handle only 10K requests/s and Lambda can process only 1K invocations/s, and for us this is unacceptable.
How can we overcome this limitation? Can we request this throughput from AWS support, or can we connect somehow to other AWS services (queues)?
Thanks!
Those numbers you quoted are the default account limits. Lambda and API Gateway can handle more than that, but you have to send a request to Amazon to raise your account limits. If you are truly going to receive 1 million API requests per second then you should discuss it with an AWS account rep. Are you sure most of those requests won't be handled by a cache like CloudFront?
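If you want to see what the current defaults on your account actually are, a quick boto3 sketch (assumes credentials are configured):

```python
import boto3

# Account-level Lambda concurrency limit.
limits = boto3.client("lambda").get_account_settings()["AccountLimit"]
print("Concurrent executions:", limits["ConcurrentExecutions"])

# Account-level API Gateway throttle (rate and burst) settings.
print("Throttle settings:", boto3.client("apigateway").get_account()["throttleSettings"])
```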
The gateway is NOT your API server. Lambdas are the bottleneck.
While the gateway can handle 100,000 messages/sec (because it goes through a message queue), Lambdas top out at around 2,200 rps even with scaling (https://amido.com/blog/azure-functions-vs-aws-lambda-vs-google-cloud-functions-javascript-scaling-face-off/).
This differs dramatically from actual API framework implementations, where the scale goes up to 3,500+ rps...
I think you should go with an Application Load Balancer.
It has no fixed RPS quota (it scales with demand) and can potentially be even cheaper for a large number of requests. It does have fewer integrations with AWS services, but in general it has everything you need for a gateway.
https://dashbird.io/blog/aws-api-gateway-vs-application-load-balancer/

Do I get charged for attempting to publish messages to disabled endpoints?

Do I get charged for publishing a message to an endpoint even if that endpoint is saved as disabled in AWS SNS? Do I need to run a clean-up job on my side to make sure that all the endpoints linked to a user are always up to date?
Thanks.
You're charged for each authenticated API request, on top of the additional pricing per notification etc.
It's currently charged at $0.50 per 1 million API requests, regardless of whether the request returns an HTTP 200 or not.
First 1 million Amazon SNS requests per month are free
$0.50 per 1 million Amazon SNS requests thereafter
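On the clean-up question: a small boto3 sketch of checking an endpoint's Enabled attribute before publishing and pruning it if it has been disabled; the endpoint ARN is a placeholder, and note that the attribute lookup is itself a billed API request:

```python
import boto3

sns = boto3.client("sns")
endpoint_arn = "arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/uuid"  # placeholder

attrs = sns.get_endpoint_attributes(EndpointArn=endpoint_arn)["Attributes"]
if attrs.get("Enabled", "false").lower() == "true":
    sns.publish(TargetArn=endpoint_arn, Message="hello")
else:
    # Endpoint was disabled (e.g. the device token expired): remove it.
    sns.delete_endpoint(EndpointArn=endpoint_arn)
```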