I have an EC2 instance that schedules many tasks (using crontab). Some of them are executed every minute, every 5 minutes, and so on.
I want to move all of the cron tasks into an AWS service.
I am trying to figure out which AWS service can give me the best solution.
I found two services that can schedule cron-like tasks:
AWS Data Pipeline
AWS Lambda
Which of them can give me the best solution?
I don't know how you want to define "best", but if you have many tasks, each one will require a separate pipeline, and that will cost you around $1 each.
Lambda, on the other hand, will probably cost much less: you get 1M requests free, and they're $0.20 per million after that. You will also be charged based on the time and memory each task uses while running. There are some limits (5 minutes is the maximum execution time, I think), so you'll have to take that into consideration.
But overall, I think Lambda will be much cheaper to run.
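If you go the Lambda route, a minimal sketch of wiring an existing function to a 5-minute schedule via CloudWatch Events (EventBridge) could look like the following; the function name and rule name here are placeholders, not anything from your setup:

```python
import boto3

# Placeholders -- substitute your own function and rule names.
FUNCTION_NAME = "my-cron-task"          # hypothetical Lambda function
RULE_NAME = "my-cron-task-every-5-min"  # hypothetical rule name

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Create (or update) a schedule rule; rate() and cron() expressions both work.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)

# Allow the rule to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="allow-eventbridge-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the function.
function_arn = lambda_client.get_function(
    FunctionName=FUNCTION_NAME
)["Configuration"]["FunctionArn"]
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "1", "Arn": function_arn}],
)
```

If you want to keep the exact schedules from your crontab, cron() expressions work in place of rate(), e.g. cron(0/5 * * * ? *) for every 5 minutes.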
I got my first job as a BI support engineer working with AWS, and the company has several Glue jobs, which turn out to be a very expensive game, so I want to try to change that: instead of using Glue jobs, use Lambda functions. The question is: how do I change a Glue job to a Lambda function? Can anybody help? Thanks.
In general: you don't.
A Glue job can a) run for far longer, b) consume far more resources, and c) have code and dependencies far exceeding the limits of Lambda. You can't replace a Glue job with a Lambda unless you didn't need a Glue job in the first place, i.e. you operate on few resources, for a short time, with little code. If that is the case, you would need to be a lot more specific about how the current job is integrated: triggers will no longer work, network connectivity might no longer work, etc.
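For the narrow case where a Lambda could work (small data, short runtime, little code), a replacement might look like the sketch below; the bucket and key names are made up, the transform is a stand-in for whatever the Glue job actually does, and this is only viable when the whole dataset fits comfortably within Lambda's memory and execution-time limits:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical locations -- a real job would take these from the event.
    src_bucket, src_key = "raw-data-bucket", "input/records.csv"
    dst_bucket, dst_key = "clean-data-bucket", "output/records.csv"

    # Read the whole object into memory: only viable for small files.
    body = s3.get_object(Bucket=src_bucket, Key=src_key)["Body"].read()
    rows = list(csv.DictReader(io.StringIO(body.decode("utf-8"))))
    if not rows:
        return {"rows_in": 0, "rows_out": 0}

    # A trivial transform standing in for the Glue job's logic.
    cleaned = [r for r in rows if r.get("status") == "active"]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(cleaned)
    s3.put_object(Bucket=dst_bucket, Key=dst_key, Body=out.getvalue())
    return {"rows_in": len(rows), "rows_out": len(cleaned)}
```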
I need to build an API service that would return a response within milliseconds. I am inclined towards using API Gateway backed by Lambda. Say I keep my Lambda warm by invoking it every 5 minutes or so. Is it slower to use API Gateway backed by Lambda instead of a traditional web service hosted on EC2? Does anyone have any experience in this matter?
Your mileage may vary, but after building a bunch of Lambda functions to serve some needs I had, I ended up moving most of them back to EC2 in order to get acceptable performance. In both cases they still use API Gateway in front of them.
I still use Lambda for some functions where a super-fast response is not needed, but for me it wasn't fast enough.
You should build a few examples and test them yourself, however; as I said, your results may be different from mine.
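If you want numbers for your own case, a rough benchmark along these lines is enough to compare the two setups; the endpoint URL is a placeholder to point at your API Gateway stage or your EC2-hosted service:

```python
import statistics
import time
import urllib.request

# Placeholder endpoint -- substitute your own URL.
URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/ping"

latencies = []
for _ in range(100):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

latencies.sort()
print(f"p50: {statistics.median(latencies):.1f} ms")
print(f"p95: {latencies[int(len(latencies) * 0.95)]:.1f} ms")
print(f"max: {latencies[-1]:.1f} ms")
```

Run it cold (after the function has been idle) and warm, since the cold-start penalty is usually what separates Lambda from EC2 here.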
I have an iOS application, which hits a backend we've set up on AWS. Basically, we have a Staging and a Production environment, with some basic Load Balancing across two AZs (in Production), a small RDS instance, a small Cache instance, some SQS queues processing background tasks, and S3 serving up assets.
The app is in beta, so "Production" has a limited set of users. Right now, it's about 100, but it could be double or so in the coming weeks.
My question is: we had been using t2.micro instances on Staging and for our initial beta users on Production, and they seemed to perform well. As far as I can see, CPU usage averages less than 10%, and the maximum seems to be about 25-30%.
Judging by these metrics, is there any reason not to continue using the t2 instances for the time being? Is there anything I'm overlooking as far as how the credit system works, or is it possible that I'm getting "throttled" by the T2s?
For the time being, traffic will be pretty predictable, so there won't be 10K users tomorrow :)
You just need to watch the CPU credit balance metric (CPUCreditBalance) on the instances to make sure you don't get throttled. Set up alarms in CloudWatch for this and you should be fine.
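A minimal sketch of such an alarm with boto3, assuming an existing SNS topic for notifications (the instance ID, topic ARN, and the 75-credit threshold are placeholders to adapt):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="t2-credit-balance-low",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                          # 5-minute datapoints
    EvaluationPeriods=1,
    Threshold=75,                        # alert before credits run dry
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```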
I set the following parameters in my Elastic Beanstalk environment:
Do you think these settings are reasonable?
I didn't understand the breach duration parameter. What does it mean? Is 5 minutes reasonable?
Thanks
This all depends. Different apps need to scale for different reasons, which is why Elastic Beanstalk lets you choose various ways to scale your application. Sure, these settings are reasonable. Do they work for your application? Not sure. Is your app CPU-intensive, or is it just serving static content? What is the number one factor behind latency and 500s?
The breach duration parameter is the time frame of data to look at. For example: during the last 5 minutes, has the CPU been above 70%? Lower the number to be more "real time", and increase the number to be more "safe".
From the docs:
For Measurement period, specify how frequently Amazon CloudWatch measures the metrics for your trigger. Breach duration is the amount of time a metric can extend beyond its defined limit (as specified for Upper threshold and Lower threshold) before the trigger fires.
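These triggers live in the aws:autoscaling:trigger namespace, so you can also set them programmatically rather than through the console. A sketch, assuming boto3 and a hypothetical environment name:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-prod",  # hypothetical environment
    OptionSettings=[
        # Scale on average CPU, measured every minute.
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Period", "Value": "1"},
        # Fire only after the threshold has been breached for 5 minutes.
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "BreachDuration", "Value": "5"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "LowerThreshold", "Value": "20"},
    ],
)
```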
I am looking for a way to pass log events from an AWS application to my company's site.
The thing is, the AWS application is 100% firewalled from everything except a single IP address, because it's an encryption-related service.
I just don't know which service I should use to do this. There are so many services that I really have no idea which one fits.
I think I'd just use a simple message service (SQS); does this make sense? The thing is, there are plenty of events (let's say 1M per day), so I don't want big extra costs for this.
Sorry for the generic question, but I think it's quite concrete: "What is the optimal way to pass event messages out of AWS when the volume is approximately 1M per day, 256 bytes each on average?"
I'd like to connect to an AWS service rather than to any of the EC2 hosts...
On both sides I have Tomcats with the AWS SDK.
I just want to avoid rewriting. Maybe I should do it with S3? The files are immutable, but I could upload files every hour. I don't need real-time events; I just need to have the log files on site for analysis of the user experience, and so that customers can access them. But having the logs in 1M chunks would require further assembling, etc. I am really confused, sorry.
Kinesis is good for streaming event data. S3 is good if you already have files that you want stored.
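To make the trade-off concrete, here is a minimal sketch of the Kinesis route (your producers are Java/Tomcat, but the call is essentially the same in any AWS SDK; the stream name and event shape are placeholders):

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")
STREAM = "app-log-events"  # hypothetical stream name

def send_log_event(event: dict) -> None:
    # ~1M events/day of ~256 bytes is roughly 12 events/second on average,
    # comfortably within a single shard's 1 MB/s, 1000 records/s write limit.
    kinesis.put_record(
        StreamName=STREAM,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "default")),
    )

send_log_event({"user_id": 42, "action": "login", "ts": time.time()})
```

If hourly batches are genuinely fine for your analysis, the S3 approach (accumulate locally, upload one object per hour with put_object) is even simpler and cheaper; Kinesis only earns its keep when you need the events as a stream.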