I am very new to Amazon Web Services and would sincerely appreciate any help in finalising the architecture and arriving at a costing schedule. I am designing an AWS-based solution for a dynamic website we are building.

To begin with, I need two High-CPU Medium EC2 instances to act as web servers, one High-CPU Medium EC2 instance for my PostgreSQL database, and one High-CPU Medium EC2 instance to serve as a read replica for that database. I will also have a considerable volume of static content (images, videos, .doc files) for which I am contemplating an S3 bucket, so the site will be a mix of dynamic and static content.

I am expecting a rapid, exponential scale-up of users, from 0 to, say, 1 million in a year, so I will need to scale the EC2 combination described above with traffic. I am contemplating a CloudFormation stack for rapidly scaling my deployment, and a single ELB to route traffic efficiently to start with. I also want to partition (shard) the database by user ID ranges, for example user IDs 1–2000 on one EC2 database instance, 2001–4000 on a second, and so on. Only the web server EC2 instances will be auto scaled; the database EC2 instances will stay up 100% of the time.
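For the user-ID range partitioning described above, here is a minimal sketch of what the routing logic could look like; the shard boundaries and connection strings are hypothetical placeholders, not part of the actual design:

    # Minimal sketch of range-based shard routing by user ID.
    # Shard boundaries and connection strings are hypothetical examples.
    SHARDS = [
        (1, 2000, "postgresql://db-shard-1.internal/app"),
        (2001, 4000, "postgresql://db-shard-2.internal/app"),
    ]

    def shard_dsn_for_user(user_id: int) -> str:
        """Return the connection string for the shard holding this user."""
        for low, high, dsn in SHARDS:
            if low <= user_id <= high:
                return dsn
        raise ValueError(f"No shard configured for user {user_id}")

    # Example: shard_dsn_for_user(1500) -> "postgresql://db-shard-1.internal/app"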
My questions are:
What should my auto scaling strategy be for the web server EC2 instances, and how do I estimate monthly cost when the scaling is so dynamic? Is there any way to predict usage so that I can do a cost break-even analysis?
Do all the EC2 instances (web server and database) necessarily need EBS backing, or will ephemeral storage suffice? I believe the database EC2 instances will need EBS backing. What about the web server EC2 instances?
Suppose I end up scaling up to 100 EC2 instances. Will a single ELB suffice, or do I need multiple ELBs?
How do I work out how many HTTP requests one High-CPU Medium EC2 instance can handle before it falls over?
Can CloudFront be used to serve this kind of dynamic + static site, or is it only for static sites?
Please help me with these questions as I have no clue about cloud solution architecting...
Thanks...
Vikram.
Here is a white paper we (RunSignUp) wrote about how we scaled on Amazon. Our use case was handling the opening of online race registration, where 50,000 runners would want to sign up in less than 10 minutes. It shows how we configured everything, including settings as well as some of the code we developed. We basically had the same issues as you and did not find anything that covered a full use case of building a scalable app and using Amazon to scale with it. Hope it is helpful.
A quick attempt at an answer to get you started:
#1 is too big to answer without writing a book; it depends too much on the system and its design to answer generically.
#2 You'll definitely want EBS backing for your database. The web servers, on the other hand, are probably better off with ephemeral storage: the more stateless and setup-free they are, the easier it is to create and destroy instances dynamically.
#3 As far as I know, there's no upper limit on the number of machines attached to a single ELB. Under extreme load you may see slightly lower latency if you split your site over multiple load balancers. You'll find more information that you can map against your actual architecture here.
#4 Only load testing of your system can tell you that; see the load-test sketch after this answer.
#5 From the CloudFront page:
Amazon CloudFront can be used to deliver your entire website, including dynamic, static and streaming content using a global network of edge locations.
In other words, its architecture does not prevent you from mixing static and dynamic content on your site.
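To put a number on #4, one common approach is a scripted load test against a single instance. Here is a minimal sketch using the Locust load-testing tool; the URL paths and pacing are made-up placeholders, not anything from the original setup:

    # locustfile.py -- minimal load-test sketch (hypothetical paths and timings).
    # Run with: locust -f locustfile.py --host https://your-test-instance.example.com
    from locust import HttpUser, task, between

    class WebsiteUser(HttpUser):
        # Each simulated user waits 1-3 seconds between requests.
        wait_time = between(1, 3)

        @task
        def load_home_page(self):
            self.client.get("/")

        @task
        def load_static_asset(self):
            self.client.get("/static/example.jpg")

Ramp the simulated user count up until latency or error rates degrade; that point is the practical capacity of one instance for your workload.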
I have a small mobile application based on a microservice architecture. Two of the microservices are in Java, one is in Node. My application uses a MySQL database.
Presently I am using a VPS to host my services; all the software (MySQL, Tomcat and PM2) is installed on the same VPS.
Now I am planning to move to AWS and (as I have no prior AWS experience) I am overwhelmed by the services provided by AWS.
Can anyone please help me decide on this?
Since usage is going to be very low at this point, I need a setup that incurs the lowest possible monthly cost.
For that I am thinking of getting one EC2 instance and installing all the software in separate Docker containers (including the DB). Will this approach work, or will I have to get a separate RDS instance? Is Docker required, or can I install all the software directly?
This kind of question is opinion-based and the community discourages it, but there are a few things that are clear enough to explain.
Will I have to get another RDS instance
I would not recommend running the DB in a container: it is harder to scale and to maintain backups in a container, and there is a risk of losing data if volume mounts are not set up properly in the container configuration.
I would recommend going with the RDS free tier, which is free for one year (terms and conditions apply). It will also be easier to upgrade, scale, and maintain backups with RDS in future; a provisioning sketch follows the quoted free-tier terms below.
AWS Free Tier with Amazon RDS
750 hours of Amazon RDS Single-AZ db.t2.micro Instance usage running MySQL, MariaDB, PostgreSQL, Oracle BYOL or SQL Server (running SQL Server Express Edition) – enough hours to run a DB Instance continuously each month
rds-free-tier
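As a rough illustration of provisioning such a free-tier instance programmatically, here is a hedged boto3 sketch; the identifier, credentials, and storage size are placeholder values, not prescribed settings:

    # Sketch: create a free-tier-eligible MySQL RDS instance with boto3.
    # All names and credentials below are hypothetical placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="my-app-db",       # placeholder name
        DBInstanceClass="db.t2.micro",          # free-tier eligible class
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # use Secrets Manager in practice
        AllocatedStorage=20,                    # GB, within free-tier limits
        PubliclyAccessible=False,
        BackupRetentionPeriod=7,                # daily automated backups
    )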
I am thinking to get 1 EC2 instance and install all software in different docker containers (including the DB)
At the initial level it's fine to go with one instance, but here is the general flow (a short sketch follows the list):
Create an ECS cluster
Create an ECR registry and push your images to ECR
Create a task definition for each Docker image
Create a service for each task definition
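To make those steps concrete, here is a hedged boto3 sketch of creating a cluster, registering a task definition, and creating a service. The cluster name, image URI, and port are hypothetical placeholders, and the EC2 instance is assumed to already run the ECS agent and be registered to the cluster:

    # Sketch: minimal ECS cluster + task definition + service with boto3.
    # Names, image URI, and ports are hypothetical placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.create_cluster(clusterName="my-app-cluster")

    # One task definition per Docker image (default bridge networking on EC2).
    ecs.register_task_definition(
        family="node-service",
        containerDefinitions=[{
            "name": "node-service",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/node-service:latest",
            "memory": 512,
            "portMappings": [{"containerPort": 3000, "hostPort": 3000}],
            "essential": True,
        }],
    )

    # One service per task definition, running on the single EC2 container instance.
    ecs.create_service(
        cluster="my-app-cluster",
        serviceName="node-service",
        taskDefinition="node-service",
        desiredCount=1,
        launchType="EC2",
    )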
As mentioned in the comments you can explore EKS, but on AWS I would prefer ECS.
To get started you can explore this article: gentle-introduction-to-how-aws-ecs-works-with-example
At a high level the flow looks like: images in ECR → task definitions → services running on the ECS cluster.
It's been a while since I asked this question, and though I agree it is more of an opinion-based question, I still think this answer is a good starting point for cloud deployments.
Application hosting
For startups and small projects like these, the best approach is to go with serverless Lambda functions. Although it adds some overhead of Lambda handlers in the code, it is worth the effort because it keeps the cost at almost zero until you start to get tangible traffic. A minimal handler sketch follows.
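As an illustration of that overhead, here is a minimal Python Lambda handler sketch for a single route behind an API Gateway proxy integration; the route and payload are hypothetical:

    # Sketch: minimal Lambda handler for an API Gateway proxy integration.
    # Route logic and payload are hypothetical placeholders.
    import json

    def handler(event, context):
        # API Gateway passes the HTTP path and method in the event.
        path = event.get("path", "/")
        if path == "/health":
            body = {"status": "ok"}
        else:
            body = {"message": f"No handler for {path}"}

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }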
Another approach for the application microservices is Docker, but containers are mainly about containerized deployment, making sure code runs the same in the production environment as it does in development. Rather than that, at this stage one can take a small EC2 instance and deploy the microservices as separate processes (PM2 for the Node.js one). Differential scaling is hard at this point, but it doesn't matter: as soon as you see CPU metrics touch the roof, you can move the most heavily used microservices to another machine and put a load balancer in front of them.
K8s is overkill at this point: with one worker node, even though the control plane is free to use, there is just no point until you have a sizable number of worker nodes.
Database Deployment
Stateful deployments are trickier by comparison because there is a chance of data loss. The easier route at this point is managed DB hosting, such as AWS Aurora/RDS, or MongoDB Atlas if you plan to use NoSQL. Managing the DB along with backups yourself is a painful task, especially when you are saving every penny on infra costs.
AWS provides services like ElastiCache, Redis and managed databases, all charged on an hourly basis. But these services are also available as Docker images on Docker Hub. All of the AWS services listed above use an instance, meaning an independent instance for the database and so on. What if one instead starts an EC2 instance and pulls the images for all of those database dependencies? Wouldn't that save a lot of money?
I have used Docker before, and it has images for almost all of the services AWS provides.
EC2 is not free. You can run, for example, MySQL on an EC2 instance. It will be cheaper than using RDS, but you still need to pay for the compute and storage resources it consumes. Even if you run a database on a larger shared EC2 instance you need to account for its storage and CPU cycles, and you might need more or larger instances to run more tasks there.
(As of right now, in the us-east-1 region, a MySQL db.m5.large instance is US$0.171 per hour or US$895 per year paid up front, plus US$0.115 per GB of capacity per month; the same m5.large EC2 instance is US$0.096 per hour or US$501 per year, and storage is US$0.10 per GB per month. [Assuming 1-year, all-up-front, non-convertible reserved instances.])
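As a rough illustration using those list prices, suppose the database needs 100 GB of storage for a year: RDS comes to about US$895 + 100 GB × US$0.115 × 12 ≈ US$1,033, while self-managed MySQL on the m5.large EC2 instance comes to about US$501 + 100 GB × US$0.10 × 12 ≈ US$621. The gap is roughly the price of the management features described below.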
There are good reasons to run databases not-in-Docker. Particularly in a microservice environment, application Docker containers are stateless, replicated, update their images routinely, can be freely deleted, and can be moved across hosts (by deleting and recreating them somewhere else). (In Kubernetes/EKS, look at how a Deployment object works.) None of these are true of databases, which are all about keeping state, cannot be deleted, cannot be moved (the data has to come with), and must be backed up.
RDS has some useful features. You can change the size of your database instance with some downtime, but no data loss. AWS will keep managed snapshots for you, and it's straightforward (if slow) to create a new database from a snapshot of your existing database. Patch updates to the database are automatically applied for you. You can pay Amazon for these features, in effect, or pay your own DBA to do the same tasks for a database running on an EC2 instance.
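To make the snapshot workflow concrete, here is a hedged boto3 sketch of taking a manual snapshot and restoring a new instance from it; the identifiers are hypothetical placeholders:

    # Sketch: manual RDS snapshot and restore with boto3.
    # Instance and snapshot identifiers are hypothetical placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Take a manual snapshot of the existing database instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="my-app-db",
        DBSnapshotIdentifier="my-app-db-before-upgrade",
    )

    # Wait until the snapshot is available before restoring from it.
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="my-app-db-before-upgrade"
    )

    # Restore a brand-new instance from that snapshot (slow but straightforward).
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="my-app-db-copy",
        DBSnapshotIdentifier="my-app-db-before-upgrade",
    )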
None of this is to say you have to use RDS; you do in fact save money on AWS by running the same software on EC2, whether or not it is in Docker. RDS is a reasonable choice in an otherwise all-Docker world, though. The same basic trade-offs apply to other services like ElastiCache (for Redis).
I have a web application with very fluctuating traffic: anywhere from 30–40 users daily to thousands of people simultaneously. It's a ticketing app, so this kind of behaviour is here to stay, and I want to make a strategic choice. I don't want to buy a host with a high configuration because it would just sit idle most of the time. We're running a Node.js server, so we usually run low on RAM. My question is this: what are my options, and how difficult is it to go from a normal VPS to something like Microsoft Azure, Google Cloud, or AWS?
It's difficult to be specific without knowing more about your application architecture but both AWS Lambda and Google App Engine offer 'serverless architecture' and support Node.js. Serverless architectures allow you to host code directly rather than running servers and associated infrastructure. Scaling is given to you by the services, costs are based on consumption and you can configure constraints and alerts to prevent racking up huge unexpected bills. In both instances you would need to front the services with additional Google or AWS services to make them accessible to customers, but these offer a great way to scale and pay only for what you need.
A first step is to offload static content to Amazon S3 (or a similar service). Such services will handle any load and will lessen the load on your web server.
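Here is a hedged boto3 sketch of pushing existing static assets into a bucket; the bucket name, local directory, and cache settings are placeholder choices:

    # Sketch: upload static assets to S3 with cache headers.
    # Bucket name and paths are hypothetical placeholders.
    import mimetypes
    from pathlib import Path

    import boto3

    s3 = boto3.client("s3")
    STATIC_DIR = Path("static")
    BUCKET = "my-app-static-assets"  # placeholder bucket

    for path in STATIC_DIR.rglob("*"):
        if path.is_file():
            content_type, _ = mimetypes.guess_type(str(path))
            s3.upload_file(
                str(path),
                BUCKET,
                str(path.relative_to(STATIC_DIR)),
                ExtraArgs={
                    "ContentType": content_type or "application/octet-stream",
                    "CacheControl": "max-age=86400",  # let browsers/CDN cache for a day
                },
            )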
If the load goes up and down gradually (e.g. over the course of 30 minutes), you can use Auto Scaling to add and remove Amazon EC2 servers based on load metrics; for example, you probably don't need many servers at night. A sketch of such a scaling policy follows.
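As an illustration, here is a hedged boto3 sketch of a target-tracking scaling policy that keeps average CPU near 50%; the group name and target value are hypothetical placeholders:

    # Sketch: target-tracking scaling policy on an existing Auto Scaling group.
    # Group name and target utilisation are hypothetical placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-server-asg",
        PolicyName="keep-cpu-at-50-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # add instances above ~50% CPU, remove below
        },
    )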
However, for handling spiky traffic, rewriting the application as serverless will make it highly resilient, highly scalable and most likely a lot cheaper too!
I am about to launch an iOS app that will be communicating with my custom REST API. Right now I have a single EC2 t2.micro instance running an Apache web server and MySQL (accessed via MySQLi). Before I go ahead and launch it for the public, I want to hear what proper steps should be taken regarding the following.
Should I run two separate EC2 instances? One only for the web server and the other to handle only the database?
How should I approach setting up the database? Should I keep managing MySQL myself, or should I start using Amazon's RDS?
In relation to question two: when the database and/or web server runs out of space, how is that handled so that space is added seamlessly and the database/web server can continue to grow? I also read something about auto scaling.
I am expecting many requests per minute to my web server and want to take precautions.
The answer to these questions largely depends on the requirements of your application, your budget, and on what you decide to manage vs. what you'd prefer to allow AWS to manage. However, I'll answer these as best I can.
1) Yes. Separating the database from the web server (that is, two different EC2 instances) makes sense for a lot of reasons. It lets you tailor resources like memory and CPU to each layer of your application separately, and you don't want the web and database tiers competing for the same resources. Additionally, an issue that forces you to take down one (web or database) will not force you to take down the other; if the database lives on one of the web servers and you need to perform maintenance, your app effectively goes offline, since the database goes down while you update. Ideally you would also protect the database server in a private subnet of your VPC; if the web server and database share a machine, both end up in a public subnet, because the web tier requires access to an internet gateway.
2) It depends. If you want to retain total control of the database server, use an EC2 instance where you keep operating-system-level control. If you want to take advantage of features like Multi-AZ for high availability, or let AWS manage things like updates for you, RDS can be a great option. Cost also plays a role: for read replicas and Multi-AZ you will pay more, but you are buying performance and high availability. So it depends on your requirements. You can find the features of RDS here: RDS Product Details
3) For anything running on an EC2 instance (database or web), or if you decide to use RDS, you may provision and attach additional storage volumes as necessary. The type of storage you select will depend on your performance requirements, your budget, and the kind of workload you expect the database to face. Amazon lists the available storage options, as well as a section on adding more storage, here: RDS Storage Options
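For the "running out of space" case specifically, here is a hedged boto3 sketch of growing an RDS instance's allocated storage in place; the identifier and new size are hypothetical placeholders:

    # Sketch: increase allocated storage on an existing RDS instance.
    # Identifier and new size are hypothetical placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.modify_db_instance(
        DBInstanceIdentifier="my-app-db",
        AllocatedStorage=100,      # grow to 100 GB
        ApplyImmediately=True,     # apply now instead of the next maintenance window
    )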
If you are worried about too many requests overwhelming your EC2 t2.micro instance, consider creating an ELB and setting up an Auto Scaling group, which will let you expand capacity as necessary while distributing traffic so that no single server gets overwhelmed. A sketch of creating such a group follows.
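As a rough illustration, here is a hedged boto3 sketch of creating an Auto Scaling group behind a load balancer target group; the launch template, subnets, and target group ARN are hypothetical placeholders:

    # Sketch: Auto Scaling group attached to an existing load balancer target group.
    # Launch template, subnets, and ARN values are hypothetical placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-server-asg",
        LaunchTemplate={"LaunchTemplateName": "web-server-template", "Version": "$Latest"},
        MinSize=1,
        MaxSize=4,
        DesiredCapacity=1,
        VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
        ],
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,  # seconds to let new instances boot before health checks
    )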
I currently have a Linux, Apache, MySQL, PHP, Postfix web server that I set up on a spare computer at home, and I am exploring transferring it to Amazon Web Services. It's about as simple a personal web server as it gets: I mainly use it for PHP development experiments, I have a blog, it hosts my e-mail, and I also do some C++ development on the server and run some small executables and networked personal applications.
The only traffic the server really sees is me (on a daily basis), plus some web crawlers, and the occasional hit from a Google search.
Is it reasonable to transfer my server to Amazon Web Services, or is AWS specifically targeted at larger-scale servers? What is roughly the cheapest I can expect to pay for this hosting?
I tried using the AWS Simple Monthly Calculator but had a hard time estimating the numbers. Perhaps someone is doing something similar to my plans, and can inform me of what they are paying.
One of the reasons I am interested in AWS is that I am contemplating using my website as cloud storage for a mobile application I am working on, and if that application takes off quickly, I would like to be able to scale to the traffic quickly.
If you need a simple setup, a t1.micro instance is sufficient. The monthly price for such an instance (depending on the location of the server) is about US$15. If you plan to run your server for a longer time, consider reserved instances: you pay a one-time fee and get reduced hourly prices afterwards. If you run your server all the time, you should use a "High Utilization" reserved instance. I think you won't get a lot of traffic or EBS requests, so I would focus on the main cost component, which is EC2 instance hours.
Here is a basic example calculation with the above setup as a start. This calculation does not include Amazon's one-year free tier.
If you need to scale, then you have a lot of options available. You can launch bigger instances if you need it. Have a look at the instance types page to get an overview (also includes details on the Micro instance). If scaling and possible upgrades are a main factor in your decision, then you should consider AWS.
Is it reasonable to transfer my server to Amazon Web Services
I think so. Amazon offers a number of Linux images you can run on a free-tier server at no cost. Bear in mind that, for example, on a free-tier database server you can't connect to your DB from an external IP (i.e. from an external DB tool), but port redirection will work.
I usually use Amazon for demo versions (< 10k users), and it works great.