Amazon EC2 vs Shared Hosting for Website - amazon-web-services

I have a website that averages around half a million views per month. Due to some limitations of shared hosting, I need to consider EC2 or some other VPS. Among VPS options, I feel EC2 is reliable, since I also have to use Amazon SES (keeping both in the same location should help performance). I'm trying to work out whether I can host such a website with reasonable response times on an EC2 t1.micro instance. Please let me know; if not, please suggest some alternatives.

EC2 is in many ways like a VPS, with some tools that you probably won't find at most VPS providers.
You will probably not handle half a million views on a micro unless they are very spread out over the day. Micros use a burstable architecture: you get about 2 ECU worth of computing power for short bursts, and then you get throttled to well under 1 ECU for a while afterwards.

Related

Optimizing latency between application server (EC2) and RDS

Here's how the story goes.
We started transforming a monolith, single-machine, e-commerce application (Apache/PHP) to cloud infrastructure. Obviously, the application and the database (MySQL) were on the same machine.
We decided to move to AWS. As the first step of the transformation, we decided to split the database and the application: hosting the application on a c4.xlarge machine, and hosting the database on RDS Aurora MySQL on a db.r5.large machine, with default options.
This setup performed well; in particular, database performance improved noticeably.
Unfortunately, when traffic spiked, we started experiencing long response times. It looked like RDS, although really fast at executing queries, wasn't returning results over the network to the EC2 machine fast enough.
That was our conclusion after an in-depth analysis of the setup, including Apache/MySQL/PHP tuning parameters: the delayed response time was due to network latency between the EC2 and RDS/Aurora machines, even though both are in the same region.
Before adding additional resources (e.g. ElastiCache), we'd first like to look into any default configuration we can tune to solve this problem.
What do you think we missed there?
One of the biggest strengths of the cloud is scalability, and you should always design your application to take advantage of it. It sounds like your RDS instance is getting choked by the number of requests rather than by the processing time of the queries. So rather than one big instance doing all the work, go with several small instances behind load balancing. With load balancers you also get away from a single point of failure, because you can have replicas of your database, and they can even be placed in different AZs.
Here is a blog post you can read on the topic:
https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/
Good luck on your AWS journey.
The best answer to your question is to use read replicas, but remember that only read requests can be sent to read replicas, so you will need to design your application accordingly.
Also, for some cost savings, you should try Aurora Serverless.
One more option is to pass traffic between EC2 and RDS over the private network rather than using the public internet to connect EC2 to RDS; connecting over the public internet can be one of the mistakes that might be happening here.
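To make the read-replica routing concrete, here is a minimal sketch of application-side read/write splitting with a standard MySQL client (pymysql); the Aurora cluster endpoints, credentials and queries are placeholders, and the real endpoints come from your RDS console:

```python
import pymysql

# Placeholder Aurora endpoints: the writer endpoint takes all writes,
# the reader endpoint load-balances across the read replicas.
WRITER_ENDPOINT = "mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.eu-west-1.rds.amazonaws.com"

def get_connection(readonly=False):
    """Route read-only work to the reader endpoint, everything else to the writer."""
    host = READER_ENDPOINT if readonly else WRITER_ENDPOINT
    return pymysql.connect(host=host, user="app", password="secret", database="shop")

# Write path: goes to the writer.
conn = get_connection(readonly=False)
try:
    with conn.cursor() as cur:
        cur.execute("UPDATE products SET stock = stock - 1 WHERE id = %s", (42,))
    conn.commit()
finally:
    conn.close()

# Read path: spread over the replicas via the reader endpoint.
conn = get_connection(readonly=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id, name, price FROM products WHERE id = %s", (42,))
        row = cur.fetchone()
finally:
    conn.close()
```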

Suitable Google Cloud Compute instance for Website Hosting

I am new to cloud computing, but want to use it to host a website I am building. The website will be a data analytics site, and each user will be interacting with a MySQL database and reading data from text files. I want to be able to accommodate about 500 users at a time. The site will likely have around 1000-5000 users fully scaled. I have chosen GCP, and am wondering if the e2-standard-2 VM instance would be enough to get started. I will also be using a GCP HA MySQL server; I am thinking that 2 vCPUs and 5 GB memory will be enough, with 50 GB high-availability SSD storage. Any suggestions would be appreciated. Also, is there any other service I will need? Thank you!!
Exact sizing up front is largely irrelevant. On Google Cloud Platform you have real-time monitoring of CPU and RAM usage, so if your website gains more users you can upgrade or downgrade your CPU or RAM with two or three mouse clicks. Start small and upgrade later if you see CPU or RAM getting close to 100% usage. Start with an N1-series micro instance with 600 MB RAM.

AWS EC2 Immediate Scaling Up?

I have a web service running on several EC2 boxes. Based on the Cloudwatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?
Not by using Auto Scaling, at least not instant in the way you describe. You could make it much faster, however, by building your own modified AMI in which you bake the JAR and the latest OS patches. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
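If you would rather script the baking step directly instead of (or alongside) Packer, a minimal boto3 sketch might look like this; the region, instance ID and image name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create an AMI from an instance that already has the JAR and OS patches applied.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: the pre-configured build instance
    Name="app-server-2024-01-01",       # placeholder image name
    Description="Pre-baked application server image",
)
image_id = resp["ImageId"]

# Wait until the image is available before pointing your launch configuration at it.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print("AMI ready:", image_id)
```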
Alternatively, you can manage it yourself by keeping servers switched off and starting them with custom Lambda scripts triggered by CloudWatch alarms. But since stopped servers aren't exactly free either, I would recommend against that for cost reasons.
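As a rough illustration of that do-it-yourself route, a Lambda handler along these lines could start pre-provisioned, stopped instances when a CloudWatch alarm on your latency metric fires; the instance IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Pre-provisioned "cold" servers that stay stopped until needed (placeholder IDs).
STANDBY_INSTANCE_IDS = ["i-0aaa111122223333a", "i-0bbb444455556666b"]

def lambda_handler(event, context):
    """Triggered by a CloudWatch alarm, e.g. on the latency metric."""
    resp = ec2.describe_instances(InstanceIds=STANDBY_INSTANCE_IDS)
    stopped = [
        instance["InstanceId"]
        for reservation in resp["Reservations"]
        for instance in reservation["Instances"]
        if instance["State"]["Name"] == "stopped"
    ]
    if stopped:
        # Only instances that are currently stopped get started.
        ec2.start_instances(InstanceIds=stopped)
    return {"started": stopped}
```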
Before you venture into the journey of auto scaling your infrastructure and spending the time and effort, perhaps you should do a little analysis of the traffic pattern day over day, week over week and month over month, and see whether it's even necessary. Try answering some of these questions.
What is the highest traffic your app has ever handled? How did the servers fare under that load? How was the user response time?
When does your traffic ramp up or hit its peak? Some apps get traffic during business hours, others in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts are averaging 20% CPU. If the requests triple to 3k requests/min, do you see around 60-70% average CPU? That is a good indication that your app's usage is fairly predictable and can scale linearly by adding more hosts. But if you've never seen traffic burst like that, there's no point over-provisioning.
Unless you have a Zynga-like application where you can see a large amount of traffic at once, better understanding your traffic pattern and throwing in an additional host as insurance could be enough. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one solution would be to containerize your application with Docker or create your own AMI like others have suggested; it will still take a few minutes to boot them up. The next option is to keep hosts on standby and add them to your load balancers using scripts (or Lambda functions) that watch metrics you define (I'm assuming your app is running behind load balancers).
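A hedged sketch of that standby-host approach, assuming an Application Load Balancer target group; the target group ARN and instance ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Placeholders: the real ARN and ID come from your own setup.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
STANDBY_ID = "i-0123456789abcdef0"

def bring_standby_online():
    # Start the standby host, wait until it is running, then put it in rotation.
    ec2.start_instances(InstanceIds=[STANDBY_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY_ID])
    elbv2.register_targets(TargetGroupArn=TARGET_GROUP_ARN, Targets=[{"Id": STANDBY_ID}])

def take_standby_offline():
    # Drain it from the load balancer and stop it again when traffic subsides.
    elbv2.deregister_targets(TargetGroupArn=TARGET_GROUP_ARN, Targets=[{"Id": STANDBY_ID}])
    ec2.stop_instances(InstanceIds=[STANDBY_ID])
```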
Good luck.

Will my current AWS architecture scale to 20,000 visitors per day? How can I improve it?

The site I'm working on will potentially get 20,000 visitors per day. It's no guarantee, but it's supposed to be used every day by each employee in an organisation.
In the past I've just used a single t2.micro EC2 instance with an attached EBS volume to host sites, which has always been enough because these sites don't get a lot of traffic. But with 20,000 visitors a day how could I improve my AWS architecture to scale?
The site is going to have a profile for each user, including a profile picture - so potentially 20,000 image files. Should I be writing these to an S3 bucket instead of to the EBS?
Would a t2.micro ec2 instance cope with the scale, or should I be using a t2.small, t2.medium or even t2.large?
My MySQL databases are currently on the EBS volume, but should I use RDS?
All the users are in the UK, so I'm assuming using CloudFront is overkill?
You're right to assume CloudFront is overkill since all your users are localized to the UK.
Update: using a CDN will take a lot of stress off your servers by caching the files rather than processing them each time a call is made.
Look at it this way, if you get 100,000 hits a day, and 90% of those hits are cached and served by the CDN, then your server only has to process 10,000 hits a day. That could be the difference between needing a m4.xlarge versus just needing a t2.small.
Use the Ireland region (and soon you can copy over to the UK region)
If you want to keep your database on your instance, I would highly recommend a bit bigger one. As a quick and easy solution, start up the smallest T-series instance with EBS, beta test with 1000-5000 users, and see how that goes. Notify the select group that all their crap will disappear, so they don't invest a bunch o' time.
Next, get your analytics on the system and see if it will still work with 4-5 times more users. For SQL DB stuff you'll eventually want an M-series instance, I believe.
Also, you could always create a load-balanced fleet. You do this in Elastic Beanstalk by picking Load Balanced instead of Single Instance. Create an Auto Scaling group and boom sauce - check that off.
As for the images, yeah, I would recommend S3. You don't really want to dump the whole lot onto one box, because DB, hits and I/O all on one instance is a lot.
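For the profile pictures, a minimal boto3 sketch of writing them to S3 instead of the EBS volume; the bucket name and key layout are assumptions:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-site-profile-pics"  # assumed bucket name

def save_profile_picture(user_id, local_path):
    # One object per user, e.g. profiles/123.jpg (the layout is just an example).
    key = f"profiles/{user_id}.jpg"
    s3.upload_file(local_path, BUCKET, key, ExtraArgs={"ContentType": "image/jpeg"})
    return key

def profile_picture_url(key, expires=3600):
    # Hand out a time-limited pre-signed URL instead of serving the file yourself.
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=expires
    )
```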
Lastly, if you do plan on going to the UK region (not positive whether that's been rolled out yet), I would recommend sectioning off all the pieces of your application. It's really good practice to use all the resources they provide.
For a very fault tolerant system:
EC2 fleet (m or c series) with an ELB
S3 the images
RDS the users
CloudWatch the stats
Tenancy for the users with IAM groups
Authenticate with STS or AD or whatever (kinda been in Cognito only recently)
Store their session and authenticated crap in ElastiCache - Redis (see the sketch below)
Keep tabs on them with Kinesis (optional)
And let them search each other with CloudSearch (also optional)
Boss system right there!
And that's if you want to spend a bunch o' cash but have a sweet, sweet system. If you want to spend next to nothing, make it serverless. It's a broad question with hundreds of possible combinations, so this is up to interpretation.
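For the ElastiCache Redis session piece mentioned above, a minimal sketch using the standard redis client; the endpoint, TTL and key scheme are assumptions:

```python
import json
import redis

# Placeholder ElastiCache endpoint; the real one comes from the ElastiCache console.
r = redis.Redis(host="my-sessions.abc123.0001.euw2.cache.amazonaws.com", port=6379)

SESSION_TTL = 3600  # expire idle sessions after an hour (assumption)

def save_session(session_id, data):
    # Store the session blob with a TTL so stale sessions clean themselves up.
    r.setex(f"session:{session_id}", SESSION_TTL, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```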
Hope this helps!

EC2 Architecture design for Website

I have a site that I will be launching soon. Not entirely sure how heavy the traffic will get.
I am using Django+Nginx+Gunicorn+Mysql. There will be support for SSL/HTTPS.
As a starting point, I am thinking of having two micro instances balanced by Elastic Load Balancing.
The MySql database will be on one of the instances. If traffic gets heavy, I might move static files to a CDN. The micro instances serve as front-end servers responsible for only churning out HTML/JSON and serving static files. Static files are mainly CSS/js and several images (not many). I foresee database will be read-heavy and less writes.
Questions:
Assuming the traffic rises to 100k page views per day, will the 2 micro instances suffice?
Do I have to move the database to a separate instance? And what instance type would be good?
What if the traffic is only 1k page views per day?
How many gunicorn processes to run on a micro instance?
In general, what type of metrics will help me determine what kind and how many instances I would need? What is the methodology to decide what kind of architecture I would need?
Thanks a lot!
Completely dependent on how dynamic the site is going to be. Do users generate content on the service, or is it mostly static? If the former, you're going to gain a lot from putting stuff like avatars, images etc. into S3 and putting that behind CloudFront. Same with your static files... keeping your servers stateless will allow you to scale with ease.
At 100k page views a day you will definitely struggle with just micros... you really should only use those in a development environment; they aren't meant to handle things like serving users. I'd use, at a minimum, a single small instance in front of a Load Balancer. That may sound strange, but you'll be able to throw in another instance when things get busy without having to mess with Route 53 or risk having your site fail. The stateless stuff is quite important now, as otherwise user-generated assets may only be referenceable from one instance and not the other.
At 1k page views I'd still use a small for web serving and another small for MySQL. You can look into RDS, which is great if you're doing this solo: forget about needing to upgrade versions and stuff like maintenance, backups etc.
You will also be able to spin up read replicas with one click for peak load. Look into the AWS CLI as well to help automate those tasks. Cron jobs will make it a cinch if you're time-stressed; otherwise OpsWorks, CloudFormation and Auto Scaling will all help with the above.
Oh, and just as a comparison: an application server of mine running Apache and PHP with APC to serve our users starts to struggle at about 80 concurrent users. It runs on a small EC2 instance with a small RDS (which sits at about 15% while the application server is going downhill).
Probably not. Micro instances are not designed for heavy production loads. They use a burstable CPU profile. They can run at 2 ECU for a couple of minutes, and then they get locked at 0.1-0.2 ECU. I tend to like c1.medium, but small may be enough for you.
Maybe, as long as they are spread out during the day and not all in a short window.
1-2 per core. Micro only has 1 core.
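As a concrete illustration of "1-2 per core", a minimal gunicorn config sketch; the file name, bind address and worker counts are assumptions to tune:

```python
# gunicorn.conf.py - hypothetical config for a single-core micro instance
import multiprocessing

# 1-2 workers per core; keep it low on a micro so you don't exhaust RAM.
workers = max(1, min(2, multiprocessing.cpu_count()))
bind = "127.0.0.1:8000"   # Nginx proxies to this address (assumption)
worker_class = "sync"
timeout = 30
```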
Every application is different. The best thing to do is run your own benchmarks using tools like ab (Apache Bench)
Following the AWS best practices architecture diagram is always a good start.
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_web_01.pdf
I strongly advise you to store all your files on Amazon S3 and use Route 53 DNS (or any other DNS if you want) in front of it to distribute the files, because later on, if you decide to use the CloudFront CDN, it will be very easy to switch. And just to mention it: using CloudFront as a CDN will only increase your cost a little, not a huge amount.
Whatever the scenario, if we're talking about production, you should definitely go for separate instances: at least one EC2 instance for the web tier and one EC2/RDS instance for the database.
If you are a geek and like to get into the nitty-gritty details, create your own infrastructure, and feel free to use an automation tool (Puppet, Chef) or not. If you just want to collect the profit, or have scarce resources to take care of everything, you should try Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html).
Either way, whether you create your own infrastructure or choose Elastic Beanstalk, always run stress tests to get a better overview of your capacity-planning needs. After you choose your initial environment, stress it using Apache Bench, Siege or whatever other tool you like.
Hope this helps.
I would suggest using small instances instead of micro, as micro instances often stop responding under heavy load and then require a stop-start. Use S3 for static files, which helps with faster loading, and have a look at CloudFront.
The instance's region also matters for serving requests, so if you are targeting a specific region, create the instance in that region.
Create the database on a new instance and attach an EBS volume to that instance. Automate a backup script to copy the database files and store them on EBS to avoid any issues (a sketch is below). The volume selected here can be Provisioned IOPS for faster processing than standard. AWS services provide a lot of flexibility, but you need to have scripts running to scale the servers up and down according to your timings.
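A rough sketch of such a backup script, assuming mysqldump is installed and the attached EBS volume is mounted at /mnt/backups; the database name, credentials and paths are placeholders:

```python
import datetime
import subprocess

# Placeholders: real credentials should come from a config file or environment variables.
DB_NAME = "mysite"
DB_USER = "backup"
DB_PASSWORD = "secret"
BACKUP_DIR = "/mnt/backups"   # assumed mount point of the attached EBS volume

def backup_database():
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dump_path = f"{BACKUP_DIR}/{DB_NAME}-{stamp}.sql"
    with open(dump_path, "w") as out:
        # Stream the dump straight onto the EBS volume.
        subprocess.run(
            ["mysqldump", "-u", DB_USER, f"-p{DB_PASSWORD}", DB_NAME],
            stdout=out,
            check=True,
        )
    return dump_path

if __name__ == "__main__":
    print("Wrote", backup_database())
```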
Spot instances can also help in the future, as they come cheap, in case you are scaling up.