What AWS instance should I use?

I have a single HTML landing page and I expect around 50,000 to 100,000 visitors per day
(no server-side code).
Only HTML and a little bit of JavaScript.
So what AWS instance type should I use so my webpage will not crash? Right now I have the free tier: a t2.micro with Windows Server 2016. Do I need to upgrade, or is this good enough?
Thanks.

Using AWS S3 Only
For static page hosting you can use AWS S3. You need to create an S3 bucket and enable static website hosting. For more details, refer to Example Walkthroughs - Hosting Websites on Amazon S3.
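If you prefer to script this step instead of using the console, a minimal sketch with the boto3 Python SDK could look like the following (the bucket name is a placeholder; the bucket must already exist and its objects must be publicly readable):

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting: serve index.html by default and a custom
# error page for missing keys.
s3.put_bucket_website(
    Bucket="my-landing-page-bucket",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# The site is then reachable at the bucket's website endpoint, e.g.
# http://my-landing-page-bucket.s3-website-<region>.amazonaws.com
```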
Using AWS S3 & CloudFront
Since you are expecting more traffic, you can reduce cost and improve performance by using AWS CloudFront, which caches content at the edge locations of the content delivery network. You can also set up free AWS Certificate Manager-issued SSL certificates if you use CloudFront.

If there is no backend code, then you can do it using just S3 and CloudFront.
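If you do use CloudFront, the free ACM certificate mentioned above can also be requested from code. A hedged sketch with boto3 (the domain names are placeholders; note that certificates used by CloudFront must be requested in the us-east-1 region):

```python
import boto3

# CloudFront only accepts ACM certificates issued in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

# Request a certificate for the site; DNS validation is usually easiest when
# the zone is already hosted in Route 53.
response = acm.request_certificate(
    DomainName="www.example.com",              # placeholder domain
    SubjectAlternativeNames=["example.com"],   # placeholder
    ValidationMethod="DNS",
)
print("Certificate ARN:", response["CertificateArn"])
```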

Related

How to use AWS for free hosting of a web app (Docker, Nginx, Angular, Django)?

I am a total beginner and just completed the first version of my web application.
I am using Docker, Nginx, Angular & Django. Note that the backend works on static files and uses a simple database for User Registration.
I want to deploy it to a free cloud solution. I heard that I can use AWS Elastic Beanstalk, but I found both the configuration and the pricing policy a bit complicated.
Question
Can anybody guide me through what to consider, or even better, what selections I have to make in order to host my web app on AWS without charge?
PS: I don't know if I have to mention this, but in case the web app attracts a satisfying number of users, I would adjust the implementation so that users can upload their own data and use my services on it (rather than on the CSV files). In this case, I might have to use other AWS services or migrate to another cloud solution. Just to say that both options are welcome!
You can easily host an Angular app on AWS within the free tier (1 year) limits. I have hosted a handful of Angular apps so far using AWS S3 + AWS Cloudfront.
AWS S3 is used to host your static files. You first run ng build --prod, and the Angular compiler will generate a /dist folder (in your project directory) containing all the static files (i.e. JS, images, fonts, etc.) required to run your Angular app. Then you upload all these static files to an AWS S3 bucket.
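The upload step can be scripted as well; here is a rough sketch using the boto3 Python SDK (the bucket name and dist path are placeholders, and content types are guessed so browsers render each file correctly):

```python
import mimetypes
import os

import boto3

s3 = boto3.client("s3")
bucket = "my-angular-app-bucket"  # placeholder bucket name
dist_dir = "dist/my-app"          # build output produced by ng build --prod

# Walk the build output and upload every file, preserving relative paths as
# S3 keys and setting a Content-Type so browsers interpret each file properly.
for root, _, files in os.walk(dist_dir):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, dist_dir).replace(os.sep, "/")
        content_type = mimetypes.guess_type(path)[0] or "binary/octet-stream"
        s3.upload_file(path, bucket, key, ExtraArgs={"ContentType": content_type})
        print("uploaded", key)
```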
AWS CloudFront is a cloud caching service. As the word "cache" suggests, it caches your static files. By setting up a CloudFront cache in front of your S3 bucket, it allows you to bypass the monthly 20,000 GET request limit of the free tier, because users' HTTP requests will be served from the CloudFront cache instead of directly from your S3 bucket. The CloudFront free tier gives you 2 million HTTP(S) requests per month.
The good thing about hosting on AWS S3 instead of an EC2 instance (note that Elastic Beanstalk also creates EC2 instances) is that you can have multiple S3 buckets and CloudFront distributions in one free tier account. As long as you stay within the limits (S3: 2,000 PUT requests and 20,000 GET requests; CloudFront: 2 million HTTPS requests), you can end up hosting several apps with one AWS free tier account. But if you're using EC2, you're essentially limited to a single instance, because 31 days x 24 hours = 744 hours, which is only 6 hours shy of the 750-hour limit, unless you set up your EC2 instances to turn on and off.
There are plenty of guides demonstrating how to do this; here are some of them:
Deploy an Angular app with S3 and CloudFront
Use S3 and CloudFront to host Static Single Page Apps (SPAs) with HTTPs and www-redirects
Assuming you know how to deploy to a linux machine, Elastic Beanstalk is not useful for your use case (it's for automated scaling). I would do the following:
AWS S3 to deploy your Angular app using static website hosting (one possible tutorial is https://medium.com/@peatiscoding/here-is-how-easy-it-is-to-deploy-an-angular-spa-single-page-app-as-a-static-website-using-s3-and-6aa446db38ef).
Amazon ECS (Container Service) can be used to deploy your containers. This will help with that step.
OR
A simple Amazon EC2 instance (a Linux server) with the entire environment set up with Docker.

Give public access to S3 bucket rather than serving static website for AWS serverless application

I am new to AWS serverless and trying to host a Django app on AWS serverless.
Now AWS serverless uses an S3 bucket for static website hosting, which costs around $0.50 (I am on the free tier).
My question is: instead of hosting a static website, can I not just give public access to the S3 bucket, since it would save me money? Is it possible to use a public bucket for AWS serverless?
Yes, hosting static content on S3 is the most cost-effective way to serve content. I would suggest keeping your bucket private and enabling CloudFront as a distribution (CDN) point in front of S3. That allows you to keep a cache at the edge, close to your customers, and to slightly lower the outgoing bandwidth costs (CloudFront's outgoing bandwidth cost is lower than S3's: in the US, $0.085/GB vs $0.090/GB).
This article will give you detailed instructions on how to do so: https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/
I explained the high-level steps on my blog too: https://www.stormacq.com/2018/10/17/migrated-to-serverless.html
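As a rough illustration of the private-bucket-behind-CloudFront idea, the sketch below creates an origin access identity (OAI) and grants it read access to the bucket (boto3, with placeholder names; the CloudFront distribution still has to be configured to use this OAI on its S3 origin):

```python
import json

import boto3

bucket = "my-static-site-bucket"  # placeholder bucket name

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Create an origin access identity that CloudFront will use when it fetches
# objects from the private bucket.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "my-static-site-oai-1",
        "Comment": "OAI for my-static-site-bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Bucket policy that lets only the OAI read objects; the bucket itself stays
# closed to the public internet.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```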

Use S3 for a website in multiple regions

I want to host my website on S3 (because it seems cheaper and I don't have any server-side scripts). I have a domain, and I want my domain to link to my S3 website. So far, what I have done is enable static website hosting on my S3 website bucket and set a Route 53 record set's alias target to my S3 website. It's working. But it's not good enough; I want it to deal with multiple regions.
I know that Transfer Acceleration can auto-sync files to other regions so it's faster for other regions. But I don't know how to make it work with Route 53. I hear that some people use CloudFront to do that, but I don't quite understand how. And I don't want to manually create buckets in several regions and manually set them up for each region.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
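For the "point your domain name to CloudFront" step, a hedged boto3 sketch of the Route 53 alias record is shown below (the hosted zone ID, domain, and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets):

```python
import boto3

route53 = boto3.client("route53")

# Create (or update) an alias A record pointing the domain at the CloudFront
# distribution instead of the S3 website endpoint.
route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # placeholder: your Route 53 hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",  # placeholder domain
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed hosted zone ID for all CloudFront distributions.
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "dxxxxxxxxxxxx.cloudfront.net.",  # placeholder
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```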
See also: Amazon CloudFront pricing

Can I use AWS S3 for hosting, and EC2 to process my form submits?

So I am thinking of migrating my website to Amazon S3 since it's super cheap and fast; however, I use PHP and AJAX to submit my contact forms. Would it be possible to host my site using AWS S3 and then send all HTTP POSTs to an EC2 instance?
Yes, this is very much possible. However, if you're running an EC2 instance anyway and your traffic is not enormous, you might as well serve your static files from that EC2 instance.
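To make the split concrete: the static page on S3 posts its form data to an endpoint on the EC2 instance. The question uses PHP; purely for illustration, here is a minimal sketch of such an endpoint in Python/Flask (all names are placeholders), including the CORS headers needed because the page is served from a different origin than the EC2 instance:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/contact", methods=["POST"])
def contact():
    # The S3-hosted page submits the contact form via AJAX to this endpoint.
    data = request.get_json(silent=True) or request.form
    print("contact form:", dict(data))  # placeholder: send an email or store in a DB
    return jsonify({"ok": True})

@app.after_request
def add_cors_headers(response):
    # Required because the page lives on the S3/CloudFront origin, not on EC2.
    response.headers["Access-Control-Allow-Origin"] = "https://www.example.com"
    response.headers["Access-Control-Allow-Methods"] = "POST, OPTIONS"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    return response

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```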
It is not possible to host a PHP site on AWS S3; only static content like images, CSS, or JS can be put there.
For dynamic content you have to make use of an AWS instance.
https://forums.aws.amazon.com/message.jspa?messageID=453142
Correct Usage of Amazon Web Services S3 for Server-Side Scripting

Websites hosted on Amazon S3 loading very slowly

I have an application which is a static website builder. Users can create their websites and publish them to their custom domains. I am using Amazon S3 to host these sites and an nginx proxy server to route the requests to the S3 bucket hosting the sites.
I am facing a load time issue. As S3 specifically is not associated with any region and the content is entirely HTML, there shouldn't ideally be any delay. I have a few CSS and JS files which are not too heavy.
What optimization techniques can I use for better performance? E.g., will setting headers or leveraging caching help? I have added an image of a Pingdom analysis for reference.
Also, I cannot use CloudFront because when a user updates an image, the edge locations have a delay of a few minutes before the new image is reflected. It is not an instant update, hence restricting its use for me. Any suggestions on improving it?
S3 HTTPS access from a different region is extremely slow, especially the TLS handshake. To solve the problem we built an Nginx S3 proxy, which can be found on the web. S3 is best as an origin source but not as a transport endpoint.
By the way, try to avoid using your "folder" (bucket) as a subdomain; specify only the S3 regional(!) endpoint URL instead, using the long version of the endpoint URL, and never use https://s3.amazonaws.com
One good example that reduces the number of DNS calls is the following:
https://s3-eu-west-1.amazonaws.com/folder/file.jpg
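If the files are fetched through an SDK rather than plain links, the same advice applies; a sketch with boto3 that pins the client to the bucket's region and forces path-style addressing so requests hit the long regional endpoint directly (bucket name, key, and region are placeholders):

```python
import boto3
from botocore.config import Config

# Pin the client to the bucket's region and force path-style URLs, so requests
# go to https://s3.eu-west-1.amazonaws.com/<bucket>/<key> instead of the global
# https://s3.amazonaws.com endpoint.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    config=Config(s3={"addressing_style": "path"}),
)

# Download one object through the regional endpoint.
s3.download_file("folder", "file.jpg", "/tmp/file.jpg")
```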
Your S3 buckets are associated with a specific region that you can choose when you create them. They are not geographically distributed. Please see AWS doc about S3 regions: https://aws.amazon.com/s3/faqs/
As we can see in your screenshot, it looks like your bucket is located in Singapore (ap-southeast-1).
Are your clients located in Asia? If they are not, you should try to create buckets nearer, in order to reduce data access latency.
About CloudFront, it should be possible to use it if you invalidate your objects, or just use new filenames for each modification, as tedder42 suggested.
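Regarding the invalidation option, a rough boto3 sketch of what that looks like is below (the distribution ID and path are placeholders; using a fresh, versioned filename per upload avoids the invalidation entirely):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate the updated image so edge locations fetch the new version from S3
# instead of serving the stale cached copy.
cloudfront.create_invalidation(
    DistributionId="EXXXXXXXXXXXXX",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/updated-photo.jpg"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": str(time.time()),
    },
)
```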