I have an application that is a static website builder. Users can create their websites and publish them to their own custom domains. I am using Amazon S3 to host these sites and an nginx proxy server to route requests to the S3 bucket hosting each site.
I am facing a load-time issue. Since S3 is not specifically associated with any region and the content is entirely HTML, there ideally shouldn't be any delay. I have a few CSS and JS files, which are not too heavy.
What optimization techniques can I use for better performance? For example, will setting headers or leveraging caching help? I have added an image of a Pingdom analysis for reference.
Also, I cannot use CloudFront, because when a user updates an image the edge locations take a few minutes before the new image is reflected. It is not an instant update, which rules it out for me. Any suggestions on improving this?
S3 HTTPS access from a different region is extremely slow, especially the TLS handshake. To solve this problem we built an nginx S3 proxy, which can be found on the web. S3 is great as an origin source, but not as a transport endpoint.
By the way, avoid putting your "folder" (bucket) in a subdomain; instead specify the long, region-specific S3 endpoint URL, and never use https://s3.amazonaws.com
A good example that reduces the number of DNS lookups is the following:
https://s3-eu-west-1.amazonaws.com/folder/file.jpg
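To illustrate, here is a minimal boto3 sketch (the "folder" bucket name and the eu-west-1 region are assumptions) that pins the client to the long regional endpoint instead of the global hostname:

```python
import boto3

# Minimal sketch: the "folder" (bucket) name and eu-west-1 region are assumptions.
# Pinning the client to the long regional endpoint avoids the extra DNS lookups
# and redirects that the global https://s3.amazonaws.com hostname can incur.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    endpoint_url="https://s3-eu-west-1.amazonaws.com",
)

# Object URLs built against the regional endpoint match the long form shown above.
object_url = f"{s3.meta.endpoint_url}/folder/file.jpg"
print(object_url)  # -> https://s3-eu-west-1.amazonaws.com/folder/file.jpg
```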
Your S3 buckets are associated with a specific region that you can choose when you create them. They are not geographically distributed. Please see AWS doc about S3 regions: https://aws.amazon.com/s3/faqs/
As we can see in your screenshot, it looks like your bucket is located in Singapore (ap-southeast-1).
Are your clients located in Asia? If they are not, you should try to create buckets closer to them in order to reduce data-access latency.
About CloudFront: it should be possible to use it if you invalidate your objects, or simply use a new filename for each modification, as tedder42 suggested.
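If you go the invalidation route, a hedged boto3 sketch looks roughly like this (the distribution ID and object path are placeholders):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholders: replace the distribution ID and object path with your own values.
cloudfront.create_invalidation(
    DistributionId="E2EXAMPLE123ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/updated-photo.jpg"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": str(time.time()),
    },
)
```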
I want to host my website on S3 (because it seems cheaper and I don't have any server-side scripts). I have a domain, and I want it to point to my S3 website. So far, what I have done is enable static website hosting on my S3 bucket and set the Route 53 record set's Alias Target to my S3 website. It's working, but it's not good enough: I want it to work across multiple regions.
I know that Transfer Acceleration can automatically sync files to other regions so it's faster for users there, but I don't know how to make it work with Route 53. I hear that some people use CloudFront for this, but I don't quite understand how. And I don't want to manually create buckets in several regions and set each one up by hand.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
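As a rough boto3 sketch of that setup (the bucket name, origin ID and comment are assumptions; in practice you would also attach your domain as an alternate name and an ACM certificate):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Assumptions: bucket name, origin ID and comment are placeholders.
origin_domain = "my-website-bucket.s3.amazonaws.com"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Static site served from S3",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": origin_domain,
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    }
)
```

Once the distribution is deployed, you point your domain at its *.cloudfront.net name, for example with a Route 53 alias record.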
See also: Amazon CloudFront pricing
I have a single HTML landing page and I expect around 50,000 to 100,000 visitors per day
(no server-side code)
Only HTML and a little bit of JavaScript.
So what AWS instance type should I use so my webpage will not crash? Right now I am on the free tier: a t2.micro with Windows Server 2016. Do I need to upgrade, or is this good enough?
Thanks.
Using AWS S3 Only
For static page hosting you can use AWS S3. You need to create an S3 bucket and enable static website hosting. For more details, refer to Example Walkthroughs - Hosting Websites on Amazon S3.
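A minimal boto3 sketch of those two steps (the bucket name, region and document names are assumptions):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an assumption

bucket = "my-landing-page-bucket"  # hypothetical bucket name
s3.create_bucket(Bucket=bucket)    # outside us-east-1 you also need a LocationConstraint

# Turn the bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```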
Using AWS S3 & CloudFront
Since you are expecting more traffic, you can reduce cost and improve performance by using AWS CloudFront, which will cache the content at the edge locations of the content delivery network. You can also set up free SSL certificates issued by AWS Certificate Manager if you use CloudFront.
If there is no backend code, then you can do it using just S3 and CloudFront.
I'm working on a website that contains photo galleries, and these images are stored on Amazon S3. Since Amazon charges about $0.01 per 10k GET requests, it seems that a potential troll could seriously drive up my costs with a bot that makes millions of page requests per day.
Is there an easy way to protect myself from this?
The simplest strategy would be to create randomized URLs for your images.
You can serve these URLs along with your page information, but they cannot be guessed by a brute-forcer; random guesses will usually just lead to a 404.
So something like yoursite/images/long_random_string.
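A small Python sketch of generating such keys (the images/ prefix and key length are arbitrary choices):

```python
import secrets

# Sketch: the images/ prefix and key length are arbitrary choices.
def randomized_image_key(extension: str = "jpg") -> str:
    # ~32 URL-safe random characters give a key space far too large to brute-force.
    return f"images/{secrets.token_urlsafe(24)}.{extension}"

print(randomized_image_key())  # e.g. images/3fXq0c...tY.jpg
```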
Add the AWS CloudFront service in front of your S3 image objects, so that requests are served from the cached data at the nearest edge location.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/MigrateS3ToCloudFront.html
As @mohan-shanmugam pointed out, you should use a CloudFront CDN with your S3 bucket as the origin. It is considered bad practice for external entities to hit S3 buckets directly.
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
With a CloudFront distribution, you can alter your S3 bucket's security policy to only allow access from the distribution. This will block direct access to S3 even if the URLs are known.
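A hedged boto3 sketch of such a bucket policy, assuming a legacy Origin Access Identity (the bucket name and OAI ID are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "my-image-bucket"   # placeholder bucket name
oai_id = "E2EXAMPLEOAI"      # placeholder CloudFront Origin Access Identity ID

# Allow reads only from the CloudFront Origin Access Identity; with no other
# public grants on the bucket, direct requests to S3 are denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```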
In reality, you would likely suffer from degraded website performance well before needing to worry about additional charges, as a direct DDoS attempt against S3 should result in AWS throttling the API requests.
In addition, you can set up AWS WAF in front of your CloudFront distribution and use it for more advanced control of security-related concerns.
We want to serve the images in our application as fast as possible. As we already have an AWS setup, we would prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing the request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes:
origin -> DNS -> EC2 -> S3, and we would prefer
origin -> DNS -> S3. Is it possible to bind two static-website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and are only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocation-based routing is not possible with S3 buckets alone.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host you want, it compares it against the available buckets. AWS decided that the bucket name would be what is matched against the Host header.
So after a lot of research we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user clicks a button and then views some images in an iOS app. The request triggered by the button is geo-routed to the nearest EC2 instance for better performance. Instead of returning the same image links in the EU and the US, we updated it so that a click in the US returns links to an American S3 bucket, and likewise for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.
I am pretty new to AWS. I want to store my video files in an S3 bucket and serve them on my website using CloudFront. Users should be able to download videos only after logging in to my website.
How do I go about implementing this? Since I am new to AWS, a tutorial link would be very helpful. Thank you.
Also, if you could suggest other cheaper but reliable CDNs for video files, that would be very helpful.
You can restrict your Amazon S3 content to accept requests only from Amazon CloudFront, and also use signed, temporary URLs for content delivery, thus serving private content through CloudFront.
See: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
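As a sketch, such signed URLs can be generated with botocore's CloudFrontSigner (the key-pair ID, private-key file, distribution domain and object path below are all placeholders):

```python
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

# Placeholders: CloudFront key-pair ID, private-key file, distribution domain and path.
KEY_PAIR_ID = "KEXAMPLEKEYPAIR"
PRIVATE_KEY_FILE = "cloudfront_private_key.pem"

def rsa_signer(message: bytes) -> bytes:
    with open(PRIVATE_KEY_FILE, "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")  # CloudFront expects SHA-1 RSA signatures

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Issue a temporary URL (valid for one hour) only after the user has logged in.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/lesson1.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)
```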