Do I need a CDN in addition to Amazon S3?

Given that storing static content is the main use case for the Amazon S3 service, and considering that many large players rely on a CDN to scale distribution of such content, does Amazon S3 provide some sort of CDN functionality? I can easily imagine it storing multiple copies of content for fault tolerance/scalability, but does that put it on par with CDNs? If not, why not?

From What Is Amazon CloudFront?:
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
You can decide to put Amazon CloudFront in front of your application. This will enable static content to be cached in edge locations around the world.

Related

Amazon S3 as cdn copying images from my server

I have searched a lot on this but all I get is using CloudFront (CDN) in combination with S3.
I want to do something different.
CloudFront works as a CDN with its Origin set to either my domain where images are, or S3.
If I set it to my domain, there is an issue of having my hosting space used.
If I use it with S3, the question is: how to get my images to S3 without much hassle? With a CDN, this is automatic, as every call to CloudFront copies the image from my server automatically.
Is it possible that CloudFront works with S3 but if image is not present on S3, it copies it from my server to S3?
Or maybe S3 itself works as a CDN (best solution). I have seen some sites that use S3 URLs for hosting their images, like this:
https://retsimages.s3.amazonaws.com/14/A10363214_6.jpg
How is that possible?
If I set it to my domain, there is an issue of having my hosting space used.
More expensive than the storage space is the cost of having a server sitting there ready to handle the request. Your application logic knows when the images change; that's the time to put them in S3.
how to get my images to S3 without much hassle?
There's an SDK for just about every language, so upload the image as it comes in. Use s3cmd sync to move the images you have. Then you can just turn off your server.
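As a rough sketch (assuming the Python SDK, boto3, and a placeholder bucket name and key layout), uploading an incoming image could look like this:

# Illustrative sketch only: push an incoming image straight to S3 with boto3.
# The bucket name and key prefix are placeholders, not the poster's setup.
import boto3

s3 = boto3.client("s3")

def store_image(local_path: str, key: str, bucket: str = "my-image-bucket") -> str:
    """Upload a local image file to S3 and return the S3 URL to serve it from."""
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={"ContentType": "image/jpeg"},  # adjust per file type
    )
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# e.g. store_image("/tmp/incoming/A10363214_6.jpg", "14/A10363214_6.jpg")

Once new uploads go straight to S3 and the existing backlog has been synced with s3cmd, nothing on the web server needs to serve images anymore.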
Or maybe S3 itself works as a CDN
CloudFront can use a customer-provided DNS name and matching certificate, so you can use a custom domain with HTTPS. It can also integrate with AWS WAF, which S3 cannot do directly. Otherwise, the CDN behaves similarly to S3: CloudFront should provide better caching and endpoint locality, but you'll see little functional difference at low volumes. Neither is read-after-write consistent, and CloudFront adds a caching layer on top. Pricing is unlikely to make CloudFront cheaper for most uses.
Is it possible that CloudFront works with S3 but if image is not present on S3, it copies it from my server to S3?
Close.
CloudFront does have a feature that would help move you in this direction -- origin groups. Create an origin group with S3 as primary and your server as secondary. Any time CloudFront encounters a cache miss, it will first check S3, and only if the image is not there will it retry by sending the request to your server. It will cache the response, but it will not remember the source of the object -- so subsequent requests on future cache misses for the same object will always try S3 first.
This means something on your server needs to be responsible for ultimately moving images to S3 -- but as long as the image exists in one place or the other, the image will be served by CloudFront and cached in CloudFront in the edge or edges (up to two -- one global/outer, one regional/inner) that handled the request.
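For reference, the failover piece of such a setup lives in the distribution config as an origin group. Below is an illustrative fragment only, written as a Python dict suitable for boto3's create_distribution or update_distribution; the origin IDs are placeholders that must match origins defined elsewhere in the same config:

# Illustrative fragment: the OriginGroups portion of a CloudFront DistributionConfig.
# "s3-primary" and "server-fallback" are placeholder origin IDs.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "s3-with-server-fallback",
            "FailoverCriteria": {
                # fail over to the secondary origin when S3 returns these codes
                "StatusCodes": {"Quantity": 2, "Items": [403, 404]},
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "s3-primary"},      # tried first on every cache miss
                    {"OriginId": "server-fallback"}, # your server, tried on failover
                ],
            },
        }
    ],
}

Failing over on 403 as well as 404 matters because S3 returns 403 for missing keys when the caller lacks ListBucket permission.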

Multi region active-active using S3 bucket in each region

I'm trying to design an architecture for a "simple" problem, but so far I have not found a solution.
The problem:
I have an S3 bucket in each region, with bucket replication so that every bucket holds the same content, and I would like to put CloudFront in front of it to cache objects.
My need: the lowest possible latency for every user in the world when displaying an object from the S3 bucket.
I wanted to have a CloudFront distribution in front of each S3 bucket and use Route 53 latency-based routing to send users to the nearest distribution. The problem is that we cannot have multiple distributions with the same CNAME.
Below is the architecture I have so far (which is not good).
Any idea how to achieve this?
Just keep one of your buckets; AWS CloudFront handles the worldwide distribution for you.
How CloudFront Delivers Content to Your Users
After you configure CloudFront to deliver your content, here's what happens when users request your objects:
1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency—and routes the request to that edge location.
3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:
   - CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for the HTML files.
   - The origin servers send the files back to the CloudFront edge location.
   - As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
For more info read the following doc:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html
To deliver content to end users with lower latency, Amazon CloudFront uses a global network of 138 Points of Presence (127 Edge Locations and 11 Regional Edge Caches) in 63 cities across 29 countries.
One thing you can do is create a single CloudFront distribution and attach a Lambda@Edge function to it to rewrite the Host header in the request. Inside the Lambda you can access all the headers and rewrite them at will, based on any logic you want. When you rewrite the Host header, the request will be sent to a different bucket in another region.
We used this solution to build multi-region active-active delivery from replicated buckets from two regions.
The original idea is from here: https://medium.com/buildit/a-b-testing-on-aws-cloudfront-with-lambda-edge-a22dd82e9d12
This seems to be the same solution for a different problem: https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
We presented our solution at the AWS Summit in Berlin this year, but haven't posted about it anywhere yet.
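To make the idea concrete, here is a hedged sketch (not the production code from that talk) of what such a Lambda@Edge origin-request handler can look like in Python. The bucket domain names and the country-based routing rule are placeholders, and the viewer-country header is only present if you configure CloudFront to forward it:

# Hedged sketch of a Lambda@Edge origin-request handler that re-points a
# request at a replicated bucket in another region. Bucket domains and the
# routing rule below are placeholders.
REGIONAL_BUCKETS = {
    "us": "assets-us-east-1.s3.amazonaws.com",
    "eu": "assets-eu-west-1.s3.eu-west-1.amazonaws.com",
}

EU_COUNTRIES = {"DE", "FR", "GB", "IT", "ES"}

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # CloudFront adds this header only when it is configured to be forwarded.
    country = request["headers"].get("cloudfront-viewer-country", [{"value": "US"}])[0]["value"]
    target = REGIONAL_BUCKETS["eu"] if country in EU_COUNTRIES else REGIONAL_BUCKETS["us"]

    # Rewrite both the S3 origin domain and the Host header so CloudFront
    # signs and routes the request to the chosen bucket.
    request["origin"]["s3"]["domainName"] = target
    request["headers"]["host"] = [{"key": "Host", "value": target}]

    return request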
The answer provided by @Reza Mousavi is already quite elaborate. The point of an AWS CloudFront distribution is to cache objects at edge locations worldwide (see the options available while configuring a distribution).
Best practice (at least what I do, with no complaints so far) is to configure a single distribution for each application origin. The configuration lets you choose which regions to serve from based on where your customers are.
AWS Solutions has launched a new solution to address S3 replication across regions.
For example, you can create objects in Oregon, rename them in Singapore, and delete them in Dublin, and the changes are replicated to all other regions. This solution is designed for workloads that can tolerate lost events and variations in replication speed.
https://aws.amazon.com/solutions/multi-region-asynchronous-object-replication-solution/

Difference between Amazon S3 cross region replication and Cloudfront

After reading some AWS documentation, I am wondering what the difference is between these use cases if I want to deliver content (JS, CSS, images, and API requests) in Asia (including China), the US, and the EU.
1. Store my images and static files on S3 in the US region and set up EU and Asia (Japan or Singapore) cross-region replication to sync with the US region S3.
2. Store my images and static files on S3 in the US region and set up the CloudFront CDN to cache my content in different locations after the initial request.
3. Do both of the above (if there is a significant performance improvement).
What is the most cost-effective solution if I need to achieve global deployment? And how can I make requests from China consistent and stable? (I tried CloudFront + S3 (us-west); it's fast, but the performance is not consistent.)
PS: In the early stage I don't expect too many user requests, but users are spread globally and I want them to have a similar experience. The majority of my content is panorama images; I expect to load ~30 MB of data (10 high-res images) sequentially in each visit.
Cross region replication will copy everything in a bucket in one region to a different bucket in another region. This is really only for extra backup/redundancy in case an entire AWS region goes down. It has nothing to do with performance. Note that it replicates to a different bucket, so you would need to use different URLs to access the files in each bucket.
CloudFront is a Content Delivery Network. S3 is simply a file storage service. Serving a file directly from S3 can have performance issues, which is why it is a good idea to put a CDN in front of S3. It sounds like you definitely need a CDN, and it sounds like you have tested CloudFront and are unimpressed. It also sounds like you need a CDN with a larger presence in China.
There is no reason you have to choose CloudFront as your CDN just because you are using other AWS services. You should look at other CDN services and see what their edge networks look like. Given your requirements I would highly recommend you take a look at CloudFlare. They have quite a few edge network locations in China.
Another option might be to use a CDN that you can actually push your files to. I've used this feature in the past with MaxCDN. You would push your files to the CDN via FTP, and the files would automatically be pushed to all edge network locations and cached until you push an update. For your use case of large image downloads, this might provide a more performant caching mechanism. MaxCDN doesn't appear to have a large China presence though, and the bandwidth charges would be more expensive than CloudFlare.
If you want to serve files in S3 buckets to users all around the world, you might consider using S3 Transfer Acceleration. It can be used when you either upload to or download from your S3 bucket. You may also try AWS Global Accelerator.
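If you go the Transfer Acceleration route, enabling it is a one-time bucket setting; here is a rough boto3 sketch with placeholder bucket and key names:

# Rough sketch: turn on S3 Transfer Acceleration for a bucket, then use the
# accelerate endpoint from clients. Bucket and key names are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-global-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then target the accelerated endpoint for uploads/downloads:
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.download_file("my-global-bucket", "images/pano-01.jpg", "/tmp/pano-01.jpg")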
CloudFront's job is to cache content at hundreds of caches ("edge locations") around the world, making them more quickly accessible to users around the world. By caching content at locations close to users, users can get responses to their requests more quickly than they otherwise would.
S3 Cross-Region Replication (CRR) simply copies an S3 bucket from one region to another. This is useful for backing up data, and it also can be used to speed up content delivery for a particular region. Unlike CloudFront, CRR supports real-time updating of bucket data, which may be important in situations where data needs to be current (e.g. a website with frequently-changing content). However, it's also more of a hassle to manage than CloudFront is, and more expensive on a multi-region scale.
If you want to achieve global deployment in a cost-effective way, then CloudFront would probably be the better of the two, except in the special situation outlined in the previous paragraph.

How to cache the images stored in Amazon S3?

I have a RESTful webservice running on Amazon EC2. Since my application needs to deal with large number of photos, I plan to put them on Amazon S3. So the URL for retrieving a photo from S3 could look like this:
http://johnsmith.s3.amazonaws.com/photos/puppy.jpg
Is there any way or necessity to cache the images on EC2? The pros and cons I can think of is:
1) Reduced S3 usage and cost, with improved image-fetching performance. On the other hand, EC2 costs can rise, and EC2 may not have the capacity to handle the image cache due to bandwidth restrictions.
2) Increased development complexity, because you need to check the cache first and, on a miss, ask S3 to transfer the image to EC2 and then transfer it to the client.
I'm using the EC2 micro instance and feel it might be better not to do the image cache on EC2. But the scale might grow fast and eventually I will need an image cache. (Am I right?) If a cache is needed, is it better to do it on EC2 or on S3? (Is there a way to cache on S3?)
By the way, when the client uploads an image, should it be uploaded to EC2 or S3 directly?
Why bring EC2 into the equation? I strongly recommend using CloudFront for this scenario.
When you use CloudFront in conjunction with S3 as the origin, the content gets distributed to 49 locations worldwide (the count of edge locations at the time of writing), effectively acting as a global cache, with content fetched from the location nearest to your end users based on latency.
This way you don't need to worry about the scale and performance of the cache or of EC2; you can simply offload this to CloudFront and S3.
Static vs dynamic
Generally speaking, here are the tiers:
best: CDN (CloudFront)
good: static hosting (S3)
okay: dynamic hosting (EC2)
Why? There are a few reasons.
maintainability and scalability: CloudFront and S3 scale "for free". You don't need to worry about capacity, bandwidth, or request rate.
price: approximately speaking, it's cheaper to use S3 than EC2.
latency: CDNs are located around the world, leading to shorter load times.
Caching
No matter where you are serving your static content from, proper use of the Cache-Control header will make life better. With that header you can tell a browser how long the content is good for. If it is something that never changes, you can instruct a browser to keep it for a year. If it frequently changes, you can instruct a browser to keep it for an hour, or a minute, or revalidate every time. You can give similar instructions to a CDN.
Here's a good guide, and here are some examples:
# keep for one year
Cache-Control: max-age=31536000
# keep for a day on a CDN, but a minute on client browsers
Cache-Control: s-maxage=86400, max-age=60
You can add this to pages served from your EC2 instance (no matter if it's nginx, Tornado, Tomcat, IIS), you can add it to the headers on S3 files, and CloudFront will use these values.
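As an aside, for objects that are already sitting in S3 you can retrofit the header with an in-place copy, since S3 object metadata can't be edited directly. A small boto3 sketch with placeholder bucket and key names:

# Hedged sketch: attach a Cache-Control header to an existing S3 object by
# copying it onto itself with replaced metadata. Names are placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-static-bucket", "assets/logo.png"

s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    MetadataDirective="REPLACE",          # required so the new headers replace the old
    ContentType="image/png",
    CacheControl="s-maxage=86400, max-age=60",  # a day on the CDN, a minute in browsers
)

New uploads can simply pass the same CacheControl value in ExtraArgs, which avoids the copy step entirely.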
I would not pull the images from S3 to EC2 and then serve them. It's wasted effort. There are only a small number of use cases where that makes sense.
A few scenarios where an EC2 caching instance makes sense:
your upload/download ratio is far from 50/50
you hit the S3 limit of 100 req/sec
you need URL masking
you want to optimise kernel and TCP/IP settings, or cache SSL sessions for clients
you want a proper cache-invalidation mechanism for all geo locations
you need 100% control over where data is stored
you need to count the number of requests
you have a custom authentication mechanism
For a number of reasons I recommend taking a look at Nginx S3 proxy.

Use AWS S3 vs Cloudfront

Since the Heroku file system is ephemeral, I am planning on using AWS for static assets for my Django project on Heroku.
I am seeing two conflicting articles, one of which advises using AWS S3. This one says to use S3:
https://devcenter.heroku.com/articles/s3
while the other one says S3 has disadvantages and recommends using the CloudFront CDN instead:
https://devcenter.heroku.com/articles/using-amazon-cloudfront-cdn
Many developers make use of Amazon’s S3 service for serving static assets that have been uploaded previously, either manually or by some form of build process. Whilst this works, this is not recommended as S3 was designed as a file storage service and not for optimal delivery of files under load. Therefore, serving static assets from S3 is not recommended.
Amazon CloudFront is a Content Delivery Network (CDN) that integrates with other Amazon Web Services like S3, giving us an easy way to distribute content to end users with low latency and high data transfer speeds.
CloudFront makes your static files available from data centers around the world (called edge locations). When a visitor requests a file from your website, he or she is invisibly redirected to a copy of the file at the nearest edge location (AWS now has around 35 edge locations spread across the world), which results in faster download times than if the visitor had accessed the content from an S3 bucket located in a particular region.
So if your user base is spread across the world, it's better to use CloudFront; if your users are localized, you will not find much difference between CloudFront and S3 (but in that case you need to choose the right location for your S3 bucket: US East, US West, Asia Pacific, EU, South America, etc.).
Comparative features of Amazon S3 and CloudFront
My recommendation is to use CloudFront on top of Whitenoise. You will be serving the static assets directly from your Heroku app, but CloudFront as the CDN will take over once you reach scale.
Whitenoise radically simplifies build processes and removes the need for convoluted caching headers.
Read http://whitenoise.evans.io/en/latest/ for the full manifesto.
(Note that Whitenoise is relevant only for static assets bundled with your app, not for user-uploaded files, which still require S3 for proper storage. You'd still want to use CloudFront in front of that, though.)
Actually, you should use both.
CloudFront only acts as a CDN, which basically means it caches resources in edge locations all over the world. In order for this to work, it has to initially download those resources from an origin location, whenever they expire or don't yet exist.
CloudFront distributions can have one of two origin types: an S3 bucket or a custom origin such as an EC2 server. In your case, you should store your assets in S3 and connect the bucket to a CloudFront distribution. Use the CloudFront links for actually serving the assets, and S3 for storage.
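In a Django project this typically boils down to a few settings. A hedged sketch assuming the django-storages package, with the bucket name and CloudFront domain as placeholders:

# Hedged sketch of Django settings (settings.py) for serving assets from S3 via
# CloudFront, assuming django-storages (pip install django-storages boto3).
INSTALLED_APPS = [
    # ... your other apps ...
    "storages",
]

AWS_STORAGE_BUCKET_NAME = "my-assets-bucket"
AWS_S3_CUSTOM_DOMAIN = "d1234abcd.cloudfront.net"  # CloudFront distribution domain

# collectstatic writes to S3; generated URLs point at the CloudFront domain
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/media/"

With AWS_S3_CUSTOM_DOMAIN set, the asset URLs Django renders should go through CloudFront while the files themselves still live in the bucket.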
This will ensure the best possible performance, as well as correct and scalable load handling.
Hope this helps, let me know if you need additional info in the comments section.