Static website using S3, Route 53, and CloudFront displaying outdated content - amazon-web-services

After initially uploading the website, it was working perfectly. After some time, however, I uploaded a new index.html to my S3 bucket and saved the changes. After opening my website, it's still showing me the content of my old index.html page. Why?

Eventually CloudFront will pick up the new version (it caches files at many edge locations, which is the whole point of using it), but it can take a while for updated files to be distributed - usually longer than I like. So I either issue an AWS CLI command to create a CloudFront invalidation, or do it through the console.
Invalidating Files - Amazon CloudFront
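As a sketch, the invalidation described above can also be issued with the AWS SDK (boto3) instead of the CLI. The distribution ID and helper names below are illustrative, not from the original answer:

```python
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload CloudFront expects.
    CallerReference must be unique per request, so a timestamp works."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

def invalidate(distribution_id, paths=("/index.html",)):
    """Create a CloudFront invalidation (requires valid AWS credentials)."""
    import boto3  # deferred import: the pure helper above needs no AWS access

    client = boto3.client("cloudfront")
    return client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=build_invalidation_batch(paths),
    )

# Example call (placeholder distribution ID, needs credentials):
# invalidate("E1234567890ABC", paths=["/*"])
```

The equivalent CLI form is `aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"`. Note that `/*` invalidates everything; invalidating only the changed paths is cheaper once you exceed the free monthly invalidation quota.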

Related

CloudFront serving outdated content from Amazon S3?

I'm storing a static web site on Amazon S3 and delivering it via CloudFront. I pushed changes to S3, but CloudFront is still showing the old files. I created an invalidation in CloudFront with the path /*, which is supposed to clear the whole cache, but it still shows the old stuff. I'm wondering how to solve this.

Uploading images to S3 using the SDK, skipping CloudFront

Setup:
We are running an e-commerce website consisting of CloudFront --> ALB --> EC2. We serve the images from S3 via a CloudFront behavior.
Issue:
Our admin URL is like example.com/admin. We upload product images via the admin panel as a zip file that goes through CloudFront. Each zip file is around 100 MB-150 MB and contains around 100 images. While uploading the zip file we get a 502 gateway error from CloudFront, since the upload takes more than 30 seconds, which is the default timeout value for CloudFront.
Expected solution:
Is there a way we can skip CloudFront for uploading images only?
Is there any alternate way to increase the timeout value for CloudFront?
Note: Any recommended solutions are highly appreciated
CloudFront is a CDN service that helps you speed up your services by caching your static files at edge locations, so it won't help you on the upload side.
In my opinion, for the image-upload feature, you should use the AWS SDK to connect directly to S3.
If you want to upload files directly to S3 from the client, I highly suggest using S3 presigned URLs.
You create an endpoint in your API that generates a presigned URL for a given object (myUpload.zip), pass it back to the client, and use that URL to do the upload. It's safe, and you won't have to expose any credentials for uploading. Make sure to set the expiration time to something reasonable (e.g., one hour).
More on presigned URLs here: https://aws.amazon.com/blogs/developer/generate-presigned-url-modular-aws-sdk-javascript/

AWS CloudFront to EC2 with mixed PHP and static content

This may have been asked and answered elsewhere but I could not find the exact scenario.
I have an EC2 instance running a LAMP stack and serving PHP content. This all works.
I wanted to cache this content as it doesn't often change. It's WordPress, and CloudFront caching speeds things up significantly. So I've set up a distribution that points to the EC2 instance.
I also have a subdirectory that is all static HTML. For example, the base URL is mysite.com serving PHP content and mysite.com/data serves HTML pages with standard index.html pages in each subdirectory.
Hitting the CloudFront URL, the PHP content loads without fail. But hitting mysite.com/data returns a standard 502 error, as if the endpoint can't be reached.
Any ideas?
Is there a better way to set this up?
The more common AWS way to set this up is to use an S3 static website bucket as a second origin in CloudFront, with a /data/* path pattern (cache behavior) routing to that origin.
I believe CloudFront will provide example bucket policies in the web console, but getting all the permissions correct is a bit time consuming.
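As a sketch, that two-origin setup expressed as a CloudFormation fragment might look like this (all names and domains are illustrative; note that S3 *website* endpoints are configured as custom origins, not S3 origins):

```yaml
DistributionConfig:
  Origins:
    - Id: ec2-php-origin                # existing dynamic (WordPress) origin
      DomainName: mysite.example.com
      CustomOriginConfig:
        OriginProtocolPolicy: https-only
    - Id: s3-static-origin              # new static origin for /data
      DomainName: mysite-data.s3-website-us-east-1.amazonaws.com
      CustomOriginConfig:               # website endpoints only speak HTTP
        OriginProtocolPolicy: http-only
  CacheBehaviors:
    - PathPattern: /data/*
      TargetOriginId: s3-static-origin
      ViewerProtocolPolicy: redirect-to-https
      CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # AWS managed "CachingOptimized"
```

Requests matching /data/* are routed to the S3 website origin; everything else falls through to the default behavior pointing at EC2.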

AWS CloudFront not able to point to specific URL subpage

I am using React to develop our website, which I uploaded to an S3 bucket with both the index and error documents pointing to "index.html".
If I use the S3 bucket's website URL, say http://assets.s3-website-us-west-2.amazonaws.com, I get served my index.html. So far, so good. If I then go to a specific subpage by deliberately appending /merchant, it goes there without any problem, although there is no folder called /merchant in my S3 bucket.
However, if I attach this S3 bucket to my CloudFront distribution and try to directly address "https://blah.cloudfront.net/merchant", it responds with "access denied" because it could not find the subfolder /merchant in the S3 bucket.
How do people get around this issue with CloudFront? I have so many virtual subpages that don't map to physical folders.
Thank you!
I have the answer.
In CloudFront, set a custom error response like this:
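The original example didn't survive here, but the usual custom error response for single-page-app routing, expressed as a CloudFormation fragment (illustrative), maps the errors S3 returns for missing keys back to index.html so the client-side router can handle the path:

```yaml
DistributionConfig:
  CustomErrorResponses:
    - ErrorCode: 403            # S3 returns 403 for missing keys when ListBucket is denied
      ResponseCode: 200
      ResponsePagePath: /index.html
    - ErrorCode: 404
      ResponseCode: 200
      ResponsePagePath: /index.html
```

With this in place, https://blah.cloudfront.net/merchant serves index.html with a 200 status, and React Router renders the /merchant view.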

Does CloudFront support question marks in URLs to fetch a new resource

We are porting a website to an AWS CloudFront-backed infrastructure that serves static resources from an S3 bucket.
We access the resource like this
http://example.com/static/foo.css?hash=1
Our CMS generates a new hash when a file changes (cache busting).
So we upload the file foo.css to the S3 bucket and access
http://example.com/static/foo.css?hash=2
But the old contents are still displayed. It seems to be cached in CloudFront?
Can this be avoided?
Or does the cms need to be modified?
You have to configure your CloudFront distribution to forward query strings and include them as part of the cache key. Documented here.
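A sketch of that configuration as a CloudFront cache policy (CloudFormation fragment; the policy name and TTL values are illustrative):

```yaml
CachePolicyConfig:
  Name: static-with-query-strings
  MinTTL: 0
  DefaultTTL: 86400
  MaxTTL: 31536000
  ParametersInCacheKeyAndForwardedToOrigin:
    EnableAcceptEncodingGzip: true
    CookiesConfig:
      CookieBehavior: none
    HeadersConfig:
      HeaderBehavior: none
    QueryStringsConfig:
      QueryStringBehavior: all   # ?hash=1 and ?hash=2 become distinct cache keys
```

Attach the policy to the behavior serving /static/*, and foo.css?hash=2 will be fetched from the origin instead of served from the cache entry for foo.css?hash=1, so the CMS doesn't need to change.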