Amazon AWS S3 Site Update

I've looked through just about every related question on here that I can find and none of the suggested solutions seem to resolve my problem.
I'm currently hosting a static website on Amazon AWS, using only S3 and Route 53 to serve the site and to redirect a couple of different URLs to it. This morning I updated the CSS files used to style the webpage, uploaded a bunch of new image files, and made some minor updates to the HTML pages. All of my changes showed up immediately on the webpage except the changes I had made to my CSS file.
I've checked, and the version of the file on S3 is the correct, updated version, but when I inspect the displayed page with my browser's developer tools, it's still showing an older version of the CSS file. This doesn't make any sense to me, as all of the other changes show up immediately except for this particular file. Does anyone have any thoughts on how to fix this, or what could be going wrong?
NOTE: I'm not using AWS CloudFront for this webpage at all, so I don't believe the cache "invalidation" suggested elsewhere will help me. In the past, I've updated the files and seen the changes immediately when loading my webpage.

You already know this is a browser cache issue. You can clear your own cache, but if you want to force everyone to automatically get the new CSS, what I usually do is add a query parameter to the file include, i.e. instead of
<link href="~/css/plugins/thickbox/thickbox.css" rel="stylesheet" />
do this:
<link href="~/css/plugins/thickbox/thickbox.css?v=1001" rel="stylesheet" />
and you can up the 1001 each time you push out an update - the browser will automatically grab the new file.
Google 'cache-busting' for other options.
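If you'd rather not bump the number by hand, one option is to derive the version from the file's contents at build time, so the query string changes exactly when the file does. Here's a minimal Node sketch of that idea - the file names and the ?v= pattern are just illustrative, adjust them to your own layout:

// bump-css-version.js - rewrite the ?v= value in index.html with a
// short hash of the CSS file's current contents.
const fs = require('fs');
const crypto = require('crypto');

const css = fs.readFileSync('css/plugins/thickbox/thickbox.css');
const hash = crypto.createHash('md5').update(css).digest('hex').slice(0, 8);

const html = fs
  .readFileSync('index.html', 'utf8')
  .replace(/thickbox\.css\?v=[^"]*/g, 'thickbox.css?v=' + hash);
fs.writeFileSync('index.html', html);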

Redirect one file in Amazon S3 to external domain

Let's say I have an object at http://mybucket.s3.amazonaws.com/folder1/folder2/pdf/something.pdf
There are NO other files in the bucket as this was setup for a legacy link that is being used in an old marketing campaign.
I want that link to redirect to https://example.com (the screenshot has a different domain but SO won't let me post it here).
I tried setting up static hosting with variations of this, but can't seem to get it to work. Is there a way to do a redirect for this one file?
In the interim I had to add a blank PDF that just has a link for the user to click, which isn't ideal.
I think this guide will work for you.
Basically, it replaces the obsolete file with an HTML file while retaining the old name something.pdf. The HTML contains the meta header <meta http-equiv="Refresh" content="0; URL=http://www.example.com/target/">, which forces an instant redirect to the desired location. Make sure to edit the file's ContentType metadata to text/html so that the browser interprets it as HTML.
I tried this myself as well, it works!
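For concreteness, here's a minimal sketch of such a placeholder file - the target URL is just the example from above, and the fallback link is optional:

<!DOCTYPE html>
<html>
<head>
<!-- Uploaded to S3 under the old key something.pdf, with ContentType set to text/html -->
<meta http-equiv="Refresh" content="0; URL=http://www.example.com/target/">
</head>
<body>
<p>This document has moved. <a href="http://www.example.com/target/">Click here</a> if you are not redirected.</p>
</body>
</html>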
The other option seems to be "Redirect requests for an object". Instructions are in the official documentation.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
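Under the hood, that approach sets the x-amz-website-redirect-location metadata on the object. Here's a hedged sketch of doing the same thing programmatically with the AWS SDK for JavaScript (v3) - the bucket and key are taken from the question, the region is a placeholder, and note the redirect only takes effect when the object is requested through the bucket's website endpoint, not the REST endpoint:

// redirect-one-file.js
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

// Overwrite the object with an empty body; the WebsiteRedirectLocation
// metadata is what makes S3 issue a 301 to the external domain.
s3.send(new PutObjectCommand({
  Bucket: 'mybucket',
  Key: 'folder1/folder2/pdf/something.pdf',
  Body: '',
  WebsiteRedirectLocation: 'https://example.com',
})).catch(console.error);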

New requirements for SharedArrayBuffers on https://example.com/

Today I received an email from Google:
New requirements for SharedArrayBuffers on https://example.com/
Google systems have recently detected that SharedArrayBuffers (SABs)
are used on https://example.com/, but COOP and/or COEP headers are
not served.
For web compatibility reasons, Chrome is planning to require COOP/COEP
for the use of SABs from Chrome 91 (2021-05-25) onwards. Please
implement 'cross-origin-isolated' behaviour on your site.
I have been reading up on this all afternoon, but am totally lost!
My site makes a lot of use of things like:
Adverts from Freestar.io
Static content (JS, CSS and some images) hosted in an AWS bucket
Content from Youtube and Vimeo in iframes
Bootstrap CSS and JS and jQuery from various CDNs
I have checked the headers from the CDNs and can see that cross-origin-resource-policy is set to cross-origin. So, if I set these headers on my site:
Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin
then content from CDNs whose responses carry the cross-origin-resource-policy: cross-origin header can still be displayed, as long as I include the crossorigin attribute, e.g. here:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/css/bootstrap.min.css" crossorigin>
However, I have looked at various other sites, and they do not have those headers. Those sites include the following:
AWS
Freestar.io Advert
Youtube and Vimeo
My questions are:
Does anyone know if it is possible to configure an AWS S3 bucket so that the content it serves includes the cross-origin-resource-policy header? I have searched but cannot find anything explaining how to do that.
Will adverts and videos no longer be displayed once Chrome's change to require COOP/COEP for the use of SABs lands? If so, is that just something I am stuck with and can do nothing about, since I have no way to make the external sites include that header in the content they serve?

Next.js: How to make links work with exported sites when hosted on AWS Cloudfront?

I'm trying to get a prototype Next.js project up by doing a Static html export (i.e. next export) and then copying the generated output to AWS S3 and serving it via Cloudfront.
I've got the following two pages in the /pages directory:
index.tsx
Pricing.tsx
Then, following along from the routing docs, I added a Link to the pricing page from the index page, like so:
<Link href="/Pricing">
<a>Pricing</a>
</Link>
This results in a link that looks like example.com/Pricing (when you hover over it and when you click the link, the page does change to the pricing page and the browser shows example.com/Pricing in the URL bar).
The problem is that the link is not real - it cannot be bookmarked or navigated to directly via the URL bar.
The problem seems to be that when I do a next export, Next.js generates a .html file for each page, but the router doesn't use those .html suffixes.
So when using the site, if the user tries to bookmark example.com/Pricing, loading that bookmark later will fail because Cloudfront will return a 404 (because CF only knows about the .html file).
I then tried changing my Link to look like:
<Link href="/Pricing.html">
<a>Pricing</a>
</Link>
That causes the router to use example.com/Pricing.html and that works fine with Cloudfront - but it actually causes a 404 during local development (i.e. using next dev)!
Other workarounds I could try are renaming all the .html files to remove the extension before I upload them to S3 (making sure they get a content-type: text/html header), or introducing a Cloudfront lambda that renames on the fly when .html resources are requested. I don't really want to do the lambda thing, but the renaming before uploading shouldn't be too difficult.
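A minimal Node sketch of that rename-before-upload idea (out is next export's default output directory; this flat version ignores nested folders):

// strip-html-ext.js - drop the .html extension so /Pricing resolves on S3
const fs = require('fs');
const path = require('path');

const outDir = 'out';
for (const name of fs.readdirSync(outDir)) {
  // keep index.html as-is; S3 website hosting serves it for '/'
  if (name.endsWith('.html') && name !== 'index.html') {
    fs.renameSync(path.join(outDir, name), path.join(outDir, name.slice(0, -5)));
  }
}
// Remember to upload the extensionless files with Content-Type: text/html.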
But it feels like I'm really working uphill here. Am I doing something wrong at a basic level? How is Next.js linking supposed to work with a static html export?
Next.js version: 9.5.3-canary.23
Alternate answer if you want your URLs to be "clean" and not have .html on the end.
To get Next.js default URL links working properly with S3/Cloudfront, you must configure the "add a trailing slash" option in your next.config.js:
module.exports = {
  trailingSlash: true,
}
As per the documentation
Export pages as index.html files and require trailing slashes, /about becomes /about/index.html and is routable via /about/. This was the default behavior prior to Next.js 9.
So now you can leave your Link definition as:
<Link href="/Pricing">
<a>Pricing</a>
</Link>
This causes Next.js to do two things:
use the url example.com/Pricing/ - note the / on the end
generate each page as index.html in its own directory - e.g. /Pricing/index.html
Many HTML servers, in their default configuration, will serve up the index.html from inside the matching directory if they see a trailing / character in the URL.
S3 will do this also, if you have it set up to serve as a website, and only if you access the URL through the website endpoint, as opposed to the REST endpoint.
So your Cloudfront distribution origin must be configured as Origin type = Custom Origin, pointing at a domain like example.com.s3-website.us-east-1.amazonaws.com, not as an S3 Origin.
If you have your Cloudfront/S3 mis-configured, when you hit a "trailing slash" style URL - you will probably see your browser download a file of type binary/octet-stream containing 0 bytes.
Edit: Beware pages with . characters, as per issue 16617.
Follow-up to Shorn's self-answer of using the as field in the next/link component: this worked for me; however, it would fail if I refreshed the page I was on.
Instead, I used exportPathMap to link my pages to a page.html equivalent that would be created when running next export.
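Here's a rough sketch of what that looks like in next.config.js - exportPathMap is Next.js's own API, but the page names below are just the ones from this question:

// next.config.js
module.exports = {
  exportPathMap: async function (defaultPathMap) {
    return {
      '/': { page: '/' },
      '/Pricing.html': { page: '/Pricing' },
    };
  },
};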
The downside of this approach is that when running next start, those .html files will not be created or accessible. They will, however, be accessible from next dev. As I am creating a purely static website, I've now just been using next dev.
While making this change I was validating by manually copying my built assets from next export into S3 and hosting in CloudFront, as Shorn was doing. I no longer do this to validate and haven't had issues so far.
If anyone knows what else I may be missing by ignoring next start as part of development, let me know. This solution has worked for me so far, though.
After writing this question, I found a reasonable workaround - though I'm not sure if it's the "right" answer.
Change the Link to:
<Link href="/Pricing" as="/Pricing.html">
<a>Pricing</a>
</Link>
This seems to work in both local dev and for bookmarking the site as served by Cloudfront. Still feels kind of wonky, though. I kind of like those non-.html URLs better too. Oh well, maybe I'll do the renaming workaround instead.

Drupal 8 Generate all images for all styles

I'm working on a Drupal 8.6 multi-site installation, where every site has its own database, and I'm having a problem where the first time a piece of content is shared on Facebook it uses the wrong image.
The meta tag is configured correctly; it is something like this:
<meta property="og:image" content="https://xxxx.com/image.jpg?itok=w8tMeCC0" />
This image problem happens only on the first share, and I believe that's because the image derivative has not been generated yet at the moment of the first share.
I would like to know what I could do to force the image to be generated as soon as the content is published, and whether there is a way to create all the missing images.
I found this post and I'm trying to implement it in a module (I've never worked on Drupal before), but I don't even know how to schedule this piece of script to be executed.
Is there an existing module or setting that does that?
Thanks for any help!
Have you tried the Facebook Debugger?
https://developers.facebook.com/tools/debug/
Facebook usually caches the meta tags when a page is shared. What I usually do is run the page through the debugger at least once with the right meta tags configured, and make sure the page loads correctly there.
Afterwards, shares will load all the assets correctly.
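If you want to refresh Facebook's cache programmatically instead of through the debugger UI, the Graph API exposes the same re-scrape. A hedged sketch in Node - the page URL and access token are placeholders you'd replace with your own:

// rescrape.js - ask Facebook to re-scrape a URL's share metadata
const url = 'https://xxxx.com/some-article';
const token = 'YOUR_APP_ACCESS_TOKEN';

fetch('https://graph.facebook.com/?id=' + encodeURIComponent(url) +
      '&scrape=true&access_token=' + token, { method: 'POST' })
  .then((res) => res.json())
  .then(console.log)   // returns the freshly scraped og: metadata
  .catch(console.error);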

S3 Static Website Only Displays Index.html (but not other dependent files)

I've been messing around with AWS lately and it's definitely great. As a first test I'm trying to host the most basic static website via S3. The site is just one HTML file and a few JavaScript, CSS and image files.
Whenever I load the static URL, the only things that load are the index.html file and its contents, and, for some strange reason, the only image that loads is my avatar, even though all the images are stored in the same folder. All of the CSS, JS and image files are written as relative links, of course.
I've checked multiple times that all the file and folder permissions are set to "world".
I also looked at the network tab in dev tools, and it's giving me 200s on every GET request.
I'm completely stumped as to why this is happening. Does anyone have an idea of what I'm missing?
The url is available at http://www.mikefisher.io.s3-website-us-east-1.amazonaws.com/
I should add that the site works perfectly locally as well as on a traditional web server.
I checked my browser console and it gives me this error which I think might have something to do with it.
Resource interpreted as Stylesheet but transferred with MIME type binary/octet-stream:
Fixed it!
The issue I was having is that the metadata for the CSS files in Amazon S3 was set to 'binary/octet-stream' by default.
The way I fixed this was selecting the individual files in the bucket, clicking the Properties tab, then typing 'text/css' as the value in the metadata section.
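If you have a lot of CSS files, doing this one by one in the console gets tedious. Here's a hedged sketch of the same fix with the AWS SDK for JavaScript (v3) - the bucket and key names are placeholders; copying an object onto itself with MetadataDirective: 'REPLACE' is the standard way to rewrite its Content-Type in place:

// fix-css-content-type.js
const { S3Client, CopyObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function fixContentType(bucket, key) {
  // A self-copy with REPLACE rewrites the object's metadata,
  // including the Content-Type header S3 serves it with.
  await s3.send(new CopyObjectCommand({
    Bucket: bucket,
    CopySource: bucket + '/' + key,
    Key: key,
    ContentType: 'text/css',
    MetadataDirective: 'REPLACE',
  }));
}

fixContentType('my-site-bucket', 'css/styles.css').catch(console.error);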