In AWS S3 you can upload a file, make it public, and get a URL to access it. You can also enable "Static Website Hosting" on the bucket. Can someone clarify the difference between these two approaches? If I can simply upload my HTML pages, make them public, and access them over HTTP in a browser, why would I need to enable static website hosting?
Enabling Static Website Hosting on S3 gives you features that plain public object URLs don't have: a custom domain name, custom error pages, index documents (index.html served for paths that end in /), and 301 redirects.
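For illustration, here is a minimal sketch of enabling those features with boto3 (the bucket name and document keys are hypothetical):

import boto3

s3 = boto3.client("s3")

# Enable static website hosting on a bucket (names are hypothetical).
# This activates the index-document and error-document behaviour that
# plain public object URLs do not have.
s3.put_bucket_website(
    Bucket="my-example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

The site is then served from the bucket's website endpoint (e.g. my-example-bucket.s3-website-us-east-1.amazonaws.com) rather than from the plain object URL.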
For others stumbling across this: one disadvantage of enabling Static Website Hosting is that the website endpoint you get is HTTP-only.
See the relevant docs. If you can live with the limitations of simply making the files public, such as no custom domain name, you get TLS for free; that matters because some browsers block plain-HTTP links on pages served over HTTPS.
I'm hosting a static (Next.js) site on a GCP bucket, with my domain CNAME (let's say example.com) pointing to GCP. When JavaScript is disabled, the links Next.js generates in my content point to URLs like:
<a href="/pages/1">Page 1</a>
but the actual file stored in the bucket is:
pages/1.html
which generates a 404 error when JavaScript is disabled and <Link> doesn't capture the click.
I'm aware of the specialty-page option MainPageSuffix in GCP, but I have it set to index.html, and I don't think it can be made to rewrite someaddress to someaddress.html (and even if it could, it would then not serve my root index.html correctly when I point my browser at example.com).
I'm also aware of the as option in Next.js, but if I use it like:
<Link
  href={`/pages/1`}
  as={`/pages/1.html`}
>
  Page 1
</Link>
it does not work when JavaScript is enabled and I'm serving the site locally with npm run dev (I suppose it confuses <Link>?).
Is there any way to make this work? I'm using Next.js v13.0.7
(Alternatively, is there any other (free-tier) option for hosting my site? I thought I could use Cloudflare Pages, but my static site has a lot of small pages, on the order of 100k, and Pages has a 20k-file limit.)
I've read the Traffic management overview for global external HTTP(S) load balancers and the URL maps overview, but I don't see how to do the following:
https://example.com/page ----> https://example.com/page.html
Is it possible to "remove" the .html extension from my URL with Google's global external HTTP(S) load balancer?
My website is hosted in a Google Cloud Storage bucket. I understand that I can use gsutil to set each file's metadata to Content-Type: text/html, and that is a viable workaround, but I would need to script it; I spent a couple of hours looking at that and never got it figured out. The script would basically need to recursively list all files with the .html extension, rename them to remove the extension, and then set the metadata.
URL rewrites allow you to present external users with URLs that are different from the URLs your services use. Although the documentation says rewrites provide URL shortening, extension removal isn't done through the load balancer; instead you can set each file's Content-Type metadata to "text/html", or use App Engine or Firebase Hosting to serve the static HTML site and hide the .html extension. The latter suggestion was discussed in another Stack Overflow post; with App Engine, for example, you can map extensionless URLs to .html files with handlers in app.yaml:
handlers:
- url: /contact
  static_files: www/contact.html
  upload: www/contact.html
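For the scripting approach the question mentions, here is a minimal sketch using the google-cloud-storage Python client (the bucket name is hypothetical): it lists the .html objects, renames them to drop the extension, and makes sure the Content-Type stays text/html.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # hypothetical bucket name

# Materialize the listing first so we don't iterate while renaming.
blobs = list(client.list_blobs(bucket))

for blob in blobs:
    if not blob.name.endswith(".html"):
        continue
    if blob.name.endswith("index.html"):
        continue  # leave index.html alone so MainPageSuffix keeps working
    new_name = blob.name[: -len(".html")]
    # rename_blob copies the object (metadata included) and deletes the original.
    new_blob = bucket.rename_blob(blob, new_name)
    # Set the Content-Type explicitly so browsers render the file as HTML.
    new_blob.content_type = "text/html"
    new_blob.patch()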
Many people have received hundreds of links to PoCs that live in an internal-facing bucket, and the links have this structure:
https://s3.amazonaws.com/bucket_name/
I added a redirect using the Static website hosting section under the bucket's Properties in the AWS console, and it ONLY redirects when the domain is formatted like this:
https://bucket_name.s3-website-us-east-1.amazonaws.com
Is this a bug with S3?
For now, how do I make it redirect for both types of links? My current workaround is to add a meta redirect tag to each HTML file.
Unfortunately, the s3-website endpoint is the only one that supports redirects. The s3.amazonaws.com endpoint assumes you are using S3 as a storage layer rather than as a website. If the link is to a specific object, you can place an HTML file at that URL with a JavaScript redirect, but beyond that there is really no way to achieve what you are trying to do.
In the future, I would recommend always setting up a CloudFront distribution for these kinds of use cases, as that will allow you to change the origin later on.
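To illustrate the answer's point, here is a minimal sketch of an object-level redirect via boto3 (the object key and target URL are hypothetical); the redirect metadata it sets is honored only on the s3-website endpoint, not on s3.amazonaws.com links:

import boto3

s3 = boto3.client("s3")

# Create a zero-byte object whose only job is to redirect.
# The redirect is honored only when the object is fetched through the
# bucket's s3-website endpoint; the s3.amazonaws.com URL just returns
# the (empty) object instead.
s3.put_object(
    Bucket="bucket_name",
    Key="old-page.html",  # hypothetical key
    WebsiteRedirectLocation="https://example.com/new-page",
    Body=b"",
)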
I am using Amazon S3 to serve static files for an application hosted on Heroku. I have made the S3 bucket public and enabled static website hosting. The issue is that I don't have an SSL certificate, so I need to serve the files over plain HTTP, but the URLs the static tag creates for my application's static files in templates don't resolve to the bucket. How should I fix this so I can access static files on my website without purchasing SSL?
settings.py
Custom_domain = 'xxx.s3-website-us-west-2.amazonaws.com'
STATIC_URL = "%s/" % Custom_domain
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
Similar settings for MEDIA_URL and DEFAULT_FILE_STORAGE.
This might help: Django AWS S3 tutorial
You need to give the full URL, including the protocol:
STATIC_URL = "http://%s/" % Custom_domain
In fact, it wouldn't work at all without the protocol; the browser would just interpret it as a relative path in the current domain.
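Putting it together, a minimal sketch of the relevant settings (the /media/ prefix is an assumption, not from the question):

# settings.py
Custom_domain = 'xxx.s3-website-us-west-2.amazonaws.com'

STATIC_URL = 'http://%s/' % Custom_domain
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

# Same idea for media files; the /media/ prefix here is an assumption.
MEDIA_URL = 'http://%s/media/' % Custom_domain
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'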
Note, though, that you can get a free SSL certificate, for example from Let's Encrypt, or, for S3 specifically, by putting a CloudFront distribution in front of the bucket with a free AWS Certificate Manager certificate.
This is a simple question that possibly applies to all CDNs, but I have not been able to find an answer on the web or on the AWS site (http://aws.amazon.com/cloudfront/). Hopefully the answer is simple for anyone familiar with CDNs or CloudFront, and it might help others understand how this works.
Suppose I use CloudFront for whole-site delivery and set up an origin server (e.g. the origin is www.myexample.com). If an HTML file (example1.html) served at www.myexample.com/example1.html contains an img tag whose src is http://www.anothersite.com/anotherExample.jpg, or even an S3 bucket source, does that JPG from the other source get cached in the CDN too?
You should connect the S3 bucket to a CloudFront distribution and use those links in the HTML code itself. I may be wrong, but I don't see how a CDN could cache those external resources, since the client browser itself requests them based on the HTML it downloaded from the CDN.
Hence, in your example, only requests for myexample.com would go through your CloudFront distribution without any additional origin settings.
Edit: see @Skill M2's comment regarding adding multiple origins to the same CloudFront distribution.