Django S3 static files serving

I am using Amazon S3 to serve static files for an application hosted on Heroku. I have made the S3 bucket public and enabled static website hosting. The issue is that I don't have an SSL certificate, so I need to access the bucket without HTTPS, but when the static tag creates URLs for my application's static files in templates, the protocol gets prepended automatically. How can I avoid this so I can access static files on my website without purchasing an SSL certificate?
settings.py
Custom_domain = 'xxx.s3-website-us-west-2.amazonaws.com'
STATIC_URL = "%s/" % Custom_domain
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
Similar settings are used for MEDIA_URL and DEFAULT_FILE_STORAGE.

This might help: Django AWS S3 tutorial

You need to give the full URL, including the protocol.
STATIC_URL = "http://%s/" % Custom_domain
In fact, it wouldn't work at all without the protocol; the browser would just interpret it as a relative path in the current domain.
Note, though, that you can easily get a free SSL certificate from Let's Encrypt.
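Putting it together, a minimal sketch of the relevant settings with the scheme spelled out (the bucket domain is a placeholder, and the media settings are assumptions based on the question's note above):
# settings.py (sketch; values are placeholders, not the asker's real config)
Custom_domain = 'xxx.s3-website-us-west-2.amazonaws.com'

# Give the full URL including the scheme, otherwise the browser treats it as a relative path
STATIC_URL = "http://%s/" % Custom_domain
MEDIA_URL = "http://%s/media/" % Custom_domain

# django-storages S3 backend used for collectstatic and file uploads
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'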

Related

Configure Google Cloud Platform bucket to serve example.com/page.html when user accesses example.com/page

I'm hosting a static (Next.js) site in a GCP bucket, with my domain's CNAME (let's say example.com) pointing to GCP. When JavaScript is disabled, the Next.js links in my generated content point to URLs like:
/pages/1
but the actual file stored in the bucket is:
pages/1.html
which returns a 404 error when JavaScript is disabled and <Link> doesn't capture the click.
I'm aware of the MainPageSuffix option in GCP, but I have it set to index.html, and I don't think it can be made to rewrite someaddress to someaddress.html (and even if it could, it would not serve my root index.html correctly when I point my browser to example.com).
I'm also aware of the as option in Next.js, but if I use it like:
<Link
  href={`/pages/1`}
  as={`/pages/1.html`}
>
  Page 1
</Link>
it will not work when JavaScript is enabled and I'm serving the site locally with npm run dev (I suppose it confuses <Link>?).
Is there any way to make this work? I'm using Next.js v13.0.7
(Alternatively, is there any other free-tier option to host my site? I thought I could use Cloudflare Pages, but my static site has a lot of small pages - on the order of 100k - and Pages has a file limit of 20k.)

Google Cloud HTTPS Load Balancer URL Rewrite To Remove .html Extension

I've read the Traffic management overview for global external HTTP(S) load balancers and the URL maps overview, but I don't see how to do the following:
https://example.com/page ----> https://example.com/page.html
Is it possible to "remove" the .html extension from my URL with Google's global external HTTP(S) load balancer?
My website is hosted on Google Cloud Storage (a bucket). I understand that I can use gsutil to set the files' metadata to Content-Type: text/html, and that is a viable workaround; however, I would need to script it, and after a couple of hours I never got it figured out. The script would basically need to recursively list all files with the .html extension, rename them to remove the extension, and then set the metadata.
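For reference, a rough sketch of the kind of script described above, using the google-cloud-storage Python client rather than gsutil. The bucket name is a placeholder, and the script copies objects instead of renaming them (delete the originals separately once the copies are verified):
# Sketch: copy every *.html object to an extension-less name and force
# Content-Type: text/html, so /page serves the content of page.html.
# Assumptions: google-cloud-storage is installed and credentials are configured.
from google.cloud import storage

BUCKET_NAME = "example-bucket"  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

for blob in client.list_blobs(BUCKET_NAME):
    # skip non-HTML objects and index pages (MainPageSuffix handles those)
    if not blob.name.endswith(".html") or blob.name.endswith("index.html"):
        continue
    new_name = blob.name[: -len(".html")]        # e.g. pages/1.html -> pages/1
    new_blob = bucket.copy_blob(blob, bucket, new_name)
    new_blob.content_type = "text/html"          # make browsers render it as HTML
    new_blob.patch()
    print(blob.name, "->", new_name)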
URL rewrites allow you to present external users with URLs that are different from the URLs your services use. Although the documentation says rewrites provide URL shortening, extension removal isn't done through the load balancer; instead, you can set the file's Content-Type metadata to "text/html", or use App Engine or Firebase Hosting to serve a static HTML website and hide the .html extension. The latter suggestion was discussed in another Stack Overflow post; the App Engine version uses handlers like these:
handlers:
- url: /contact
  static_files: www/contact.html
  upload: www/contact.html

AWS S3 Redirect only works on bucket as a subdomain not bucket as a directory

Many people have received hundreds of links to PoCs that live in an internal-facing bucket, and the links have this structure:
https://s3.amazonaws.com/bucket_name/
I added a redirect using AWS's Static website hosting section in Properties and it ONLY redirects when the domain is formatted like this:
https://bucket_name.s3-website-us-east-1.amazonaws.com
Is this a bug with S3?
For now, how do I make it redirect using both types of links? My current workaround is to add a meta redirect tag in each html file.
Unfortunately, the s3-website endpoint is the only one that supports redirects. Using s3.amazonaws.com assumes you are using S3 as a storage layer rather than as a website. If the link is to a specific object, you can place an HTML file at that URL with a JS redirect, but other than that there is really no way to achieve what you are trying to do.
In the future, I would recommend always setting up a CloudFront distribution for these kinds of use cases, as that allows you to change the origin later on.
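As a sketch of automating the per-object workaround mentioned above (the meta-redirect approach from the question), here is one way it might look with boto3. It replaces each object's content with a small redirect stub, so it should only run against the bucket you are redirecting away from; the bucket name, key filter, and target URL format are assumptions:
# Sketch: overwrite each legacy .html object with a tiny HTML stub that
# redirects the browser to the new location.
# Assumptions: boto3 is configured; BUCKET and NEW_BASE_URL are placeholders.
import boto3

BUCKET = "bucket_name"
NEW_BASE_URL = "https://bucket_name.s3-website-us-east-1.amazonaws.com"

REDIRECT_TEMPLATE = (
    '<!doctype html><html><head>'
    '<meta http-equiv="refresh" content="0; url={url}">'
    '</head><body><a href="{url}">Moved here</a></body></html>'
)

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".html"):
            continue
        target = f"{NEW_BASE_URL}/{key}"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=REDIRECT_TEMPLATE.format(url=target).encode("utf-8"),
            ContentType="text/html",
        )
        print("redirect stub written for", key)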

How to control the URL that Django generates?

How can I get Django to generate the static URLs using static.mywebsite.com instead of static.mywebsite.com.s3.amazonaws.com?
The S3 bucket used for static files works when I am using a bucket name with no periods. But with a bucket using periods, it gives Failed to load resource: net::ERR_INSECURE_RESPONSE
The reason I am trying to use a bucket with periods is that this guide and two Stack Overflow posts say it is required to get an address of static.mywebsite.com instead of mywebsite-static.s3.amazonaws.com:
http://carltonbale.com/how-to-alias-a-domain-name-or-sub-domain-to-amazon-s3/
how appoint a subdomain for a s3 bucket?
Amazon S3: Static Web Sites: Custom Domain or Subdomain
And it seems to work; when I browse to static.mywebsite.com, I get an XML from S3 saying AccessDenied. In the last link above, a user comments to ask if this will work with HTTPS. It initially appears that it does not.
Upon inspection, the URLs of the static files generated by Django still use the static.mywebsite.com.s3.amazonaws.com address. If I understand correctly, this is an HTTPS certificate issue due to the periods creating additional layers of subdomains.
So, I changed the settings to use:
STATIC_URL = 'https://%s.mywebsite.%s/' % ('static', 'com')
But that resulted in it generating static.s3.amazonaws.com.
How can I get Django to generate the URLs using static.mywebsite.com?
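One thing worth checking (an assumption about the setup, since the storage backend settings aren't shown): with django-storages, the S3 backend builds static file URLs from its own settings rather than from STATIC_URL, so the custom domain usually has to be given to the backend itself. A minimal sketch, with placeholder names:
# settings.py sketch; bucket name and domain are placeholders
AWS_STORAGE_BUCKET_NAME = 'static.mywebsite.com'
AWS_S3_CUSTOM_DOMAIN = 'static.mywebsite.com'   # domain the backend should put in URLs
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

# Keep STATIC_URL consistent with the custom domain
STATIC_URL = 'http://%s/' % AWS_S3_CUSTOM_DOMAIN
Serving that domain over HTTPS would still need a certificate in front of the bucket (for example via CloudFront), which is a separate issue from how Django generates the URLs.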

Difference between S3 public files and static websites

In AWS S3 you can upload a file and make it public, and you get a URL to access it. You can also enable "Static Website Hosting". Can someone clarify the difference between these two approaches? If I can simply upload my HTML pages, make them public, and access them over HTTP in a browser, why do I need to enable static website hosting?
Enabling Static Website Hosting on S3 allows you to use a custom domain name, custom error pages, index.html documents for paths that end in /, and 301 redirects.
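As a concrete illustration of those website-only features, enabling website hosting (index document, error document, and redirect rules) can be scripted. A minimal sketch with boto3, where the bucket name and object keys are placeholders:
# Sketch: turn on static website hosting for a bucket with boto3.
# Assumptions: boto3 is configured; "example-bucket" and the keys are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},   # served for paths ending in /
        "ErrorDocument": {"Key": "error.html"},      # custom error page
        "RoutingRules": [
            {
                "Condition": {"KeyPrefixEquals": "old/"},
                "Redirect": {"ReplaceKeyPrefixWith": "new/", "HttpRedirectCode": "301"},
            }
        ],
    },
)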
For others who are just stumbling across this, one disadvantage of enabling Static Website Hosting is the HTTP-only endpoint you get.
See the relevant docs. If you can work within the limitations of simply making the files public (such as no custom domain name), you get TLS for free, which matters because some browsers block HTTP links on pages served over HTTPS.