"Loading failed for the <script>" 404 Error - Svelte - github-pages

I am having an error with Svelte that prevents the loading of bundle.js, bundle.css, global.css and favicon.png on a website hosted on GitHub pages.
I have read through this question, but the issue persists in other browsers, without a VPN, with a clean Firefox profile, and with no extensions. It appears to be an issue with the content-type the files are served with.
The error message in the Firefox console is:
GET https://path/to/global.css [HTTP/2 404 Not Found 9ms]
GET https://path/to/bundle.css [HTTP/2 404 Not Found 9ms]
GET https://path/to/bundle.js [HTTP/2 404 Not Found 7ms]
Loading failed for the <script> with source “https://path/to/bundle.js”
GET https://path/to/favicon.png [HTTP/2 404 Not Found 7ms]
The Network tab shows the same 404 responses. The request and response headers for favicon.png are:
// Request
GET /favicon.png HTTP/2
Host: kitchefs.github.io
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0
Accept: image/webp,*/*
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://kitchefs.github.io/beta/
TE: Trailers
// Response
HTTP/2 404 Not Found
content-type: text/html; charset=utf-8
server: GitHub.com
strict-transport-security: max-age=31556952
etag: W/"5f4de496-313"
access-control-allow-origin: *
content-encoding: gzip
x-proxy-cache: MISS
x-github-request-id: EAA2:03EC:1BFD04:221937:5F816340
accept-ranges: bytes
date: Sat, 10 Oct 2020 07:39:07 GMT
via: 1.1 varnish
age: 475
The responses for the other files likewise report a content-type of text/html, even though none of the requests accept text/html.
I'm using GitHub Pages on a gh-pages branch, and I can confirm that the files exist. I'm wondering whether this is an issue with GitHub Pages or with Svelte, and what I can do to fix it and prevent it from occurring in the future.
If required, here is the code; the website can be accessed at https://kitchefs.github.io/beta.

Remove the leading / in your public/index.html:
<link rel='icon' type='image/png' href='favicon.png'>
<link rel='stylesheet' href='global.css'>
<link rel='stylesheet' href='build/bundle.css'>
<script defer src='build/bundle.js'></script>
GitHub Pages project sites are served under your repository's name in the URL, so your site is not at the domain root:
https://kitchefs.github.io/beta/
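To see why the leading / matters, here is a small illustration of how the browser resolves both kinds of paths (a sketch using Python's standard library; the URLs match the site above):
from urllib.parse import urljoin

base = "https://kitchefs.github.io/beta/"

# An absolute path ignores the /beta/ project path and resolves at the domain root:
print(urljoin(base, "/build/bundle.js"))  # https://kitchefs.github.io/build/bundle.js -> 404

# A relative path resolves under /beta/, where the file actually lives:
print(urljoin(base, "build/bundle.js"))   # https://kitchefs.github.io/beta/build/bundle.js -> 200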

I followed this guide: https://sveltesaas.com/articles/sveltekit-github-pages-guide/
It recommends using the docs folder as the build folder (via svelte.config.js) and setting up GitHub Pages to host from the docs folder.
I tried hosting from the root folder in GitHub Pages, but just got the 404 error messages. Using the docs folder solved the issue.

Related

Flask app hosted by CherryPy: OPTIONS returns 404

I have a restful API created using Flask and Flask-Restful. Everything works fine using the development server. All routes are in a Blueprint, though the particular route we're dealing with here is not a Flask-Restful one. It's just a normal Flask route.
I am also using Flask-CORS.
To deploy, everything is dockerized and I'm using CherryPy as the WSGI host. So a CherryPy app hosts the Flask app in a container.
I'm using Traefik as a reverse proxy in another container.
If I make the following request in Chrome by pasting the URL in, the GET request works:
https://api.my-app.new/api/admin/user?_end=10&_order=DESC&_sort=id&_start=0
However, if I attempt to make the same GET request from a React app, an OPTIONS preflight request is made, and it fails with a 404.
I've traced through it as best I can in PyCharm and the problem seems to be in the following code in Flask's app.py:
def preprocess_request(self):
    bp = _request_ctx_stack.top.request.blueprint
Basically, the blueprint is not found.
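For reference, the wiring is roughly of this shape (a simplified sketch with placeholder names, not my actual code):
from flask import Blueprint, Flask, jsonify
from flask_cors import CORS

api = Blueprint("api", __name__, url_prefix="/api")

@api.route("/admin/user")
def list_users():
    return jsonify([])

app = Flask(__name__)
CORS(app)  # Flask-CORS is expected to answer the OPTIONS preflight itself
app.register_blueprint(api)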
What is actually getting sent is seen in this curl call:
curl 'https://api.my-app.new/api/admin/user?_end=10&_order=DESC&_sort=id&_start=0' \
-X OPTIONS -H 'access-control-request-method: GET' -H 'origin: https://admin.my-app.new' \
-H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.9' \
-H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36' \
-H 'accept: */*' -H 'referer: https://admin.my-app.new/' -H 'authority: api.my-app.new' \
-H 'access-control-request-headers: authorization,content-type' --compressed
And if I print out the headers received by the Flask app (in an @app.before_request handler), I get the following:
[2018-01-25 04:31:48,438] INFO - X-Forwarded-Server: cb5d56692c6d
Referer: https://admin.my-app.new/
Accept-Language: en-US,en;q=0.9
Origin: https://admin.my-app.new
X-Real-Ip: 172.19.0.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36
Access-Control-Request-Headers: authorization,content-type
X-Forwarded-Proto: https
Host: api.my-app.new
Accept: */*
Access-Control-Request-Method: GET
X-Forwarded-Host: api.my-app.new
X-Forwarded-For: 172.19.0.1
X-Forwarded-Port: 443
Accept-Encoding: gzip, deflate, br
[2018-01-25 04:31:48,440] INFO - Error 404:/api/admin/user?_end=10&_order=DESC&_sort=id&_start=0: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. ("/app/app/routes.py:87")
Now, if I make the same request with the app running without the Traefik reverse proxy, it works. The only difference in the curl request is that I'm using http instead of https.
Here are the headers that Flask receives in this case:
[2018-01-24 21:43:43,199] INFO - Referer: https://admin.my-app.new/
Origin: https://admin.my-app.new
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36
Authority: api.my-app.new
Access-Control-Request-Headers: authorization,content-type
Host: api.my-app.new:8000
Accept: */*
Access-Control-Request-Method: GET
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
[24/Jan/2018 21:43:43] "OPTIONS /api/admin/user?_end=10&_order=DESC&_sort=id&_start=0 HTTP/1.1" 200 -
I figure that something in the headers is confusing Flask. I saw another post from 2014 that mentioned needing to set SERVER_NAME, but that didn't help.
Finally, I had originally used Nginx as a reverse proxy and had gotten everything working. One of the things that was hard to get working with Nginx was redirects and OPTIONS requests. I did get it working after madly chasing down various blog posts, but as I look back on it while writing this, I notice a curious thing: I wound up writing a rule in nginx.conf to automatically return 200 for all OPTIONS requests!
Any idea why the OPTIONS requests are failing?
As webKnjaZ notes: "It's a bug."
After digging in deeper, I discovered that the problem was that Flask was getting a different URL for the GET request than for the OPTIONS request, and that if CherryPy was removed the problem went away. That led me to this CherryPy issue, which describes my situation exactly:
https://github.com/cherrypy/cherrypy/issues/1662
The author noted that the bug appeared when moving from CherryPy 11 to 12 (I was on 13.x), so I tried downgrading to 11.0.0 and that fixed it.
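If you need to confirm you are hitting the same bug, one way to observe the mismatch from inside the app is to log the effective URL per request (a hypothetical diagnostic, assuming your Flask app object is named app; this is not from the original issue):
from flask import request

@app.before_request
def log_effective_url():
    # Under the affected CherryPy versions the OPTIONS preflight arrives with
    # a different URL than the matching GET, so Flask's URL map (and hence
    # the blueprint lookup) fails for the preflight only.
    app.logger.info("%s %s", request.method, request.url)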

Nginx proxy Amazon S3 resources

I'm performing some web performance optimization (WPO) tasks, and PageSpeed suggested that I leverage browser caching. I have set this up successfully for some static files on my Nginx server; however, my image files stored on Amazon S3 are still missing caching headers.
I have read about an approach that involves updating each file in S3 to include some header metadata (Expires and Cache-Control). I don't think this is a good approach: I have thousands of files, so it is not feasible for me.
I think a more convenient approach is to configure my Nginx 1.6.0 server to proxy the S3 files. I have read about this, but I'm not skilled at all in server configuration, so I took an example from this gist: https://gist.github.com/benjaminbarbe/1961db5ffbaad57eff12
I added this location code inside my server block in my nginx config file:
#inside server block
location /mybucket.s3.amazonaws.com/ {
    proxy_http_version 1.1;
    proxy_set_header Host mybucket.s3.amazonaws.com;
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers "Set-Cookie";
    proxy_buffering off;
    proxy_intercept_errors on;
    proxy_pass http://mybucket.s3.amazonaws.com;
}
This is not working for me, though: no caching header is added to the responses, so I suspect the requests are not matching the location block. These are the response headers I currently get straight from S3:
Accept-Ranges:bytes
Content-Length:90810
Content-Type:image/jpeg
Date:Fri, 23 Jun 2017 04:53:56 GMT
ETag:"4fd0be549fbcaf9b47c18a15146cdf16"
Last-Modified:Tue, 09 Jun 2015 09:47:13 GMT
Server:AmazonS3
x-amz-id-2:cKsq1qRra74DqVsTewh3P3sgzVUoPR8aAT2NFCuwA+JjCdDZfk7/7x/C0WPjBa51GEb4C8LyAIc=
x-amz-request-id:94EADB4EDD3DE1C1
Your approach of proxying S3 files via Nginx makes a lot of sense. It solves a number of problems and comes with extra benefits, such as masking URLs, proxy caching, and speeding up transfers by offloading SSL/TLS. You have it almost right; let me show what is left to make it perfect.
For the sample queries I use the S3 bucket and the image URL mentioned in a public comment on the original question.
We start by inspecting the headers of the Amazon S3 file:
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Date: Sun, 25 Jun 2017 17:49:10 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Content-Length: 378843
Server: AmazonS3
We can see that Cache-Control is missing, but the conditional GET headers are already configured. When we reuse ETag/Last-Modified (which is how a browser's client-side cache works), we get HTTP 304 along with an empty Content-Length. The interpretation is that the client (curl in our case) queries the resource while telling the server that no data transfer is required unless the file has been modified:
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-None-Match: 37a907fc5dd7cfd0c428af78f09e95a9"
HTTP/1.1 304 Not Modified
Date: Sun, 25 Jun 2017 17:53:33 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Server: AmazonS3
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-Modified-Since: Wed, 21 Jun 2017 07:42:31 GMT"
HTTP/1.1 304 Not Modified
Date: Sun, 25 Jun 2017 18:17:34 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Server: AmazonS3
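The same conditional GET is easy to reproduce in Python (a sketch assuming the third-party requests package; the URL and ETag are taken from the curl examples above):
import requests

url = ("http://yanpy.dev.s3.amazonaws.com/img/blog/"
       "sailing-routes-around-croatia-central-dalmatia-islands/"
       "yachts-anchored-paradise-cove-croatia-3.jpg")
resp = requests.get(url, headers={"If-None-Match": '"37a907fc5dd7cfd0c428af78f09e95a9"'})
print(resp.status_code)  # 304 while the file is unchanged; the body comes back empty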
"PageSpeed suggested to leverage browser caching" that means
Cache=control is missing. Nginx as proxy for S3 files solves
not only problem with missing headers but also saves traffic
using Nginx proxy cache.
I use macOS, but the Nginx configuration works on Linux in exactly the same way, without modifications. Step by step:
1. Install Nginx:
brew update && brew install nginx
2. Set up Nginx to proxy the S3 bucket; see the configuration below.
3. Request the file via Nginx. Take a look at the Server header: we now see Nginx rather than Amazon S3:
curl -I http://localhost:8080/s3/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:30:26 GMT
Content-Type: binary/octet-stream
Content-Length: 378843
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Accept-Ranges: bytes
Cache-Control: max-age=31536000
4. Request the file using the Nginx proxy with a conditional GET:
curl -I http://localhost:8080/s3/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-None-Match: 37a907fc5dd7cfd0c428af78f09e95a9"
HTTP/1.1 304 Not Modified
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:32:16 GMT
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Cache-Control: max-age=31536000
5. Request the file using the Nginx proxy cache. Take a look at the X-Cache-Status header: its value is MISS until the cache is warmed up by the first request (it is HIT below because the file has already been fetched once):
curl -I http://localhost:8080/s3_cached/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:40:45 GMT
Content-Type: binary/octet-stream
Content-Length: 378843
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Cache-Control: max-age=31536000
X-Cache-Status: HIT
Accept-Ranges: bytes
Based on the official Nginx documentation, I provide an Nginx S3 configuration with optimised caching settings that uses the following options:
proxy_cache_revalidate instructs Nginx to use conditional GET requests when refreshing content from the origin server;
the updating parameter of the proxy_cache_use_stale directive instructs Nginx to deliver stale content when clients request an item while an update to it is being downloaded from the origin server, instead of forwarding repeated requests to the server;
with proxy_cache_lock enabled, if multiple clients request a file that is not current in the cache (a MISS), only the first of those requests is allowed through to the origin server.
Nginx configuration:
worker_processes 1;
daemon off;
error_log /dev/stdout info;
pid /usr/local/var/nginx/nginx.pid;
events {
    worker_connections 1024;
}
http {
    default_type text/html;
    access_log /dev/stdout;
    sendfile on;
    keepalive_timeout 65;
    proxy_cache_path /tmp/ levels=1:2 keys_zone=s3_cache:10m max_size=500m
                     inactive=60m use_temp_path=off;
    server {
        listen 8080;
        location /s3/ {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Authorization '';
            proxy_set_header Host yanpy.dev.s3.amazonaws.com;
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-meta-server-side-encryption;
            proxy_hide_header x-amz-server-side-encryption;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            proxy_intercept_errors on;
            add_header Cache-Control max-age=31536000;
            proxy_pass http://yanpy.dev.s3.amazonaws.com/;
        }
        location /s3_cached/ {
            proxy_cache s3_cache;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Authorization '';
            proxy_set_header Host yanpy.dev.s3.amazonaws.com;
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-meta-server-side-encryption;
            proxy_hide_header x-amz-server-side-encryption;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            proxy_cache_revalidate on;
            proxy_intercept_errors on;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_lock on;
            proxy_cache_valid 200 304 60m;
            add_header Cache-Control max-age=31536000;
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://yanpy.dev.s3.amazonaws.com/;
        }
    }
}
Without knowing which modules your Nginx is compiled with, I can suggest two ways of adding Expires and Cache-Control headers to all files.
Nginx S3 proxy
This is what you asked about: using Nginx to add Expires and Cache-Control headers to S3 files.
Nginx needs the set-misc-nginx-module to act as an S3 proxy and to change or add Expires and Cache-Control headers on the fly. There is a standard full guide covering everything from compilation to usage, a great guide to nginx-extras for Ubuntu Server, and a full guide with a WordPress example.
There are more S3 modules for extra functionality. Without those modules Nginx will not understand the directives, yet the config test (nginx -t) will pass even with a wrong config. set-misc-nginx-module is the minimum for your needs. There is a better example of what you want in this GitHub gist.
As not everyone compiles Nginx from source, and the setup is slightly tricky, I am also describing how to set the Expires and Cache-Control headers for all files in one Amazon S3 bucket.
Amazon S3 Bucket Expires and Cache-Control Header
It is also possible to set Expires and Cache-Control headers for all objects in one AWS S3 bucket with a script or from the command line. There are several free libraries and scripts for this on GitHub, like this one, Bucket Explorer, Amazon's tool, and Amazon's docs (this doc and this doc). With the aws s3 cp CLI tool, the command looks like this:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2027-09-01T00:00:00Z --acl public-read --cache-control max-age=2000000,public
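If you prefer a script over the CLI, a rough boto3 equivalent might look like this (a sketch; "mybucket" is a placeholder, and copying each object onto itself with MetadataDirective=REPLACE is what rewrites the headers):
from datetime import datetime

import boto3

s3 = boto3.client("s3")
bucket = "mybucket"  # placeholder, as in the CLI command above

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # Copy the object onto itself, replacing its metadata and headers
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            MetadataDirective="REPLACE",
            CacheControl="max-age=2000000,public",
            Expires=datetime(2027, 9, 1),
            ACL="public-read",
        )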
From an architectural review standpoint, what you're trying to do is the wrong way to go about it:
Amazon S3 is presumably optimised to be a highly available cache; by introducing a hand-rolled proxying layer on top of it, you're merely adding an unnecessary extra delay and a huge point of failure, and losing all the benefits that come with S3.
Your performance analysis with regard to the number of files is incorrect. If you have thousands of files on S3, the correct solution is to write a one-time script to change the requisite attributes on S3, instead of hand-rolling a proxying mechanism that you don't fully understand and that would be executed many times over (ad nauseam). Proxying would likely be a band-aid and, in reality, will likely decrease performance, not increase it (even if a stateless automated tool tells you otherwise). Not to mention that it would be an unnecessary resource drain and may contribute to actual performance issues and heisenbugs down the line.
That said, if you still want to proxy and add the headers, the correct way to do so with Nginx is the expires directive.
E.g., you may place expires max; before or after your proxy_pass directive within the appropriate location.
The expires directive automatically takes care of setting a correct Cache-Control header for you, too, but you could also use the add_header directive should you wish to add any custom response headers manually.
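Whichever approach you take, the resulting headers are easy to verify (a sketch using Python's standard library, assuming the localhost proxy setup from the earlier answer):
import urllib.request

url = ("http://localhost:8080/s3_cached/img/blog/"
       "sailing-routes-around-croatia-central-dalmatia-islands/"
       "yachts-anchored-paradise-cove-croatia-3.jpg")
with urllib.request.urlopen(url) as resp:
    print(resp.headers.get("Cache-Control"))   # e.g. max-age=31536000
    print(resp.headers.get("X-Cache-Status"))  # MISS on the first request, then HIT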

Origin Cache-Control not working on AWS Cloudfront

I am serving my images using AWS CloudFront. The origin images' headers include Cache-Control settings, but these headers are not being transferred to AWS. I have checked the AWS documentation and I think my CloudFront settings are correct:
Settings Object Caching: Use Origin Cache Headers
I have created a page where you can see the same image loaded directly from its origin and loaded through CloudFront. As you can see, the second image doesn't include the Cache-Control header:
https://www.fanaticguitars.com/cache-control-test.php
Any suggestion?
Thank you.
The misconfiguration is on your server, not on CloudFront.
If I connect to your www server but lie to it and tell it I'm asking for img rather than www by setting the HTTP Host: header (which is what CloudFront does when it fetches content, if you have the Host: header whitelisted in the cache behavior), your server doesn't return Cache-Control headers, even though it does (twice!) when the request is targeted at www.
This is a connection to your server, not to CloudFront:
$ curl -v https://www.fanaticguitars.com/v2/avatar.png -H 'Host: img.fanaticguitars.com' > /dev/null
> GET /v2/avatar.png HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Accept: */*
> Host: img.fanaticguitars.com
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Thu, 09 Mar 2017 16:49:31 GMT
< Content-Type: image/png
< Content-Length: 9915
< Last-Modified: Wed, 01 Mar 2017 21:46:59 GMT
< Connection: close
< Accept-Ranges: bytes
<
* Closing connection #0
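The same experiment can be reproduced in Python (a sketch using only the standard library; it connects to the www server while sending the img Host header, just like the curl call above):
import http.client

conn = http.client.HTTPSConnection("www.fanaticguitars.com")
conn.request("GET", "/v2/avatar.png", headers={"Host": "img.fanaticguitars.com"})
resp = conn.getresponse()
# No Cache-Control comes back for the img host, confirming the
# misconfiguration is on the origin server, not on CloudFront:
print(resp.status, resp.getheader("Cache-Control"))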

Maxmind geoipupdate mmdb.gz is not a valid gzip file

I'm trying to update my GeoIP databases using geoipupdate, with the following ProductIds:
ProductIds GeoIP2-City GeoIP2-Connection-Type GeoIP2-Country GeoIP2-ISP
geoipupdate version 2.3.1 is installed:
geoipupdate -V
geoipupdate 2.3.1
When I run geoipupdate, the City and Country databases are updated/downloaded, but for the ISP and Connection-Type updates I get the following output from the updater:
url: https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=0926a7ab0bf38eafe43622a25fd6e7e2&user_id=*****&edition_id=GeoIP2-Connection-Type
/usr/share/GeoIP/GeoIP2-Connection-Type.mmdb.gz is not a valid gzip file
url: https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=0926a7ab0bf38eafe43622a25fd6e7e2&user_id=*****&edition_id=GeoIP2-ISP
/usr/share/GeoIP/GeoIP2-ISP.mmdb.gz is not a valid gzip file
Why aren't the downloaded .gz files valid?
Thanks in advance!
I got the same error, but the reason was an expired subscription.
/my_path/data/GeoIP2-City.mmdb.gz is not a valid gzip file
I'd suggest you call the URL generated by geoipupdate to see the "real error".
curl -i "https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=f2d60fa3afdaa26b18dd94457a999999&user_id=******&edition_id=GeoIP2-City"
HTTP/1.1 200 OK
Date: Tue, 21 Feb 2017 08:38:13 GMT
Content-Length: 59
Content-Type: text/plain; charset=utf-8
Invalid product ID or subscription expired for GeoIP2-City
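A quick way to see what the downloaded "gzip" file actually contains is to try to decompress it and fall back to printing it as text (a sketch; the path matches the updater output above, and gzip.BadGzipFile needs Python 3.8+):
import gzip

path = "/usr/share/GeoIP/GeoIP2-ISP.mmdb.gz"
try:
    with gzip.open(path) as f:
        f.read(1)  # raises if the file is not really gzip
except gzip.BadGzipFile:
    # The server returned a plain-text error instead of a database, e.g.
    # "Invalid product ID or subscription expired for GeoIP2-ISP"
    with open(path) as f:
        print(f.read())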

Scrapy use proxy and get twisted error

I found that some pages I crawl are slow, while visiting them through Goagent is relatively fast, so I run this before I start my spider:
export http_proxy=http://192.168.1.102:8087
Yet, when I start the spider it reports this:
[<twisted.python.failure.Failure <class 'twisted.web._newclient.ParseError'>>]
To validate the proxy, I run this curl command:
curl -I -x 192.168.1.102:8087 http://www.blabla.com/target/page.php
and the output headers seem quite normal to me:
HTTP/1.1 200
Content-Length: 0
Via: HTTP/1.1 GWA
Content-Encoding: gzip
X-Powered-By: PHP/5.3.3
Vary: Accept-Encoding
Server: Apache/2.2.15 (CentOS)
Connection: close
Date: Sun, 30 Mar 2014 16:49:29 GMT
Content-Type: text/html
I tried adding this to Scrapy's settings.py:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 100,
}
Still, no luck. Is this a problem with Scrapy, or am I missing something else?
My Scrapy version is 0.22.2.
Try enabling both http_proxy and https_proxy:
export http_proxy=http://192.168.1.102:8087
export https_proxy=http://192.168.1.102:8087
Also, I guess your Twisted version is 15.0.0; that version has a problem with HTTPS through a proxy.
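Alternatively, the proxy can be set per request via request.meta, which HttpProxyMiddleware also honours (a sketch with a placeholder spider, using the import paths of that Scrapy era and the proxy address from the question):
from scrapy.http import Request
from scrapy.spider import Spider

class ProxySpider(Spider):
    name = "proxy_example"

    def start_requests(self):
        # meta['proxy'] takes precedence over the http_proxy environment variable
        yield Request(
            "http://www.blabla.com/target/page.php",
            meta={"proxy": "http://192.168.1.102:8087"},
        )

    def parse(self, response):
        self.log("Got response with status %s" % response.status)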