MaxMind geoipupdate: mmdb.gz is not a valid gzip file

I'm trying to update my GeoIP databases using geoipupdate, with the following ProductIds configured:
ProductIds GeoIP2-City GeoIP2-Connection-Type GeoIP2-Country GeoIP2-ISP
geoipupdate version 2.3.1 is installed:
geoipupdate -V
geoipupdate 2.3.1
When I run geoipupdate, the City and Country databases are downloaded and updated, but for the ISP and Connection-Type editions I get the following output from the updater:
url: https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=0926a7ab0bf38eafe43622a25fd6e7e2&user_id=*****&edition_id=GeoIP2-Connection-Type
/usr/share/GeoIP/GeoIP2-Connection-Type.mmdb.gz is not a valid gzip file
url: https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=0926a7ab0bf38eafe43622a25fd6e7e2&user_id=*****&edition_id=GeoIP2-ISP
/usr/share/GeoIP/GeoIP2-ISP.mmdb.gz is not a valid gzip file
Why aren't the downloaded .gz files valid?
Thanks in advance!

I got the same error, but the reason was an expired subscription.
/my_path/data/GeoIP2-City.mmdb.gz is not a valid gzip file
I'd suggest calling the URL generated by geoipupdate to see the "real" error:
curl -i "https://updates.maxmind.com/app/update_secure?db_md5=00000000000000000000000000000000&challenge_md5=f2d60fa3afdaa26b18dd94457a999999&user_id=******&edition_id=GeoIP2-City"
HTTP/1.1 200 OK
Date: Tue, 21 Feb 2017 08:38:13 GMT
Content-Length: 59
Content-Type: text/plain; charset=utf-8
Invalid product ID or subscription expired for GeoIP2-City
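If the subscription for specific editions has lapsed, renewing it should fix this. Alternatively, removing the expired editions from the ProductIds line in /etc/GeoIP.conf lets the remaining updates succeed; a minimal sketch, keeping only the editions the account is still licensed for (edition names taken from the question):
# /etc/GeoIP.conf - drop the editions whose subscription has lapsed
ProductIds GeoIP2-City GeoIP2-Country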

"Loading failed for the <script>" 404 Error - Svelte

I am having an error with Svelte that prevents the loading of bundle.js, bundle.css, global.css, and favicon.png on a website hosted on GitHub Pages.
I have read through this question, but the issue persists in other browsers, without a VPN, with a clean Firefox profile, and with no extensions; it seems to be an issue with the content type of the served files.
The error message in the Firefox console is:
GET https://path/to/global.css [HTTP/2 404 Not Found 9ms]
GET https://path/to/bundle.css [HTTP/2 404 Not Found 9ms]
GET https://path/to/bundle.js [HTTP/2 404 Not Found 7ms]
Loading failed for the <script> with source “https://path/to/bundle.js”
GET https://path/to/favicon.png [HTTP/2 404 Not Found 7ms]
The Network tab shows the same 404s. The request and response headers for favicon.png:
// Request
GET /favicon.png HTTP/2
Host: kitchefs.github.io
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0
Accept: image/webp,*/*
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://kitchefs.github.io/beta/
TE: Trailers
// Response
HTTP/2 404 Not Found
content-type: text/html; charset=utf-8
server: GitHub.com
strict-transport-security: max-age=31556952
etag: W/"5f4de496-313"
access-control-allow-origin: *
content-encoding: gzip
x-proxy-cache: MISS
x-github-request-id: EAA2:03EC:1BFD04:221937:5F816340
accept-ranges: bytes
date: Sat, 10 Oct 2020 07:39:07 GMT
via: 1.1 varnish
age: 475
The other responses likewise have a content-type of text/html, even though none of the requests accept text/html.
I'm using GitHub Pages on a gh-pages branch, and I can confirm that the files exist. I'm wondering whether this is an issue with GitHub Pages or with Svelte, and what I can do to fix it and prevent it from occurring in the future.
If required, here is the code and the website can be accessed at https://kitchefs.github.io/beta.
Remove the leading / in your public/index.html:
<link rel='icon' type='image/png' href='favicon.png'>
<link rel='stylesheet' href='global.css'>
<link rel='stylesheet' href='build/bundle.css'>
<script defer src='build/bundle.js'></script>
GitHub Pages serves project sites under your repository name, so your site is not at the domain root. With the leading slash removed, the assets resolve relative to the page:
https://kitchefs.github.io/beta/
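To verify the fix once the site has redeployed, you can request one of the assets against the project path (a quick check, reusing the URL from the question):
curl -I https://kitchefs.github.io/beta/build/bundle.js
A 200 response (instead of the 404 text/html error page) confirms the paths now resolve.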
I followed this guide: https://sveltesaas.com/articles/sveltekit-github-pages-guide/
It recommends using the docs folder as the build folder (via svelte.config.js) and setting up GitHub Pages to host from the docs folder.
I tried hosting from the root folder in GitHub Pages but just got the 404 errors. Using the docs folder solved the issue.

Trying to update GeoIP gets error -21 - it had been working until recently

I have been downloading the geoip lite database for a long time. However, something has changed somewhere that is causing an error -21. This is the verbose output:
sudo geoipupdate -v
Opened License file /etc/GeoIP.conf
Read in license key 000000000000
number of product ids 2
Connecting to MaxMind GeoIP server
via Host or Proxy Server: updates.maxmind.com:80
sending request GET /app/update_getfilename?product_id=506 HTTP/1.0
Host: updates.maxmind.com
database product id 506 database file name /usr/share/GeoIP/GeoLiteCountry.dat
/usr/share/GeoIP/GeoLiteCountry.dat can't be opened, proceeding to download database
MD5 sum of database /usr/share/GeoIP/GeoLiteCountry.dat is 0000000000000000000000000000000
Connecting to MaxMind GeoIP Update server
sending request GET /app/update_getipaddr HTTP/1.0
Host: updates.maxmind.com
client ip address: 162.230.29.192
md5sum of ip address and license key is b2e7d4d48d92ec691a3f67b6d861e1bb
sending request GET /app/update_secure?db_md5=0000000000000000000000000000000&challenge_md5=b2e7d4d48d92ec691a3f67b6d861e1bb&user_id=999999&edition_id=506 HTTP/1.0
Host: updates.maxmind.com
Downloading gzipped GeoIP Database...
Done
Updating /usr/share/GeoIP/GeoLiteCountry.dat
Saving gzip file to /usr/share/GeoIP/GeoLiteCountry.dat.gz ... download data to a gz file named /usr/share/GeoIP/GeoLiteCountry.dat.gz
Done
Uncompressing gzip file ... Done
Performing sanity checks ... Database type is 1
database_info FAIL null
Received Error -21 (Sanity check database_info string failed) when attempting to update GeoIP Database
Connecting to MaxMind GeoIP server
via Host or Proxy Server: updates.maxmind.com:80
sending request GET /app/update_getfilename?product_id=533 HTTP/1.0
Host: updates.maxmind.com
database product id 533 database file name /usr/share/GeoIP/GeoLiteCity.dat
/usr/share/GeoIP/GeoLiteCity.dat can't be opened, proceeding to download database
MD5 sum of database /usr/share/GeoIP/GeoLiteCity.dat is 0000000000000000000000000000000
md5sum of ip address and license key is b2e7d4d48d92ec691a3f67b6d861e1bb
sending request GET /app/update_secure?db_md5=0000000000000000000000000000000&challenge_md5=b2e7d4d48d92ec691a3f67b6d861e1bb&user_id=999999&edition_id=533 HTTP/1.0
Host: updates.maxmind.com
Downloading gzipped GeoIP Database...
Done
Updating /usr/share/GeoIP/GeoLiteCity.dat
Saving gzip file to /usr/share/GeoIP/GeoLiteCity.dat.gz ... download data to a gz file named /usr/share/GeoIP/GeoLiteCity.dat.gz
Done
Uncompressing gzip file ... Done
Performing sanity checks ... Database type is 1
database_info FAIL null
Received Error -21 (Sanity check database_info string failed) when attempting to update GeoIP Database
It is not clear to me whether the .dat files are failing to download (each gets that "can't be opened" message, which may be normal when the file doesn't exist yet), or whether something in the unzip step is causing the sanity check to fail. Can someone help me figure this out? TIA.
GeoLite Legacy has been discontinued. The error given by your version of geoipupdate is not particularly useful; newer versions of geoipupdate report "404 Not Found: Database edition not found" instead.
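To keep updates working, the usual route is to create a MaxMind account, upgrade geoipupdate, and switch to the GeoLite2 editions, which replaced the legacy country/city databases. A minimal sketch of the newer /etc/GeoIP.conf format (the AccountID and LicenseKey values are placeholders):
# /etc/GeoIP.conf for geoipupdate 3.x and later
# GeoLite2 editions replace the discontinued legacy product IDs (506, 533)
AccountID 999999
LicenseKey 000000000000
EditionIDs GeoLite2-Country GeoLite2-City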

How to hide the Python version from being displayed in responses from a Python web server

I've been running a few Python web servers on my host. A request to the host returns revealing information, such as the Python version, in the response headers.
For example, take websockify. A curl request to its port returns:
< HTTP/1.1 200 OK
< Server: WebSockify Python/2.7.5
< Date: Tue, 18 Jul 2017 00:13:59 GMT
How do I hide the Python version details in these headers?
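One approach, assuming the server's handler is built on the standard library's BaseHTTPRequestHandler (which websockify's appears to be): the Server: header is produced by the handler's version_string() method, which concatenates server_version and sys_version, so overriding those fields in a subclass hides the Python version. A minimal, generic sketch (the module is http.server on Python 3 and was BaseHTTPServer on Python 2):
from http.server import BaseHTTPRequestHandler, HTTPServer

class QuietHandler(BaseHTTPRequestHandler):
    # version_string() returns server_version + ' ' + sys_version;
    # sys_version is what leaks 'Python/2.7.5', so blank it out
    server_version = 'WebServer'
    sys_version = ''

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'ok\n')

if __name__ == '__main__':
    HTTPServer(('', 8000), QuietHandler).serve_forever()
For websockify specifically, you would apply the same override to the handler class it exposes rather than to BaseHTTPRequestHandler directly.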

Scrapy use proxy and get twisted error

I found that some pages I am crawling are slow, while visiting them through GoAgent is relatively fast, so I ran this before starting my spider:
export http_proxy=http://192.168.1.102:8087
Yet when I start the spider, it reports this:
[<twisted.python.failure.Failure <class 'twisted.web._newclient.ParseError'>>]
To validate the proxy, I ran this curl command:
curl -I -x 192.168.1.102:8087 http://www.blabla.com/target/page.php
and the response headers look quite normal to me:
HTTP/1.1 200
Content-Length: 0
Via: HTTP/1.1 GWA
Content-Encoding: gzip
X-Powered-By: PHP/5.3.3
Vary: Accept-Encoding
Server: Apache/2.2.15 (CentOS)
Connection: close
Date: Sun, 30 Mar 2014 16:49:29 GMT
Content-Type: text/html
I tried adding this to Scrapy's settings.py:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 100,
}
Still no luck. Is this a problem with Scrapy, or am I missing something else?
My Scrapy version is 0.22.2.
Try enabling both http_proxy and https_proxy:
export http_proxy=http://192.168.1.102:8087
export https_proxy=http://192.168.1.102:8087
I also guess your Twisted is 15.0.0; that version has a known issue with HTTPS through a proxy.
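If the environment variables still aren't picked up, you can also set the proxy per request: HttpProxyMiddleware honors request.meta['proxy']. A minimal sketch (the spider name is a placeholder; on Scrapy 0.22 the imports are scrapy.spider.Spider and scrapy.http.Request rather than the scrapy shortcuts used here):
import scrapy

class PageSpider(scrapy.Spider):
    name = 'page'

    def start_requests(self):
        # HttpProxyMiddleware routes this request through the given proxy
        yield scrapy.Request(
            'http://www.blabla.com/target/page.php',
            meta={'proxy': 'http://192.168.1.102:8087'},
        )

    def parse(self, response):
        self.log('status: %s' % response.status)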

Google Cloud Storage propagation

How long does it take for a change to a file in Google Cloud Storage to propagate?
I'm having this very frustrating problem where I change the contents of a file and re-upload it via gsutil, but the change doesn't show up for several hours. Is there a way to force a changed file to propagate everywhere immediately?
If I look at the file in the Google Cloud Storage console, it shows the new file, but if I hit the public URL I get an old version, and in some cases one from two versions ago.
Is there a header that I'm not setting?
EDIT:
I tried gsutil -h "Cache-Control: no-cache" cp -a public-read MyFile and it doesn't help, but maybe the old file needs to expire before the new no-cache version takes over?
I did a curl -I on the file and get this back:
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Dec 12 2012 15:53:08 (1355356388)
Expires: Fri, 21 Dec 2012 19:58:39 GMT
Date: Fri, 21 Dec 2012 18:58:39 GMT
Last-Modified: Fri, 21 Dec 2012 18:53:41 GMT
ETag: "66d820174d6de17a278b327e4c3e9b4e"
x-goog-sequence-number: 3
x-goog-generation: 1356116021512000
x-goog-metageneration: 1
Content-Type: application/octet-stream
Content-Language: en
Accept-Ranges: bytes
Content-Length: 160
Cache-Control: public, max-age=3600, no-transform
Age: 3449
Which seems to indicate it will expire in an hour, despite the no-cache.
Google Cloud Storage provides strong data consistency: once a write completes, a read from anywhere in the world will get the most recent data.
However, if you enable caching (which by default is true for any publicly readable object), reads of that object can see a version of the object as old as the Cache-Control max-age specified on the object. If, for example, you uploaded the file like this:
gsutil cp -a public-read file gs://my_bucket/file
You can see that the max-age is 1 hour (3600 seconds):
gsutil ls -L gs://my_bucket/file
gs://my_bucket/file:
Creation time: Fri, 21 Dec 2012 19:59:57 GMT
Cache-Control: public, max-age=3600, no-transform
Content-Length: 1065
Content-Type: text/plain
ETag: eb3fb83beedf1efffe5b8e32e8d6a65a
...
If you want to prevent a publicly readable object from being cached, you could do:
gsutil setmeta -h Cache-Control:no-cache gs://my_bucket/file
Alternatively, you could set a shorter max-age on the object:
gsutil setmeta -h 'Cache-Control:public, max-age=600, no-transform' gs://my_bucket/file
Mike Schwartz, Google Cloud Storage team
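A quick way to verify the new metadata took effect (a follow-up check, reusing the curl -I approach from the question and the my_bucket/file object assumed above): request the public URL and confirm the Cache-Control header changed. Note that copies already cached may still be served until the previously set max-age elapses.
curl -I https://storage.googleapis.com/my_bucket/file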