How can I convert MaxMind's MMDB GeoIP database to the legacy DAT format so that I can use it with ModSecurity + Apache? ModSecurity supports only the DAT format.
As of February 2019, the following Python script is the best option for converting GeoIP2 MMDB format to legacy .dat format:
https://github.com/sherpya/geolite2legacy
Using this script, somebody has done the conversion and made the resulting .dat files available for download:
https://www.miyuru.lk/geoiplegacy
The Legacy GeoIP builds (.dat) are not going away in the near future. If they do ever go away, you could build off of the .dat build program that Debian uses for its GeoLite databases (copy of it on GitHub) or this (untested) Python script.
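Once you have a converted .dat, it can be worth sanity-checking it before pointing ModSecurity's SecGeoLookupDb at it. A minimal sketch using the legacy pygeoip reader (the file name and test IP here are just examples, and pygeoip is assumed to be installed):

# Quick sanity check of a converted legacy GeoIP .dat file.
# Assumes: pip install pygeoip, and GeoIP.dat is the converted country database.
import pygeoip

gi = pygeoip.GeoIP('GeoIP.dat')            # open the legacy .dat database
print(gi.country_code_by_addr('8.8.8.8'))  # should print a country code, e.g. 'US'

If the lookup returns sensible results, the same file should work as the target of SecGeoLookupDb.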
First, a note to some here: MaxMind's EULA (point 4.c) requires you to update to new databases within 30 days of their release, so using old databases is not actually permitted; besides, the data in old databases is simply outdated (probably no longer valid), so why use it in the first place?
Related
I just installed django-dbbackup. Everything works as per the documentation (linked).
One thing slightly puzzles me. Why does it dump into a binary format which I don't know how to read? (.psql.bin). Is there a Postgres command to de-bin it?
I found by Googling that it's possible to get a text dump by adding this to settings.py:
DBBACKUP_CONNECTOR_MAPPING = {
    'django.db.backends.postgresql': 'dbbackup.db.postgresql.PgDumpConnector',
}
The output is about 4x bigger, but after gzipping the file it's about 0.7x the size of the binary dump, and after bzip2, about 0.5x.
However, this setting appears to be undocumented, and I don't like relying on undocumented features for backups! (The same reason I want to be able to look at the file. :-)
Why does it dump into a binary format which I don't know how to read? (.psql.bin).
You'll get a .psql.bin when using PgDumpBinaryConnector, which is the default for Postgres databases.
Is there a Postgres command to de-bin it?
The magic difference between PgDumpConnector and PgDumpBinaryConnector is that the latter passes --format=custom to pg_dump, which is documented as (emphasis mine):
Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
In other words, I don't think there's an off-the-shelf de-binning command for it other than running pg_restore and then pg_dump-ing back out as regular SQL, because you're not supposed to read it if you're not PostgreSQL.
Of course, the format is de-facto documented in the source for pg_dump...
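That said, if you just want to look at what's inside a .psql.bin without restoring it into a database, pg_restore can emit the plain SQL script directly. A minimal sketch, shelling out from Python (the file names are examples, and pg_restore is assumed to be on PATH):

# Convert a custom-format dump (.psql.bin) back into readable SQL.
# pg_restore without a -d/--dbname target writes a plain SQL script
# to the file given with -f instead of restoring into a database.
import subprocess

subprocess.run(
    ["pg_restore", "--no-owner", "-f", "readable.sql", "default.psql.bin"],
    check=True,  # raise if pg_restore exits with an error
)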
I have made a few corrections to location names in a GeoLite2 CSV file.
My site only retrieves locations from the MMDB file, so how can I compile the changed CSV file back into the MMDB binary again?
I searched everywhere for a solution but can't find it.
Thanks for any tip.
Carlos
Currently there are only 2 open source MMDB file writers:
MaxMind::DB::Writer (Perl language)
Go MaxMind DB Writer (Go language)
The second one unfortunately has only a subset of the features available in the Perl one, but it should be enough to write a program that creates the MMDB file by reading the CSV line by line and creating the mmdbtype instances.
You can check out our mmdbctl utility tool.
To convert a CSV file to an MMDB file use the import command:
$ mmdbctl import --in data.csv --out data.mmdb
Instructions, features, and documentation are available here: github.com/ipinfo/mmdbctl.
Right now it only supports string data types, and not nested data types. See this issue for more information.
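After rebuilding the file, it may be worth verifying that your corrected location names actually made it into the binary. A minimal sketch using the Python maxminddb reader (the file name and lookup IP are just examples):

# Verify the rebuilt MMDB by looking up an IP and printing its record.
# Assumes: pip install maxminddb, and data.mmdb is the file you just built.
import maxminddb

with maxminddb.open_database('data.mmdb') as reader:
    print(reader.get('8.8.8.8'))  # prints the stored record (a dict) or None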
I have installed a piece of software on my system and it holds a lot of client data. All the files inside the software's DB folder have separate extensions for each individual party.
I want to use these files to convert the data into a MySQL database.
A sample file from the DB folder can be downloaded from here.
I have tried to understand the Firebird service this software uses to connect to these database files and retrieve the data.
I want to extract the database and import it into MySQL (phpMyAdmin).
The linked file seems to be a renamed Firebird database with structure version ODS 11.2, which corresponds to the Firebird 2.5.x line.
For a quick peek into the database you can use:
IBSurgeon First Aid -- http://ib-aid.com
IB Expert (the Database Explorer feature) -- http://ibexpert.net
The free mode of FirstAID lets you peek into the data, but not extract it, and probably not even scroll through ALL the tables. It would also most probably ignore all database structures that are not tables (UDF functions, procedures, VIEWs, auto-computed columns in tables) - after all, it is just a low-level format parser, not an SQL engine.
IB Expert has a non-commercial Personal edition, but it probably does not include the Database Explorer; you may, however, try a trial of the full version. IBE's Database Explorer would probably also only show the basic structures of the database, but maybe that would be enough.
Alternatively, you can install Firebird 2.5.8 - either the standalone version or maybe the embedded one (a set of DLLs used instead of the FB server process) if your application can use it - and then use any DB IDE suite to explore it. The ones most often mentioned for Firebird are IBExpert, FlameRobin, and Firebird Maestro, among others. Then you would be able to try different SQL queries, including stored procedures, VIEWs, and UDF functions, if any were registered for the database and actually used.
BTW IBExpert comes bundled with FB 2.5 Embedded, which one can use to open the database file.
After you figure out the format, you can either export the required data into some intermediate format like CSV (for example: http://fbutils.sourceforge.net/ ) or access it from your C++ application (though why would anyone develop a web application in C++?) using libraries like IB++ or OLE DB, etc. Maybe it would be better to just use the Firebird server and the original DB files directly from PHP or whatever you write the application in.
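If you go the scripted route, here is a small sketch of the "export to CSV" idea using the fdb Firebird driver for Python (the connection settings, table name, and file names are assumptions; adjust them to your database):

# Export one table from a Firebird 2.5 database to CSV.
# Assumes: pip install fdb, and a Firebird server or embedded library
# that can open the file.
import csv
import fdb

con = fdb.connect(dsn='localhost:/path/to/data.fdb',
                  user='SYSDBA', password='masterkey')
cur = con.cursor()
cur.execute('SELECT * FROM SOME_TABLE')  # replace with a table you found

with open('some_table.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # column names as header
    writer.writerows(cur.fetchall())

con.close()

From there, importing the CSV into MySQL via phpMyAdmin is straightforward.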
I want to use some of the datasets available on the Weka website to perform some experiments with neural networks.
What do I have to do to read the data?
I downloaded the datasets and they were saved as .arff.txt, so I deleted the .txt extension to leave only .arff. I then used this file as an input, but an error occurs.
What is the right way to read the data?
Do I have to write code?
Please help me.
Thank you
I'm using Weka 3.6.6 and coc81.arff opens just fine. You are using Weka 3.7.x, which is the development branch of Weka. I suggest that you download 3.6.6 or 3.6.7 (the latest stable release) and try to open the file again.
There is also another simple approach:
Open your dataset file in Excel (in my case MS Excel 2010), format the fields by type, and save it as 'csv'.
Then reload that csv file in the Weka Explorer and save it to the local drive in arff format.
Maybe this helps.
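If you would rather get at the data outside of Weka (e.g. to feed your own neural-network code), here is a minimal sketch using scipy's ARFF reader (the file name matches the coc81.arff mentioned above; adjust the path as needed):

# Load a Weka .arff file into a pandas DataFrame.
# Assumes scipy and pandas are installed.
import pandas as pd
from scipy.io import arff

data, meta = arff.loadarff('coc81.arff')  # returns a record array plus metadata
df = pd.DataFrame(data)
print(meta)       # attribute names and types
print(df.head())  # first few rows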
There are gov't data files: http://www.cdc.gov/EpiInfo/
They are available in this weird SAS format. How can I convert them into XML/CSV, or something much simpler that can be read by scripts, etc.?
I had the same problem, so I made a simple SAS data viewer. You can download it from the downloads section here: http://code.google.com/p/sasquatch
It has a lot of the same features as SAS Universal Viewer, but it's still a work in progress.
You need to have Adobe AIR installed; you can get that from the Adobe website.
Are the data in the SAS XPORT (.xpt) or .sas7bdat format?
For future reference, SAS XPORT files can be read and written using the 'SASxport' package for R (http://cran.r-project.org/web/packages/SASxport/index.html).
(Already posted this to superuser.com)
SAS Institute (the company that makes SAS) produces a viewer for SAS data sets.
Note that SAS program files usually have the extension .sas, whereas the data files themselves usually have the extension .sas7bdat.
(EDIT: I notice belatedly that your title says on a Mac, so this may not help much as I believe the tool is Windows only.)
Here's a quick-and-dirty Python five-liner to convert a SAS .xpt (aka XPORT) file to .csv:
import pandas as pd

# Directory containing the file; note it should end with a path separator
FILE_PATH = "(fully qualified name of directory containing file)"
FILE = "ABC"  # filename itself (without suffix)

# Note: might need to substitute the column name of the index (in quotes) for "None" here
df = pd.read_sas(FILE_PATH + FILE + '.XPT', index=None)
df.to_csv(FILE_PATH + FILE + '.csv')
Hopefully this might help someone
JMP runs on Mac and can read SAS files. Visit jmp.com for more information.
There are two parts to your question:
1. Read these files
2. Convert these files
I looked into the link you shared; there are no directly downloadable files, but I am assuming that you mean the files for Windows.
For viewing you can use the following:
a. SAS Universal Viewer: https://support.sas.com/downloads/package.htm?pid=667
b. Use SAS on Mac to read the files directly
For conversion you can do the following:
a. Use SAS proc import to import and proc export to export the files;
b. Use third-party software, e.g., DBMSCopy, for this;
c. Download a trial version of JMP, convert the files to the desired format, e.g., CSV/txt, and be done with it.
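If the files turn out to be .sas7bdat rather than XPORT, a scripted variant of option (c) is also possible with pandas, similar to the .xpt snippet above (the file name and encoding here are assumptions; adjust to your data):

# Convert a .sas7bdat file to CSV with pandas.
import pandas as pd

df = pd.read_sas('example.sas7bdat', format='sas7bdat', encoding='latin-1')
df.to_csv('example.csv', index=False)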