GeoIPCity.dat file, where do I find it?

I have a file GeoIPCity.dat on one of my sites. The original developer told me it was a free file from maxmind.com and that I should update it every month. I have looked on maxmind.com and haven't been able to find a file with the exact same name. Any idea which file I should use as the update? Here is the list of files I was able to find on the website: http://dev.maxmind.com/geoip/legacy/geolite/#Downloads

The Legacy "dat" format DBs are discontinued:
https://support.maxmind.com/geolite-legacy-discontinuation-notice/

The free version of GeoIP City was renamed from GeoIPCity.dat to GeoLiteCity.dat.
Download the data. This should work on CentOS 7/7.1:
wget -N http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -O /usr/share/GeoIP/GeoLiteCity.dat.gz && gunzip --force /usr/share/GeoIP/GeoLiteCity.dat.gz
This maintains a backward-compatible version under the old name:
ln -s /usr/share/GeoIP/GeoLiteCity.dat /usr/share/GeoIP/GeoIPCity.dat
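Since the file is meant to be refreshed monthly, here is a minimal sketch of a cron entry automating the download above (the paths are copied from the commands above; the schedule itself is an assumption):
# run at 03:00 on the 1st of each month
0 3 1 * * wget -N http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -O /usr/share/GeoIP/GeoLiteCity.dat.gz && gunzip --force /usr/share/GeoIP/GeoLiteCity.dat.gz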

The old files can be downloaded from the following: https://mirrors-cdn.liferay.com/geolite.maxmind.com/download/geoip/database/
A copy of the file "GeoLiteCity.dat" is also available from github at the following url:
https://github.com/mbcc2006/GeoLiteCity-data
Click on the Code button, then Download ZIP.
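Alternatively, cloning the repository gets you the same file (repository URL from above):
git clone https://github.com/mbcc2006/GeoLiteCity-data.git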


Github Actions path does not update

Right now, I'm trying to build a tool from source and use it to build a C++ project. I'm able to extract the tar file (gcc-arm-none-eabi). But when I try to add it to the path (using $GITHUB_PATH, not add-path), the path doesn't apply in my next action and I can't build the project. The error states that it can't find the gcc-arm-none-eabi toolset, which means it was never added to the path.
Here's the script for the entrypoint of the first action (make is run in the next step to allow the path change to apply):
echo "Downloading ARM Toolchain"
# The one from apt isn't updated so I have to build from source
curl -L https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2 -o gcc-arm-none-eabi.tar.bz2
tar -xjf gcc-arm-none-eabi.tar.bz2
echo "/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin" >> $GITHUB_PATH
I can't even debug by seeing what's in the path because running echo $(PATH) just says that PATH cannot be found. What should I do?
First, PATH is not a command, and $(PATH) tries to execute it as one. If you want to print its value, use something like echo "${PATH}" or echo "$PATH".
Then, if you want to add a value to an existing environment variable, it would be something like:
export PATH="${PATH}:/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin"
EDIT: this is not a valid way to add something to the path in GitHub Actions; the approach used in the question (writing to $GITHUB_PATH) is the correct one. For more details: https://docs.github.com/en/free-pro-team#latest/actions/reference/workflow-commands-for-github-actions#adding-a-system-path . Thanks to Benjamin W. for pointing this out in the comments.
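To illustrate the difference, a minimal sketch (the toolchain path is copied from the question; the binary name is assumed from the toolchain's usual layout):
# $GITHUB_PATH affects *subsequent* steps only:
echo "/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin" >> "$GITHUB_PATH"
# exporting PATH affects the *current* step only:
export PATH="${PATH}:/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin"
# so verify in a *later* step with:
echo "$PATH"
which arm-none-eabi-gcc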
Finally, I think it would be a better fit to use a Docker image that already contains those dependencies (you could easily write your own Dockerfile if such an image doesn't already exist). GitHub Actions is designed to use Docker (or OCI container) images that contain the dependencies you need to perform your build actions. You should take a look here: https://docs.github.com/en/free-pro-team#latest/actions/creating-actions/dockerfile-support-for-github-actions

Wget returns images in an unknown format (jpg#f=)

I am running the following line:
wget -P "C:\My Web Sites\REGEX" -r --no-parent -A jpg,jpeg https://www.mywebsite.com/directory1/directory2/
and it stops (no errors) without returning more than a small part of the website (two files). I then run this:
wget -P "C:\My Web Sites\REGEX" https://www.mywebsite.com/directory1/directory2/ -m
and expect to see data only from that directory. As a start, I found out that the script downloaded everything from the website, as if I had given the https://www.mywebsite.com/ URL. Also, the images are returned with an additional string in the extension (e.g. instead of .jpg I get something like .jpg#f=l=q).
Is there anything wrong in my code that causes that? I only want to get the images from the links that are shown in the directory given initially.
If there is nothing I can change, then I want to download only the files that contain .jpg in their names. I have a prepared script in Python that can rename the files back to their original extension. Worst case, I can try Python (page scraping) instead of the Windows CMD?
Note that --no-parent doesn't work in this case because the images are saved in a different directory. --accept-regex could be used if there is no way to get the correct extension; a sketch follows below.
PS: I am doing this to learn more about the wget options and to protect my future hobby website.
UPD: Any suggestions regarding a Python script are welcome.
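For reference, a minimal sketch of the --accept-regex route mentioned above (the URL is the placeholder from the question; the pattern is an assumption that matches any URL whose path contains .jpg, even with a trailing #fragment; requires wget 1.14+):
wget -P "C:\My Web Sites\REGEX" -r --accept-regex ".*\.jpg.*" https://www.mywebsite.com/directory1/directory2/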

I need a bash script to download a tar file from a website; the site has multiple files which need to be filtered

I have a situation where I need to use curl/wget to download a tar file from a website, based on user input. If they mention a build, I need to download a tar file based on the release version. I already have logic to switch between builds; the question is how I can filter out a particular tar file from multiple files.
curl -s https://somewebsite/somerepo-folder/os-based.tar | grep os-based* > sample.txt
curl -s https://somewebsite/somerepo-folder/os-based2.tar
The first curl downloads all the files. A regex would help here; how can I use one together with curl?
If there is a mapping between the user input and the tar file name that you can think of, you can do something like this:
userInput=1
# some logic to map user-input with the tar filename to download
tarFileName="os-based$userInput.tar"
wget "https://somewebsite/somerepo-folder/$tarFileName"

Failed to import appliance. Could not find a SCSI controller

I am on a Mac and am trying to import a virtual machine image (.ova file). When I try to import the file, I get the following error:
Could not find a storage controller named 'SCSI Controller'
Are there any existing solutions for this problem?
I got a clue to the answer from here: https://ctors.net/2014/07/17/vmware_to_virtualbox
Basically you need to change the virtual disk controller, e.g. change ddb.adapterType from "buslogic" or "lsilogic" to "ide".
However, if you don't have VMware to boot the original image, remove the VMware tools and remove the hard disk, you can instead hack the .ovf file inside the .ova archive to switch the virtual SCSI controller to an IDE controller.
Here's how.
First, extract the .ova archive; let's assume it's in the current dir and called vm.ova:
mkdir ./temp
cd temp
tar -xvf ../vm.ova
This will extract 3 files: an *.ovf descriptor, a virtual disk *.vmdk file, and a manifest *.mf file.
Edit the .ovf file and find the SCSI reference; it will be lsilogicsas, "buslogic" or "lsilogic". Replace that word with ide.
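If you prefer doing that edit from the shell, a minimal sketch (controller names taken from above; check your own .ovf first, since only one of them will actually appear):
# swap whichever SCSI controller type is present for ide, in place
# (lsilogicsas must come before lsilogic, since the latter is a substring)
sed -i 's/lsilogicsas/ide/g; s/lsilogic/ide/g; s/buslogic/ide/g' vm.ovf
# on macOS/BSD sed, use: sed -i '' 's/.../ide/g' vm.ovf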
While you are at it, you may want to rename all the files so that they don't have spaces or strange chars in the name; this makes them more UNIX friendly. Of course, if you rename the files you need to update the references in the .ovf and .mf files.
Because you've modified the files, you need to recompute the SHA1 values in the .mf file, e.g. run sha1sum to get the value and replace the old ones in the .mf file.
$ sha1sum vm.ovf
4806ebc2630d9a1325ed555a396c00eadfc72248 vm.ovf
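If you renamed things or just want to regenerate the whole manifest in one go, a sketch (file names assumed from this example; make the echo match the exact "SHA1(file)= hash" line format your original .mf uses):
for f in vm.ovf vm.vmdk; do
  echo "SHA1($f)= $(sha1sum "$f" | awk '{print $1}')"
done > vm.mf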
now that you've swapped the disk controller and fixed up the manifest's sha1 values you can pack the .ova back up. The files have to be in order inside the archive so do this (use your file names)
tar -cvf ../vm-new.ova ./vm.ovf
tar -rvf ../vm-new.ova ./vm.vmdk
tar -rvf ../vm-new.ova ./vm.mf
Done. Now you can open VirtualBox and click File -> Import Appliance, then point it at the vm-new.ova file. Once done, you should be able to start the VM.
Hope that helps.
Cheers, Karl
I ran into a similar problem, and I just extracted the .ova file and created a new VM with my own settings using the .vmdk file:
tar -xvf vm.ova
vm.ovf
vm.vmdk
vm.mf

How can I extract files from a VDI?

I was using VirtualBox on my PC (Win 7) and I managed to view some files in my .vdi file.
How can I open or view the contents of my .vdi file and retrieve the files from there?
I had a corrupted VDI file (according to the countless VDI-viewer programs I'd used, which gave cryptic errors like "invalid handle", "no file selected", "please format disk") and I was not able to open the file, even with VirtualBox. I tried to convert it using the VirtualBox command-line tools, with no success. I tried mounting it in a new virtual machine, tried mounting it with ImDisk, no dice. I read four Microsoft TechNet articles, downloaded their utilities and tried countless things; no success.
However, when I tried 7Zip (https://www.7-zip.org/download.html) I was able to view all of the files, and extract them selectively. Here's how I did it:
install 7zip (make sure that you also install the context-menu items, if prompted.)
right-click on the VDI file, select "Open Archive"
when the window appears, you should see two files: one is "Basic Microsoft Data Partition" and the other is something else, called system or something. The file size is listed to the right of each file, in bytes. Right-click the largest one and click "Open inside".
you should see all of the files inside of the archive. You can drag files that you'd like to extract right to your desktop. You can double click on folders to view inside them too.
If 7zip gives you a cryptic error after extracting the files, it means that you closed the Windows Explorer window of the folder you were copying files to.
If you didn't close the window and you're still getting an error, try extracting each sub-folder individually. Also make sure that you have enough local hard drive space, even if you are copying the files to an external disk, because 7zip first copies them to your local disk (the AppData temp folder). If the files are highly compressible, you might get away with enabling NTFS compression on the AppData temp folder, so that the locally extracted copies are compressed before being copied over to the other disk.
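For reference, the same can be done with 7-Zip's command-line tool (the file name is a placeholder; whether a .vdi opens this way depends on your 7-Zip version):
# list what 7-Zip sees inside the VDI
7z l vm.vdi
# extract everything into .\extracted
7z x vm.vdi -oextracted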
You can mount partitions from .vdi images using qemu-nbd:
sudo apt install qemu-utils
sudo modprobe nbd
vdi="/path/to/your.vdi" # <<== Edit this
sudo qemu-nbd -c /dev/nbd0 "$vdi"
# view partitions and select the one you want to mount.
# Using parted here, but you can also use cfdisk, fdisk, etc.
sudo parted /dev/nbd0 print
part=nbd0p2 # <<== partition you want to mount
sudo mkdir /mnt/vdi
sudo mount /dev/$part /mnt/vdi
Some users seem to need to add a parameter to the modprobe command. I didn't with Ubuntu 16.04, but if it doesn't work for you, try adding max_part=16 :
sudo modprobe nbd max_part=16
When done:
sudo umount /dev/$part
sudo qemu-nbd --disconnect /dev/nbd0
Try out VMXray.
You can explore your vmdk image right inside your browser. Select the files that you want to extract and extract them to the desired location. Not just vmdk: you can use VMXRay to look into, and extract files from, RAW, QEMU/KVM QCOW2, VirtualBox VDI, and ISO images. ext2, ext3, FAT and NTFS are the currently supported file systems. You can also use it to recover deleted photos from raw dumps of your camera's SD card, for example.
And, do not worry, no data from your files is ever sent over the network. Data never leaves your machine. VMXRay works completely inside your browser.
As a first approach you can simply try any archive viewer to open the .vdi file.
I tried 7zip on an Ubuntu MATE .vdi file and it showed the whole Linux file system.
An easy way is to attach the VDI as a second disk in another virtual machine.
The drive does not appear immediately; in Windows, go to Disk Management, bring the disk online and assign it a drive letter.
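The attach step can also be scripted from the host (the VM name, controller name and path here are assumptions; adjust them to your setup):
VBoxManage storageattach "OtherVM" --storagectl "SATA" --port 1 --device 0 --type hdd --medium /path/to/your.vdi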
You can use ImDisk to mount a VDI file as a local drive in Windows; follow this VirtualBox forum thread and become happy )) You can also convert the VDI to VHD and use the default Windows disk manager to mount the VHD (described here).
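The VDI-to-VHD conversion mentioned above can be done with VirtualBox's own CLI, roughly like this (file names are placeholders):
VBoxManage clonemedium disk input.vdi output.vhd --format VHD
# on older VirtualBox versions the subcommand is clonehd:
# VBoxManage clonehd input.vdi output.vhd --format VHD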