Converting GeoLite2 data for use with xtables geoip

My apologies if this has been covered here or elsewhere. I read the postings back to 2016.
My Debian system stopped updating the xtables geoip database. Investigation showed that this is because MaxMind dropped support for the legacy GeoIP databases. I have got as far as installing and configuring MaxMind's geoipupdate program for the GeoLite2 database and scheduling it weekly in crontab.
At this point I am stumped. geoipupdate returns an .mmdb database, which is not usable by the Debian-supplied scripts that convert .CSV files to the country code files in /usr/share/xt_geoip/LE and /usr/share/xt_geoip/BE.
The Debian package xtables-addons has not been updated to deal with this situation.
Assistance or a pointer to a solution will be gratefully received. At present I am still using the last valid database, which is now over six months old.

I eventually ended up writing this script, which now runs weekly. So far (three months on) it appears to be satisfactory.
cat update-geoip.sh
#!/bin/bash -e
GEOLITE_URL="https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip"
GEOLITE_ZIP="GeoLite2-Country-CSV.zip"
COUNTRY_URL="http://download.geonames.org/export/dump/countryInfo.txt"
#
# Switch to the GeoIP directory if not already there
#
echo "--> cd /usr/share/xt_geoip"
cd /usr/share/xt_geoip
#
# Remove anything remaining from previous failed runs
#
# Note: DO NOT delete the existing BE and LE subfolders at this
# time. If the download fails the result would be no
# database at all.
#
echo "--> rm -r GeoLite2*"
rm -r -f GeoLite2*
echo "--> rm countryInfo.txt"
rm -f countryInfo.txt
echo "--> rm GeoIP-legacy.csv"
rm -f GeoIP-legacy.csv
#
# Get the GeoIP ZIP file
#
echo "--> wget --no-check-certificate $GEOLITE_URL"
wget --no-check-certificate $GEOLITE_URL
#
# See if the ZIP file now exists
#
if [ ! -e "$GEOLITE_ZIP" ]; then
echo "--> GeoIP ZIP file did not download"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed

GeoIP update failed.
Unable to download GeoIP ZIP file
$GEOLITE_ZIP
EOM
exit 1
fi
#
# Unzip the ZIP file
#
echo "--> unzip $GEOLITE_ZIP"
unzip "$GEOLITE_ZIP"
#
# Delete the ZIP file
#
#echo "--> rm $GEOLITE_ZIP"
rm $GEOLITE_ZIP
#
# Move the received data directory to a standard name
#
echo "--> mv GeoLite2-Country-CSV_* GeoLite2"
mv GeoLite2-Country-CSV_* GeoLite2
#
# See if the critical GeoIP data files now exist
#
if [ ! -e "GeoLite2/GeoLite2-Country-Blocks-IPv4.csv" ] ||
[ ! -e "GeoLite2/GeoLite2-Country-Blocks-IPv6.csv" ]; then
echo "--> GeoIP data files are missing"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed

GeoIP update failed.
GeoIP data file(s) are missing
GeoLite2/GeoLite2-Country-Blocks-IPv4.csv
GeoLite2/GeoLite2-Country-Blocks-IPv6.csv
EOM
exit 1
fi
#
# Get the country info data file
#
echo "--> wget --no-check-certificate $COUNTRY_URL"
wget --no-check-certificate $COUNTRY_URL
#
# See if the country info data file now exists
#
if [ ! -e "countryInfo.txt" ]; then
echo "--> Country info data file did not download"
echo "--> Send email to root and stop here"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: GeoIP update failed

GeoIP update failed.
Unable to download country info data file
$COUNTRY_URL
EOM
exit 1
fi
#
# Build an old format data file from the new format data files
#
echo "--> cat ./GeoLite2/GeoLite2-Country-Blocks-IPv{4,6}.csv | ./convert_GeoLite2.pl ./countryInfo.txt > /usr/share/xt_geoip/GeoIP-legacy.csv"
cat ./GeoLite2/GeoLite2-Country-Blocks-IPv{4,6}.csv | ./convert_GeoLite2.pl ./countryInfo.txt > /usr/share/xt_geoip/GeoIP-legacy.csv
#
# Delete the downloaded data files
#
echo "--> rm -r GeoLite2"
rm -r GeoLite2
echo "--> rm countryInfo.txt"
rm country_Info.txt
#
# Preserve the old BE and LE directories just in case
#
echo "--> rm -r -f LastBE LastLE"
rm -r -f LastBE LastLE
echo "--> mv BE LastBE"
mv BE LastBE
echo "--> mv LE LastLE"
mv LE LastLE
#
# Convert the generated database to the xtables GeoIP format
#
echo "--> /usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip ./GeoIP-legacy.csv"
/usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip ./GeoIP-legacy.csv
#
# Delete the remaining data file
# (countryInfo.txt was already removed above)
#
echo "--> rm GeoIP-legacy.csv"
rm GeoIP-legacy.csv
#
# Notify root that the update succeeded
#
echo "--> Send notification email to root"
/usr/sbin/sendmail root << EOM
From: Update_GeoIP
To: root
Subject: Weekly update of xtables GeoIP completed

Weekly update of xtables GeoIP database successful.
EOM
echo "xtables GeoIP database update completed"

You can also download the source from the xtables-addons project (either directly or from the sid version of the xtables-addons-common package) and grab updated versions of the scripts.
https://sourceforge.net/projects/xtables-addons/files/Xtables-addons/
See the following askubuntu answer:
https://askubuntu.com/questions/1117669/xtables-addons-issues-with-maxmind-geolite2
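As a rough sketch of that route (the release number is illustrative, and the script behaviour is an assumption based on the newer xtables-addons releases; check the README shipped with whatever version you download):
# fetch and unpack a newer xtables-addons release (version illustrative)
wget https://sourceforge.net/projects/xtables-addons/files/Xtables-addons/xtables-addons-3.5.tar.xz
tar xf xtables-addons-3.5.tar.xz
cd xtables-addons-3.5/geoip
# newer releases ship xt_geoip_dl, which fetches the free DB-IP Country Lite
# CSV instead of MaxMind's, and a matching xt_geoip_build that understands it
# (older builds took the CSV path as an extra argument)
./xt_geoip_dl
./xt_geoip_build -D /usr/share/xt_geoip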

Have a look at GeoLite2xtables :-
https://github.com/mschmitt/GeoLite2xtables
You can download a zip (or git clone).
It has an example workflow (shell commands) for both the legacy GeoLite CSV (probably what you were using, which stopped working in early January 2019) and the GeoLite2 CSV (which you can use instead).
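A sketch of the GeoLite2 workflow from that repository (script names and paths as found in its README at the time of writing; newer GeoLite2 downloads may also require a free MaxMind license key):
# download the GeoLite2 country CSVs and the GeoNames country info
/usr/local/share/GeoLite2xtables/00_download_geolite2
/usr/local/share/GeoLite2xtables/10_download_countryinfo
# convert to the legacy format understood by xt_geoip_build
cat /tmp/GeoLite2-Country-Blocks-IPv{4,6}.csv \
  | /usr/local/share/GeoLite2xtables/20_convert_geolite2 /tmp/CountryInfo.txt \
  > /usr/share/xt_geoip/GeoIP-legacy.csv
# build the BE/LE databases as before
/usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip /usr/share/xt_geoip/GeoIP-legacy.csv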

Related

Best way to automatically move backups of web server to an AWS sever

I have a web server that produces .tar.gz backup files that I want to automatically transfer to an AWS server.
To accomplish this I have tried to write a bash script on the AWS server that will automatically check for a new backup at the web server and make a copy of the backup if it is more recent (preserving timestamps).
Is there an easier or more robust way to go about this?
Am I correct in my FTP script syntax?
# Credentials to access other machine
HOST=xxxxxx
USER=xxxxx
PASSWD=xxxxxxx
# path to the remoteBackups
remoteBackups=/home/ubuntu/testBackups
# Loops indefinitely
#while [[ true ]]
#do
# FTP to remote host and get the name most recent backup
ftp -inv $HOST<<-EOT
user $USER $PASSWD
#Store name of most recent backup to FILE
# does this work or will it just save it to a variable FILE on the remote machine?
FILE=`ls -t ~/Desktop/backups/*.tar.gz | head -1`
bye
EOT
# For testing
echo $FILE
# Copy (preserving modification dates) file to the local remote backups folder on aws server
#scp -p -i <.pem> $FILE $remoteBackups
# Get the most recent back up from both directories
latestLocal=`ls -t ~/intranetBackups/*.tar.gz | head -1`
latestRemote=`ls -t $remoteBackups/*.tar.gz | head -1`
# For testing
echo $latestLocal
echo $latestRemote
# If the backup from the remote is newer then save to backups and sleep for 15 days
if [[ $latestLocal -ot $latestRemote ]]
then
echo Transferring backup from $latestRemote to $latestLocal
sleep 15d
else
echo No new backup file found
sleep 1d
fi
# If there are more than 20 backups delete the oldest
if [[ `ls -1 ~/intranetBackups | wc -l` -ge 20 ]]
then
rm `ls -t ~/intranetBackuos | tail -1`
echo removed the oldest backup
else
echo no file to be removed
fi
#done
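No answer is recorded here, but as a minimal sketch of a more robust approach (assuming SSH key access to the web server; the host, key, and paths are illustrative), rsync over SSH replaces the whole FTP dance and only copies files that are new:
#!/bin/bash
# Pull new backups from the web server, preserving timestamps (-a implies -t).
# --ignore-existing skips backups that were already transferred.
rsync -av --ignore-existing \
  -e "ssh -i /home/ubuntu/key.pem" \
  user@webserver:~/Desktop/backups/ /home/ubuntu/testBackups/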

autoenv printing "will run" /path/to/.env every time I cd

My specific problem is not the fact that source /venv/bin/activate is being executed every time I change to a subfolder. The problem is that every time I do, it prints "Will run" /path/to/.env.
I installed autoenv through homebrew in version 0.2.0. This is my .env file:
if [ -z "$VIRTUAL_ENV" ]; then
CUR_DIR=$(pwd)
# search for the next .env
while [[ "$PWD" != "/" && "$PWD" != "$home" ]]; do
env_file="$PWD/.env"
if [[ -f "$env_file" ]]; then
BASE_DIR=$(dirname $env_file)
break
fi
builtin cd ..
done
if [ ! -z "$BASE_DIR" ]; then
echo "Activating that virtualenv"
source ${BASE_DIR}/venv/bin/activate
fi
cd $CUR_DIR
fi
The output in the terminal is something like:
MacBook-Pro:~ llamasramirez$ cd Desktop/oficios/
Will run /Users/llamas/Desktop/oficios/.env
Activating that virtualenv
Will run /Users/llamas/Desktop/oficios/.env
(venv) MacBook-Pro:oficios llamasramirez$ cd django_sites/polls/
Will run /Users/llamas/Desktop/oficios/.env
(venv) MacBook-Pro:polls llamasramirez$
Turns out the problem wasn't in the .env file. The problem was at the activate.sh file.
Inside the autoenv_init function there's a for loop that produces the annoying echo:
for _file in ${_orderedfiles}; do
    echo "Will run ${_file}"
    autoenv_check_authz_and_run "${_file}"
done
When I comment this out, the echo doesn't pop up again.
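If you would rather not delete the line outright, a small sketch of an alternative (AUTOENV_QUIET is a name invented here, not an autoenv feature) is to gate the echo behind an opt-in variable:
for _file in ${_orderedfiles}; do
    # only print when AUTOENV_QUIET is unset (invented variable, see above)
    [ -z "${AUTOENV_QUIET}" ] && echo "Will run ${_file}"
    autoenv_check_authz_and_run "${_file}"
done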

How to build ACE for MinGW-64: no Makefile in ACE_ROOT

I am attempting to build the ACE library for Mingw GCC 64 bit on Windows. The instructions here state the following:
Install the MinGW tools (including the MinGW Development toolkit) into a common directory, say c:/mingw.
Install the MSYS tools into a common directory, say c:/msys.
Open a MSYS shell. Set your PATH environment variable so your MinGW's bin directory is first:
% export PATH=/c/mingw/bin:$PATH
Add an ACE_ROOT environment variable pointing to the root of your ACE wrappers source tree:
% export ACE_ROOT=/c/work/mingw/ACE_wrappers
From now on, we will refer to the root directory of the ACE source tree as $ACE_ROOT.
Create a file called config.h in the $ACE_ROOT/ace directory that contains:
#include "ace/config-win32.h"
Create a file called platform_macros.GNU in the $ACE_ROOT/include/makeinclude directory containing:
include $(ACE_ROOT)/include/makeinclude/platform_mingw32.GNU
In the above text, don't replace $(ACE_ROOT) with the actual directory, GNU make will take the value from the environment variable you defined previously.
If you lack Winsock 2, add the line
winsock2 = 0
before the previous one.
If you want to install ACE (using "make install") and want all the .pc files generated, set the installation prefix in platform_macros.GNU.
INSTALL_PREFIX=/c/ACE
Headers will be installed to $INSTALL_PREFIX/include, documentation and build system files to $INSTALL_PREFIX/share and libraries to $INSTALL_PREFIX/lib. With INSTALL_PREFIX set, RPATH will be enabled. To disable RPATH (for example, if $INSTALL_PREFIX/$INSTALL_LIB is already a system-known location for shared libraries), set the make macro install_rpath to 0 by adding install_rpath=0 to platform_macros.GNU.
Issue here:
In the MSYS shell, change to the $ACE_ROOT/ace directory and run make:
% cd $ACE_ROOT/ace
% make
Now I noticed that there is no Makefile in $ACE_ROOT/ace, which is C:\mingw64\Other\ACE_wrappers\ace.
I downloaded my ACE stuff from here.
Any suggestions on what I might be doing wrong? Did I download something wrong?
You seem to have downloaded the source-only distribution; please download the full package, which also includes the GNU makefiles. See http://download.dre.vanderbilt.edu/
The full ACE distribution comes with GNUmakefiles; in MSYS you run make -f GNUmakefile.
EDIT 1
Though building 64-bit binaries is supported for numerous platforms and compilers, the ACE team did not provide it for MinGW, so there is something left to do ...
EDIT 2
The following script should do the configuration for 64-bit binaries in MinGW-64:
#! /bin/bash
#
# Configure ACE/TAO for 32/64 bit build with MINGW64
#
# Precondition:
# This script is located in the parent folder of ACE_Wrappers
# File access permissions in ACE_Wrappers allow editing of files (sed):
# Easiest: the delivered full ACE/TAO ZIP was extracted using Windows Explorer.
# When extracting with 7z, it will correctly preserve access rights, and
# they need to be granted to the user explicitly
#
# Postcondition:
# ACE is configured for MINGW build
# Script is idempotent (safe to run repeatedly)
#
# Author: Sam Ginrich
# No warranty of any kind!
#
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
#
# Definition of Setup parameters
# these are entered into configuration files, if not already there, never modified!
#
#
#buildbits= # does nothing
#buildbits=32 # configure 32-bit build
#buildbits=64 # configure 64-bit build
buildbits=64
#winsock2=0 # configure parameter to exclude winsock2 library
#winsock2=1 # configure parameter to include winsock2 library
#winsock2= # does nothing, same effect as winsock2=1
winsock2=
# Issue with header "$ACE_ROOT/ace/OS_NS_stdlib.h"
# In some MINGW installation, the compiler is confused with a defined 'rand_r' macro
# This takes effect when building TAO, not ACE
#
#rand_r_issue= # does nothing, suggested initial value
#rand_r_issue=1 # modifies "$ACE_ROOT/ace/OS_NS_stdlib.h" to #undef-ine macro 'rand_r',
# before impact, suggested when issue occurs
rand_r_issue=
#
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
echo "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo "ACE/TAO Build Target Values"
echo ""
echo "buildbits=$buildbits"
echo "winsock2=$winsock2"
echo "rand_r_issue=$rand_r_issue"
echo "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
echo ""
echo "STEP: Enter ACE_Wrappers and define locations"
cd ./ACE_Wrappers
# Check whether we arrived there ...
if [ ! -f "./ACE-INSTALL.html" ]; then
echo "ACE_Wrappers missing or invalid ... STOP"
exit 1
fi
export ACE_ROOT=${PWD}
export TAO_ROOT=${ACE_ROOT}/TAO
# Set C-Config Header for Windows
echo '#include "ace/config-win32.h"' > ace/config.h
# Set Platform MINGW
pl_macro=$ACE_ROOT/include/makeinclude/platform_macros.GNU
pl_mingw='$(ACE_ROOT)/include/makeinclude/platform_mingw32.GNU'
echo "include $pl_mingw" > $pl_macro
if [ "$buildbits" != "" ];
then
echo "------------------------------------------------------------"
echo ""
echo "STEP: Provide support for 64-bit build in 'platform_g++_common.GNU'"
pl_gpp=$ACE_ROOT/include/makeinclude/platform_g++_common.GNU
donetag2=" FLAGS_C_CC += -m64"
marker2="CCFLAGS += -Wnon-virtual-dtor"
buildbitsSwitch="ifeq (\$(buildbits),32)\n FLAGS_C_CC += -m32\n LDFLAGS += -m32\nendif\nifeq (\$(buildbits),64)\n FLAGS_C_CC += -m64\n LDFLAGS += -m64\nendif"
case `grep -Fx "$donetag2" "$pl_gpp" >/dev/null; echo $?` in
0)
echo "File $pl_gpp already modified "
;;
1)
# anyway store a copy
cp $pl_gpp /tmp
echo "copy of original $pl_gpp stored in \\tmp"
echo "insert compiler switches for buildbits rule"
sed -i "s/$marker2/$marker2\n\n$buildbitsSwitch/g" $pl_gpp
;;
*)
echo "Error scanning file $pl_gpp"
;;
esac
echo "------------------------------------------------------------"
echo ""
pl_mingw=$ACE_ROOT/include/makeinclude/platform_mingw32.GNU
echo "STEP: Set parameter for 64-bit build in $pl_mingw"
donetag3="buildbits =.*"
marker3="mingw32 = 1"
buildbitsDef="# 32\/64-bit build\n# parameter 'buildbits' is applied in platform_gnuwin32_common.GNU\nbuildbits = $buildbits"
case `grep -Ex "$donetag3" "$pl_mingw" >/dev/null; echo $?` in
0)
echo "File $pl_mingw already modified "
echo "Verify value! "
grep "buildbits =" $pl_mingw
;;
1)
# anyway store a copy
cp $pl_mingw /tmp
echo "copy of original $pl_mingw stored in \\tmp"
echo "insert buildbits=$buildbits"
sed -i "s/$marker3/$marker3\n\n$buildbitsDef/g" $pl_mingw
;;
*)
echo "Error scanning file $pl_mingw"
;;
esac
fi
if [ "$winsock2" != "" ];
then
echo "------------------------------------------------------------"
echo ""
#pl_mingw=$ACE_ROOT/include/makeinclude/platform_mingw32.GNU
echo "STEP: Winsock lack control"
donetag4="winsock2 =.*"
marker4=$marker3
winsockDef="winsock2 = $winsock2"
# $donetag4 is regular expression, -E
case `grep -Ex "$donetag4" "$pl_mingw" >/dev/null; echo $?` in
0)
echo "File $pl_mingw already modified "
echo Verify Value!
grep "winsock2 =" $pl_mingw
;;
1)
# anyway store a copy
cp $pl_mingw /tmp
echo "copy of original $pl_mingw stored in \\tmp"
echo insert $winsockDef
sed -i "s/$marker4/$marker4\n\n$winsockDef/g" $pl_mingw
;;
*)
echo "Error scanning file $pl_mingw"
;;
esac
fi
if [ "$rand_r_issue" == "1" ];
then
echo "------------------------------------------------------------"
echo ""
echo "STEP: Handle issue with defined C-macro rand_r"
onsll=$ACE_ROOT/ace/OS_NS_stdlib.h
donetag1="//#rand_undefined"
case `grep -Fx "$donetag1" "$onsll" >/dev/null; echo $?` in
0)
echo "File $onsll already modified"
;;
1)
# anyway store a copy
cp $onsll /tmp
echo "copy of original $pl_gpp stored in \\tmp"
echo "insert '#undef rand_r'"
sed -i 's/#if !defined (ACE_LACKS_RAND_R)/\/\/#rand_undefined\n#undef rand_r\n#if !defined (ACE_LACKS_RAND_R)/g' $onsll
;;
*)
echo "Error scanning file $onsll"
;;
esac
fi
echo "============================================================"
echo ""
echo "Content of "$ACE_ROOT/ace/config.h" is"
cat "ace/config.h"
echo ""
echo Content of "$pl_macro" is
cat $pl_macro
echo "-------------------------------------------------------------"
echo ""
echo ""
echo ""
echo "Suggested BUILD STEPS:"
echo ""
echo ""
echo "# 1. Define context"
echo "export ACE_ROOT=${PWD}"
echo "export TAO_ROOT=${ACE_ROOT}/TAO"
echo ""
echo "# 2. Build ACE"
echo 'cd ${ACE_ROOT}/ace'
echo "make -f GNUmakefile"
echo ""
echo "# 3. Verify ACE"
echo 'cd ${ACE_ROOT}/tests'
echo "make -f GNUmakefile"
echo "perl run_test.pl"
echo "#NOTE: Windows Firewall will ask for permission for each upcoming server instance"
echo ""
echo "# 4. Build TAO"
echo 'cd ${TAO_ROOT}'
echo "make -f GNUmakefile"
echo ""
echo "# 5. Basic TAO verification"
echo 'cd ${TAO_ROOT}/tests'
echo "make -f GNUmakefile"
echo 'cd ${TAO_ROOT}/tests/Param_Test'
echo "perl run_test.pl"

C++ Execution Script Aprog Command Not Found

We have a C++ project in which we need to redirect the standard output to a text file using the following script:
#!/bin/bash
echo "Descend into 'workdirectory' directory"
cd workdirectory
#
for item in *
do
echo " "
echo "EXECUTING" $item
cd $item
Aprog >zoutput02.txt
cd ..
echo "EXECUTION COMPLETE"
done
echo "Return from 'testdirectory' directory"
cd ..
echo " "
When I try to run this script using bash ./scriptname.txt, it returns:
EXECUTING work
./scriptname.txt: line 10: Aprog: command not found
EXECUTION COMPLETE
Return from 'workdirectory' directory
What does this error mean?
Thanks!
You are cd-ing around, so Aprog is almost certainly not on your PATH.
Either add Aprog to your PATH or define its full path in the script:
aprog=/path/to/aprog
$aprog > zoutput02.txt
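Alternatively (a sketch, with /path/to standing in for wherever Aprog actually lives), extend PATH once at the top of the script:
export PATH="/path/to:$PATH"   # directory containing Aprog
Aprog > zoutput02.txt          # now found without an absolute path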

Vagrant VM not getting Django and other requirements

I'm using Vagrant and Chef Solo to set up my Django dev environment. Using Chef Solo I successfully install my packages (vim, git, apt, python, mysql), but when I set up my project using pip to download/install my requirements (django, south, django-registration, etc.), these are not correctly downloaded/found in my fresh VM.
I'm not sure if it's a location issue; the downloads run with only warnings, never errors, but afterwards the packages are not at the expected location (I have another project set up exactly the same and it works, so maybe I'm missing something here...).
Here is my Vagrantfile:
Vagrant::Config.run do |config|
  config.vm.define :djangovm do |django_config|
    # Every Vagrant virtual environment requires a box to build off of.
    django_config.vm.box = "lucid64"
    # The url from where the 'config.vm.box' box will be fetched if it
    # doesn't already exist on the user's system.
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"
    # Forward a port from the guest to the host, which allows for outside
    # computers to access the VM, whereas host only networking does not.
    django_config.vm.forward_port 80, 8080
    django_config.vm.forward_port 8000, 8001
    # Enable provisioning with chef solo, specifying a cookbooks path (relative
    # to this Vagrantfile), and adding some recipes and/or roles.
    django_config.vm.provision :chef_solo do |chef|
      chef.json = {
        python: {
          install_method: 'source',
          version: '2.7.5',
          checksum: 'b4f01a1d0ba0b46b05c73b2ac909b1df'
        },
        mysql: {
          server_root_password: 'root',
          server_debian_password: 'root',
          server_repl_password: 'root'
        },
      }
      chef.cookbooks_path = "vagrant_resources/cookbooks"
      chef.add_recipe "apt"
      chef.add_recipe "build-essential"
      chef.add_recipe "git"
      chef.add_recipe "vim"
      chef.add_recipe "openssl"
      chef.add_recipe "mysql::client"
      chef.add_recipe "mysql::server"
      chef.add_recipe "python"
    end
    django_config.vm.provision :shell, :path => "vagrant_resources/vagrant_bootstrap.sh"
  end
end
And here is the bootstrap file that downloads Django and continues setting things up:
#!/usr/bin/env bash
eval vagrantfile_location="~/.vagrantfile_processed"
if [ -f $vagrantfile_location ]; then
echo "Vagrantfile already processed. Exiting..."
exit 0
fi
#==================================================================
# install dependencies
#==================================================================
/usr/bin/yes | pip install --upgrade pip
/usr/bin/yes | pip install --upgrade virtualenv
/usr/bin/yes | sudo apt-get install python-software-properties
#==================================================================
# set up the local dev environment
#==================================================================
if [ -f "/home/vagrant/.bash_profile" ]; then
echo -n "removing .bash_profile for user vagrant..."
rm /home/vagrant/.bash_profile
echo "done!"
fi
echo -n "creating new .bash_profile for user vagrant..."
ln -s /vagrant/.bash_profile /home/vagrant/.bash_profile
source /home/vagrant/.bash_profile
echo "done!"
#==================================================================
# set up virtual env
#==================================================================
cd /vagrant;
echo -n "Creating virtualenv..."
virtualenv myquivers;
echo "done!"
echo -n "Activating virtualenv..."
source /vagrant/myquivers/bin/activate
echo "done!"
echo -n "installing project dependencies via pip..."
/usr/bin/yes | pip install -r /vagrant/myquivers/myquivers/requirements/dev.txt
echo "done!"
#==================================================================
# install front-endy things
#==================================================================
echo -n "adding node.js npm repo..."
add-apt-repository ppa:chris-lea/node.js &> /dev/null || exit 1
echo "done!"
echo -n "calling apt-get update..."
apt-get update &> /dev/null || exit 1
echo "done!"
echo -n "nodejs and npm..."
apt-get install nodejs npm &> /dev/null || exit 1
echo "done!"
echo -n "installing grunt..."
npm install -g grunt-cli &> /dev/null || exit 1
echo "done!"
echo -n "installing LESS..."
npm install -g less &> /dev/null || exit 1
echo "done!"
echo -n "installing uglify.js..."
npm install -g uglify-js &> /dev/null || exit 1
echo "done!"
#==================================================================
# cleanup
#==================================================================
echo -n "marking vagrant as processed..."
touch $vagrantfile_location
echo "done!"
My requirements dev.txt looks like this:
Django==1.5.1
Fabric==1.7.0
South==0.8.2
Pillow==2.1.0
django-less==0.7.2
paramiko==1.11.0
psycopg2==2.5.1
pycrypto==2.6
wsgiref==0.1.2
django-registration==1.0
Any idea why I can't find Django and my other things in my VM?
This is a whole 'nother path, but I highly recommend using Berkshelf and doing it the Berkshelf way. There's a great guide online for rolling them this way.
That is, create a cookbook as a wrapper that will do everything your script does.
So the solution was to remove the PostgreSQL dependency psycopg2==2.5.1 from my requirements (left over from the setup of my other project), because here I'll be using a MySQL database instead.
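In other words, dev.txt simply loses the psycopg2 line; a sketch of the result (the MySQL driver line is an assumption, added only to show where a MySQL client library would go, and isn't in the original post):
Django==1.5.1
Fabric==1.7.0
South==0.8.2
Pillow==2.1.0
django-less==0.7.2
paramiko==1.11.0
pycrypto==2.6
wsgiref==0.1.2
django-registration==1.0
MySQL-python==1.2.5  # assumed MySQL client library, not in the original post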