Issue - Dockerfile - CentOS and Postfix - SMTP Relay SendGrid

Having issues starting a container built using a Dockerfile.
I am trying to run a CentOS image that has Postfix configured to relay through SendGrid (or any SMTP relay service), following the SendGrid documentation (Integrate SendGrid with Postfix).
Dockerfile below:
FROM centos:7
# Update package repository and install postfix and SASL
RUN yum update -y && yum install -y postfix cyrus-sasl-plain
# Configure postfix to listen on all interfaces and enable SASL
RUN sed -i 's/inet_interfaces = localhost/inet_interfaces = all/' /etc/postfix/main.cf \
&& echo "broken_sasl_auth_clients = yes" >> /etc/postfix/main.cf \
&& echo "smtpd_sasl_auth_enable = yes" >> /etc/postfix/main.cf \
&& echo "smtpd_sasl_security_options = noanonymous" >> /etc/postfix/main.cf \
&& echo "smtp_sasl_tls_security_options = noanonymous" >> /etc/postfix/main.cf \
&& echo "smtp_tls_security_level = encrypt" >> /etc/postfix/main.cf \
&& echo "header_size_limit = 4096000" >> /etc/postfix/main.cf \
&& echo "relayhost = [smtp.sendgrid.net]:587" >> /etc/postfix/main.cf
# Expose the SMTP port (25)
EXPOSE 25
# Run a script to create a SASL password file for the specified user with the specified password
CMD ["/bin/bash", "-c", "echo $USERNAME $PASSWORD > /etc/postfix/sasl_passwd && chmod 600 /etc/postfix/sasl_passwd && echo 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd' >> /etc/postfix/main.cf && postfix start"]
Then I do a docker build
docker build -t user/centos-postfix-sendgrid . --no-cache=true --platform=linux/amd64
Then docker run
docker run -itd --cap-add=NET_BIND_SERVICE -p 2513:25 -e USERNAME=apikey -e PASSWORD=passwordhere --name centos7_postfix_sendgrid user/centos-postfix-sendgrid
The container doesn't start, and I'm not sure what the issue is here. Any ideas?
Thanks
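For anyone hitting the same wall: the likely culprit is the final postfix start, which daemonizes and returns immediately, so the container's main process exits and Docker stops the container. Two smaller issues: a hash: table needs postmap to generate its .db file, and relaying through SendGrid also needs the client-side smtp_sasl_auth_enable = yes (the Dockerfile only sets the server-side smtpd_ options). A minimal entrypoint sketch under those assumptions; entrypoint.sh is a hypothetical script you would COPY into the image and run via ENTRYPOINT ["/entrypoint.sh"] instead of the long CMD:
#!/bin/bash
set -e
# sasl_passwd must be keyed by the relayhost, and hash: tables need postmap
echo "[smtp.sendgrid.net]:587 ${USERNAME}:${PASSWORD}" > /etc/postfix/sasl_passwd
postmap hash:/etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
# client-side SASL toward the relay; the smtpd_* settings only cover inbound mail
postconf -e 'smtp_sasl_auth_enable = yes'
# "postfix start" forks into the background; CentOS 7 ships Postfix 2.10,
# which predates "postfix start-fg", so keep PID 1 alive explicitly
postfix start
exec tail -f /dev/null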

Related

AWS Elastic Beanstalk ebextensions configuration doesn't work when new instance gets spawned

In my Laravel application, I have created a folder .ebextensions, and it has the configuration to install supervisor on the EC2 instance.
When I deploy the application for the first time and the instance gets created, everything works fine: supervisor gets installed.
But when the environment scales and a new EC2 instance gets spawned, it doesn't pick up the same configuration, and I need to install supervisor manually on the newer instance.
Is there a way for the newer instances to take the configuration from .ebextensions and run it the same way it ran the first time?
This is the structure of the .ebextensions folder
.ebextensions
  - supervisor
      - setup.sh
      - supervisor_laravel.conf
      - supervisord.conf
  - supervisor.config
setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env

if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi

if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi

if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi

. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf

if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi

/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"

yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm

if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
supervisor_laravel.conf
[program:worker]
process_name=%(program_name)s_%(process_num)02d
command=/usr/bin/php /var/www/html/artisan queue:work --tries=3 --timeout=0
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
;user=forge
numprocs=3
redirect_stderr=true
stderr_logfile=/var/log/supervisor_laravel.err.log
stdout_logfile=/var/log/supervisor_laravel.out.log
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
environment=SYMFONY_ENV=prod
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
[inet_http_server]
port = 9000
username = user
password = pw
supervisor.config
container_commands:
  01_install_supervisor:
    command: ".ebextensions/supervisor/setup.sh"
I have found the answer, though in a slightly different way.
Initially, I had all the files in the .ebextensions folder. I moved some of the files into a different folder called "awsconfig" (the name doesn't matter), moved some of the commands from the setup.sh file to .platform/hooks/postdeploy, and gave setup.sh execute permission with chmod +x.
Important
AWS cleans up the .ebextensions folder after the commands in it are executed. I read that here
I did this because supervisor was getting installed on the newer instance (the one created after scaling), but the configuration files weren't getting copied, because there was no .ebextensions folder on the newer instance.
To copy the configuration files after the instance gets deployed, I used platform hooks postdeploy.
You can read about platform-hooks-postdeploy here
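One gotcha worth noting: Elastic Beanstalk only runs platform hooks that are executable, so the execute bit has to survive version control as well. A minimal sketch, assuming the project lives in a git repo:
# Make the hook executable locally and record the bit in git,
# so every newly spawned instance gets it too
chmod +x .platform/hooks/postdeploy/setup.sh
git update-index --chmod=+x .platform/hooks/postdeploy/setup.sh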
This is the structure I have now
.ebextensions
  - supervisor
      - setup.sh
  - supervisor.config
.platform
  - hooks
      - postdeploy
          - setup.sh
awsconfig
  - supervisor
      - supervisor_laravel.conf
      - supervisord.conf
These are the changes to the setup.sh file, which is split into two files in different folders:
.ebextensions/supervisor/setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env

if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi

if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi

if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi
.platform/hooks/postdeploy/setup.sh
#!/bin/bash
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf

if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi

/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"

yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm

if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
The content of all other files remains the same as what I have posted in the question.

Deploy shiny with shinyproxy - no app showing

I've developed a Shiny app and I'm trying to do a first lightweight deploy using ShinyProxy.
The installation all seems fine: I've installed Docker and Java.
I thought that building a package that wraps the app and other functions would be a good idea, so I developed a package (CI); CI::launch_application is basically a wrapper around the runApp function of the shiny package. This is the code:
launch_application <- function(launch.browser = interactive(), ...) {
  runApp(
    appDir = system.file("app", package = "CI"),
    launch.browser = launch.browser,
    ...
  )
}
I successfully built the Docker image with this Dockerfile:
FROM rocker/r-base:latest

## Install required dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ## for R package 'curl'
        libcurl4-gnutls-dev \
        apt-utils \
        ## for R package 'xml2'
        libxml2-dev \
        ## for R package 'openssl'
        libssl-dev \
        zlib1g-dev \
        default-jdk \
        default-jre \
    && apt-get clean \
    && R CMD javareconf \
    && rm -rf /var/lib/apt/lists/

## Install major fixed R dependencies
# - they will always be needed and we want them in a dedicated layer,
#   as opposed to getting them dynamically via `remotes::install_local()`
RUN install2.r --error \
    shiny \
    dplyr \
    devtools \
    rJava \
    RJDBC

# copy the app to the image
RUN mkdir /root/CI
COPY . /root/CI

# Install CI
RUN install2.r --error remotes \
    && R -e "remotes::install_local('/root/CI')"

# Set host and port
RUN echo "options(shiny.port = 80, shiny.host = '0.0.0.0')" >> /usr/local/lib/R/Rprofile.site

EXPOSE 80

CMD ["R", "-e", "CI::launch_application()"]
This is my application.yml file
proxy:
  title:
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  admin-groups: scientists
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  authentication: simple
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
  specs:
    - id: home
      display-name: Customer Intelligence
      description: Segment your customer
      container-cmd: ["R", "-e", "CI::launch_application()"]
      container-image: company/image
      access-groups: scientist

logging:
  file:
    shinyproxy.log
When I launch java -jar shinyproxy.jar and visit the URL with the port I exposed, I see a login mask.
I logged in with simple authentication (the login is successful according to shinyproxy.log), but neither an app nor a list of apps is showing.
When I launch the app locally, everything works fine.
Thanks
There is a misprint in the allowed user group in application.yml (it should be scientists, not scientist):
access-groups: scientists
Dzimitry is right. It was a typo: scientists instead of scientist.
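A related gotcha when the login works but apps fail to launch: with url: http://localhost:2375 in the docker section, the Docker daemon itself must be listening on that TCP port, which is off by default. One way to enable it on a systemd host, as a sketch:
# Make dockerd listen on TCP 2375 in addition to the unix socket
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://127.0.0.1:2375
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker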

Jenkinsfile to automatically deploy to EKS

How do I pass my AWS credentials when I am running a Jenkins job?
Taking this as an example: https://github.com/PaulMaddox/amazon-eks-kubectl
$ docker run -v ~/.aws:/home/kubectl/.aws -e CLUSTER=demo maddox/kubectl get services
The above works on my laptop, but I want to pass the AWS credentials in the file. I have AWS configured in my Jenkins --> Credentials. I also have a Bitbucket repo which contains a Jenkinsfile and a YAML file for the "service" and "deployment".
The way I do it now is to run kubectl create -f filename.yaml, which deploys to EKS. I just want to do the same thing but automate it with a Jenkinsfile; any suggestions on how to do it, either with kubectl or with Helm?
In your Jenkinsfile you should include a similar section:
stage('Deploy on Dev') {
    node('master'){
        withEnv(["KUBECONFIG=${JENKINS_HOME}/.kube/dev-config","IMAGE=${ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/${ECR_REPO_NAME}:${IMAGETAG}"]){
            sh "sed -i 's|IMAGE|${IMAGE}|g' k8s/deployment.yaml"
            sh "sed -i 's|ACCOUNT|${ACCOUNT}|g' k8s/service.yaml"
            sh "sed -i 's|ENVIRONMENT|dev|g' k8s/*.yaml"
            sh "sed -i 's|BUILD_NUMBER|01|g' k8s/*.yaml"
            sh "kubectl apply -f k8s"
            DEPLOYMENT = sh (
                script: 'cat k8s/deployment.yaml | yq -r .metadata.name',
                returnStdout: true
            ).trim()
            echo "Creating k8s resources..."
            sleep 180
            DESIRED = sh (
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$2}' | grep -v DESIRED",
                returnStdout: true
            ).trim()
            CURRENT = sh (
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$3}' | grep -v CURRENT",
                returnStdout: true
            ).trim()
            if (DESIRED.equals(CURRENT)) {
                currentBuild.result = "SUCCESS"
                return
            } else {
                error("Deployment Unsuccessful.")
                currentBuild.result = "FAILURE"
                return
            }
        }
    }
}
which will be responsible for automating the deployment process.
I hope it helps.
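To get the AWS credentials from Jenkins --> Credentials into those sh steps, a common route is to inject them as environment variables (for example via the credentials-binding or AWS credentials plugin) and let the AWS CLI generate the kubeconfig. A minimal sketch of the shell side, assuming AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are already bound and the cluster is named demo:
# Writes an EKS entry into the kubeconfig using the bound credentials,
# so the kubectl calls that follow authenticate against the cluster
aws eks update-kubeconfig --name demo --region us-east-1
kubectl apply -f k8s/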

Running supervisord in AWS Environment

I'm working on adding Django Channels to my Elastic Beanstalk environment, but I'm running into trouble configuring supervisord. Specifically, in /.ebextensions I have a file channels.config with this code:
container_commands:
  01_copy_supervisord_conf:
    command: "cp .ebextensions/supervisord/supervisord.conf /opt/python/etc/supervisord.conf"
  02_reload_supervisord:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf reload"
This errors on the 2nd command, with the following error message from the Elastic Beanstalk CLI:
Command failed on instance. Return code: 1 Output: error: <class 'FileNotFoundError'>, [Errno 2] No such file or directory:
file: /opt/python/run/venv/local/lib/python3.4/site-packages/supervisor/xmlrpc.py line: 562.
container_command 02_reload_supervisord in .ebextensions/channels.config failed.
My guess would be that supervisor didn't install correctly, but because command 1 copies the files without an error, that leads me to think supervisor is indeed installed and I have an issue with the container command. Has anyone implemented supervisor in an AWS environment who can see where I'm going wrong?
You should be careful about Python versions and exact installation paths. Here is how I did it; maybe it can help.
packages:
  yum:
    python27-setuptools: []
container_commands:
  01-supervise:
    command: ".ebextensions/supervise.sh"
Here is supervise.sh:
#!/bin/bash
if [ "${SUPERVISE}" == "enable" ]; then
    export HOME="/root"
    export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
    easy_install supervisor

    cat <<'EOB' > /etc/init.d/supervisord
# Source function library
. /etc/rc.d/init.d/functions

# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
    . /etc/sysconfig/supervisord
fi

# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0

start() {
    echo -n $"Starting $prog: "
    daemon --pidfile=${pidfile} $supervisord $OPTIONS
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        touch ${lockfile}
        $supervisorctl $OPTIONS status
    fi
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}

reload() {
    echo -n $"Reloading $prog: "
    LSB=1 killproc -p $pidfile $supervisord -HUP
    RETVAL=$?
    echo
    if [ $RETVAL -eq 7 ]; then
        failure $"$prog reload"
    else
        $supervisorctl $OPTIONS status
    fi
}

restart() {
    stop
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p ${pidfile} $supervisord
        RETVAL=$?
        [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
        ;;
    restart)
        restart
        ;;
    condrestart|try-restart)
        if status -p ${pidfile} $supervisord >&/dev/null; then
            stop
            start
        fi
        ;;
    force-reload|reload)
        reload
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
        RETVAL=2
esac
exit $RETVAL
EOB
    chmod +x /etc/init.d/supervisord

    cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe#indeed.com>
#         original work
#         Erwan Queffelec <erwan.queffelec#gmail.com>
#         adjusted to new LSB-compliant init script

# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars

# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid

# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord

# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl

# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60

# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB

    mkdir -p /var/run/supervisord/
    chown webapp: /var/run/supervisord/

    cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777

[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock

[program:process-ipn-api-gpsfsoft]
command = -- command that u want to run ---
directory = /var/app/current/
user = webapp
autorestart = true
startsecs = 0
numprocs = 10
process_name = -- process name that u want ---
EOB

    # this is now a little tricky, not officially documented, so might break but it is the cleanest solution
    # first, before the "flip" is done (e.g. switch between ondeck vs current), let's stop supervisord
    echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
    chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh

    # then, right after the webserver is reloaded, we can start supervisord again
    echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
    chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
PS: You have to define SUPERVISE as enable in the Elastic Beanstalk environment properties to get this to run.
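Setting that flag can be done from the EB CLI, for example; a sketch assuming the eb tooling is already initialized for the environment:
# Define the environment property the script checks; the next deploy then
# runs the container_commands, which in turn execute supervise.sh
eb setenv SUPERVISE=enable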

Vagrant VM not getting Django and others requirements

I'm using Vagrant and Chef Solo to set up my Django dev environment. Using Chef Solo I successfully install my packages (vim, git, apt, python, mysql), but when I then set up my project using pip to download/install my requirements (django, south, django-registration, etc.), these are not correctly downloaded/found in my fresh VM.
I'm not sure if it's a location issue. The downloads run and I only get warnings, never errors, but afterwards the packages are not at the expected location. (I have another project set up exactly the same way and it works, so maybe I'm missing something here...)
Here is my Vagrantfile:
Vagrant::Config.run do |config|
  config.vm.define :djangovm do |django_config|
    # Every Vagrant virtual environment requires a box to build off of.
    django_config.vm.box = "lucid64"

    # The url from where the 'config.vm.box' box will be fetched if it
    # doesn't already exist on the user's system.
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"

    # Forward a port from the guest to the host, which allows for outside
    # computers to access the VM, whereas host only networking does not.
    django_config.vm.forward_port 80, 8080
    django_config.vm.forward_port 8000, 8001

    # Enable provisioning with chef solo, specifying a cookbooks path (relative
    # to this Vagrantfile), and adding some recipes and/or roles.
    django_config.vm.provision :chef_solo do |chef|
      chef.json = {
        python: {
          install_method: 'source',
          version: '2.7.5',
          checksum: 'b4f01a1d0ba0b46b05c73b2ac909b1df'
        },
        mysql: {
          server_root_password: 'root',
          server_debian_password: 'root',
          server_repl_password: 'root'
        },
      }
      chef.cookbooks_path = "vagrant_resources/cookbooks"
      chef.add_recipe "apt"
      chef.add_recipe "build-essential"
      chef.add_recipe "git"
      chef.add_recipe "vim"
      chef.add_recipe "openssl"
      chef.add_recipe "mysql::client"
      chef.add_recipe "mysql::server"
      chef.add_recipe "python"
    end

    django_config.vm.provision :shell, :path => "vagrant_resources/vagrant_bootstrap.sh"
  end
end
And here is the bootstrap file to download Django and continue setting things up:
#!/usr/bin/env bash
eval vagrantfile_location="~/.vagrantfile_processed"
if [ -f $vagrantfile_location ]; then
    echo "Vagrantfile already processed. Exiting..."
    exit 0
fi
#==================================================================
# install dependencies
#==================================================================
/usr/bin/yes | pip install --upgrade pip
/usr/bin/yes | pip install --upgrade virtualenv
/usr/bin/yes | sudo apt-get install python-software-properties
#==================================================================
# set up the local dev environment
#==================================================================
if [ -f "/home/vagrant/.bash_profile" ]; then
echo -n "removing .bash_profile for user vagrant..."
rm /home/vagrant/.bash_profile
echo "done!"
fi
echo -n "creating new .bash_profile for user vagrant..."
ln -s /vagrant/.bash_profile /home/vagrant/.bash_profile
source /home/vagrant/.bash_profile
echo "done!"
#==================================================================
# set up virtual env
#==================================================================
cd /vagrant;
echo -n "Creating virtualenv..."
virtualenv myquivers;
echo "done!"
echo -n "Activating virtualenv..."
source /vagrant/myquivers/bin/activate
echo "done!"
echo -n "installing project dependencies via pip..."
/usr/bin/yes | pip install -r /vagrant/myquivers/myquivers/requirements/dev.txt
echo "done!"
#==================================================================
# install front-endy things
#==================================================================
echo -n "adding node.js npm repo..."
add-apt-repository ppa:chris-lea/node.js &> /dev/null || exit 1
echo "done!"
echo -n "calling apt-get update..."
apt-get update &> /dev/null || exit 1
echo "done!"
echo -n "nodejs and npm..."
apt-get install nodejs npm &> /dev/null || exit 1
echo "done!"
echo -n "installing grunt..."
npm install -g grunt-cli &> /dev/null || exit 1
echo "done!"
echo -n "installing LESS..."
npm install -g less &> /dev/null || exit 1
echo "done!"
echo -n "installing uglify.js..."
npm install -g uglify-js &> /dev/null || exit 1
echo "done!"
#==================================================================
# cleanup
#==================================================================
echo -n "marking vagrant as processed..."
touch $vagrantfile_location
echo "done!"
My requirements dev.txt looks like this:
Django==1.5.1
Fabric==1.7.0
South==0.8.2
Pillow==2.1.0
django-less==0.7.2
paramiko==1.11.0
psycopg2==2.5.1
pycrypto==2.6
wsgiref==0.1.2
django-registration==1.0
Any idea why I can't find Django and my other things in my VM?
This is a whole 'nother path, but I highly recommend using Berkshelf and doing it the Berkshelf way; there's a great guide online for rolling them this way.
That is, create a cookbook as a wrapper that will do everything your script does.
So the solution was to remove the PostgreSQL dependency psycopg2==2.5.1 from my requirements (left over from the setup of my other project), because here I'll be using a MySQL database instead.
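That failure mode is worth spelling out: psycopg2 builds a C extension against the PostgreSQL client headers, and a failed build in a requirements file typically aborts the rest of the pip run, which would explain why Django and the other packages never appeared. If the Postgres driver is ever needed again, a sketch of the fix, with package names assuming a Debian/Ubuntu guest:
# psycopg2 compiles against libpq; install the headers first, then retry
sudo apt-get install -y libpq-dev python-dev
pip install psycopg2==2.5.1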