Fabric - Force password prompt on Production Deploy - django

Is it possible to force the user to enter their password when they deploy to production?
I was deploying to staging, but accidentally tab-completed to production on the command line instead and almost made a huge mistake! Needless to say, I will never use autocomplete for fab ever again.
UPDATE:
Below is what our fabfile essentially looks like. Each host, like application-staging or application-production, is saved in the ssh config.
from fabric import colors
from fabric.api import *
from fabric.contrib.project import *
from fabric.contrib.files import sed
import git
env.app = '{{ project_name }}'
env.dest = "/var/www/%(app)s" % env
env.use_ssh_config = True
def reload_processes():
    sudo("kill -HUP `cat /tmp/%(app)s.pid`" % env)

def sync():
    repo = git.Repo(".")
    sha = repo.head.commit.hexsha
    with cd(env.dest):
        run("git fetch --all")
        run("git checkout {} -f".format(sha))
    if "production" in env.host_string:
        with cd(env.dest):
            run("compass compile")
    with prefix(". /home/ubuntu/environments/%(app)s/bin/activate" % env):
        run("%(dest)s/manage.py syncmedia" % env)

def deploy():
    sync()
    link_files()
    reload_processes()
    add_commit_sha()

def link_files():
    print(colors.yellow("Linking settings."))
    env.label = env.host_string.replace("%(app)s-" % env, "")
    with cd(env.dest):
        sudo("rm -f local_settings.py")
        sudo("ln -s conf/settings/%(label)s.py local_settings.py" % env)
        sudo("rm -f conf/gunicorn/current.py")
        sudo("ln -s %(label)s.py conf/gunicorn/current.py" % env)
        sudo("rm -f celeryconfig.py")
        sudo("ln -s conf/settings/celery/%(label)s.py celeryconfig.py" % env)
        sudo("rm -f conf/supervisor/programs.ini")
        sudo("ln -s %(label)s.ini conf/supervisor/programs.ini" % env)

def reload_processes(reload_type="soft"):
    print(colors.yellow("Reloading processes."))
    env.label = env.host_string.replace("%(app)s-" % env, "")
    with cd(env.dest):
        sudo("kill -HUP `cat /tmp/gunicorn.%(app)s.%(label)s.pid`" % env)

def add_commit_sha():
    repo = git.Repo(".")
    sha = repo.head.commit.hexsha
    sed("{}/settings.py".format(env.dest), "^COMMIT_SHA = .*$", 'COMMIT_SHA = "{}"'.format(sha), backup="\"\"", use_sudo=True)

I use this pattern, where you set up the staging/prod configurations in their own tasks:
from fabric.api import abort, env, require, task
from fabric.contrib.console import confirm

@task
def stage():
    env.deployment_location = 'staging'
    env.hosts = ['staging']

@task
def prod():
    env.deployment_location = 'production'
    env.hosts = ['prod1', 'prod2']

@task
def deploy():
    require('deployment_location', used_for='deployment. \
You need to prefix the task with the location, i.e: fab stage deploy.')
    if not confirm("""OK. We're about to deploy to:
Location: {env.deployment_location}
Is that cool?""".format(env=env)):
        abort("Deploy cancelled.")
    # deployment tasks down here
In this case, you have to type fab prod deploy and say yes to the confirmation message in order to deploy to production.
Just typing fab deploy is an error, because the deployment_location env variable isn't set.
It doesn't prevent total idiocy, but it does prevent accidental typos and so far it's worked well.

I mean yeah. You could remove all of their ssh keys and make them use passwords every time. You could also use stdlib prompts to ask the user if they meant production. You can also have only certain users write to production using basic ACLs. There are any number of ways of slowing the deployment process down; it mostly comes down to what you and your devs prefer.
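For what it's worth, a minimal sketch of the prompt/password idea in Fabric 1.x might look like this (the task name, host alias, and the no_agent/no_keys settings are my assumptions, not taken from the fabfile above):
from getpass import getpass
from fabric.api import env, task

@task
def production():
    # Hypothetical "production" task: force an interactive password prompt
    # instead of relying on cached SSH keys or the agent.
    env.hosts = ['application-production']
    env.no_agent = True   # assumption: equivalent of fab --no-agent
    env.no_keys = True    # assumption: equivalent of fab -k / --no-keys
    env.password = getpass("Password for PRODUCTION deploy: ")
Running fab production deploy would then always stop and ask for the password before anything touches the production host.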

Related

How to get all the VM's information for all Projects in GCP

I have multiple projects in my GCP account and I need the operating system, OS version, and OS build version for all the VMs in all projects.
I didn't find a tool for that, so I wrote something you can use.
This code could be improved, but it shows a way to scan all projects and get information about the OS.
Let me know if it helps you.
Pip install:
!pip install google-cloud
!pip install google-api-python-client
!pip install oauth2client
Code:
import subprocess
import sys
import logging
import threading
import pprint
logger = logging.Logger('catch_all')
def execute_bash(parameters):
    try:
        return subprocess.check_output(parameters)
    except Exception as e:
        logger.error(e)
        logger.error('ERROR: Looking in jupyter console for more information')

def scan_gce(project, results_scan):
    print('Scanning project: "{}"'.format(project))
    ex = execute_bash(['gcloud', 'compute', 'instances', 'list', '--project', project, '--format=value(name,zone, status)'])
    list_result_vms = []
    if ex:
        list_vms = ex.decode("utf-8").split('\n')
        for vm in list_vms:
            if vm:
                vm_info = vm.split('\t')
                print('Scanning Instance: "{}" in project "{}"'.format(vm_info[0], project))
                results_bytes = execute_bash(['gcloud', 'compute', '--project', project,
                                              'ssh', '--zone', vm_info[1], vm_info[0],
                                              '--command', 'cat /etc/*-release'])
                if results_bytes:
                    results = results_bytes.decode("utf-8").split('\n')
                    list_result_vms.append({'instance_name': vm_info[0], 'result': results})
    results_scan.append({'project': project, 'vms': list_result_vms})

list_projects = execute_bash(['gcloud', 'projects', 'list', '--format=value(projectId)']).decode("utf-8").split('\n')
threads_project = []
results_scan = []
for project in list_projects:
    t = threading.Thread(target=scan_gce, args=(project, results_scan))
    threads_project.append(t)
    t.start()
for t in threads_project:
    t.join()
for result in results_scan:
    pprint.pprint(result)
You can find the full code here:
Quick and dirty:
gcloud projects list --format 'value(PROJECT_ID)' >> proj_list
cat proj_list | while read pj; do gcloud compute instances list --project $pj; done
You can use the following command in the Cloud Shell to fetch all projects and then show the instances for each of them:
for i in $(gcloud projects list | sed 1d | cut -f1 -d$' '); do
gcloud compute instances list --project $i;done;
Note: make sure you have the compute.instances.list permission on all of the projects.
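If a project is missing that permission, one way to grant it (the project ID and e-mail below are placeholders, and you need rights to edit the project's IAM policy) is:
gcloud projects add-iam-policy-binding my-project-id \
    --member="user:someone@example.com" --role="roles/compute.viewer"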
Here is how you do it using pip3 install -U google-api-python-client, without using bash. Note: this is meant to be run with keyless auth; using service account keys is bad practice.
https://github.com/googleapis/google-api-python-client/blob/main/docs/start.md
https://github.com/googleapis/google-api-python-client/blob/main/docs/dyn/index.md
https://googleapis.github.io/google-api-python-client/docs/dyn/compute_v1.html
from googleapiclient import discovery
from googleapiclient.errors import HttpError
import yaml
import structlog
logger = structlog.stdlib.get_logger()
def get_projects() -> list:
    projects: list = []
    service = discovery.build('cloudresourcemanager', 'v1', cache_discovery=False)
    request = service.projects().list()
    response = request.execute()
    for project in response.get('projects'):
        projects.append(project.get("projectId"))
    logger.debug('got projects', projects=projects)
    return projects

def get_zones(project: str) -> list:
    zones: list = []
    service = discovery.build('compute', 'v1', cache_discovery=False)
    request = service.zones().list(project=project)
    while request is not None:
        response = request.execute()
        if 'items' not in response:
            logger.warn('no zones found')
            return []
        for zone in response.get('items'):
            zones.append(zone.get('name'))
        request = service.zones().list_next(previous_request=request, previous_response=response)
    logger.debug('got zones', zones=zones)
    return zones

def get_vms() -> list:
    vms: list = []
    projects: list = get_projects()
    service = discovery.build('compute', 'v1', cache_discovery=False)
    for project in projects:
        try:
            zones: list = get_zones(project)
            for zone in zones:
                request = service.instances().list(project=project, zone=zone)
                response = request.execute()
                if 'items' in response:
                    for vm in response.get('items'):
                        ips: list = []
                        for interface in vm.get('networkInterfaces'):
                            ips.append(interface.get('networkIP'))
                        vms.append({vm.get('name'): {'self_link': vm.get('selfLink'), 'ips': ips}})
        except HttpError:
            pass
    logger.debug('got vms', vms=vms)
    return vms

if __name__ == '__main__':
    data = get_vms()
    with open('output.yaml', 'w') as fh:
        yaml.dump(data, fh)

Can't send email in management command run by cron

I have a strange problem with a Django management command I am running via cron.
I have a production server set up to use Mailgun, and a management command that simply sends an email:
from django.core.mail import send_mail
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Send email'

    def handle(self, *args, **options):
        send_mail('Test email', 'Test content', 'noreply@example.com', ['me@example.com'], fail_silently=False)
This script works perfectly if I run it via the command line (I'm using virtualenvwrapper):
> workon myapp
> python manage.py do_command
or directly:
> /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
But when I set it up with cron (crontab -e):
*/1 * * * * /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
The script runs (without error), but the email isn't sent.
What could be going on?
OK, the issue was that the wrong DJANGO_SETTINGS_MODULE env var was set and there were a few things throwing me off the scent:
My manage.py script defaults to the "development" version of my settings, settings.local, which uses the console email backend. Cron suppresses all output, so I wasn't seeing that happening.
Secondly, I was testing in a shell that already had DJANGO_SETTINGS_MODULE set to settings.production, so it appeared that the script ran correctly when I ran it on the command line.
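For reference, that "development settings by default" behaviour typically comes from a line like this near the top of manage.py (the settings path here is just illustrative):
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")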
The fix is easy, add DJANGO_SETTINGS_MODULE to the crontab:
DJANGO_SETTINGS_MODULE=config.settings.production
*/1 * * * * ...
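Since cron suppresses output, it also helps to capture the command's output somewhere while debugging this kind of issue, for example (the log path is arbitrary):
*/1 * * * * /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command >> /tmp/do_command.log 2>&1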

Django allow return response before another process finished

I have a problem when I want to run a webhook. In this case I want to run another script to build the project, let's say runaway.sh:
#!/bin/bash
cd /home/myuser/envs/project-vue
git pull https://username:password@gitlab.com/username/project-vue
npm install
npm run build
and then in my views.py I try to call that command:
@csrf_exempt
def gitlab_webhook_view(request):
    header_signature = request.META.get('HTTP_X_GITLAB_TOKEN')
    if header_signature == settings.GITLAB_WEBHOOK_KEY:
        subprocess.call(os.path.join(settings.BASE_DIR, 'runaway.sh'))
        return HttpResponse('pull & build welldone!')
    return HttpResponseForbidden('Permission denied.')
But GitLab always returns Hook execution failed: Net::ReadTimeout, since npm install and npm run build take a long time.
So I want that process to continue in the background and just return "pull & build welldone!" within a few seconds. Thanks in advance.
You can use celery for this:
import os
import subprocess
from celery import Celery
from django.conf import settings

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def pull_proc():
    subprocess.call(os.path.join(settings.BASE_DIR, 'runaway.sh'))
In the view you can call this task in the background like this:
@csrf_exempt
def gitlab_webhook_view(request):
    header_signature = request.META.get('HTTP_X_GITLAB_TOKEN')
    if header_signature == settings.GITLAB_WEBHOOK_KEY:
        pull_proc.delay()
        return HttpResponse('pull & build welldone!')
    return HttpResponseForbidden('Permission denied.')
You can find a description of how to set up Celery with Django here.
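Note that the queued task only executes if a Celery worker is running to pick it up; assuming the task module is called tasks as in the snippet above, you would start one with something like:
celery -A tasks worker --loglevel=info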

django fabric define multiple host with password

Let's say I have a list of hosts to provide:
env.hosts = ['host1', 'host2', 'host3']
env.password = ['password1', 'password2', 'password3']
It's not working for me. I don't want to just give the hosts and then type a password for every host; I want to set the password for each host so it deploys my site without asking for a password.
How can I do that?
Your best option is to do this.
Note that the password keys need to be in the form user@host:port, otherwise it won't work.
fabfile.py
from fabric.api import env, task, run
@task
def environments():
    env.hosts = ['user1@10.99.0.2', 'user2@10.99.0.2', 'user3@10.99.0.2']
    env.passwords = {'user1@10.99.0.2:22': 'pass1', 'user2@10.99.0.2:22': 'pass2', 'user3@10.99.0.2:22': 'pass3'}

@task
def echo():
    run('whoami')
and then to test:
$ fab environments echo
[user1@10.99.0.2] Executing task 'echo'
[user1@10.99.0.2] run: whoami
[user1@10.99.0.2] out: user1
[user1@10.99.0.2] out:
[user2@10.99.0.2] Executing task 'echo'
[user2@10.99.0.2] run: whoami
[user2@10.99.0.2] out: user2
[user2@10.99.0.2] out:
[user3@10.99.0.2] Executing task 'echo'
[user3@10.99.0.2] run: whoami
[user3@10.99.0.2] out: user3
[user3@10.99.0.2] out:
Done.
Disconnecting from user2@10.99.0.2... done.
Disconnecting from user1@10.99.0.2... done.
Disconnecting from user3@10.99.0.2... done.

htaccess on heroku for django app

The title pretty much sums up my question. I would like to password-protect some files in my Django app that lives on Heroku.
If I can't use htaccess does anyone have suggestions on what else I could use?
Thanks.
As @mipadi said, you can't use .htaccess on Heroku, but you can create a middleware for that:
from django.conf import settings
from django.http import HttpResponse
from django.utils.translation import ugettext as _
def basic_challenge(realm=None):
    if realm is None:
        realm = getattr(settings, 'WWW_AUTHENTICATION_REALM', _('Restricted Access'))
    # TODO: Make a nice template for a 401 message?
    response = HttpResponse(_('Authorization Required'), mimetype="text/plain")
    response['WWW-Authenticate'] = 'Basic realm="%s"' % (realm)
    response.status_code = 401
    return response

def basic_authenticate(authentication):
    # Taken from paste.auth
    (authmeth, auth) = authentication.split(' ', 1)
    if 'basic' != authmeth.lower():
        return None
    auth = auth.strip().decode('base64')
    username, password = auth.split(':', 1)
    AUTHENTICATION_USERNAME = getattr(settings, 'BASIC_WWW_AUTHENTICATION_USERNAME')
    AUTHENTICATION_PASSWORD = getattr(settings, 'BASIC_WWW_AUTHENTICATION_PASSWORD')
    return username == AUTHENTICATION_USERNAME and password == AUTHENTICATION_PASSWORD

class BasicAuthenticationMiddleware(object):
    def process_request(self, request):
        if not getattr(settings, 'BASIC_WWW_AUTHENTICATION', False):
            return
        if 'HTTP_AUTHORIZATION' not in request.META:
            return basic_challenge()
        authenticated = basic_authenticate(request.META['HTTP_AUTHORIZATION'])
        if authenticated:
            return
        return basic_challenge()
Then you need to define in settings.py:
BASIC_WWW_AUTHENTICATION_USERNAME = "your user"
BASIC_WWW_AUTHENTICATION_PASSWORD = "your pass"
BASIC_WWW_AUTHENTICATION = True
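For the middleware to be picked up it also has to be registered in settings.py; assuming you saved it in a module such as yourapp/middleware.py (the path is hypothetical), that would look something like:
MIDDLEWARE_CLASSES = (
    # ... the default middleware classes ...
    'yourapp.middleware.BasicAuthenticationMiddleware',
)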
I was able to use .htaccess files on heroku with the cedar stack.
Procfile needs to specify a script for the web nodes:
web: sh www/conf/web-boot.sh
The conf/web-boot.sh slipstreams the include of an Apache configuration file.
A conf/httpd/default.conf can then allow overrides, as you know it from Apache.
You can then just use .htaccess files. The whole process is documented in detail in my blog post "PHP on Heroku again", one part of which is about the Apache configuration. The step in 2., including your own httpd configuration, basically is:
sed -i 's/Listen 80/Listen '$PORT'/' /app/apache/conf/httpd.conf
sed -i 's/^DocumentRoot/# DocumentRoot/' /app/apache/conf/httpd.conf
sed -i 's/^ServerLimit 1/ServerLimit 8/' /app/apache/conf/httpd.conf
sed -i 's/^MaxClients 1/MaxClients 8/' /app/apache/conf/httpd.conf
for var in `env|cut -f1 -d=`; do
echo "PassEnv $var" >> /app/apache/conf/httpd.conf;
done
echo "Include /app/www/conf/httpd/*.conf" >> /app/apache/conf/httpd.conf
touch /app/apache/logs/error_log
touch /app/apache/logs/access_log
tail -F /app/apache/logs/error_log &
tail -F /app/apache/logs/access_log &
export LD_LIBRARY_PATH=/app/php/ext
export PHP_INI_SCAN_DIR=/app/www
echo "Launching apache"
exec /app/apache/bin/httpd -DNO_DETACH
I hope this is helpful. I used this for .htaccess and for changing the webroot.
You can't use .htaccess, because Heroku apps aren't served with Apache. You can use Django authentication, though.
Or you can serve the files from another server that is using Apache.
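If you go the Django authentication route, a minimal sketch of a protected download view might look like this (the view name, file path, and content type are placeholders, not something from your app):
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def protected_file(request):
    # Only authenticated users get this far; serve the file manually.
    with open('/app/protected/report.pdf', 'rb') as fh:
        return HttpResponse(fh.read(), content_type='application/pdf')
You would then route a URL to this view instead of exposing the file as a static asset.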