Django Fabric: define multiple hosts with passwords

Let's say I have a list of hosts to provide:
env.hosts = ['host1', 'host2', 'host3']
env.password = ['password1', 'password2', 'password3']
It's not working for me. I don't want to type the password for every host by hand; I want to set a password for each host up front so Fabric picks it up and deploys my site without asking for a password.
How can I do that?

Your best option is to do this:
Note that the password keys need to be in the form user@host:port, otherwise it won't work.
fabfile.py
from fabric.api import env, task, run

@task
def environments():
    env.hosts = ['user1@10.99.0.2', 'user2@10.99.0.2', 'user3@10.99.0.2']
    env.passwords = {'user1@10.99.0.2:22': 'pass1',
                     'user2@10.99.0.2:22': 'pass2',
                     'user3@10.99.0.2:22': 'pass3'}

@task
def echo():
    run('whoami')
and then to test:
$ fab environments echo
[user1@10.99.0.2] Executing task 'echo'
[user1@10.99.0.2] run: whoami
[user1@10.99.0.2] out: user1
[user1@10.99.0.2] out:
[user2@10.99.0.2] Executing task 'echo'
[user2@10.99.0.2] run: whoami
[user2@10.99.0.2] out: user2
[user2@10.99.0.2] out:
[user3@10.99.0.2] Executing task 'echo'
[user3@10.99.0.2] run: whoami
[user3@10.99.0.2] out: user3
[user3@10.99.0.2] out:
Done.
Disconnecting from user2@10.99.0.2... done.
Disconnecting from user1@10.99.0.2... done.
Disconnecting from user3@10.99.0.2... done.
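If you start from the question's two parallel lists, one way (a hedged sketch, not part of the original answer) is to zip them into the env.passwords dict; the user names and port 22 below are assumptions:

from fabric.api import env

# Hypothetical parallel lists, mirroring the question; users and port are assumptions.
hosts = ['user1@host1', 'user2@host2', 'user3@host3']
passwords = ['password1', 'password2', 'password3']

env.hosts = hosts
# Keys must be the full user@host:port string, or Fabric will still prompt.
env.passwords = {'%s:22' % host: pw for host, pw in zip(hosts, passwords)}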

Related

Python2 - proc.stdin not passing password correctly to easyrsa

The idea is to revoke users from the VPN with an easyrsa command. Manually this works; however, when I perform this action with Python it fails.
The easyrsa command asks for two inputs: the first one is a confirmation to continue, the second one is a password.
My current code is:
#!/usr/local/bin/python2.7
import time
from subprocess import Popen, PIPE
pass_phrase = "damn_good_password"
proc = Popen(['/usr/local/share/easy-rsa/easyrsa', 'revoke', 'user@somewhere.com'], stdin=PIPE)
time.sleep(1)
proc.stdin.write('yes\n')
proc.stdin.flush()
time.sleep(1)
proc.stdin.write(pass_phrase + "\n")
proc.stdin.flush()
The script outputs the following:
user@vpn $ sudo ./test.py
Type the word 'yes' to continue, or any other input to abort.
Continue with revocation: Using configuration from /usr/local/share/easy-rsa/openssl-1.0.cnf
Enter pass phrase for /etc/openvpn/pki/private/ca.key:
user@vpn $
User interface error
unable to load CA private key
5283368706004:error:0907B068:PEM routines:PEM_READ_BIO_PRIVATEKEY:bad password read:pem_pkey.c:117:
I think Python incorrectly passes the password to the process. When performing these steps manually and copying the password, it works.
Can someone point me in the right direction to fix this?
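One possible direction, offered only as a hedged suggestion since it is not from the original thread: OpenSSL usually reads the "Enter pass phrase" prompt from the controlling terminal rather than from stdin, so writing to proc.stdin never reaches it. A pty-based tool such as pexpect can answer that prompt; a minimal sketch, assuming the same easyrsa path, prompt texts, and password as above:

import pexpect

pass_phrase = "damn_good_password"

# Spawn easyrsa under a pseudo-terminal so prompts read from the tty can be answered.
child = pexpect.spawn('/usr/local/share/easy-rsa/easyrsa revoke user@somewhere.com')
child.expect('Continue with revocation')   # the "type yes to continue" question
child.sendline('yes')
child.expect('Enter pass phrase for')      # the passphrase prompt that bypasses stdin
child.sendline(pass_phrase)
child.expect(pexpect.EOF)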

Can't send email in management command run by cron

I have a strange problem with a Django management command I am running via cron.
I have a production server set up to use Mailgun, and a management command that simply sends an email:
from django.core.mail import send_mail
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Send email'

    def handle(self, *args, **options):
        send_mail('Test email', 'Test content', 'noreply@example.com',
                  ['me@example.com'], fail_silently=False)
This script works perfectly if I run it via the command line (I'm using virtualenvwrapper):
> workon myapp
> python manage.py do_command
or directly:
> /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
But when I set it up with cron (crontab -e):
*/1 * * * * /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
The script runs (without error), but the email isn't sent.
What could be going on?
OK, the issue was that the wrong DJANGO_SETTINGS_MODULE env var was set and there were a few things throwing me off the scent:
My manage.py script defaults to the "development" version of my settings, settings.local, which uses the console email backend. Cron suppresses all output, so I wasn't seeing that happening.
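For illustration only (the actual settings files are not shown in the question), a development settings module along those lines might contain something like:

# settings/local.py (assumed layout): with this backend, send_mail() just writes
# the message to stdout, which cron discards, so no real email ever goes out.
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'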
Secondly, I was testing in a shell that already had DJANGO_SETTINGS_MODULE set to settings.production, so it appeared that the script ran correctly when I ran it on the command line.
The fix is easy, add DJANGO_SETTINGS_MODULE to the crontab:
DJANGO_SETTINGS_MODULE=config.settings.production
*/1 * * * * ...
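Putting that together with the command used earlier, the crontab ends up looking like:
DJANGO_SETTINGS_MODULE=config.settings.production
*/1 * * * * /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command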

django-celery as a daemon: not working

I have a website project written with Django, Celery and RabbitMQ. A '.delay' task (which creates a new folder) is called when a button is clicked.
Everything works fine with celery (the .delay task is called, and a new folder is created) when I run celery with manage.py like:
python manage.py celeryd
However, when I ran celery as a daemon, even though there was no error, the task was not executed (no folder was created).
I was kind of following the tutorial: http://www.arruda.blog.br/programacao/django-celery-in-daemon/
My settings are:
/etc/default/celeryd:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/myproject"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS=""
# Name of the celery config module.
CELERY_CONFIG_MODULE="myproject.settings"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/w1.log"
CELERYD_PID_FILE="/var/run/celery/w1.pid"
# Workers should run as an unprivileged user.
#CELERYD_USER="root"
#CELERYD_GROUP="root"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myproject.settings"
The corresponding log and pid folders have been created too.
For the /etc/init.d/celeryd init script, I used this version:
https://raw.github.com/ask/celery/1da3aa43d1e6de525beeda398d0acb8841d5b4d2/contrib/generic-init.d/celeryd
for /var/www/myproject/myproject/settings.py, I have:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
INSTALLED_APPS = (
    'djcelery',
    ...
)
There was no error when I started celery using:
/etc/init.d/celeryd start
and no results either. Does anyone know how to fix the problem?
Celery's docs have a daemon troubleshooting section that might be helpful. Celery has a flag that lets you run your init script without actually daemonizing, and that should show what's going wrong:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
Newer versions of that init script have a dryrun command that's an easier-to-remember way to run the start command without daemonizing.
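With one of those newer scripts, that would look something like:
/etc/init.d/celeryd dryrun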

Fabric - Force password prompt on Production Deploy

Is it possible to force the user to enter their password when they deploy to production?
I was deploying to staging, but accidentally tab-completed to production on the command line instead and almost made a huge mistake! Needless to say, I will never use autocomplete for fab ever again.
UPDATE:
Below is what our fabfile essentially looks like. Each host, like application-staging or application-production, is saved in the ssh config.
from fabric import colors
from fabric.api import *
from fabric.contrib.files import sed
from fabric.contrib.project import *
import git
env.app = '{{ project_name }}'
env.dest = "/var/www/%(app)s" % env
env.use_ssh_config = True
def reload_processes():
    sudo("kill -HUP `cat /tmp/%(app)s.pid`" % env)

def sync():
    repo = git.Repo(".")
    sha = repo.head.commit.hexsha
    with cd(env.dest):
        run("git fetch --all")
        run("git checkout {} -f".format(sha))
    if "production" in env.host_string:
        with cd(env.dest):
            run("compass compile")
    with prefix(". /home/ubuntu/environments/%(app)s/bin/activate" % env):
        run("%(dest)s/manage.py syncmedia" % env)

def deploy():
    sync()
    link_files()
    reload_processes()
    add_commit_sha()

def link_files():
    print(colors.yellow("Linking settings."))
    env.label = env.host_string.replace("%(app)s-", "")
    with cd(env.dest):
        sudo("rm -f local_settings.py")
        sudo("ln -s conf/settings/%(label)s.py local_settings.py" % env)
        sudo("rm -f conf/gunicorn/current.py")
        sudo("ln -s %(label)s.py conf/gunicorn/current.py" % env)
        sudo("rm -f celeryconfig.py")
        sudo("ln -s conf/settings/celery/%(label)s.py celeryconfig.py" % env)
        sudo("rm -f conf/supervisor/programs.ini")
        sudo("ln -s %(label)s.ini conf/supervisor/programs.ini" % env)

def reload_processes(reload_type="soft"):
    print(colors.yellow("Reloading processes."))
    env.label = env.host_string.replace("%(app)s-", "")
    with cd(env.dest):
        sudo("kill -HUP `cat /tmp/gunicorn.%(app)s.%(label)s.pid`" % env)

def add_commit_sha():
    repo = git.Repo(".")
    sha = repo.head.commit.hexsha
    sed("{}/settings.py".format(env.dest), "^COMMIT_SHA = .*$",
        'COMMIT_SHA = "{}"'.format(sha), backup="\"\"", use_sudo=True)
I use this pattern, where you set up the staging/prod configurations in their own tasks:
from fabric.api import env, task, require, abort
from fabric.contrib.console import confirm

@task
def stage():
    env.deployment_location = 'staging'
    env.hosts = ['staging']

@task
def prod():
    env.deployment_location = 'production'
    env.hosts = ['prod1', 'prod2']

@task
def deploy():
    require('deployment_location',
            used_for='deployment. You need to prefix the task with the '
                     'location, i.e: fab stage deploy.')
    if not confirm("""OK. We're about to deploy to:
    Location: {env.deployment_location}
    Is that cool?""".format(env=env)):
        abort("Deployment cancelled.")
    # deployment tasks down here
In this case, you have to type fab prod deploy and say yes to the confirmation message in order to deploy to production.
Just typing fab deploy is an error, because the deployment_location env variable isn't set.
It doesn't prevent total idiocy, but it does prevent accidental typos and so far it's worked well.
I mean, yeah. You could remove all of their SSH keys and make them use passwords every time. You could also use stdlib prompts to ask the user if they really meant production. You could also allow only certain users to write to production using basic ACLs. There are any number of ways of slowing the deployment process down; it mostly comes down to what you and your devs prefer.
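As a hedged sketch of the "prompt the user" idea (the settings below are Fabric 1.x options, and the confirmation wording is made up), the production task itself could refuse to continue without an interactive answer and could skip SSH keys so a password is required:

from fabric.api import abort, env, prompt, task

@task
def prod():
    env.hosts = ['prod1', 'prod2']
    env.no_keys = True  # don't load SSH keys, so Fabric falls back to password prompts
    answer = prompt("Type 'production' to confirm you really meant this:")
    if answer != 'production':
        abort("Production deploy not confirmed.")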

Running periodic tasks with django and celery

I'm trying to create a simple background periodic task using the Django-Celery-RabbitMQ combination. I installed Django 1.3.1, then downloaded and set up djcelery. Here is what my settings.py file looks like:
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
....
import djcelery
djcelery.setup_loader()
...
INSTALLED_APPS = (
    'djcelery',
)
And I put a 'tasks.py' file in my application folder with the following contents:
from celery.task import PeriodicTask
from celery.registry import tasks
from datetime import timedelta
from datetime import datetime

class MyTask(PeriodicTask):
    run_every = timedelta(minutes=1)

    def run(self, **kwargs):
        self.get_logger().info("Time now: " + str(datetime.now()))
        print("Time now: " + str(datetime.now()))

tasks.register(MyTask)
And then I start up my django server (local development instance):
python manage.py runserver
Then I start up the celerybeat process:
python manage.py celerybeat --logfile=<path_to_log_file> -l DEBUG
I can see entries like this in the log:
[2012-04-29 07:50:54,671: DEBUG/MainProcess] tasks.MyTask sent. id->72a5963c-6e15-4fc5-a078-dd26da663323
I can also see the corresponding entries getting created in the database, but I can't find where the text I pass to the logger in MyTask's run method is actually logged.
I tried fiddling with the logging settings and tried using the Django logger instead of the Celery logger, but to no avail. I'm not even sure my task is getting executed. If I print any debug information in the task, where does it go?
Also, this is the first time I'm working with any kind of message queuing system. It looks like the task will get executed as part of the celerybeat process, outside the Django web framework. Will I still be able to access all the Django models I created?
Thanks,
Venkat.
celerybeat only schedules tasks and pushes them to the broker when they are due; it does not execute them. Your task instances are stored on the RabbitMQ server. You need to run the celeryd daemon to actually execute your tasks:
python manage.py celeryd --logfile=<path_to_log_file> -l DEBUG
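For local testing, it may also be possible to run the worker with an embedded beat scheduler in a single process (assuming the installed django-celery version supports the -B flag):
python manage.py celeryd -B --logfile=<path_to_log_file> -l DEBUG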
Also, if you are using RabbitMQ, I recommend installing the RabbitMQ management plugin:
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
It will be available at http://<your-server>:55672/ (login: guest, password: guest). There you can check online how many tasks are in your RabbitMQ instance.
You should check the RabbitMQ logs, since celery sends the tasks to RabbitMQ and it should execute them. So all the prints of the tasks should be in RabbitMQ logs.