Fabric: call run() for an explicit host

I'd like to use Fabric as a tool to gather all server loads and process the values afterward. I thought of something like this:
from fabric.api import run

for servername in servernames:
    load_str = run("cat /proc/loadavg | cut -d' ' -f1", host=servername)
but Fabric doesn't allow me to specify the hostname this way. I found this, in my opinion ugly, way:
from fabric.api import env, run

for servername in servernames:
    env.host_string = servername
    load_str = run("cat /proc/loadavg | cut -d' ' -f1")
are there more elegant ways?
Using paramiko directly, as suggested here, would push me to write my own module that abstracts it; quoting from Fabric's website, that's exactly what Fabric should do for me:
In addition to use via the fab tool, Fabric’s components may be imported into other Python code, providing a Pythonic interface to the SSH protocol suite at a higher level than that provided by e.g. Paramiko (which Fabric itself leverages.)

This question offers a solution:
How to set target hosts in Fabric file

It appears that fabric really is the wrong tool for that.
The claim quoted above is probably from an earlier version.
Looking at the run() code it's clear there's no module in fabric that could be used for my purpose.
There are small abstraction layers around paramiko, e.g. this one

from fabric.api import run, settings

for servername in servernames:
    with settings(host_string=servername):
        load_str = run("cat /proc/loadavg | cut -d' ' -f1")
or, better, using execute (note that the task function must be defined before it is passed to execute):
from fabric.api import run
from fabric.tasks import execute

def load_str():
    return run("cat /proc/loadavg | cut -d' ' -f1")

data = execute(load_str, hosts=servernames)
I'd also recommend setting the following, to skip hosts that are not reachable:
env.skip_bad_hosts = True
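Once execute() returns its host-to-result mapping, the remaining work is plain string parsing. A minimal post-processing sketch; the sample data below is made up for illustration, with Fabric it would come from execute(load_str, hosts=servernames):

```python
# Hypothetical output of execute(): maps each host to the first
# field of its /proc/loadavg, as returned by the remote `cut`.
data = {
    "web1": "0.42",
    "web2": "1.87",
    "db1": "0.05",
}

# Convert to floats, then find the busiest host and the mean load.
loads = {host: float(val) for host, val in data.items()}
busiest = max(loads, key=loads.get)
average = sum(loads.values()) / len(loads)

print(busiest)           # host with the highest 1-minute load
print(round(average, 2))
```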

How to assign Static IP based on MAC Address using powershell/script in Windows?

I would like to write a script to assign static IPs based on MAC address, as I am having trouble with "USB to Ethernet" adapters losing their IP settings, which then get assigned to different interface names.
I am running in a Windows 10 environment and have found a WMI script online that I think might work.
Code I am using:
wmic nicconfig where macaddress="0F:98:90:D6:42:92" call EnableStatic ("192.168.1.1"), ("255.255.255.0")
Error output:
"Invalid format.
Hint: = [, ]."
Thanks
Something like this should work:
$netAdapter = Get-WmiObject Win32_NetworkAdapterConfiguration | where {$_.MACAddress -eq '0F:98:90:D6:42:92'}
$netAdapter.EnableStatic("192.168.1.1", "255.255.255.0")

Fetching an HTTP process user using bash

What's the best way to fetch the web process user (apache|nginx|www-data) for use in a bash script?
In my case it's for setting up folder permissions and changing to the proper owner.
Currently I'm using:
ps aux | grep -E "(www-data|apache|nginx).*(httpd|apache2|nginx)" \
| grep -o "^[a-z\-]*" | head -n1
inside a bash script to fetch the owner of the HTTP process.
Any hints on a smarter solution or a better regex would be great.
Your solution will really depend on your operating system. One option might be to check whether likely candidates exist in your password file:
user=$(awk -F: '/www|http/{print $1;exit}' /etc/passwd)
If you really want to look for the owner of running processes, remember that Apache often launches a root-owned "master" process, then launches children as the web user. So perhaps something like this:
user=$(ps aux|awk '$1=="root"{next} /www|http|apache/{print $1;exit}')
But you should also be able to determine things based on OS detection, since things tend to follow standards:
case "`uname -s`" in
    Darwin)  user=_www; uid=70 ;;
    FreeBSD) user=www;  uid=80 ;;
    Linux)
        if grep -q Ubuntu /etc/lsb-release 2>/dev/null; then
            user=www-data; uid=$(id -u www-data)
        elif [ -f /etc/debian_version ]; then
            user=www-data; uid=$(id -u www-data)
        elif etc
            etc
        fi
        ;;
esac
I'm not up on the best ways to detect different Linux distros, so that may require a bit of additional research for you.
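For what it's worth, the /etc/passwd approach translates directly to Python via the stdlib pwd module. A minimal sketch; the candidate-name list is my own assumption, extend it for your distros:

```python
import pwd

# Common account names used by web servers on various systems.
# This tuple is an assumption; adjust as needed.
CANDIDATES = ("www-data", "apache", "nginx", "httpd", "www", "_www")

def find_web_user():
    """Return the first web-server-looking account found in the
    password database, or None if nothing matches."""
    known = {entry.pw_name for entry in pwd.getpwall()}
    for name in CANDIDATES:
        if name in known:
            return name
    return None

print(find_web_user())
```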

How to set up replication in BerkeleyDB

I've been struggling for some time now to set up a "simple" BerkeleyDB replication using the db_replicate utility.
However, I've had no luck making it actually work, and I'm not finding any concrete example of how things should be set up.
Here is the setup I have so far. The environment is Debian Wheezy with BDB 5.1.29.
Database generation
A simple Python script reads "CSV" files and inserts each line into the BDB file:
from glob import glob
from bsddb.db import DBEnv, DB
from bsddb.db import (DB_CREATE, DB_PRIVATE, DB_INIT_MPOOL, DB_BTREE, DB_HASH,
                      DB_INIT_LOCK, DB_INIT_LOG, DB_INIT_TXN, DB_INIT_REP,
                      DB_THREAD)

env = DBEnv()
env.set_cachesize(0, 1024 * 1024 * 32)
env.open('./db/', DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG |
         DB_INIT_TXN | DB_CREATE | DB_INIT_REP | DB_THREAD)
db = DB(env)
db.open('apd.db', dbname='stuff', flags=DB_CREATE, dbtype=DB_BTREE)
for csvfile in glob('Stuff/*.csv'):
    for line in open(csvfile):
        db.put(line.strip(), None)
db.close()
env.close()
DB Configuration
In the DB_CONFIG file (this is where I'm missing the most important part, I guess):
repmgr_set_local_site localhost 6000
Actual replication attempt
# Copy the database file to begin with
db5.1_hotbackup -h ./db/ -b ./other-place
# Start replication master
db5.1_replicate -M -h db
# Then try to connect to it
db5.1_replicate -h ./other-place
The only thing I currently get from the replicate tool is:
db5.1_replicate(20648): DB_ENV->open: No such file or directory
Edit: after stracing the process, I found out it was trying to access __db.001, so I copied those files manually. The current output is:
db5.1_replicate(22295): repmgr is already started
db5.1_replicate(22295): repmgr is already started
db5.1_replicate(22295): repmgr_start: Invalid argument
I suppose I'm missing the actual configuration value that lets the client connect to the server, but so far no luck, as all the settings I tried yielded "unrecognized name-value pair" errors.
Does anyone know how this setup might be completed? Maybe I'm not even headed in the right direction and this should be something completely different?

Mkdir over SSH with Python does not work

I'm trying to create a new dir via SSH with a Python script. When I try my commands at the Python command line, it just works. But when I try to do the same from a script, it does not create the new 'test' folder (I even copy/pasted the commands from the script into the Python prompt to verify they are right, and there they work). So, any ideas why it does not work from a script?
The used code:
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.sendline('mkdir /home/myUser/Desktop/test')
It seems to work when I just add another line, for example:
child.sendline('\n')
so the entire script is
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.sendline('mkdir /home/myUser/Desktop/test')
child.sendline('\n')
What I usually do to solve this issue is syncing with the host machine. After I send something to the machine, I expect an answer, which usually translates into the machine's prompt. So, in your case, I would go for something like this:
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.expect('YourPromptHere')
child.sendline('mkdir /home/myUser/Desktop/test')
child.expect('YourPromptHere')
You can just replace YourPromptHere with the prompt of the machine, if you are running the script against a single target, or with a regular expression (e.g. "(\$ )|(# )|(> )").
tl;dr: To summarize, you need to wait until the previous action has finished before sending a new one.
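As a side note, the prompt regex can be sanity-checked offline before wiring it into expect(); pexpect compiles string patterns with the standard re module internally, so plain re is enough for a quick check (the sample prompts below are made up):

```python
import re

# The prompt pattern suggested above, covering $, # and > prompts.
PROMPT = re.compile(r"(\$ )|(# )|(> )")

# Made-up prompts typical of user shells, root shells, and minimal shells.
for prompt in ("user@host:~$ ", "root@host:~# ", "> "):
    print(bool(PROMPT.search(prompt)))  # True for each

print(bool(PROMPT.search("no prompt here")))  # False
```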

How to set up autoreload with Flask+uWSGI?

I am looking for something like uWSGI + Django's autoreload mode, but for Flask.
I am running uWSGI version 1.9.5, and the option
uwsgi --py-autoreload 1
works great.
If you're configuring uwsgi with command arguments, pass --py-autoreload=1:
uwsgi --py-autoreload=1
If you're using a .ini file to configure uwsgi and using uwsgi --ini, add the following to your .ini file:
py-autoreload = 1
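For reference, a minimal .ini sketch with that option in context; the module and http values here are placeholders of my own, not from the question:

```ini
[uwsgi]
module = myapp:app    ; placeholder: your Flask application module
http = :8080          ; placeholder: however you normally bind
master = true
py-autoreload = 1     ; rescan loaded Python files every 1 second
```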
For a development environment you can try using uWSGI's --py-autoreload parameter.
Looking at the source code, it may work only in threaded mode (--enable-threads).
You could try using supervisord as a manager for your uWSGI app. It also has a watch function that auto-reloads a process when a file or folder has been "touched"/modified.
You will find a nice tutorial here: Flask+NginX+Uwsgi+Supervisord
The auto-reloading functionality of development-mode Flask is actually provided by the underlying Werkzeug library. The relevant code is in werkzeug/serving.py; it's worth taking a look at. But basically, the main application spawns the WSGI server as a subprocess that stats every active .py file once per second, looking for changes. If it sees any, the subprocess exits, and the parent process starts it back up again, in effect reloading the changes.
There's no reason you couldn't implement a similar technique at the layer of uWSGI. If you don't want to use a stat loop, you can try using underlying OS file-watch commands. Apparently (according to Werkzeug's code), pyinotify is buggy, but perhaps Watchdog works? Try a few things out and see what happens.
Edit:
In response to the comment, I think this would be pretty easy to reimplement. Building on the example provided from your link, along with the code from werkzeug/serving.py:
""" NOTE: _iter_module_files() and check_for_modifications() are both
copied from Werkzeug code. Include appropriate attribution if
actually used in a project. """
import uwsgi
from uwsgidecorators import timer
import sys
import os

def _iter_module_files():
    for module in sys.modules.values():
        filename = getattr(module, '__file__', None)
        if filename:
            old = None
            while not os.path.isfile(filename):
                old = filename
                filename = os.path.dirname(filename)
                if filename == old:
                    break
            else:
                if filename[-4:] in ('.pyc', '.pyo'):
                    filename = filename[:-1]
                yield filename

@timer(3)
def check_for_modifications(signum):  # uWSGI passes the signal number in
    # Function-static variable... you could make this global, or whatever
    mtimes = check_for_modifications.mtimes
    for filename in _iter_module_files():
        try:
            mtime = os.stat(filename).st_mtime
        except OSError:
            continue
        old_time = mtimes.get(filename)
        if old_time is None:
            mtimes[filename] = mtime
            continue
        elif mtime > old_time:
            uwsgi.reload()
            return

check_for_modifications.mtimes = {}  # init static
It's untested, but should work.
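The stat-polling core of that snippet can be exercised standalone, without uWSGI or Werkzeug. A minimal sketch with helper names of my own invention:

```python
import os
import tempfile
import time

def changed_files(mtimes, filenames):
    """Return files whose mtime grew since the last scan, updating the
    mtimes cache in place (the same idea as the Werkzeug loop above)."""
    changed = []
    for filename in filenames:
        try:
            mtime = os.stat(filename).st_mtime
        except OSError:
            continue
        old_time = mtimes.get(filename)
        mtimes[filename] = mtime
        if old_time is not None and mtime > old_time:
            changed.append(filename)
    return changed

# Quick demonstration with a temporary file.
with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as tmp:
    path = tmp.name

mtimes = {}
changed_files(mtimes, [path])             # first scan just records mtimes
os.utime(path, (time.time() + 10,) * 2)   # simulate a later edit
print(changed_files(mtimes, [path]) == [path])  # True
os.remove(path)
```

In the real uWSGI version, the place of the print would be taken by a call to uwsgi.reload().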
py-autoreload=1
in the .ini file does the job
import gevent.wsgi
import werkzeug.serving

@werkzeug.serving.run_with_reloader
def runServer():
    gevent.wsgi.WSGIServer(('', 5000), app).serve_forever()

(You can use an arbitrary WSGI server.)
I am afraid that Flask is really too bare bones to have an implementation like this bundled by default.
Dynamically reloading code in production is generally a bad thing, but if you are concerned about a dev environment, take a look at this bash shell script http://aplawrence.com/Unixart/watchdir.html
Just change the sleep interval to whatever suits your needs and substitute the echo command with whatever you use to reload uWSGI. I run uWSGI in master mode and just send a killall uwsgi command.