How to make Fabric flush its output? - fabric

I'm using fabric to run ssh tasks on remote machines.
The output isn't flushed automatically; is there a way to force auto-flushing?
(the documentation doesn't appear to mention this subject)

When using Fabric's puts() to output some text, you can use the flush=True parameter to avoid buffering:
puts('Doing stuff', flush=True)
Or if you're concerned about the output from a remote command, you may want to flush the standard output after running the command:
run('some command')
sys.stdout.flush()
Note that some buffering may still occur inside Fabric while the command itself is executing (I'm not certain about this), or within the remote command itself; in the latter case you should see the same behavior whether you run it through Fabric or directly via SSH.
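For example, a minimal sketch combining both suggestions, assuming Fabric 1.x (the task name and remote command are placeholders):
import sys

from fabric.api import run, task
from fabric.utils import puts

@task
def do_stuff():
    # flush=True makes puts() write immediately instead of buffering
    puts('Doing stuff', flush=True)
    run('some command')   # placeholder remote command
    sys.stdout.flush()    # push any remaining buffered local output out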

I'm slightly biased because I work there, but at my workplace we came up with a logging utility for Fabric named gusset. It gives you configurable logging for your Fabric scripts:
from fabric.api import run
from gusset.output import with_output

@with_output(verbosity=1)
def foo():
    run("ls")
In [9]: with settings(host_string="mybox", user="myuser"):
   ...:     foo()
   ...:
[mybox] run: ls

Related

AWS-RunBashScript errors/warnings with Python

I have many EC2 instances that hold Celery jobs for processing. To efficiently start working through the whole queue, I have tested AWS-RunBashScript in AWS SSM with a BASH script that calls a Python script. For example, for a single instance this begins with sh start_celery.sh.
When I run the command in SSM, this is the following output (compare to other output below, after reading on):
/home/ec2-user/dh2o-py/venv/local/lib/python2.7/dist-packages/celery/utils/imports.py:167:
UserWarning: Cannot load celery.commands extension u'flower.command:FlowerCommand':
ImportError('No module named compat',)
namespace, class_name, exc))
/home/ec2-user/dh2o-py/tasks/task_harness.py:49: YAMLLoadWarning: calling yaml.load() without
Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
task_configs = yaml.load(conf)
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
failed to run commands: exit status 1
Note that only warnings are thrown. When I SSH to the same instance and run the same command (i.e. sh start_celery.sh), the following (same) output results BUT the process runs:
I have verified that the process does NOT run when doing this via SSM, and I have no idea why. As a work-around, I tried running the sh start_celery.sh command with bootstrapping in user data for each EC2, but that failed too.
So why does SSM fail to actually run the process, when I succeed by SSHing to each instance and running the identical commands? The details below relate to machine and Python configuration:

Python exec_command throws Unknown command: `ls' [duplicate]

I connect over SSH from a terminal (on Mac) and also run a Paramiko Python script, and for some reason the two sessions behave differently: the PATH environment variable is different in the two cases.
This is the code I run:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='myuser',password='mypass')
stdin, stdout, stderr = ssh.exec_command('echo $PATH')
print (stdout.readlines())
Any idea why the environment variables are different?
And how can I fix it?
SSHClient.exec_command by default does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced; in particular, .bash_profile is not sourced for non-interactive sessions. And/or different branches in the scripts are taken, based on the absence/presence of the TERM environment variable.
To emulate the default Paramiko behavior with the ssh, use the -T switch:
ssh -T myuser@host
See the ssh man page:
-T      Disable pseudo-tty allocation.
Conversely, to emulate the default ssh behavior with Paramiko, set the get_pty parameter of exec_command to True:
def exec_command(self, command, bufsize=-1, timeout=None, get_pty=False):
Though rather than working around the issue by allocating a pseudo terminal in Paramiko, you would do better to fix your startup scripts to set the same PATH for all sessions.
For that see Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command.
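For illustration, a minimal sketch showing both options, assuming Paramiko 2.x (the host and credentials are placeholders):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='myuser', password='mypass')

# Option 1: request a pseudo terminal, so the same startup scripts run
# as in an interactive ssh session
stdin, stdout, stderr = ssh.exec_command('echo $PATH', get_pty=True)
print(stdout.read().decode())

# Option 2: keep the default (no pty) and avoid relying on PATH by
# calling commands through absolute paths
stdin, stdout, stderr = ssh.exec_command('/bin/ls -la')
print(stdout.read().decode())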
Working with the Channel object instead of the SSHClient object solved my problem.
chan=ssh.invoke_shell()
chan.send('echo $PATH\n')
print (chan.recv(1024))
For more details, see the documentation

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown and the Rscript is ignored. Removing the final shutdown line causes the GCE instance to start, but the Rscript is still ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal runs the script successfully. It has chmod 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance:
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup   # for Debian
$ sudo /usr/share/google/run-startup-scripts                 # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database. The startup-script was running before autossh. Building a 40-second delay into the script and running the script as a user (not sudo-type root) seems to have solved this problem for now (a sketch follows below). Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
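For illustration, a minimal sketch of the delayed startup script described above (the delay, user name, and paths are placeholders, not my exact values):
#! /bin/bash
# give autossh and the rest of the user environment time to come up
sleep 40
# run the R job as an unprivileged user rather than root
sudo -u myuser /usr/bin/Rscript /home/myuser/launch_script.R
shutdown -h now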

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate", otherwise you will quickly run into problems similar to those encountered with cron, namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, so the default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
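For reference, a minimal sketch of the kind of LaunchDaemon plist this describes (the label, user name, schedule, and script path are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.nightlybuild</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/builduser/build_nightly.sh</string>
    </array>
    <key>UserName</key>
    <string>builduser</string>
    <!-- SessionCreate gives the job a login-style security session,
         so the user's login.keychain is available to codesign -->
    <key>SessionCreate</key>
    <true/>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
Load it with: sudo launchctl load /Library/LaunchDaemons/com.example.nightlybuild.plist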
Which account do you execute your cron job with? That is most probably the problem.
You can add
echo `whoami`
at the beginning of your script to see with which user the script is launched.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and source set_build_env from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
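For illustration, a minimal sketch of what set_build_env.sh might contain (the values are placeholders; copy the real ones from your own .profile):
#!/bin/bash
# everything the build needs from the login environment, in one place
export HOME=/home/dmitry
export PATH=/usr/local/bin:/usr/bin:/bin:$HOME/bin
export CLASSPATH=$HOME/lib/some-library.jar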

How to run a deploy command on remote host from PyCharm?

I am looking for a way to simplify remote deployment of a django application directly from PyCharm.
Even though deploying the files themselves works just fine with the remote host and upload, I was not able to find a way to run the additional commands on the server side (like manage.py syncdb).
I am looking for a fully automated solution, one that would work at single click (or command).
I don't know much about PyCharm so maybe you could do something from the IDE, but I think you'll probably want to take a look at the fabric project (http://docs.fabfile.org/en/1.0.1/index.html)
It's a python deployment automation tool that's pretty great.
Here is one of my fabric script files. Note that it makes a lot of assumptions (this is the one I use myself) that depend entirely on how you want to set up your project: I use virtualenv, pip, and south, as well as my own personal preference for how and where to deploy.
You'll likely want to rework or simplify it to meet your needs.
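To give a rough idea, here is a much-simplified sketch of that kind of fabfile (the host, paths, and commands are placeholders, not the ones from my real script):
from fabric.api import cd, env, run

env.hosts = ['user@example.com']   # placeholder deploy host

CODE_DIR = '/srv/www/myproject'    # placeholder project path on the server

def deploy():
    """Pull the latest code, update dependencies, migrate, and reload."""
    with cd(CODE_DIR):
        run('git pull')
        run('./env/bin/pip install -r requirements.txt')   # virtualenv + pip
        run('./env/bin/python manage.py syncdb --noinput')
        run('./env/bin/python manage.py migrate')          # south migrations
        run('touch myproject/wsgi.py')                      # reload the app server
Run it with: fab deploy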
You may use File > Settings > Tools > External Tools to run arbitrary external executables. You may write a small script that connects over SSH and issues a [set of] commands. The configured tool can then be run straight from the IDE.
For example, in my project based on tornado, I run the instances using supervisord, which, according to the answer here, cannot restart upon code change.
I ended up writing a small tool on top of paramiko that connects via ssh and runs supervisorctl restart. The code is below:
import paramiko
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-s",
                  action="store",
                  dest="server",
                  help="server where to execute the command")
parser.add_option("-u",
                  action="store",
                  dest="username")
parser.add_option("-p",
                  action="store",
                  dest="password")
(options, args) = parser.parse_args()

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname=options.server, port=22, username=options.username, password=options.password)

command = "supervisorctl reload"
(stdin, stdout, stderr) = client.exec_command(command)
for line in stdout.readlines():
    print line
client.close()
External Tool configuration in Pycharm:
program: <PYTHON_INTERPRETER>
parameters: <PATH_TO_SCRIPT> -s <SERVERNAME> -u <USERNAME> -p <PASSWORD>