I've just stumbled upon Fabric and the documentation doesn't really make it obvious how it works.
My educated guess is that you need to install it on both the client side and the server side. The Python code is stored on the client side and transferred through Fabric's wire protocol when the command is run. The server accepts connections via the OpenSSH daemon, authenticating through the ~/.ssh/authorized_keys file for the current user (or a special user, or one specified in the host name passed to the fab command).
Is any of this correct? If not, how does it work?
From the docs:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
So it's just like ssh'ing into a box and running the commands you've put into run()/sudo().
There is no transfer of code, so you only need to have ssh running on the remote machine and have some sort of shell (bash is assumed by default).
If you want remote access to a Python interpreter, you're looking more at something like execnet.
If you want more information on how execution on the remote machine(s) works, see this section of the docs.
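To make the run()/sudo() point concrete, here is a minimal fabfile.py sketch in the classic Fabric 1.x style; the host name and the commands are made-up placeholders, not anything from the question:

from fabric.api import env, run, sudo

env.hosts = ["deploy@example.com"]   # hypothetical host

def uptime():
    # Opens an SSH session to each host and runs the command in the remote shell.
    run("uptime")

def restart_web():
    # Same idea, but executed via sudo on the remote machine.
    sudo("service nginx restart")

Running fab uptime from the directory containing this file simply SSHes to each host in env.hosts and executes the command there; no Python code is copied to the server.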
Most of what you are saying is correct, except that the "fabfile.py" file only has to be stored on your client. An SSH server like OpenSSH needs to be installed on your server and an SSH client needs to be installed on your client.
Fabric then logs into one or more servers in turn and executes the shell commands defined in "fabfile.py". If you are in the same directory as "fabfile.py" you can run "fab --list" to see a list of available commands and then "fab [COMMAND_NAME]" to execute a command.
Your public key does not need to be added to "~/.ssh/authorized_keys" on the server, but if it is you don't have to type the password every time you want to execute a command.
I have a Python script that I run from cmd on my local machine.
Now I want to run it on a remote (Windows) server as well.
How can I do it?
It is possible using ssh. Python accepts a hyphen (-) as an argument to execute a script read from standard input:
cat hello.py | ssh user@192.168.1.101 python -
Run python --help for more info.
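If you'd rather drive the same pipe from Python instead of the shell, a rough sketch with subprocess might look like this (the user, host and file name are the placeholders from the example above):

import subprocess

# Read the local script and feed it to a remote "python -" over ssh.
with open("hello.py", "rb") as f:
    script = f.read()

result = subprocess.run(
    ["ssh", "user@192.168.1.101", "python", "-"],
    input=script,
    capture_output=True,
)
print(result.stdout.decode())

This still assumes an SSH server is running on the Windows machine and that python is on its PATH.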
You can use the ssh approach, but also the PsExec approach, which is easier to use, although you may need admin privileges on the server to execute processes, and ideally both Windows machines should share the same user/password.
Download PsExec from here
https://technet.microsoft.com/en-us/sysinternals/bb896649
Run as follows:
psexec /ACCEPTEULA \\servermachine python fullpath_to\python_script.py
fullpath_to\python_script.py should be accessible from the server. If not, you have to copy it there first, or just put the script on a shared/networked drive visible to both machines.
Of course python must be installed on the server as well.
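If you want to trigger PsExec from a Python script on the controller machine, a hedged sketch with subprocess could look like this (it assumes psexec.exe is on the PATH, and the server name and script path are the placeholders from the command above):

import subprocess

cmd = [
    "psexec",                          # assumes PsExec.exe is on the PATH
    "/ACCEPTEULA",
    r"\\servermachine",                # placeholder server name
    "python",
    r"fullpath_to\python_script.py",   # must be a path the server can see
]
completed = subprocess.run(cmd, capture_output=True, text=True)
print(completed.stdout)
print("exit code:", completed.returncode)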
I'm not saying it is the best way. Jenkins is a good way to run stuff on a given server. But it does the job.
I have a lab system (with a hardware piece attached to it) which has some python test scripts. The test script sends commands to the attached hardware and receives response.
I don't want to work on the lab computer all the time. Currently, I'm using SSH from my local machine to the lab computer and using the shell to modify the scripts, run the commands, etc. Using nano is cumbersome, especially while debugging. I want to use an IDE (PyCharm) on my local machine in order to edit and run the scripts on the remote server. PyCharm has remote interpreters which use the remote Python, but I want to be able to access and modify the scripts too, just like with SSH from a terminal.
How can I do that?
PyCharm (Professional Edition only) is also capable of Deployments. You can upload/download files via SFTP directly within PyCharm and run your scripts remotely.
You can visit the following pages for further instructions on how to set everything up:
Setting up a deployment
Configuring a remote interpreter
Yes, PyCharm Professional Edition can do this. Since PyCharm 2018.1 setting up a remote interpreter also automatically sets up deployment. If you have automatic deployments configured (Tools | Deployment | Automatic Deployment) all changes will automatically be uploaded to your SSH box.
See here for a tutorial on configuring an SSH box in PyCharm Professional Edition: https://blog.jetbrains.com/pycharm/2018/04/running-flask-with-an-ssh-remote-python-interpreter/
I've been following along with this blog to set up a shoutcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting; when I visit the application's URL I'm getting:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I checked the log file used in the action hook, I found:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows shoutcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux shoutcast distribution)
I've read on several forums that OpenShift often resets file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I could get around this odd permissions error.
start action hook (when I used the Windows shoutcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
start action hook (when I used the Linux shoutcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository, and I assume that the files extracted from the Linux distribution of shoutcast are the same whether extracted on Windows or Linux, but I clearly can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether it's the Linux or Windows one), which essentially runs the whole service. I've tested the server myself on my own localhost and found it working perfectly, so I have no doubt that if it were to run (with the right settings listed in this blog) it would work.
Edit: Solved
In order to have the permissions changed and kept that way, they need to be edited through git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets all file permissions on every git push (to retain the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
In the end, what I have in OpenShift's diy folder is the Linux distribution of shoutcast (which can be extracted with 7-Zip), modified so that it can be reached through port-forwarding like in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward to it as well using that same command. Port-forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you would use localhost or 127.0.0.1 and whatever ports are indicated by rhc port-forward. You could also be using your other favorite software to broadcast, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there is probably some other, trickier way of doing this.
What I'm actually looking for is the equivalent of an ssh connection in a Windows environment. As per the requirement, my controller machine can connect to a remote machine with a username and password using some kind of utility/protocol such as ssh, telnet, rpc or tcp, and using this session I can transfer files or execute commands on the remote machine. This connection and execution must be done without any intervention on the remote machine, i.e. I don't need to install any module or utility on the remote machine or run any script there.
Both my controller and remote environments are Windows.
Can someone suggest a Python module or utility with which I'll be able to do this?
I explored a few options, however I'm not sure if they are the best way to do this. Please provide your thoughts on these too.
Connect using the psexec utility through Popen.
Use a socket as the communication channel; however, I'm not sure if I can execute commands over it.
Make use of Telnet, but I haven't explored the Python module for making a telnet connection and executing commands.
Use a module such as Pyro or rpyc.
Any help is appreciated. Thanks a lot in advance.
Take a look at the pexpect module. It can be used for ftp, ssh, ...
pexpect doc
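For instance, a minimal sketch of driving an ssh session with pexpect might look like the following; note that pexpect's spawn only works on POSIX controllers, and the user, host, password and command are placeholders:

import pexpect

# Start ssh, answer the password prompt, then collect the command's output.
child = pexpect.spawn("ssh user@192.168.1.101 dir")
child.expect("password:")
child.sendline("secret")           # hypothetical password
child.expect(pexpect.EOF)
print(child.before.decode())       # everything printed before the session closed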
I have a Jenkins server on OS X 10.7, which polls a Subversion server, builds the code and packages the app. The last step that I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to input a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter a password when the command runs - from the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).