How to run a deploy command on remote host from PyCharm? - django

I am looking for a way to simplify remote deployment of a django application directly from PyCharm.
Even though deploying the files themselves works just fine with the remote host and upload feature, I was not able to find a way to run the additional commands on the server side (like manage.py syncdb).
I am looking for a fully automated solution, one that would work with a single click (or command).

I don't know much about PyCharm, so maybe you could do something from the IDE, but I think you'll probably want to take a look at the Fabric project (http://docs.fabfile.org/en/1.0.1/index.html).
It's a Python deployment automation tool that's pretty great.
Here is one of my fabric script files (the one I use myself). Note that it makes a lot of assumptions that completely depend on how you want to set up your project: I use virtualenv, pip, and South, as well as my own personal preferences for how to deploy and where to deploy to.
You'll likely want to rework or simplify it to meet your needs.
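(The original fabfile was not preserved here, so below is a minimal sketch of what such a Fabric 1.x deploy task might look like under the assumptions described above; the host, paths, and virtualenv location are placeholders, not the original author's values.)

# fabfile.py -- minimal sketch of a Fabric 1.x deploy task (placeholder values)
from fabric.api import cd, env, run

env.hosts = ['deploy@myserver.example.com']   # assumed deploy user and host

CODE_DIR = '/srv/myproject'                   # assumed project checkout on the server
ACTIVATE = 'source /srv/venv/bin/activate'    # assumed virtualenv

def deploy():
    """Pull the latest code, install requirements, sync/migrate the DB, reload."""
    with cd(CODE_DIR):
        run('git pull')
        run('%s && pip install -r requirements.txt' % ACTIVATE)
        run('%s && python manage.py syncdb --noinput' % ACTIVATE)
        run('%s && python manage.py migrate' % ACTIVATE)   # South migrations
        run('touch myproject/wsgi.py')                      # assumed app-server reload trigger

Running fab deploy then performs the whole sequence in one command, which could in turn be wired into PyCharm as an External Tool.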

You may use File > Settings > Tools > External Tools to run arbitrary external executables. You can write a small script that connects over SSH and issues a command (or a set of commands); the configured tool can then be run from the IDE.
For example, in my project based on Tornado, I run the instances using supervisord, which, according to an answer here, cannot restart upon code change.
I ended up writing a small tool based on paramiko that connects via SSH and runs supervisorctl reload. The code is below:
import paramiko
from optparse import OptionParser

# Parse the target server and credentials from the command line.
parser = OptionParser()
parser.add_option("-s",
                  action="store",
                  dest="server",
                  help="server where to execute the command")
parser.add_option("-u",
                  action="store",
                  dest="username")
parser.add_option("-p",
                  action="store",
                  dest="password")
(options, args) = parser.parse_args()

# Connect over SSH and run the supervisorctl command remotely.
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname=options.server, port=22, username=options.username, password=options.password)

command = "supervisorctl reload"
(stdin, stdout, stderr) = client.exec_command(command)
for line in stdout.readlines():
    print(line)
client.close()
External Tool configuration in PyCharm:
program: <PYTHON_INTERPRETER>
parameters: <PATH_TO_SCRIPT> -s <SERVERNAME> -u <USERNAME> -p <PASSWORD>
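As a variation (my own assumption, not part of the original answer), the same paramiko approach can authenticate with an SSH key instead of passing a password in the tool parameters:

# Hypothetical key-based variant of the script above; the host, user, and
# key path are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname="myserver.example.com", port=22,
               username="deploy", key_filename="/home/me/.ssh/id_rsa")
stdin, stdout, stderr = client.exec_command("supervisorctl reload")
print(stdout.read().decode())
client.close()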

Related

Startup script NOT running in instance

I have an instance where I have a Flask web app. In order for the app to start when the VM is booted, I have included a startup script:
#!/bin/sh
cd documentai_webapp
cd docai_webapp_instance_gcp
sudo python3 server.py
However, this is not executed at all. Can anyone help me? Thanks!
PS: When I execute this script manually within the VM, it works perfectly fine.
As context, it is necessary to consider the following from the startup-script documentation:
For Linux startup scripts, you can use a bash or non-bash file. To use a non-bash file, designate the interpreter by adding a #! to the top of the file. For example, to use a Python 3 startup script, add #! /usr/bin/python3 to the top of the file.
If you specify a startup script by using one of the procedures in this document, Compute Engine does the following:
Copies the startup script to the VM
Sets run permissions on the startup script
Runs the startup script as the root user when the VM boots (this is the step @Andoni was missing)
For information about the various tasks related to startup scripts and when to perform each one, see the Overview.
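For illustration, here is a minimal sketch of the same startup script written for the Python 3 interpreter mentioned above; the path is an assumption and must be adjusted to wherever the app actually lives. Because the script runs as root and not from your home directory, relative cd commands are a likely reason it appears to do nothing.

#!/usr/bin/python3
# Sketch of a startup script; APP_DIR is an assumption -- replace it with the
# real location of the app on the VM.
import subprocess

APP_DIR = "/home/YOUR_USER/documentai_webapp/docai_webapp_instance_gcp"

# Run the Flask server from an absolute path; the script executes as root,
# so relative paths that work in your own shell will not resolve here.
subprocess.run(["python3", "server.py"], cwd=APP_DIR, check=True)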

Django Celery with Redis Issues on Digital Ocean App Platform

After quite a bit of trial and error and a step-by-step attempt to find solutions, I thought I'd share the problems here and answer them myself according to what I've found. There is not too much documentation on this anywhere except small bits and pieces, and this will hopefully help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the below errors and further resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the celery command celery -A your_app worker --beat -l info
or similar on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs when you make one of a number of potential mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about how they work. The default one is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with Celery by default. In my case it was pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the celery command, but note that gevent and beat are not supported within a single command (it will result in an error). Instead, do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first line you can also specify the concurrency, like so: --concurrency=100. This is not required but useful. Read up on what it does, as that goes beyond the solution here.
PART 2:
In my specific case I tested the above locally (in development) first to make sure it works. The next issue was getting this into production. I use Redis as the db/broker.
In my specific setup I have most of my celery configuration in the_main_app/celery/__init__.py, but sometimes people put it directly into the_main_app/celery.py. Whichever it is for you, make sure that the REDIS_URL is set correctly. For development it usually looks something like this:
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379'), where YOUR_VAR_NAME is then set as the broker, with everything as below:
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "celery first steps with django" help page but are not relevant for what I am showing here.
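Put together, a minimal sketch of the_main_app/celery/__init__.py along the lines of that guide might look like this (the Django settings module name is an assumption):

import os

from celery import Celery

# Assumed Django settings module -- adjust to your project layout.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'the_main_app.settings')

# Broker URL: falls back to a local Redis instance during development.
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')

app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME

# Load any CELERY_* settings from Django settings and discover tasks.py
# modules in installed apps, as described in the first-steps guide.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()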
PART 3:
When you setup your Redis Database on the App Platform (which is very simple) you will see the connection details as 'public network' and 'VPC network'.
The celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work. If you are not using a yaml file, then you can simply copy-paste the entire connection string (select it from the dropdown!) from the Redis DB connection details, set up an App-Level environment variable in your Digital Ocean project named REDIS_URL, and paste in that entire string (and also encrypt it!).
The string should look something like this (rediss with 2 s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each line and pasted them into the Run Command.
That is basically the process step by step. Good luck!

Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which updates the cfusion/lib/ XML files (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the wddx xml file and replace the attribute values. I'm having trouble finding information about how to do this method.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11 and from 11 to 2016. Environments would have to be mixed as it took time to verify the applications worked with each new version of CF. Each server got their correct XML files for that environment and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command-line utility. The tool isn't only capable of setting Administrator configuration: it can also export/import settings to/from a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19

How to "dockerize" Flask application?

I have a Flask application named rest.py and I have dockerized it, but it is not running.
#!flask/bin/python
from flask import Flask, jsonify

app = Flask(__name__)

tasks = [
    {
        'id': 1,
        'title': u'Buy groceries',
        'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
        'done': False
    }
]

@app.route('/tasks', methods=['GET'])
def get_tasks():
    return jsonify({'tasks': tasks})

if __name__ == '__main__':
    app.run(debug=True)
Dockerfile is as follows
FROM ubuntu
RUN apt-get update -y
RUN apt-get install -y python-dev python-pip
COPY . /rest
WORKDIR /rest
RUN pip install -r Req.txt
ENTRYPOINT ["python"]
CMD ["rest.py"]
I have built it using this command...
$ docker build -t flask-sample-one:latest .
...and when I run the container...
$ docker run -d -p 5000:5000 flask-sample-one
...it returns the following output:
7d1ccd4a4471284127a5f4579427dd106df499e15b868f39fa0ebce84c494a42
What am I doing wrong?
The output you get is the container ID. Check with docker ps whether it keeps running.
Use docker logs [container-id] to figure out what's going on inside.
Some problems I can find in your question:
Change the app.run line to app.run(host='0.0.0.0', debug=True). From the point of view of the container, its services need to be externally available, so they must not listen only on the loopback interface; bind to all interfaces, just as you would if you were setting up a publicly available server directly on a host (see the snippet after this list).
Make sure that Flask gets installed. Your docker image file requires all the commands to make it work from a blank Ubuntu installation.
Please do not forget to deactivate debug if you'd ever expose this service on your host. Debug mode in Flask makes it possible for visitors to run arbitrary code if they can trigger an exception (it's a feature, not a bug).
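A minimal sketch of the corrected entry point in rest.py, following the first point above:

# Corrected __main__ block (sketch): bind to all interfaces so the port
# published with -p 5000:5000 is reachable from the host.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)  # turn debug off outside prototyping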
After that (and building the container again [1]), try curl http://127.0.0.1:5000/tasks on the host. Let me know if it works; if not, there are other problems in your setup.
[1] You can improve the prototyping workflow with Flask's built-in reloader (which is enabled by default) if you use a volume mount in your docker container for the directory that contains your python files - this would allow you to change your script on the host, reload in the browser and directly see the result.
I believe that you need to reinforce your understanding of Docker in order to grasp how it works, and then you will achieve your objective of "dockerizing" any application.
Here is an article which can give you some first steps.
An official HOWTO will also help you.
Some observations that might help you:
check if your Req.txt contains flask for installation
before dockerizing, check if your application is working
check your running containers with docker ps and see if your container is running
if it is running, test your application: curl http://127.0.0.1:5000/tasks
One more thing:
your JSON has an OBJECT with an ARRAY with just one ELEMENT
Is that what you want for your prototype?
Take a look at this doc about the JSON standard.

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to what was encountered with cron, namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, so the default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
I found the answers here (though not the currently top answer) useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see with which user the script is launched.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH, etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"