Perform actions on server after CircleCI deployment - Django

I have a Django project that I deploy on a server using CircleCI. The server is a basic cloud server, and I can SSH into it.
I set up the deployment section of my circle.yml file, and everything is working fine. I would like to automatically perform some actions on the server after the deployment (such as migrating the database or reloading gunicorn).
Is there a way to do that with CircleCI? I looked in the docs but couldn't find anything related to this particular problem. I also tried to put ssh user@my_server_ip after my deployment step, but then I get stuck and cannot perform any action: I can successfully SSH in, but the rest of the commands are never run.
Here is what my ideal circle.yml file would look like:
deployment:
  staging:
    branch: develop
    commands:
      - rsync --update ./requirements.txt user@server:/home/user/requirements.txt
      - rsync -r --update ./myapp/ user@server:/home/user/myapp/
      - ssh user@server
      - workon myapp_venv
      - cd /home/user/
      - pip install -r requirements.txt
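I suspect each command in the list runs locally, so nothing after the bare ssh ever reaches the server. A hedged sketch of an alternative, assuming virtualenvwrapper is initialized by the login shell, would pass the remote steps as a single ssh argument:

# run all remote steps in one ssh invocation; bash -l loads the login
# profile so that 'workon' (virtualenvwrapper) is available
ssh user@server "bash -lc 'workon myapp_venv && cd /home/user && pip install -r requirements.txt'"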

I solved the problem by putting a post_deploy.sh file on the server and adding this line to the circle.yml:
ssh -i ~/.ssh/id_myhost user@server 'post_deploy.sh'
It executes the instructions in the post_deploy.sh file, which is exactly what I wanted.
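For reference, a minimal sketch of what such a post_deploy.sh could contain; the exact steps and the gunicorn service name are assumptions based on the question, not the original file:

#!/bin/bash
# hypothetical post-deploy steps on the server; adjust paths and
# service names to your setup
set -e
cd /home/user
pip install -r requirements.txt             # update dependencies
python myapp/manage.py migrate --noinput    # apply database migrations
sudo systemctl reload gunicorn              # reload gunicorn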

Related

AWS Elastic Beanstalk hooks: postdeploy works, predeploy doesn't

I am using an EB app on the Amazon Linux 2 platform, and I need to clone a directory during deployment to get config files for my app.
I added a predeploy hook so that the files are there when the app starts after deployment: .platform/hooks/predeploy/01_import
When I run it in a predeploy hook, the files are not there after deployment.
When I run the exact same script in a postdeploy hook, the files are there.
So the command works, and I can see that the predeploy hook runs (I see the echo text in the log), but the files are not present. Does anyone know why?
#!/bin/bash
mkdir /var/app/current/config
echo Adding github in known hosts
ssh-keyscan -H github.com >> /home/webapp/.ssh/known_hosts
echo Done Adding github in known hosts
echo deleting old flows
echo cloning
git -c core.sshCommand="ssh -i /etc/pki/tls/certs/githubKey" clone -b dev --single-branch <mygithub> /var/app/current/config
echo done cloning
In the predeploy stage, the new code is deployed to /var/app/staging, not /var/app/current.
/var/app/current is only overwritten by staging if the new staging deployment is successful.
So in predeploy, I've cloned to staging instead, and it works.
This is not well documented in the AWS docs; this helped me.
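A hedged sketch of the corrected hook, changing only the clone target from the script above:

#!/bin/bash
# predeploy runs against /var/app/staging, which is promoted to
# /var/app/current only after a successful deployment
mkdir /var/app/staging/config
git -c core.sshCommand="ssh -i /etc/pki/tls/certs/githubKey" clone -b dev --single-branch <mygithub> /var/app/staging/config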

Why does Google Cloud Build run differently for these two commands?

We run these two commands (the first one is async and the other runs synchronously)
#async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both are using our cloudbuild.yaml:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
  timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
Our Dockerfile is very simple:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
#NOTE: This file in google cloud build trigger MUST be in root of monorepo BUT I don't know why
#NOTE: This command receives any arguments to docker
#ie. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
So, when I run the second command, it uses the Docker image exactly as the Dockerfile specifies. When I run the first command, it ignores my Dockerfile entirely and tries to run scripts from my git repo (which is very frustrating and not what I want).
We HAD this directory structure:
- gitroot
  - stagingDeploy
    - Dockerfile
    - deployStaging.sh  # part of Dockerfile
    - cloudbuild.yaml
  - prodDeploy
    - Dockerfile
    - prodDeploy.sh  # part of Dockerfile
    - cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command cannot find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root, and since we have around 5 deploy directories, our repo root is now fully polluted.
It is, to say the least, very frustrating, and we are not sure how to clean this up so that prodDeploy contains all the prod deploy scripts, stagingDeploy the staging ones, and the root is rid of all these files.
Of course, we now have a cluttered git repo directory structure with a whole slew of files in the root directory from various builds (sometimes accidentally conflicting when files get the same names).
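For what it is worth, the second (working) command can be pointed at each subdirectory explicitly; a sketch of the manual equivalent, assuming each directory keeps its own cloudbuild.yaml as shown above:

# submit each deploy's build with its own directory as the build
# context, so no root-level symlinks are needed
(cd stagingDeploy && gcloud builds submit --config cloudbuild.yaml .)
(cd prodDeploy && gcloud builds submit --config cloudbuild.yaml .)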
EDIT: There is not much to share about the trigger configuration, as each trigger just points to its yaml file.
thanks,
Dean

Elastic Beanstalk app deploy post hook not executing my command

I recently was able to get my Laravel app deployed using CodePipeline on Elastic Beanstalk but ran into a problem. I noticed that my routes were failing because of the php.conf Nginx configuration. I had to add a few lines to EB's nginx php.conf file to get it to work.
My problem was that on every deployment, the instance on which I had modified the php.conf file was destroyed and recreated fresh. I wanted a way to update the file automatically after every successful deployment. I keep a version of the file I wanted in my application's repository, and so wanted to create a symlink to that file after deployment.
After loads of research, I stumbled on appdeploy hooks on Elastic Beanstalk that run scripts after deployment, so I did this:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
And this for some reason does not work. The symlink is not created, so my routes are still not working.
I even added some mkdir commands so I could verify the script runs; none of those directories were created, so none of the commands ran.
Please note that if I SSH into the EC2 instance and run the commands there, they work. The bash script also exists in the post directory, and if I run it manually on the server it works too.
Any pointers on how I could fix this would be helpful. Maybe I am doing something wrong.
I have now set up my scripts by following this; however, the script is still not running. I am getting an error:
2020/06/28 08:22:13.653339 [INFO] Following platform hooks will be executed in order: [01_myconf.config]
2020/06/28 08:22:13.653344 [INFO] Running platform hook: .platform/hooks/postdeploy/01_myconf.config
2020/06/28 08:22:13.653516 [ERROR] An error occurred during execution of command [app-deploy] - [RunPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/01_myconf.config failed with error fork/exec .platform/hooks/postdeploy/01_myconf.config: permission denied
I tried to follow this forum post here to make my file executable by adding a new command to my container commands, like so:
01_chmod1:
  command: "chmod +x .platform/hooks/postdeploy/91_post_deploy_script.sh"
I am still running into the same issue: Permission denied.
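One fix that often helps here (hedged, since it depends on how the source bundle is produced): commit the executable bit in git itself, so the hook file arrives executable and no chmod is needed on the instance:

# store the execute permission in the repository so the hook is
# already executable inside the source bundle
git update-index --chmod=+x .platform/hooks/postdeploy/01_myconf.config
git commit -m "Make postdeploy hook executable"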
Sadly, the hooks you are describing (i.e. /opt/elasticbeanstalk/hooks/appdeploy) are for Amazon Linux 1.
Since you are using Amazon Linux 2, as clarified in the comments, the hooks you are trying to use do not apply. Thus they are not being executed.
In Amazon Linux 2, there are new hooks as described here and they are:
prebuild – Files here run after the Elastic Beanstalk platform engine downloads and extracts the application source bundle, and before it sets up and configures the application and web server.
predeploy – Files here run after the Elastic Beanstalk platform engine sets up and configures the application and web server, and before it deploys them to their final runtime location.
postdeploy – Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server.
The use of these new hooks is different than in Amazon Linux 1. Thus you have to either move back to Amazon Linux 1 or migrate your application to Amazon Linux 2.
General migration steps from Amazon Linux 1 to Amazon Linux 2 in EB are described here
Create a folder called .platform in your project root folder and create a file named 00_myconf.config inside the .platform folder.
.platform/
  00_myconf.config
Open 00_myconf.config and add the script:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
Commit your changes or re-upload the project. This .platform folder will be picked up on each new instance creation, and your application will deploy properly on all the new instances Amazon Elastic Beanstalk creates.
If you access the documentation here and scroll to the section titled "Application example with extensions", you can see an example of the folder structure of your .platform folder that adds your custom configuration to the NGINX conf on every deploy.
You can either replace the entire nginx.conf file with your file or add additional configuration files to the conf.d directory
Replace conf file with your file on app deploy:
.platform/nginx/nginx.conf
Add configuration files to nginx.conf:
.platform/nginx/conf.d/custom.conf
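Alternatively, a minimal sketch of the original symlink fix expressed as an Amazon Linux 2 postdeploy hook; the file name is hypothetical, and the script must be committed with the execute bit set:

#!/bin/bash
# .platform/hooks/postdeploy/91_nginx_override.sh (hypothetical name)
# platform hooks already run as root, so sudo is unnecessary
ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
nginx -s reload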

GitLab - Google Compute Engine continuous delivery

What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running there, and I installed all the components needed to run my project: Swift, Vapor, nginx.
I managed to install the GitLab runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master, the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it hangs on "running" for the build stage visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same: Permission denied.
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But it never passes to the next stage, e.g. deploy. I waited more than 14 hours for that, without result.
And... I have a few more questions:
The GitLab runner creates builds under /home/gitlab-runner/builds/, and every job gets its own folder there, e.g. /home/gitlab-runner/builds/2bbbbbd/, which contains my project and is where the commands are executed. So what happens when the first deployment is still running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different on every deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How do I start a service using the GitLab runner?
Thanks to a long, deep search I finally found an answer! The full article can be found at the link above.
Briefly: the GitLab CI documentation recommends using dpl for deployment. The GitLab runner runs the tests, and then the process should end; the runner is designed to kill all processes it created after finishing each build, and it is unable to perform operations outside of its own directory.
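A pattern that fits this constraint (a sketch; the unit name is hypothetical) is to install the app as a systemd service once, and have the deploy job merely restart it, so no long-lived process belongs to the runner:

# deploy-stage sketch: the runner only restarts a pre-configured
# systemd unit; the service itself outlives the build
sudo systemctl restart vapor-app.service
sudo systemctl status vapor-app.service --no-pager

This assumes the gitlab-runner user has been granted passwordless sudo for exactly these commands.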

AWS: CodeDeploy for a Docker Compose project?

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy, since Travis has built-in support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers whose images are pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore, which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on the deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is not what is needed here. My current status is that I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to run docker-compose up manually while SSH'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an After Install script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This is to explicitly start running the Docker daemon--without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution for the Amazon Linux AMI). The After Install script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
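Pieced together from the steps above, a sketch of what that AfterInstall script might look like; the Docker-Compose version is pinned arbitrarily for illustration:

#!/bin/bash
# start the Docker daemon explicitly, otherwise docker-compose fails
# with "Could not connect to Docker daemon"
service docker start
# install Docker-Compose into /opt/bin (not /usr/local/bin; see above)
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose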
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin and stdout redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.