I have a Django project which is deployed in a Docker container. I created a pipeline in Jenkins triggered via GitHub webhooks. Everything works fine, but I have some user files in the project directory which I want to back up before Jenkins pulls the repository. Is there a way to add a pre-build step to my pipeline script, or to avoid deleting files when running the git pull command?
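A minimal sketch of such a pre-build step, run as a shell step before the checkout; the uploads folder name and the backup location are assumptions, not part of the original setup:
# archive user files before Jenkins updates the workspace
BACKUP_DIR=/var/backups/myproject            # assumed writable by the Jenkins user
mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/uploads-$(date +%Y%m%d%H%M%S).tar.gz" -C "$WORKSPACE" uploads
Alternatively, keeping the user files in a directory outside the Jenkins workspace avoids the problem entirely, since neither git pull nor a workspace wipe will touch them.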
I had a few questions about automatic git pulls on a remote server. I am aware there are several questions like this, but I wasn't sure what steps to take exactly, and I don't want to mess up my current setup with a mistake :/
To wit, the environment is on a Google Cloud VM. I am running a flask-based website that renders each page with the render_template() function.
The website resides inside its git folder, i.e. I never set up a bare repo and copied stuff. When I set it up a couple years ago, I just did git clone repo-url, then inside the repo directory, did flask run. Then I set up nginx to connect to the site's socket created with uwsgi inside the repo directory.
--
It has been working fine. I make changes locally to the content, push to github, then log in to the VM, and perform a git pull.
I want to do this automatically. I tried adding a cron job to do this, where the job basically ran a script, and the script did the git pull. Script content was:
cd /repo
git pull
Running the script manually on the server worked, but cron never managed to do the pull.
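For reference, a hedged sketch of what the crontab entry might look like; the schedule and log file are assumptions, and keeping a log usually reveals why cron (which runs with a minimal environment and no SSH agent) fails where an interactive shell succeeds:
# run every 5 minutes and keep the output for debugging
*/5 * * * * cd /repo && /usr/bin/git pull >> /tmp/gitpull.log 2>&1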
--
I have been reading about web hooks, and there is a bunch of stuff about post-receive hooks, post-update hooks, and making bare repos. At this point, I am embarrassed to say I have no idea what I should be doing.
Any help is greatly appreciated.
Another option would be to consider a GitHub Action which, from GitHub, could interact with your Google Cloud VM.
For example, actions-hub/gcloud.
- uses: actions-hub/gcloud@master
  env:
    PROJECT_ID: test
    APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
  with:
    args: cp your-file.txt gs://your-bucket/
    cli: gsutil
We have an unusual setup. We use git on Azure Devops for our code repositories, and AWS for our cloud-based services. In our arsenal we have a mixture of AWS Lambda functions, along with console apps, web apps, and Windows services running on EC2 instances. We have been able to create CI/CD pipelines for all three classes of apps. For the apps running on EC2 instances we use AWS CodeDeploy. These deployments are more complicated, but they all work -- except for one.
Another unusual thing about our setup is that both our development and QA environments are on the same EC2 instance. When the CodeDeploy agent running on that instance retrieves the deployment archive, it unpacks it, reads the appspec.yml file, runs our before install script, which backs up the existing installation and shuts down any services that might be using those files. Then, the install phase updates the files in the designated environment, then deletes -- or tries to delete -- all the files in the other environment folder.
In other words, if a DEV deployment is running, it replaces the files in the DEV folder and also tries to delete the files in the QA folder. I know this sounds like a scripting problem, but I have checked all the script and YAML files, and nowhere do I reference the opposing environment.
In this case, the app is a Windows service. Normally, I get a Ruby 'Permission denied @ unlink_internal' error on a file in the other folder. As an experiment, I shut down the service in the other environment in my before install script and, as I expected, the agent deleted all the files in the other environment. It updated the files in the target environment, but left the folder in the other environment empty!
Here are my files. I suspect, the problem is being caused by something I did, but I can't, for the life of me, find it.
These are all .NET projects. In my solution I have a ConfigFiles folder set up with subfolders for each environment. Then, in my pipeline yaml file I run a script to select the correct files to move into the archive based on the git branch that is being built.
Here's the code for the script that selects the correct files.
Here's the Azure pipeline YAML file.
Here's my before install script:
And, finally, here is my appspec.yml file, which the CodeDeploy agent uses to know where to update the files during installation. How I wish this were the wrong path, but in the deployment archive the environment-specific values are all exactly right.
Any ideas on this one would be greatly appreciated.
I encountered the same problem where deployment of an app deletes files from another app in another folder unexpectedly. My solution is to use different deployment groups for each app, even though they are deploying to the same EC2 instance.
Deploying many apps on the same EC2 instance using the same deployment group results in files/folder deletion on other deployed projects.
From AWS Technical Support:
The reason is that codedeploy creates a cleanup file with the format '[deployment group 1 ID]_cleanup' in the directory '/opt/codedeploy-agent/deployment-root/deployment-instructions' every time a deployment is made to the deployment group, and this file deletes all the files that had been installed during the previous deployment made to the deployment group. Since the deployment group is the same in your case, when you make a deployment to the deployment group which installs files to the folder "/var/www/project1", files installed by the previous deployment in the folder "/var/www/project2" are being cleaned up, and vice versa, which is an expected mechanism of the codedeploy agent.
You can find the explanation here: https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html#codedeploy-agent-install-files
Please consider creating two different applications/deployment groups and configure the two pipelines to use different applications/deployment groups, which should fix your problem.
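As a rough, hedged sketch (the application name, role ARN, and tag values below are placeholders), a second application/deployment group can be created with the AWS CLI and pointed at the same instance:
# hypothetical names and ARNs; both groups can target the same EC2 instance tags
aws deploy create-application --application-name MyService-QA
aws deploy create-deployment-group \
  --application-name MyService-QA \
  --deployment-group-name QA \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole \
  --ec2-tag-filters Key=Name,Value=shared-devqa-instance,Type=KEY_AND_VALUE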
I need to deploy my Spring Boot application on Compute Engine in Google Cloud Platform. I have already created an instance, and Apache and Maven have been installed through SSH. Further, the WAR file has been uploaded to a bucket. Can anybody provide me with the remaining commands to deploy the WAR file on a Tomcat instance, or on any other cloud platform with Linux?
Thanks
Deploying to a Google Compute Engine instance is not substantially different from deploying to AWS, Azure, or any other Linux host provider.
You just need an SSH connection to the remote machine and to install the required software to compile, build, package, and deploy.
I will list some approaches, from basic (manual) to advanced (automated):
#1 Bash scripting
Download, unzip, and configure git
Download, unzip, and configure Java
Download, unzip, and configure Maven
Download, unzip, and configure Tomcat (not required if Spring Boot is used)
Configure the Linux host to open port 8080
Create a script called /devops/pipeline.sh on your remote cloud Linux instance with the following steps.
For WAR deployment:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create war
mvn clean package
# move war to deploy tomcat folder
cp target/my_app.war /my/tomcat/webapps
# stop tomcat
bash /my/tomcat/shutdown.sh
# start tomcat
bash /my/tomcat/startup.sh
Or, for Spring Boot startup:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create jar
mvn clean package
# kill or stop the application
killall java
# start the application
java $JAVA_OPTS -jar $jar_file_name
After pushing to git, just connect to your instance using SSH and execute:
bash /devops/pipeline.sh
Improvements: parametrize the repository name, branch name, Maven profile, and database credentials; create a tmp/uuid folder on every execution and delete it after the deploy; optimize start and stop of the application using a pid file; etc. A sketch along these lines is shown below.
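A hedged sketch of such a parametrized /devops/pipeline.sh; the repository URL, artifact paths, and log location are placeholders, not part of the original answer:
#!/bin/bash
# hypothetical defaults; override by passing arguments: bash /devops/pipeline.sh <repo> <branch>
REPO_URL=${1:-http://github.com/myrepo.git}
BRANCH=${2:-master}
WORK_DIR=/tmp/folder/$(uuidgen)

# fresh clone into a per-execution folder
git clone --branch "$BRANCH" "$REPO_URL" "$WORK_DIR"
cd "$WORK_DIR" || exit 1

# build the jar
mvn clean package

# copy the artifact to a stable location, then restart the application
cp target/*.jar /devops/app.jar
killall java || true
nohup java $JAVA_OPTS -jar /devops/app.jar > /devops/app.log 2>&1 &

# remove the per-execution folder
cd / && rm -rf "$WORK_DIR"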
#2 Docker
Install Docker on your remote cloud Linux instance
Create a Dockerfile with all the steps for the WAR or Spring Boot build (approach #1) and store it close to your source code (I mean in your git repository)
Perform a git push of your changes
Connect to your remote cloud Linux instance using SSH:
Build your docker image: docker build ...
Delete the previous container and run a new version:
docker rm my_app -f
docker run -d --name my_app -p 8080:8080 my-image-name
In the previous approaches, build operations are performed on the remote server. To do that, several tools are needed on that server. In the following approaches, the build is performed on an intermediate server and only the deploy step is executed on the remote server. This is a little better.
#3 Local Build (an extra instance is required)
In this approach, the build is performed on the developer machine and the result is uploaded to some kind of repository. I advise you to use Docker instead of just a WAR or JAR build.
In order to build and upload the Docker image, one of these Docker registries is required:
Docker simple registry
Amazon Elastic Container Registry (ECR)
Docker Hub
Harbor
JFrog Container Registry
Nexus Container Registry
Portus
Azure Container Registry
Choose one and install it on a new server. Configure your developer machine and your remote server to point to your new Docker registry.
Final steps are:
Perform a docker build on your developer machine. This will create a new Docker image of your Java application (Tomcat + WAR, or the Spring Boot JAR).
Upload your local image to your new Docker registry with something like:
docker push example.com/test-image
Connect to your remote cloud Linux instance using SSH and just download the Docker image:
docker pull example.com/test-image
On the remote server, just start the newly downloaded image with docker run ...
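Putting the remote-server side together, a minimal hedged sketch; the registry address, container name, and port are assumptions:
# pull the new image and replace the running container
docker pull example.com/test-image
docker rm -f my_app || true
docker run -d --name my_app -p 8080:8080 example.com/test-image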
#4 Use a continuous integration server (an extra instance is required)
Same as #3, but not on the developer machine. All the steps are performed on another server: the continuous integration server.
#4.1 Use a continuous integration server (an extra instance is required)
Install Jenkins or another continuous integration server on the new instance
Configure plugins and other required things in Jenkins in order to enable the webhook URL: https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab
Create a job in Jenkins to call the script of approach #1 or execute the Docker commands of approach #2. If you can, approach #3 would be perfect.
Configure your SCM (GitHub, Bitbucket, GitLab, etc.) to point to the webhook URL published by Jenkins.
When you are ready to deploy, just push the code to your SCM; Jenkins will be notified and will execute the previously created job. As you can see, no human is required to deploy the application to the server (with the exception of the developer's push).
Note: At this point, you could migrate the scripts of approaches #1 and #2 to:
Jenkins pipeline script
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_scripted_pipeline_java_mvn_basic-js
Jenkins declarative pipeline
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_declarative_pipeline_hello_world-js
These are more advanced and scalable approaches that map all the commands and configurations required, from checkout through to deployment.
#5 Advanced (a sysadmin team or extra people and knowledge are required)
More instances and technologies will be required.
Kubernetes
Ansible
High availability / Load balancer
Backups
Configuration management
And more automations
This will become necessary when more and more web applications and microservices are required in your company/enterprise.
#6 SaaS
All the previous approaches could be simplified using world-class platforms like:
Jelastic
Heroku
OpenShift, etc.
I have Django web application code on GitHub. From time to time, I make necessary updates and arrangements in the repository. I then have to pull the project every time, make adjustments to the Docker setup, and run it on my machine.
Is there a way to keep Docker in sync with the code in my GitHub repository? When I make a change on GitHub, I want Docker to pull it automatically and try to run the project without interruption.
Using hooks inside Jenkins, we configure Git and Docker.
Say:
Whenever we push changes to Git, the Jenkins job is triggered; Jenkins pulls the changes, builds a new Docker image, and pushes the image to Docker.
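A minimal sketch of the shell step such a Jenkins job could run after it has checked out the repository; the registry address, image name, and port are assumptions:
# build a fresh image from the just-pulled sources
docker build -t registry.example.com/myapp:latest .
# push it to the registry
docker push registry.example.com/myapp:latest
# or, when Jenkins runs on the same host, simply replace the running container
docker rm -f myapp || true
docker run -d --name myapp -p 8000:8000 registry.example.com/myapp:latest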
We are using Jenkins for CI and we get late-night builds. Is there any way to automate the deployment of a build as soon as we get a mail or other notification? Any suggestions would be appreciated.
One mechanism to deploy off of a build on Jenkins is to use artifacts to place the latest binary in a known location, and then kick off a new job (only on success of the compile/test phase) which uses (private key protected) ssh or scp to copy the artifacts to the test/production machine and then perform the install.
We use a similar mechanism for some automated testing that we do. The tricky part is getting the shell command to handle the ssh keys, so we do the following:
eval `ssh-agent -s`
ssh-add ~/.ssh/your_private_key_here
As long as that private key is on the Jenkins server and the public key is on the server you're trying to push to, you can then use ssh and scp commands in the rest of the script to perform functions on the server in question.
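For example, a hedged sketch of the rest of such a script; the artifact path, host, and restart command are placeholders:
# copy the freshly built artifact to the target machine and restart the service
scp target/myapp.war deploy@test-server:/opt/myapp/myapp.war
ssh deploy@test-server 'sudo systemctl restart myapp'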
If you prefer to run the process entirely from the target server end, you can create a small script that runs on the server and checks for new files in the artifact directory of your Jenkins build. Thanks to the "last successful build" permalink, you don't have to know the build number to do this. To find the specific path, log in to your Jenkins server (once you've saved at least one artifact), find the project you are using, and look at the Last Successful Artifacts, which are URLs to the artifacts of the last successful build. These URLs remain constant and always point at the most recent successful build, so you don't have to worry about them changing unless the project name or server name changes.
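A hedged sketch of such a script; the Jenkins URL, job name, and artifact path are placeholders, and it is the lastSuccessfulBuild permalink that keeps the URL stable:
# run periodically on the target server (e.g. from cron)
ARTIFACT_URL="http://jenkins.example.com/job/myapp/lastSuccessfulBuild/artifact/target/myapp.war"
# download the latest successful artifact and install it only if the download succeeds
curl -fsS -o /tmp/myapp.war "$ARTIFACT_URL" && cp /tmp/myapp.war /opt/myapp/myapp.war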
NOTE: there are security holes here that you can drive a truck through if you are doing this for anything other than a deployment to test. In the case of the first mechanism, your build server has an ssh key that gives it access (potentially destructive) to the target. In the case of the second mechanism, you are trusting that the Jenkins server will only serve up binaries that are good for you. However, for test environments, push to stage, etc. these techniques will work well.
These are the ways I know:
With a script:
In the Jenkins configuration, you can execute Windows/shell commands after your Maven goals run. In my case, I have a GlassFish server on Linux, and via SSH I execute asadmin commands for the deployment. I have installed an instance on the server, and the process I follow is: stop instance, undeploy app, deploy app, start instance.
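A hedged sketch of that stop/undeploy/deploy/start sequence over SSH, assuming the WAR has already been copied to the server; the asadmin path, instance name, and application name are placeholders:
# hypothetical GlassFish instance and application names
ssh admin@glassfish-host '
  /opt/glassfish/bin/asadmin stop-instance instance1
  /opt/glassfish/bin/asadmin undeploy --target instance1 myapp
  /opt/glassfish/bin/asadmin deploy --target instance1 /tmp/myapp.war
  /opt/glassfish/bin/asadmin start-instance instance1
'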
With the Jenkins Deploy Plugin:
This plugin takes a WAR/EAR file and deploys it to a running remote application server at the end of a build. The list of currently supported containers includes:
Tomcat 4.x/5.x/6.x/7.x
JBoss 3.x/4.x
Glassfish 2.x/3.x
https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
With Cargo:
The Deploy Plugin is based on this. You must edit your pom.xml and execute the deploy goals with Maven.
http://cargo.codehaus.org/
For deployment to Tomcat, the Jenkins and Tomcat configuration is:
Download and install Jenkins on your server and start it. Go to the Jenkins portal, create the project using 'New Item', select the pom.xml, and create the Maven project.
Now go to your project, click on Configure, select "Restrict where this project can be run", and add master in the Label Expression.
Select "Source Code Management", click on Git, and configure your Git repository, credentials, and branch name.
Under "Build", add Root POM: pom.xml and Goals and options: clean install -DskipTests
Under "Post-build Actions", select "Deploy war/ear to a container":
WAR/EAR files: target/test.war
Context path: test
Containers: select Tomcat and add credentials
Tomcat URL, for example: http://localhost:8080/
Update the apache-tomcat-8.5.5\webapps\manager\META-INF\context.xml file: comment out or remove the Valve element, then restart the server.
context.xml file
Before :
<Context antiResourceLocking="false" privileged="true">
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="192\.168\.0\.9|127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
</Context>
After change :
<Context antiResourceLocking="false" privileged="true" >
</Context>
For auto deployment: go to apache-tomcat-8.5.5\conf\context.xml and add antiResourceLocking="true" to the 'Context' tag.