How could I automate build deploy in Jenkins?

We are using Jenkins for CI, and we get late-night builds. Is there any way to automate the build deploy as soon as we get a mail or other notification? Any suggestions would be appreciated.

One mechanism to deploy from a Jenkins build is to use archived artifacts to place the latest binary in a known location, and then kick off a new job (only on success of the compile/test phase) that uses (private-key protected) ssh or scp to copy the artifacts to the test/production machine and then performs the install.
We use a similar mechanism for some automated testing that we do. The tricky part is getting the shell command to handle the ssh keys, so we do the following:
eval `ssh-agent -s`
ssh-add ~/.ssh/your_private_key_here
As long as that private key is on the Jenkins server and the public key is on the server you're trying to push to, you can then use ssh and scp commands in the rest of the script to perform functions on the server in question.
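Putting those pieces together, a minimal "Execute shell" build step might look like the sketch below; the host, user, key path, artifact name, and install script are placeholders rather than anything from a real setup:
#!/bin/bash
# hypothetical Jenkins "Execute shell" step; adjust host, user, key and paths
set -e
eval `ssh-agent -s`
ssh-add ~/.ssh/your_private_key_here
# copy the freshly built artifact to the target machine
scp target/myapp.war deploy@test-server:/opt/myapp/incoming/
# run the install steps on the target
ssh deploy@test-server '/opt/myapp/bin/install.sh /opt/myapp/incoming/myapp.war'
# stop the agent so the key is not left loaded on the build node
ssh-agent -k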
If you prefer to run the process entirely from the target server end, you can create a small script that runs on the server and checks for new files in the artifact directory of your Jenkins build. Thanks to the 'last successful build' path, you don't have to know the build number to do this. To find the specific path, log in to your Jenkins server (once you've saved at least one artifact), find the project you are using, and look at the Last Successful Artifacts, which are URLs to the artifacts of the last successful build. These URLs remain constant and always point at the most recent successful build, so you don't have to worry about them changing unless the project name or server name changes.
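If you go that route, a small polling script on the target server could look something like this (the Jenkins URL, job name, artifact path, and install script are placeholders; run it from cron or similar):
#!/bin/bash
# hypothetical poller run on the target server
URL="http://jenkins.example.com/job/my-project/lastSuccessfulBuild/artifact/target/myapp.war"
DEST=/opt/myapp/incoming/myapp.war
TMP=$(mktemp)
curl -sS -o "$TMP" "$URL"
# only (re)install when the artifact actually changed
if ! cmp -s "$TMP" "$DEST"; then
    mv "$TMP" "$DEST"
    /opt/myapp/bin/install.sh "$DEST"
else
    rm -f "$TMP"
fi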
NOTE: there are security holes here that you can drive a truck through if you are doing this for anything other than a deployment to test. In the case of the first mechanism, your build server has an ssh key that gives it access (potentially destructive) to the target. In the case of the second mechanism, you are trusting that the Jenkins server will only serve up binaries that are good for you. However, for test environments, push to stage, etc. these techniques will work well.

These are the ways I know:
With a script:
In the Jenkins configuration, you can execute Windows/shell commands after the execution of your Maven goals. In my case, I have GlassFish on a Linux server, and via ssh I run the asadmin commands for the deployment. I have installed an instance on the server, and the process I follow is: stop instance, undeploy app, deploy app, start instance.
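As a rough sketch (the instance, application, host, and file names are placeholders, not my actual setup), the shell step run from Jenkins looks something like:
# hypothetical ssh step executed from the Jenkins job
ssh glassfish@app-server '
  asadmin stop-instance myInstance
  asadmin undeploy myapp
  asadmin deploy --target myInstance /opt/deploy/myapp.war
  asadmin start-instance myInstance
'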
With a Maven Deploy Plugin:
This plugin takes a WAR/EAR file and deploys it to a running remote application server at the end of a build. The list of currently supported containers includes:
Tomcat 4.x/5.x/6.x/7.x
JBoss 3.x/4.x
Glassfish 2.x/3.x
https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
With Cargo:
The Deploy Plugin is based on Cargo. You configure it in your pom.xml and execute the deploy goals with Maven.
http://cargo.codehaus.org/
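Assuming the Cargo Maven plugin is configured in your pom.xml with the remote container details, the Jenkins build step then only needs to run the corresponding goal, for example:
# run from the Jenkins job once Cargo is configured in pom.xml
mvn clean package cargo:redeploy -DskipTests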

For Tomcat, the Jenkins and Tomcat configuration is:
Download and install Jenkins on your server and start it. Go to the Jenkins portal, create a project using 'New Item', create a Maven project, and point it at your pom.xml.
Now go to your project, click Configure, select "Restrict where this project can be run", and add master in the Label Expression.
select the "Source Code Management" clisck on git and configure your git repository and credential and branch name.
Select the "Build" add Root pom : pom.xml and Goals and options : clean install -DskipTests
select the "Post-build Actions" and select the "Deploy war/ear to a container"
WAR/EAR files : target/test.war
Context path : test
Containers select tomcat and add Credentials
Tomcat URL : example : http://localhost:8080/
Update the apache-tomcat-8.5.5\webapps\manager\META-INF\context.xml file: comment out (or remove) the Valve element, then restart the server.
context.xml file
Before :
<Context antiResourceLocking="false" privileged="true">
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="192\.168\.0\.9|127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
</Context>
After change :
<Context antiResourceLocking="false" privileged="true" >
</Context>
For auto deployment: go to apache-tomcat-8.5.5\conf\context.xml and add antiResourceLocking="true" to the Context tag.
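To verify that Jenkins can actually reach the manager application with the credentials you configured, you can query Tomcat's text interface; the user, password, and context path below are placeholders, and the user needs the manager-script role in tomcat-users.xml:
curl -u deployer:secret http://localhost:8080/manager/text/list
# after a successful deployment of the example above you should see a line like:
#   /test:running:0:test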

Related

AWS CodeDeploy agent is deleting files in the wrong folder during install

We have an unusual setup. We use git on Azure Devops for our code repositories, and AWS for our cloud-based services. In our arsenal we have a mixture of AWS Lambda functions, along with console apps, web apps, and Windows services running on EC2 instances. We have been able to create CI/CD pipelines for all three classes of apps. For the apps running on EC2 instances we use AWS CodeDeploy. These deployments are more complicated, but they all work -- except for one.
Another unusual thing about our setup is that both our development and QA environments are on the same EC2 instance. When the CodeDeploy agent running on that instance retrieves the deployment archive, it unpacks it, reads the appspec.yml file, runs our before install script, which backs up the existing installation and shuts down any services that might be using those files. Then, the install phase updates the files in the designated environment, then deletes -- or tries to delete -- all the files in the other environment folder.
In other words, if a DEV deployment is running, it replaces the files in the DEV folder and also tries to delete the files in the QA folder. I know this sounds like a scripting problem, but I have checked all the script and YAML files; nowhere do I reference the opposing environment.
In this case, the app is a Windows service. Normally, I get a Ruby 'Permission denied # unlink_internal' error on a file in the other folder. As an experiment, I shut down the service in the other environment in my before install script and, as I expected, the agent deleted all the files in the other environment. It updated the files in the target environment, but left the folder in the other environment empty!
Here are my files. I suspect, the problem is being caused by something I did, but I can't, for the life of me, find it.
These are all .NET projects. In my solution I have a ConfigFiles folder set up with subfolders for each environment. Then, in my pipeline yaml file I run a script to select the correct files to move into the archive based on the git branch that is being built.
Here's the code for the script that selects the correct files.
Here's the Azure pipeline YAML file.
Here's my before install script:
And, finally, here is my appspec.yml file, which the CodeDeploy agent uses to know where to update the files during installation. How I wish this were the wrong path, but in the deployment archive the environment-specific values are all exactly right.
Any ideas on this one would be greatly appreciated.
I encountered the same problem where deployment of an app deletes files from another app in another folder unexpectedly. My solution is to use different deployment groups for each app, even though they are deploying to the same EC2 instance.
Deploying many apps on the same EC2 instance using the same deployment group results in files/folder deletion on other deployed projects.
From AWS Technical Support:
The reason is that codedeploy creates a clean up file by the format '[deployment group 1 ID]_cleanup" in the directory '/opt/codedeploy-agent/deployment-root/deployment-instructions' everytime a deployment is made to the deployment group and this file deletes all the files that had been installed during the previous deployment made to the deployment group. Since the deployment group is the same in your case, when you make a deployment to the deployment group which installs files to the folder "/var/www/project1", files installed by the previous deployment in the folder "/var/www/project2" are being cleaned up and vice versa which is an expected mechanism of the codedeploy agent.
You can find the explanation here: https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html#codedeploy-agent-install-files
Please consider creating two different applications/deployment groups and configure the two pipelines to use different applications/deployment groups, which should fix your problem.
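If you want to see this mechanism on the instance itself, you can list the cleanup files the agent keeps; the path below is the Linux default quoted above, and on a Windows instance the agent root is typically under C:\ProgramData\Amazon\CodeDeploy, so adjust accordingly:
# one [deployment group ID]_cleanup file per deployment group; the agent
# replays it to remove files installed by that group's previous deployment
ls /opt/codedeploy-agent/deployment-root/deployment-instructions/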

Selenium cloud execution on a machine without code or IDE

I set up my Selenium project (Maven, Java, TestNG) in a GitHub repo and it is connected to Jenkins. I am able to execute the Maven project via Jenkins and do the testing. This requires all dependent tools (Maven, Java, Jenkins) to be set up on my local machine.
But we have a requirement to do this in the cloud. I know we can use Selenium Grid with Docker, BrowserStack, or GCP to execute the tests in the cloud, but what we need is to have everything installed in the cloud, so that any external user with access can execute any test via a UI or an executable file without installing anything on their local machine.
Is this possible at all? If yes, how?
I searched a lot and couldn't find anything. One of my friends said it can be done using AWS but doesn't know how. I just need guidance on the path to take here and I'm willing to learn and implement it myself.
Solved this by deploying the code to AWS EC2.
Here's what I did.
I created a TestNG-Maven project and uploaded it to GitHub. Then I created an AWS EC2 t2.micro Linux instance and installed Chrome and Jenkins on it. I accessed Jenkins from my local machine and connected it to the GitHub repo. When I build the project from Jenkins, everything gets downloaded to the EC2 instance and the execution happens there, as headless Chrome.
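For anyone trying to reproduce this, the provisioning on the EC2 instance could look roughly like the sketch below. This is not the exact set of commands I used; it assumes a RHEL-family distribution such as Amazon Linux, and the repo/key URLs and package names should be checked against the current Jenkins and Chrome documentation:
#!/bin/bash
# install git and a JDK (package names vary by distro)
sudo yum install -y git java-17-amazon-corretto
# Jenkins from the official repo (see pkg.jenkins.io for current instructions)
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
sudo yum install -y jenkins
sudo systemctl enable --now jenkins    # UI on port 8080
# Chrome for headless browser execution
curl -sSLo /tmp/chrome.rpm https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
sudo yum localinstall -y /tmp/chrome.rpm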

How to deploy java application in a cloud instance from the scratch to an advanced architecture?

I need to deploy my Spring Boot application on Compute Engine in Google Cloud Platform. I have already created an instance, and Apache and Maven have been installed through SSH. Further, the WAR file has been uploaded into a bucket. Can anybody provide me with the remaining commands to deploy the WAR file on a Tomcat instance, or on any other cloud platform with Linux?
Thanks
Deploying to a Google Compute Engine instance is not substantially different from AWS, Azure, or any other Linux host provider.
You just need an SSH connection to the remote machine, and to install the required software to compile, build, package, deploy, etc.
I will list some approaches from basic (manual) to advanced (automated):
#1 Bash scripting
unzip and configure git
unzip and configure java
unzip and configure maven
unzip and configure tomcat (this is not required if spring-boot is used)
configure the linux host to open 8080 port
create a script called /devops/pipeline.sh in your remote cloud linux instance, with the following steps
For war deployment :
# get the source code
mkdir -p /tmp/folder/3dac58b7
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create war
mvn clean package
# move war to deploy tomcat folder
cp target/my_app.war /my/tomcat/webapps
# stop tomcat
bash /my/tomcat/shutdown.sh
# start tomcat
bash /my/tomcat/startup.sh
Or, for Spring Boot startup:
# get the source code
mkdir -p /tmp/folder/3dac58b7
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create jar
mvn clean package
# kill or stop the application
killall java
# start the application
java $JAVA_OPTS -jar $jar_file_name
After push to git, just connect to you instance using ssh and execute
bash /devops/pipeline.sh
Improvements: Parametrize repository name, branch name, mvn profile, database credentials, create a tmp/uuid folder in every execution, delete the tmp/uuid after deploy, optimize start and stop of application using pid, etc.
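A hedged sketch of such a parametrized pipeline.sh (repository, branch, paths, and the pid/log locations are placeholders; the pid file replaces the killall above):
#!/bin/bash
# usage: bash /devops/pipeline.sh <git_repo_url> [branch]
set -e
REPO="$1"
BRANCH="${2:-master}"
WORKDIR="/tmp/build-$(date +%s)-$$"   # fresh folder per execution
PID_FILE=/var/run/myapp.pid           # placeholder location
mkdir -p "$WORKDIR" && cd "$WORKDIR"
git clone --branch "$BRANCH" "$REPO" .
mvn clean package -DskipTests
# stop the previous instance by pid instead of killall
[ -f "$PID_FILE" ] && kill "$(cat "$PID_FILE")" || true
JAR=$(ls target/*.jar | head -n 1)
nohup java $JAVA_OPTS -jar "$JAR" > /var/log/myapp.log 2>&1 &
echo $! > "$PID_FILE"
# delete the temporary folder after deploy
cd / && rm -rf "$WORKDIR"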
#2 Docker
Install docker in your remote cloud linux instance
Create a Dockerfile with all the steps for war or springboot (approach #1) and store it close to your source code (I mean in your git repository)
Perform a git push of your changes
Connect to your remote cloud linux instance using ssh:
Build your docker image: docker build ...
Delete previous container and run a new version:
docker rm my_app -f
docker run -d --name my_app -p 8080:8080 my_image_name
In the previous approaches, build operations are performed on the remote server. To do that, several tools are needed on that server. In the following approaches, the build is performed on an intermediate server and only the deploy is executed on the remote server. This is a little better.
#3 Local Build (an extra instance is required)
In this approach, the build is performed on the developer machine and the result is uploaded to some kind of repository. I advise you to use Docker instead of just a WAR or JAR build.
In order to build and upload the Docker image, one of these Docker registries is required:
Docker simple registry
Amazon Elastic Container Registry (ECR)
Docker Hub
Harbor
JFrog Container Registry
Nexus Container Registry
Portus
Azure Container Registry
Choose one and install it in a new server. Configure your developer machine and your remote server to point to your new docker registry.
Final steps are:
Perform a docker build on your developer machine. This will create a new Docker image of your Java application (Tomcat + WAR, or Spring Boot JAR).
Upload your local image to your new docker registry with something like:
docker push example.com/test-image
Connect to your remote cloud linux instance using ssh and just download the docker image
docker pull example.com/test-image
In the remote server, just start your new downloaded image with docker run...
#4 Use a continuous integration server (an extra instance is required)
Same as #3, but not on the developer machine. All the steps are performed on another server: the continuous integration server.
#4.1 Use a continuous integration server (an extra instance is required)
Install Jenkins or another Continuous integration server in the new instance
Configure plugins and other required things in jenkins in order to enable webhook url : https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab
Create a job in jenkins to call the script of approach #1 or execute docker commands of approach #2. If you can, Approach #3 would be perfect.
Configure your SCM (github, bitbucket, gitlab, etc) to point to the webhook url published by Jenkins.
When you are ready to deploy, just push the code to your SCM; Jenkins will be notified and will execute the previously created job. As you can see, no human is required to deploy the application on the server (with the exception of the developer's push).
Note: At this point, you could migrate the scripts of approaches #1 and #2 to:
Jenkins pipeline script
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_scripted_pipeline_java_mvn_basic-js
Jenkins declarative pipeline
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_declarative_pipeline_hello_world-js
These are more advanced and scalable approaches to mapping all the commands and configurations required from the beginning to the deployment.
#5 Advanced (a sysadmin team or extra people and knowledge are required)
More instances and technologies will be required.
Kubernetes
Ansible
High availability / Load balancer
Backups
Configurations Management
And more automations
This will be necessary when more and more web applications, microservices are required in your company/enterprise.
#6 Saas
All the previous approaches could be simplified using WORLD CLASS platforms like:
Jelastic
Heroku
Openshift, etc

Nexus 3.5.1 proxies from snapshot repo nothing but maven metadata files

I have upgraded nexus repository from 2.x to 3.x through following path:
2.4.14 -> 3.4.0 -> 3.5.1
All Nexus services were packed in Docker with the data directory mapped from the host. For all services I use the default sonatype/nexus or sonatype/nexus3 containers. The Nexus web interface is hidden behind nginx with simple reverse proxying.
I use the Nexus service with boot-cj tools (with no credentials), which manage dependencies the same way as Maven. The tool first downloads nexus-maven.xml with the relevant sha1 files and then tries to download the JARs. It worked fine with all the 2.x versions I had.
I created a proxy repository against the remote sonatype-snapshots repo. When I start compilation I get a 'Could not find artifact' error. I found that the metadata files are cached, but not the POMs and JARs.
I have tried to fix it by cleaning the cache with the clean_cache file trick, and more roughly with rm -rfv /srv/nexus3/nexus-data/cache/*, with no success. There are no logs about the error. I have also checked manually that the required artifact exists in the remote repository. The more obvious Rebuild index button gave no solution either. I do not think it is a problem with nginx, but who knows? Leaving it overnight to run the scheduled tasks did not help either.
The expected artifact is org.eclipse.rdf4j:rdf4j:pom:2.3-20170901.145510-11.

Visual Studio Team Services Build cannot deploy test agent

I've got a VSTS build that is trying to deploy a test agent so I can run some Selenium tests, and when I get to the "Deploy TestAgent on" build step, I am getting the following error:
2017-06-22T14:29:05.6157972Z ##[warning]Task 'DownloadTestAgent' for
machine vmtest43xxx.cloudapp.net:5986's Error : System.Exception: The
process cannot access the file 'C:\TestAgent\vstf_testagent.exe'
because it is being used by another process.
Also, if setting up a local build agent is a good workaround for this, I'm all ears, but so far I have had a lot of trouble trying to set up a local test agent. This seems weird since setting up a local build agent was relatively easy up to this point. Any suggestions on how I should set up a local agent? I've been trying to follow instructions from here and here.
Thanks!
It's easy to set up a local build agent, so try setting one up:
Steps for windows:
Go to Admin page of Agent Pools (https://[account].visualstudio.com/_admin/_AgentPool)
Click Download agent button
Unzip the downloaded file
Run Command Line as Administrator
Run config.cmd
Specify the collection URL (https://[account].visualstudio.com), a Personal Access Token, etc.
For more information, you can refer to: Deploy an agent on Windows.
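If you would rather script the configuration than answer the prompts, the agent also supports an unattended mode (flags as documented for the Windows agent; the URL, token, pool, and agent name below are placeholders):
rem run from the unzipped agent folder in an elevated Command Prompt
config.cmd --unattended --url https://[account].visualstudio.com --auth pat --token <your-PAT> --pool default --agent mybuildagent --runAsService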
You can specify a Test Agent Location (accessible from the build agent machine) for the Visual Studio Test Agent Deployment task to save time.