I have a job that I want to run every time a commit is made to a repository. I want to avoid pulling the code down; I only want the commit notification to trigger the build. So, is there either a way to skip checking out certain repositories in a build's SCM configuration, or a way to poll repositories that aren't in the build's SCM?
You could use a post-commit hook to trigger your Hudson job.
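For example, on the Subversion server a post-commit hook can ping Hudson over HTTP. A minimal sketch, assuming the job has "Trigger builds remotely" enabled; the host name, job name, and token below are placeholders:

#!/bin/sh
# Subversion passes the repository path and the new revision to the hook.
REPOS="$1"
REV="$2"
# Fire-and-forget request that queues the Hudson job without any checkout.
curl -s "http://hudson.example.com/job/notify-job/build?token=SECRET_TOKEN&cause=svn+rev+$REV" >/dev/null

The hook lives at REPOS/hooks/post-commit on the SVN server and must be executable.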
Since you want to avoid changing SVN, you have to write a job that gets executed every so often (maybe every 5 minutes). This job runs an svn command, using a Windows batch or shell script task, to get the current revision of the branch in question. You can set the status of the job to unstable if there is a change. Don't use failure, because then you can't distinguish between a real failure and a repository change. I think there is a plugin that sets the job status depending on the contents of your output.
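A sketch of what that script step could look like; the repository URL and the state file location are assumptions, and the REVISION_CHANGED marker is something a text-finder style plugin could match to flip the build to unstable:

#!/bin/sh
REPO_URL="http://svn.example.com/repo/branches/mybranch"  # assumed branch URL
STATE_FILE="$WORKSPACE/last_revision.txt"                 # persists between runs if archived

# Ask the server for the latest revision without checking anything out.
CURRENT=$(svn info "$REPO_URL" | awk '/^Last Changed Rev:/ {print $4}')
LAST=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

echo "$CURRENT" > "$STATE_FILE"
if [ "$CURRENT" != "$LAST" ]; then
  # Match this line with a text-finder style plugin to mark the run unstable.
  echo "REVISION_CHANGED: $LAST -> $CURRENT"
fi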
You can then use the Email Extension plugin to send an email every time the revision changes. You can get the revision number from the last (or better, the last successful or unstable) run. You can archive a file containing the revision number with the job, or you can set the run's description to the revision using the Description Setter plugin. Have a look at Hudson's remote API for ideas on how to get the information from the previous run.
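The remote API is exposed per job; for instance, something like this (host and job name assumed) fetches the description of the last successful run:

curl "http://hudson.example.com/job/notify-job/lastSuccessfulBuild/api/xml?xpath=//description"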
Since you run your job very often during the day, don't forget to delete old runs. I would keep at least two days' worth of history, though, just in case your SVN server is down for 24 hours.
Is there a way to set up a build's priority in a YAML-based pipeline? There seem to be references to build priority in the Azure DevOps API, but nothing on how to do this via YAML. I thought there might be some docs in the Triggers section, but no.
We need this because we have some fast-building NuGet packages, but these get starved by slow-build pipelines, making turnaround time for packages painful.
The closest thing I could come up with to work around this is using agent demands in the YAML
demands:
- Agent.ComputerName -equals XYZ  # demands use the -equals operator, not =
to separate build pipelines, but this is a bit of a hack and doesn't use agents efficiently.
A way to set this in the UI would be acceptable, but I couldn't seem to find anything.
Recently Azure DevOps introduced the ability to manually specify that a build/release runs next.
This manifests as a Run next button on a queued run.
So while you can't say "this pipeline always takes priority" yet, you can manually force a specific run to the front of the queue.
If you need a specific pipeline to always take priority, then you likely want to set up a separate agent pool just for those pipelines, or use demands as Leo Liu mentioned.
Setting build priority in yaml or UI
I'm afraid this feature is not supported in Azure DevOps at the moment.
There is a popular user voice about it; you can upvote it and check the feedback on that ticket.
Currently, as a workaround, just like what you did, set demands in the build definitions to force the builds onto specific agents.
Hope this helps.
We have a CodePipeline set up to do a build, deploy to a QA ECS environment, then a manual approval step to deploy to Prod.
What gets confusing, though, is when there are several builds running one after another. Several builds get deployed to QA in sequence, but then the Approval button seems to approve them one at a time, and it's not clear which build you're approving when you click on it.
What I would like to be able to do is approve the latest build, in case the earlier builds had issues that were fixed by the later builds. What would be the best way to accomplish that?
I had the same problem. Manual approvals are confusing since several pipeline executions can get queued and it's easy to lose track of things. I think we can blame this on CodePipeline's bad UX.
The workaround I settled with is to have two identical pipelines for the same project. They have the same source stage (same repo/branch) but different deploy stages (one deploys to QA, one deploys to prod). No more manual approval stages. The QA pipeline is set to auto-execute when changes in the source (repo/branch) are detected while the Prod pipeline needs to be manually released.
Basically, we replaced the Manual Approval with Manual Release. Manual release always releases the latest from source unlike manual approvals.
You should place the deploy and approval actions in the same stage. This lets you approve exactly what you tested. Why? Because exactly one pipeline execution can be in a pipeline stage at any given time.
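A sketch of what that stage layout looks like when you dump the definition with the AWS CLI; the pipeline, stage, and action names here are placeholders:

aws codepipeline get-pipeline --name my-pipeline --query 'pipeline.stages'
# Abridged expected shape: the deploy action runs first, and the approval
# action (runOrder 2) gates the exit of the same stage.
# {
#   "name": "DeployQA",
#   "actions": [
#     { "name": "DeployToQA",  "runOrder": 1, "actionTypeId": { "category": "Deploy", ... } },
#     { "name": "ApproveProd", "runOrder": 2, "actionTypeId": { "category": "Approval", ... } }
#   ]
# }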
...approve the latest build, in case the earlier builds had issues that were fixed by the later builds.
If you want to let later builds catch up, reject the earlier build that is waiting for approval.
One option, if you don't want to have multiple pipelines, is to disable by default the stage transitions into the environments that require controlled releases.
When you are ready to deploy into an environment, you enable the stage transition to allow the most recent release from the previous stage to be processed, and then disable the transition again.
It's still a bit clunky, but reasonably effective once you get used to it. Having to reject each change that comes through becomes very slow and cumbersome to manage, so by disabling transitions you choose when to promote a release.
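The same toggle is scriptable with the AWS CLI, which makes the routine less tedious; the pipeline and stage names below are placeholders:

# Close the gate into Prod; queued releases wait at the transition.
aws codepipeline disable-stage-transition \
  --pipeline-name my-pipeline \
  --stage-name Prod \
  --transition-type Inbound \
  --reason "Hold until manually promoted"

# Re-open it when ready; the most recent release flows through.
aws codepipeline enable-stage-transition \
  --pipeline-name my-pipeline \
  --stage-name Prod \
  --transition-type Inbound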
IMO, CodePipeline should have an option to automatically supersede executions if they are paused at the manual approval stage.
In the CodePipeline UI you can see the history of manual approvals in your pipeline's History. Click History to see what's in progress (manual approvals that haven't timed out will always be in progress) and the source (git) short SHA that triggered it (if you need to narrow down to the relevant commit).
To know which manual approval you're approving: in the pipeline view, click View current revisions next to the Manual step (to get the execution ID), then find the matching execution ID in History (it should be the oldest one).
The only way I found to get to the latest approval is to hit Reject n-1 times in the pipeline (where n is how many manual approvals are still in progress) until only one approval is left (or until I find the matching execution ID).
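The matching is quicker from the CLI; a sketch that lists the in-progress executions together with their source revisions (the pipeline name is a placeholder):

aws codepipeline list-pipeline-executions \
  --pipeline-name my-pipeline \
  --query 'pipelineExecutionSummaries[?status==`InProgress`].[pipelineExecutionId,sourceRevisions[0].revisionId]' \
  --output table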
Well, we could solve this problem with development work, as you describe it, but it might also be a process glitch.
For example: if we have a development branch, a release branch (staging), and a master branch (production), we could easily solve this issue.
Development branch
Things we develop go through the development branch stage, where we don't need the manual approval, as we don't want to check every change. We have set up automated unit tests for that.
Release branch
This deploys to the staging environment, where we extensively test the software quality, partly based on regression tests run against an acceptance chain with acceptance systems. This should catch all the big issues before the merge to the master branch. Next to that, we can also manually test the release branch on the staging environment. If this works, be happy and simply merge to master.
Master branch
This deploys to the production environment, with a manual approval before the actual deployment takes place, so you know for sure you are only pushing one change, namely the merge from release to master, which prevents the issues you've summarized in the ticket.
Another way would be for AWS to develop a new feature where you can check or uncheck a box saying "always take the latest release", but that would not add value to the pipeline integration, as things would be pushed without being tested well enough.
As a preface, our setup is somewhat unusual due to "legacy" reasons. It is fully possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working or a suggestion on a different approach.
Environment
Java application with over 10K JUnit unit tests. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but that will not happen soon).
The application is broken up into multiple modules, each with its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits code to the repo subtree holding a module's code and only that module's tests get executed, they get a result quickly.
Current Jenkins Setup
JUnit job
This is a single parameterized job that can run tests for any module. The job takes as parameters the regexes selecting which tests to run, plus a parameter indicating which module it is running, for notification purposes. It checks out the whole repo tree and then does the run based on the parameters.
After the run completes, this job analyzes the JUnit results, publishes the report, and sends out email notifications.
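The test-running step in such a job could be a sketch like this; Maven and the parameter names TEST_PATTERN and MODULE are assumptions, and any build tool that accepts a test filter works the same way:

#!/bin/sh
# TEST_PATTERN and MODULE arrive as Jenkins build parameters.
echo "Running tests for module: $MODULE"
mvn -pl "$MODULE" test -Dtest="$TEST_PATTERN"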
Repo watchers
One repo watcher for each module. The watcher checks out only the repo subtree it wants to monitor. When a change is detected, it triggers the JUnit job, telling it which tests to run and which component the run is for.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice and expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of the unit tests, the run-to-run comparison of test results does not provide any value. Without manually scanning across runs it is not possible to tell what changed in terms of new failures or new fixes.
Something very similar happens to the change history: each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time to each other. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day, I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same job in Jenkins, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I get this right in basic language:
You want to track which failures are caused by which changeset?
In that case I would suggest the following, again in simple terms - you will need to adapt it to your current setup.
Set up a job that manages results.
This job should be parameterized to take the name or change number, and it should publish ALL the results.
It should be triggered after each test run completes, to consolidate all the results.
Now, if a new test or a new failure is introduced, the same job can track it and email the person who caused the failure (see the sketch below).
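For example, each test job could hand its identity to that result-manager job as it finishes. A sketch; the host, job name, and parameter names are assumptions, while SVN_REVISION is the revision variable Jenkins exposes for Subversion checkouts:

#!/bin/sh
# Run as a post-build step of every test job to feed the result manager.
curl -s -X POST "http://jenkins.example.com/job/result-manager/buildWithParameters?CHANGE_ID=$SVN_REVISION&MODULE=$MODULE" >/dev/null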
Jenkins is very powerful and very generic, so pretty much every scenario can be handled, if not with plugins then with Groovy. So I would suggest taking a pen, mapping it out on a whiteboard, and building a process out of it rather than just a single job.
Is it possible to trigger a Hudson/Jenkins build only when a certain string appears in a commit message?
For instance, I want to trigger a build that rolls out my application to the dev environment by writing a commit message like:
MYPROJECT-123 Fixed NPE in MyClass.java #deploy:DEV
The general idea is described in this great talk on Continuous Deployment, but I couldn't find any information on how to do this in Hudson.
I would prefer to have this behavior in Hudson itself and not in an external system like commit hooks or web hooks.
I don't know of an out-of-the-box way to parse the SCM message as part of the trigger. You have a couple of options that might achieve what you want, though:
Write your own Hudson SCM plugin
Chain your jobs together into a build pipeline. The first job could simply look for that message in the changelog to determine whether the next build is triggered or not (see the sketch below).
If you are looking at building a pipeline of build jobs, check out the build-pipeline-plugin. http://www.centrumsystems.com.au/blog/?p=121
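For the second option, the gate in the first job could be a shell step along these lines. The repository URL, downstream job name, and token are assumptions; SVN_REVISION is set by Hudson's Subversion plugin, and #deploy:DEV is the marker from the question:

#!/bin/sh
# Read the message of the revision that triggered this build.
MSG=$(svn log -r "$SVN_REVISION" http://svn.example.com/repo/trunk)
case "$MSG" in
*'#deploy:DEV'*)
  echo "Deploy marker found - triggering the DEV rollout job"
  curl -s "http://hudson.example.com/job/deploy-dev/build?token=SECRET_TOKEN" >/dev/null
  ;;
*)
  echo "No deploy marker - nothing to do"
  ;;
esac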
Anyone got a more elegant solution??
Cheers,
Geoff
There is a plugin called Commit Message Trigger Plugin, but it has had only a 0.1 release.
Maybe the easiest way is to use a version control post-commit (or push) trigger to start a Hudson job. You'd want one anyway, to start your build automatically.
I'm trying to figure out how to model my build process in Hudson. At present most of our Hudson builds are somewhat hard-coded, in that the build process is a series of steps and we have one process per branch.
I have another build system that has many active branches, and each build has a series of integration tests which require a suite of machines to execute. As I migrate from the home-grown system to Hudson, I'm not quite sure of the right way to model this while keeping maintenance costs and build times to a minimum.
Here's my basic build:
create workspace
compile, link, package
transfer artifacts to test systems
invoke test harness on multiple systems to handle installation and acceptance tests
collect results
publish results
I'd like the integration part to run on a group of generic machines (perhaps an elastic group) which can handle integration tests for any branch. I want to run as much in parallel as possible to keep my build times low. It looks like the best way to execute in parallel on Hudson is to break the steps up into jobs and use the Parameterized Trigger Plugin to customize the generic jobs.
So, I'd have two main jobs: build and test.
I could have one build job per branch and a generic test job. The build job would use the Parameterized Trigger Plugin to call the test job and provide the location of the build artifacts. The test job would call a series of jobs in parallel, passing down parameters for branch and artifact.
test
  test-client-install (params: artifact location, branch)
  test-server-install (params: artifact location, branch)
  test-run (params: client machine, server machine)
  join - collect results (params: client machine, server machine)
Each of the test-* jobs would pull a slave out of the pool and execute. I'm not quite sure how to tell the slaves running the client and server jobs how to find each other, nor am I sure how to reserve them from the pool and release them back into it.
I guess I could write properties to a common share and have the sub-jobs use that for inter-job communication.
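That idea could look something like this. A sketch; the share path and property names are assumptions, and RUN_ID and BRANCH are assumed to be parameters passed down from the top-level test job so all sub-jobs agree on the same directory:

#!/bin/sh
# Writer side (e.g. test-client-install): record where the client landed.
SHARE="/mnt/build-share/$BRANCH/$RUN_ID"
mkdir -p "$SHARE"
echo "CLIENT_HOST=$(hostname)" >> "$SHARE/hosts.properties"

# Reader side (e.g. test-run): source the same file to find the client.
. "$SHARE/hosts.properties"
echo "Client machine is $CLIENT_HOST"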
Has anyone created this kind of complex setup in Hudson, or is this usually done in another system with which Hudson interacts (Hudson + STAF, with STAF managing resources)?
A few thoughts: the fewer jobs you have, the easier they are to maintain; the more jobs you have, the more flexible you are and the more you can run in parallel. Since you emphasized fast build times, have a look at the Join plugin, which lets you run a few jobs in parallel and, when all of them are finished, go on with another job in the chain.
If your server is big enough, you can experiment a little with the Clone Workspace plugin. It will reduce the need to manually copy files between jobs, as well as the need for the Parameterized Trigger plugin.
Reserving a slave is easy. You can group slaves with labels; in your job you define which label a node must have in order to execute the job. A node can have more than one label, and your job can be bound to more than one label; this way Hudson decides where to put your job depending on availability. If your slaves have more than one executor, they can run two jobs in parallel. I haven't used the Locks and Latches plugin for synchronizing across nodes, so I don't know whether locks are per node or for the whole Hudson installation (latches are not supported yet). If you need to ensure that two jobs run on the same slave, try to combine them; otherwise you will lose Hudson's advantage of distributing your jobs freely over the available nodes.