I'm having trouble setting up incremental builds in Azure DevOps. Workspace cleaning involves too many variables for me to be sure I won't end up doing a full build every time.
I had a thought that I could just always copy the built files to a location outside of the agents' purview, and then copy those files into my release directory before each build.
Would that allow for an incremental build?
You probably can 'fool' the incremental logic but you would be working against the tooling.
For an actual incremental build you need to build in the same place.
In the context of Azure DevOps, that means building the same job of the same pipeline on the same agent. You can't let the build move around between agents or even between work folders of the same agent. (It also means that your agent and the state of the agent work folder must be persistent across the builds.)
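For this to work you also have to keep the pipeline from cleaning the workspace. A minimal sketch of the relevant YAML, assuming the standard self checkout:

steps:
- checkout: self
  clean: false  # keep the previous run's sources and outputs in place

(Leave the job-level workspace cleaning options unset as well, or the work folder gets wiped before the steps run.)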
You can make the job, stage, or pipeline 'sticky' to one dedicated agent by using demands and capabilities.
Decide what will be on your dedicated agent. Will it be the entire pipeline or just a stage of the pipeline or just a job of a stage?
For the dedicated agent, create a capability that represents the build. Using the name of the pipeline (or pipeline+stage or pipeline+stage+job depending) for the name of the capability is handy and self-documenting. You can create the capability in Azure DevOps as a 'user capability' of the agent.
Change your pipeline to add a demand on the custom capability. The demand can test if the custom capability exists. In a YAML pipeline the demands are configured in the pool definition.
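For example, assuming you named the capability Contoso.NightlyBuild (an illustrative name) and the dedicated agent lives in a pool called Default, the demand is just an exists-check:

pool:
  name: Default
  demands:
  - Contoso.NightlyBuild  # only agents that define this capability will be offered the job

Only your dedicated agent carries that capability, so the build always lands on it (and on its persistent work folder).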
This is an easier and less brittle approach than trying to outsmart the incremental logic.
With this approach, all builds will be done in series on the one agent. If the build takes a long time (which may be the motivation for building incrementally) and the build is tied to one agent, the 'throughput' of builds will be limited. If a build's duration is 1 hour, there will be a maximum of 8 builds in an 8-hour work day.
Tying specific builds to specific agents is not the intent in Azure DevOps. For a monolithic legacy codebase where there is no notion of semantic versioning and immutable interfaces, you may have little choice. But a better way is to use package management. Instead of one big build, have multiple smaller builds that produce packages that are used by other builds. The challenge is that packages will not work well without some attention and discipline around versioning and keeping published interfaces and contracts unchanged.
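As a rough sketch of what one of those smaller builds could look like in Azure Pipelines YAML (the project path src/MyLib and the Azure Artifacts feed name MyFeed are placeholders):

steps:
- script: dotnet pack src/MyLib -c Release -o $(Build.ArtifactStagingDirectory)
  displayName: Pack the component as a versioned NuGet package
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'MyFeed'  # placeholder feed; downstream builds restore the package from here

Downstream pipelines then consume the published package version instead of rebuilding the component.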
Is there a way to set up a build's priority in a YAML-based pipeline? There seem to be references to build priority in the Azure DevOps API, but nothing on how to do this via YAML. I thought there might be some docs in the Triggers section, but no.
We need this because we have some fast-building NuGet packages, but they get starved by slow-building pipelines, making the turnaround time for packages painful.
The closest thing I could come up with to work around this is using agent demands in the YAML to separate the build pipelines:
demands:
- Agent.ComputerName -equals XYZ
But this is a bit of a hack and doesn't use agents efficiently.
A way to set this in the UI would be acceptable, but I couldn't seem to find anything.
Recently Azure DevOps introduced the ability to manually specify that a build/release runs next.
This manifests as a Run next button.
So while you can't say "this pipeline always takes priority" yet, you can manually force a specific run to the front of the queue.
If you need a specific pipeline to always take priority, then you likely want to set up a separate agent pool just for those pipelines, or use demands as Leo Liu mentioned.
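For example, the fast package pipelines could declare a pool that only they are allowed to use (the pool name below is made up):

pool:
  name: FastNuGetPool  # hypothetical pool reserved for the quick-building package pipelines

Slow pipelines stay in the general pool, so the package builds never queue behind them.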
Setting build priority in yaml or UI
I'm afraid this feature is not yet supported in Azure DevOps at this moment.
There is a popular user voice suggestion about it; you can upvote it and check the feedback on that ticket.
Currently, as a workaround, do just what you did: set demands in the build definitions to force builds onto specific agents.
Hope this helps.
We are migrating our container building process to Google Container Builder. We have multiple repos using Node or Scala.
Given the current Container Builder feature set, is it possible to cache dependencies between two builds (e.g. node_modules, .ivy, ...)? It's really time (and money) consuming to download everything each time.
I know it's possible to build a custom Docker image with everything packaged within, but we would prefer to avoid that solution.
For example, can we mount a persistent volume for that purpose, as we used to do with DroneIO? Or, even better, automatically, like in Bitbucket Pipelines?
Thanks
GCB doesn't currently support mounting a persistent volume across builds.
In the meantime, the team recently published a document outlining some options for speeding up builds, which might be useful: https://cloud.google.com/container-builder/docs/speeding-up-builds
In particular, caching generated output to Google Cloud Storage and pulling it in at the beginning of your build might help in your case.
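To illustrate that pattern, a cloudbuild.yaml for a Node repo might look roughly like this (the bucket name gs://my-build-cache is a placeholder, and the '|| true' keeps the very first build from failing when no cache exists yet):

steps:
# Restore the dependency cache saved by a previous build, if there is one.
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args: ['-c', 'gsutil cp gs://my-build-cache/node_modules.tar.gz . && tar -xzf node_modules.tar.gz || true']
# npm install runs much faster when node_modules is already populated.
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# Refresh the cache for the next build.
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args: ['-c', 'tar -czf node_modules.tar.gz node_modules && gsutil cp node_modules.tar.gz gs://my-build-cache/']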
I would like to ask about your experience with build servers for embedded systems. What are you using (if any), and what are the good and bad sides?
We are developing mainly for microcontrollers without operating system.
At the moment I'm trying Jenkins, and my build is running. But I have some problems with project structure. If I want all the plugins to work, I need a flat job structure. But we have a few projects developed in parallel, and then the job view starts to get messy.
I've tried folders, but then some plugins stopped working.
I would like to build a pipeline that runs sequentially but has parallel jobs inside, e.g. the commit stage has compile, lint check, style check, and unit tests; all of them can run in parallel, and when all succeed the next stage is executed.
What I need from a build server at this moment:
build pipeline support
user authorization based on LDAP
parallel job execution
hierarchical projects (projects/configurations groups)
reports from xUnit, Lint, Compiler warnings, Robot framework.
slave/agent support, tags for slaves
privileges based on LDAP groups
privileges per group/project
I'm open to any suggestions, open source and commercial.
I was looking at Bamboo; in videos it looks very nice, but I haven't tried it yet.
We have two development teams that are developing different projects. It would be nice to have projects grouped by team, with privileges per group. Members of one group shouldn't be able to modify the builds of the other. But this is more "nice to have" than "must have".
TeamCity
I tried to use TeamCity. Building a build pipeline is easier than in Jenkins: just click Add Step.
One thing I found difficult is running steps in parallel within one configuration. For example, after a commit I would like to run Lint, unit tests, and Compile in parallel to save some time. I found a solution, but it makes the pipeline harder to view and maintain.
TeamCity supports multiple configurations per project, which solves the problem of grouping jobs. I didn't find an option to group projects.
TeamCity is a free, Java-based CI server from JetBrains. We've been using it very successfully (for very different kinds of projects) and I would unreservedly recommend it to you. To each of your requirements:
Build pipelines are configured as a series of steps within a build configuration. A project can have an arbitrary number of configurations, which in turn can have an arbitrary number of steps.
LDAP integration is fully supported.
Build pipelines can be executed in parallel. TeamCity delegates work to Build Agents, which are typically distinct servers that have all the necessary tools (frameworks, etc.) to perform the steps of a build configuration. The free version of TeamCity comes with licenses for three agents, so you could have up to three builds running in parallel. Additional agents can be licensed for a nominal fee.
By 'hierarchical projects' I understand you to mean that the completion of one build pipeline will automatically trigger the start of a subsequent pipeline. This is supported, and build/version numbers can be passed between the stages for consistency.
XUnit has first-class support. Lint/compiler reports can be saved as 'artifacts' of the build for easy review later. Essentially, a lot of frameworks have built-in support in TeamCity, and for everything else you can execute arbitrary shell commands, the output of which can be saved as artifacts or used in subsequent build steps.
Slave/agent support is central to the TeamCity model, as noted above.
All of this is highly configurable and customizable. We've been able to do a lot of diverse, complex things with TeamCity, and it has been totally solid and stable for us. And it looks good, too -- the server dashboard arranges information in an easily-understood way.
Disclaimer: I work for Atlassian so I'm a bit biased.
Configuring your build pipeline in Bamboo is pretty easy to do. Bamboo operates based on a Plan → Stage → Job structure, listed from higher to lower order. Check out the Bamboo Plan Structure.
Every Project in Bamboo holds a collection of Plans. Plans consist of one or more Stages. Stages run sequentially and consist of one or more Jobs. Jobs run in parallel and consist of one or more Tasks (Tasks run sequentially, but can be placed in separate Jobs so that they run in parallel and speed up build time). Agents in Bamboo are machines or services that perform your build steps. An entire Job will execute on a single Agent. You can read more about Agents in the Bamboo documentation. As for slave tags, the ability to tie certain agents exclusively to certain builds or projects is on the short-list for new features.
To answer your other points:
user authorization based on LDAP / privileges based on LDAP groups / project: You can connect to an external LDAP server to manage users and permissions. Bamboo has a groups feature, or if your team is using JIRA you can take advantage of JIRA groups to set up global permissions, plan permissions, and also indicate which users will receive notifications on a plan's build results. Global permissions control who has access to build plans and the Bamboo server, whereas Plan permissions control who can perform specific operations on a Plan and its Jobs.
hierarchical projects (projects/configurations groups): Bamboo does support a parent & child plan structure. There are several ways you can set up triggering for builds. One of them is to base triggers on other builds; that is, a Plan's builds are triggered by preceding successful builds of other specified Plans. Example: if Plan A builds successfully, it will automatically trigger builds of Plans B & C.
reports from xUnit, Lint, Compiler warnings, Robot framework: Bamboo can run any build process that can be started from a command line. Support includes Maven/Maven2, Ant, make, MSBuild, NAnt, Grails, devenv.exe, and any xUnit-compliant framework (JUnit, Selenium, JWebUnit, NUnit, PHPUnit, etc.).
I am a Qt/C++ developer. I would like to set up a continuous integration environment whereby committing source code triggers a build process that builds the code for the 3 platforms I'm using:
Linux
OS X
Win32
If possible, how do I set up such an environment? Any hints or links are welcome.
I've read around about Jenkins, but I can't find any good tutorial for it.
I also suggest Jenkins for several reasons:
It will run on all of the platforms you listed.
It can be configured to start a build when the repository is updated (hint: configure the Job to "Poll SCM" and you won't have to muck with your SCM tool to get it to tell Jenkins to start building).
It provides good support (mostly through plugins) for unit testing. [Your project is doing unit testing, right?]
The price is right
A bigger issue you're going to have is that, AFAIK, Qt doesn't really do cross-compilation for other platforms well. Using Jenkins (and the appropriate plugins), you should be able to solve this.
One method that comes quickly to mind is to have an instance of Jenkins on each platform. Each instance is responsible for building the version for its own platform. At the end of the build, the created artifacts are all put into a common, shared location.
Jenkins supports this feature via plugins for all major source control systems. If you seriously considering using Jenkins (and I would highly recommend it), consider buying John Ferguson Smart's Jenkins: The Definitive Guide.
Two solutions come to my mind:
BuildBot
BuildBot is a highly customizable continuous integration system written in Python. The master component offers a nice web-based GUI to monitor and trigger builds; slave components are put on the target machines (usually virtual machines, but they could be the Mac laptop of one of the developers). The docs are good enough to build up a basic system; customization could be a little tricky (at least it was for me). Using the commit/push hooks provided by VC systems, you can easily activate the master and trigger builds across the slaves. It also supports incremental builds (a must if your project is big).
CDash
Developed by the authors of CMake, CDash is a web application that collects builds coming from across the network. It's not exactly what you asked for, but I think it's worth a try. It's very powerful if you have a team of developers who can continuously submit build results from their machines to the server (and if you use CMake it's almost transparent). You cannot trigger builds from the server as BuildBot does, but you could set up a bunch of VMs with a cron job that checks for changes and, if there are any, performs the build and sends the results to CDash.
Sure, it's possible. Most version control systems are able to execute a custom script on the server side. Some of them (git, for example) have hooks to achieve the same locally. Have a look at git's post-commit hook.
All you need is to create a script that will trigger cross-platform builds.
Most version control systems provide post-commit hooks that let you kick off events like builds. Alternatively, build systems can be configured to regularly poll a source control repository and manage their own build scheduling (this is how we use Jenkins).
Something to bear in mind is how long it will take to do a complete build across platforms, and the typical number of check-ins in that interval. You might find batching check-ins a better way of doing continuous integration builds if you have a fair-sized team or limited build server resources. Otherwise your build system could quickly end up trying to play catch-up.
As for whether it is possible to build on all target platforms, that depends on your tool chain.
I am thinking about writing my own release storage server, and before I do this I'd like to know what people use, to see whether I can integrate something existing instead of creating my own.
So what do you use to store your builds for internal access?
I'm looking for a web app that allows me to upload artifacts and then reference them by various tags, so I can group them together by component or release vehicle. I also want access controls per build based on readiness or promotion level.
I define staging as placing built artifacts on a server for communities of users to access. The artifacts are usually zip files containing either applications or libraries + documentation. The user communities are developers, QA, and service delivery/operations. Basically: the creators, the checkers, and the external users.
We release artifacts individually and as groups in a release vehicle (e.g., release 1.1 contains foo 1.0.1 and bar 1.0.7). Depending on the artifact, we may want to restrict access. Operations shouldn't be able to access pre-release builds, and we may want to track who downloads a limited-availability release.
So, I'm hoping to find a tool that does most of what I want with a good extensible design so I can add in what I don't have.
Anyone know of a good tool for managing builds post-build?
Examples might be:
QuickBuild / LuntBuild
TeamForge
BuildForge
JIRA & Confluence as a set
Sonatype Nexus
home grown
SVN repository using branching to promote builds from dev -> QA -> GA
Peter,
Since you're not getting many answers, I'll let you know about AnthillPro, whose developer, Urbancode, I work for.
OK, disclaimers out of the way: AnthillPro is designed to serve exactly the broad audience that you're discussing - dev, checkers, and operations. Compared to the tools you list, AnthillPro is something like a BuildForge (a key competitor of ours) or QuickBuild, with a tightly integrated artifact repository (like Nexus). So the builds are run, and you can view the results of your builds - and the build artifacts - in a nice web UI. Users with the correct permissions can run a secondary process, like a deployment or test, against prior builds and the artifacts from the selected build.
The goal is to manage the entire build lifecycle from creation, through various testing tools and deployment environments, out through release to production. It's not a big nasty suite; instead, we integrate with tools like Subversion and JIRA to make sure every release has a manifest of source and problem-ticket changes.
Your release packages would map well to AnthillPro's built in dependency system. We often see customers create virtual projects that take little or no source code, but instead either relate or package components into a release bundle.
Where AnthillPro may fall short for you is that, generally, we would allow operations to see pre-release builds. However, you could add rules that would immediately fail or block an attempt by operations to release any build not yet marked as approved for release. AnthillPro's system of statuses allows the team to flag a build with custom markers like "In QA" or "Approved for Release". Combined with rules about running workflows, that should give you the control you need. If some projects are particularly sensitive, you'd just use the role-based security to lock those away.
Hope that gives you something to look into.
-- Eric
My options are
build automation systems like AntHill, QuickBuild, TeamForge, BuildForge
file server
source control server
Maven repository manager (Nexus, Archiva)
My goals are
group builds by multiple criteria (artifact type, release vehicle, stage/phase)
promote build from dev -> qa -> released
provide access control for dev builds, qa ready builds, production ready builds
I'm going to focus on either source control as file server (using SVN) or a Maven repository manager as file server (using Nexus). The rationale is as follows:
minimize effort
minimize cost
use something I can easily extend when needed (because I'm certain my requirements will shift).
Maven use is growing and will eventually be the dominant build technology here.
Thanks for the information.