GOCD - Multiple OS GOCD Agents and pipelines

I am trying to build an application where some tasks should execute on a Windows OS and some on Linux (e.g. CentOS/Ubuntu). While designing a pipeline, how can I ensure that all the Windows-related tasks/jobs go to a Windows agent and all the Linux-related jobs go to the appropriate Linux OS?
What would be the ideal approach to achieve this?

[Just to add more information]
You generally enable/add Go agents (which are installed on different operating systems) via the "AGENTS" tab in the Go dashboard, and you can tag a Go agent with multiple resources using the "RESOURCES" button on that tab.
When you create a job in a stage (in a pipeline), you can list the resources (and thereby which Go agents are eligible) in the "Resources" field of the "Job Settings" tab.
Based on the resources listed, the Go server will allocate the job to a matching Go agent.
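For illustration, here is roughly what that looks like in cruise-config.xml; the pipeline, material, and resource names below are made up, and element order can vary between GoCD versions:

    <!-- Hypothetical excerpt from cruise-config.xml. The job below will
         only be scheduled on agents tagged with the "windows" resource. -->
    <pipeline name="build-app">
      <materials>
        <git url="https://example.com/app.git" />
      </materials>
      <stage name="build">
        <jobs>
          <job name="windows-build">
            <tasks>
              <exec command="msbuild">
                <arg>App.sln</arg>
              </exec>
            </tasks>
            <resources>
              <resource>windows</resource>
            </resources>
          </job>
        </jobs>
      </stage>
    </pipeline>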
Hope this helps.

I have several pipelines that must run on specific machines. I accomplish this using two methods.
The first is using environments. I assign specific pipelines to a specific environment (e.g. Development, QA, Production). I then assign an agent that has the ability to execute Development-related tasks/jobs to the Development environment, and agents that can execute QA-related tasks/jobs to the QA environment.
This ensures that pipelines that reside in the Development environment are executed only by agents that are also assigned to the Development environment.
If your agent has the ability to execute both Development and QA tasks, then assign the agent to both the Development and QA environments.
You can use this same concept to ensure that only Windows agents reside in a particular environment.
For example, you can have a Development-Win environment to which you assign the pipelines and agents that should handle Windows builds, and another called Development-Linux for the agents and pipelines that should handle Linux builds.
Since I have agents that can build for multiple environments (Dev, QA, and Prod), I also use the "Resources" that are assigned under the "Job Settings" tab. If the job needs to execute on a Windows host, then I will assign a resource of "win2012". This tells me that the agent must have the "win2012" resource in order to execute this particular task.
I assign the agent the resource of "win2012" to indicate that it is a Windows 2012 machine.
With the combination of the environment configuration and the resource settings the job will only be executed by an agent that meets both the resource requirements and is in the proper environment.
If you have multiple resource requirements for the task, you can assign each of them using a comma-separated list. Some of the resources that I use are msbuild, subversion, and sqlcmd. These tell me that the agent must have access to msbuild (to compile the code), subversion (to acquire it from SVN), and sqlcmd (to execute SQL queries against SQL Server). I then mark the agents that have these particular resources. Only an agent that meets all of the resource requirements will be assigned the task.
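As a sketch of how the two mechanisms meet in cruise-config.xml (the environment name, agent UUID, and pipeline name below are placeholders), the environment ties specific agents and pipelines together, while the job's resource list narrows the match further:

    <!-- Hypothetical excerpt from cruise-config.xml: only agents assigned
         to Development-Win AND tagged with every resource listed on a job
         (e.g. win2012, msbuild, subversion, sqlcmd) can pick that job up. -->
    <environments>
      <environment name="Development-Win">
        <agents>
          <physical uuid="agent-uuid-here" />
        </agents>
        <pipelines>
          <pipeline name="windows-build" />
        </pipelines>
      </environment>
    </environments>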

Would like to get build information from Google Cloud Profiler

I'm using Google Cloud Profiler (located at https://console.cloud.google.com/profiler) and would like to know how my profiling data changes across different builds of my application.
One way to do that would be to check the range of dates during which a particular commit was running in production, but that's time-consuming because I have to:
Get the start date/time of a release and determine the date/time of the next release
Set those dates manually in the profiler interface from the link above
That's really not terrible, but it'd be great to be able to set a BUILD_ID environment variable, like I can in Cloud Build, and then access that from the UI. Is something like this possible? Or is my approach the best way to do this at the moment?
Comparing across service versions would likely be a simpler and more precise way to do this (as opposed to using the time interval to select for profiles). To compare across service versions, it is necessary that the profiling agents set the service version.
The service version can be specified in the configuration passed to the agent (for the Go, Python, or Node.js agent) or via the -cprof_service_version flag (for the Java agent). If one is setting the service version via the configuration passed to the agent (applicable to the Go, Python, and Node.js agents), it may be convenient to use a flag or command-line argument to set the service version so that the source code won't need to be updated with each new version.
If one is running on Knative or App Engine standard, the service version should be auto-populated. These environments set the K_REVISION and GAE_VERSION environment variables (respectively), and the profiling agents (for all supported languages) use these environment variables to populate the service version. If one is running in another environment and modifying the source code is inconvenient or infeasible, one can set either the K_REVISION or GAE_VERSION environment variable in the environment running the application with the agent enabled to specify the service version.
My understanding is that the BUILD_ID is available at build time, but not at run time, so I don't know that it's possible for agents to use that directly.
(Disclosure: I work on Cloud Profiler at Google)
You can set the service version for this purpose. Please refer to the agent documentation for how to set it for supported languages.
For example, this shows using ServiceVersion for Go services.
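As a minimal sketch for a Go service (the service name and the flag are hypothetical; the version would typically be injected by CI at build or deploy time):

    package main

    import (
        "flag"
        "log"

        "cloud.google.com/go/profiler"
    )

    func main() {
        // Hypothetical flag so the source doesn't change per release;
        // a CI system could pass its build/version identifier here.
        version := flag.String("service_version", "1.0.0", "version reported to Cloud Profiler")
        flag.Parse()

        if err := profiler.Start(profiler.Config{
            Service:        "my-service", // hypothetical service name
            ServiceVersion: *version,     // what the Profiler UI compares across
        }); err != nil {
            log.Fatalf("failed to start profiler: %v", err)
        }

        // ... rest of the application ...
        select {} // placeholder to keep the process alive in this sketch
    }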

How to replicate code changes across multiple AWS instances?

We have a load balanced setup in AWS with two instances. We do pretty frequent code updates, utilizing SVN. I need to know how easy it is to update the code changes across all the instances in our cluster. Can we simply do 'snapshots' and create new volumes each time for the instances?...or?...
I would not do updates via EBS snapshots. Think of EBS volumes as a hard disk - you would not swap out your hard disk every time you have a software update.
As you have your code in a version control system, code updates should be as simple as logging in to your (multiple) servers and doing a git pull or svn update. This fetches the latest code onto your servers. Depending on the type of application, you may have to do some other tasks afterwards: running build scripts, emptying caches, etc.
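Automating that per-server login is easy to sketch. For example, in Go (the hostnames, the key-based SSH setup, and the working-copy path are all assumptions):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical hosts; in practice these would come from config.
        hosts := []string{"web1.example.com", "web2.example.com"}

        for _, h := range hosts {
            // Assumes key-based SSH access and an SVN working copy at /var/www/app.
            cmd := exec.Command("ssh", h, "svn update /var/www/app")
            out, err := cmd.CombinedOutput()
            fmt.Printf("%s:\n%s\n", h, out)
            if err != nil {
                fmt.Printf("update failed on %s: %v\n", h, err)
            }
        }
    }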
The problem is that this kind of setup does not scale well. If you have n servers, you will have to log in and run this command n times, so it makes sense to look into remote management tools that let you do it in one step. With a lot of these tools you also get a complete configuration management stack: you define a set of recipes or tasks (installed packages, configuration files, fetching the latest code, necessary build steps) for each of your servers, and when you boot up a new server it fetches the latest version of its configuration and installs itself.
Popular configuration management tools include Puppet and Salt. Both have remote execution built in, which should make it easier to publish your code base: you only have to fire one command on your master server, and it automatically executes the task on all its minions/slave servers.

Build (CI) server for embedded software

I would like to ask about your experience with build servers for embedded systems: what are you using (if any), and what are the good and bad sides?
We are developing mainly for microcontrollers without operating system.
At the moment I'm trying to use Jenkins, and my build is running. But I have some problems with project structure. If I want all plugins to work, I need a flat job structure. But we have a few projects that are developed in parallel, and then the job view starts to get messy.
I've tried folders, but then some plugins stopped working.
I would like to build a pipeline that runs sequentially but has parallel jobs inside, e.g. a commit stage with compile, lint check, style check, and unit tests. All of them can run in parallel, and when all are successful the next stage is executed.
What I need from Build server at this moment:
build pipeline support
user authorization based on LDAP
parallel job execution
hierarchical projects (projects/configurations groups)
reports from xUnit, Lint, Compiler warnings, Robot framework.
slave/agents support, tags for slave
privileges based on ldap groups
privileges per group/project
I'm open to any suggestions, open source and commercial.
I was looking at Bamboo; in videos it looks very nice, but I haven't tried it yet.
We have two development teams that are developing different projects. It would be nice to have projects grouped by team, with privileges per group; members of one group shouldn't be able to modify builds of the other. But that is more "nice to have" than "must have".
TeamCity
I tried to use TeamCity. Building a build pipeline is easier than in Jenkins: just click "Add Step".
One thing I found difficult is running steps in parallel within one configuration. For example, after a commit I would like to run Lint, unit tests, and compile in parallel to save some time. I found a solution, but it makes the pipeline harder to view and maintain.
TeamCity supports multiple configurations per project, which solves the problem of grouping jobs. I didn't find an option to group projects, though.
TeamCity is a free, Java-based CI server from JetBrains. We've been using it very successfully (for very different kinds of projects) and I would unreservedly recommend it to you. To each of your requirements:
Build pipelines are configured as a series of steps within a build configuration. A project can have an arbitrary number of configurations, which in turn can have an arbitrary number of steps.
LDAP integration is fully supported.
Build pipelines can be executed in parallel. TeamCity delegates work to Build Agents, which are typically distinct servers that have all the necessary tools (frameworks, etc.) to perform the steps of a build configuration. The free version of TeamCity comes with licenses for three agents, so you could have up to three builds running in parallel. Additional agents can be licensed for a nominal fee.
By 'hierarchical projects' I understand you to mean that the completion of one build pipeline will automatically trigger the start of a subsequent pipeline. This is supported, and build/version numbers can be passed between the stages for consistency.
XUnit has first-class support. Lint/compiler reports can be saved as 'artifacts' of the build for easy review later. Essentially, a lot of frameworks have built-in support in TeamCity, and for everything else you can execute arbitrary shell commands, the output of which can be saved as artifacts or used in subsequent build steps.
Slave/agent support is central to the TeamCity model, as noted above.
All of this is highly configurable and customizable. We've been able to do a lot of diverse, complex things with TeamCity, and it has been totally solid and stable for us. And it looks good, too -- the server dashboard arranges information in an easily-understood way.
Disclaimer: I work for Atlassian so I'm a bit biased.
Configuring your build pipeline in Bamboo is pretty easy to do. Bamboo operates based on a Plan → Stage → Job structure, listed from higher to lower order. Check out the Bamboo Plan Structure.
Every Project in Bamboo holds a collection of Plans. Plans are comprised of one or more Stages. Stages run sequentially and are comprised of one or more Jobs. Jobs run in parallel and are comprised of one or more Tasks (tasks run sequentially, but can be placed in separate Jobs so that they run in parallel and speed up build time). Agents in Bamboo are machines or services that perform your build steps. An entire Job will execute on a single Agent. You can read more about Agents here. As for slave tags, the ability to make certain agents exclusively tied to certain builds or projects is on the short-list for new features.
To answer your other points:
user authorization based on LDAP/privileges based on ldap groups/project: You can connect to an external LDAP server to manage users and permissions. Bamboo has a groups feature or if your team is using JIRA you can take advantage of JIRA groups to set up global permissions, plan permissions, and also indicate which users will receive notifications on a plan’s build results. Global permissions control who has access to build plans and the Bamboo server whereas Plan permissions control who can perform specific operations on a Plan and its Jobs.
hierarchical projects (projects/configurations groups): Bamboo does support parent & child plan structure. There are several ways you can set up triggering for builds. One of them is to base triggers on other builds, that is, Plan builds are triggered by preceding successful builds of other plans or if other specified plans are building successfully. Example: If Plan A builds successfully it will automatically trigger builds of Plans B & C.
reports from xUnit, Lint, Compiler warnings, Robot framework: Bamboo can run any build process that can be started from a command line. Support includes Maven/Maven2, Ant, make, MSBuild, NAnt, Grails, devenv.exe, and any xUnit-compliant framework (JUnit, Selenium, JWebUnit, NUnit, PHPUnit, etc...).

Triggering builds across different TeamCity instances

We have multiple groups running their own TeamCity setups inside the firm. My group provides a set of generic libraries that other project groups use in their projects, and we use TeamCity to push versions of our libraries to production. What I need is a way to automatically trigger builds on the other groups' CI systems that depend on our libraries once we push a new version to production. I already have the scripts to upgrade to the latest version etc. ready. Right now the process is manual; I would like to automate it so that a new build of the dependent projects is triggered once we release a version to production. I am looking for a way to push the trigger notification across TeamCity instances.
You can trigger TeamCity builds using an HTTP request so you could modify your build script to make the required requests at the end of the build. This does have the downside that you need to hardcode the list of builds that need to be triggered on the remote servers into your build script.
The syntax for the HTTP request is:
http://<user name>:<user password>@<server address>/httpAuth/action.html?add2Queue=<build type Id>
For full details take a look at this page of the TeamCity documentation:
Accessing Server by HTTP
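For instance, a small Go program run as a final build step could issue the request; the server URL, credentials, and build type ID below are placeholders:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // Placeholder values; point this at the remote group's TeamCity server.
        url := "http://teamcity.example.com/httpAuth/action.html?add2Queue=MyProject_DependentBuild"

        req, err := http.NewRequest("GET", url, nil)
        if err != nil {
            log.Fatal(err)
        }
        req.SetBasicAuth("username", "password") // httpAuth endpoints use HTTP basic auth

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status %s: %s\n", resp.Status, body)
    }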

Amazon EC2 multiple instances with SVN app

Heya,
Quick question.
I've got multiple instances on EC2 with a load balancer between them. I use an SVN app to push code to my live environment at will.
With the multiple EC2's, how would I push a codebase to all of them at once?
Any thoughts/links would be appreciated.
There are a few different ways to do this.
If You Are Using Elastic Load Balancers
Write a script that:
Removes a machine from the pool
Updates the code from the SVN repository
Re-adds the machine to the pool
Repeats for any additional machines
You could also get fancy and remove one machine, update it, then remove all the other machines and update them together, if you're concerned about consistency. A rough sketch of the basic loop follows.
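This sketch, in Go, shells out to the AWS CLI for a classic ELB; the load balancer name, instance IDs, hostnames, SVN path, and the health-check wait are all assumptions:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // run executes a command and fails loudly; a real script would
    // handle errors per host instead of aborting.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        lb := "my-load-balancer" // hypothetical classic ELB name
        instances := map[string]string{
            // instance ID -> hostname; placeholder values
            "i-0123456789abcdef0": "web1.example.com",
            "i-0fedcba9876543210": "web2.example.com",
        }

        for id, host := range instances {
            // 1. Remove the machine from the pool.
            run("aws", "elb", "deregister-instances-from-load-balancer",
                "--load-balancer-name", lb, "--instances", id)

            // 2. Update the working copy from SVN (assumes SSH access and path).
            run("ssh", host, "svn update /var/www/app")

            // 3. Re-add the machine to the pool.
            run("aws", "elb", "register-instances-with-load-balancer",
                "--load-balancer-name", lb, "--instances", id)

            // Give the ELB health checks a moment; the interval is a guess.
            time.Sleep(30 * time.Second)
            fmt.Println("updated", host)
        }
    }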
If You Are Using a Custom Load Balancing Application
Look into Capistrano. You don't need to use it with Ruby/Rake -- you can write custom cap files that can do parallel deploys.
How about Vlad or Fabric for code deployment?
We use Scalr. It is available as a service (Scalr.net) or you can run it yourself (it is open source, though the source in the Google Code repository is sometimes a little behind the version the service uses).
Basically, Scalr has a global scripting feature whereby you can specify a script (e.g. bash, PHP, anything with a #! line) and trigger it to run on all instances of a given 'role' (e.g. web instance). In our case we have a script that just does svn checkout or svn update as appropriate. Scalr supports periodic scheduling of scripts, so in the dev environment I run it every 5 minutes to keep dev in sync with SVN, but obviously I trigger it manually for production.
(I have the script take a param specifying the SVN branch to use.)