My company is developing a web application, and we have decided that a multi-tenant architecture would be most appropriate for isolating individual client installs. An install would represent an organization (a nonprofit, for example), not an individual user's account. Each install would consist of several Cloud Run services bucketed into its own GCP project.
We want to take advantage of Cloud Build's GitHub support to deploy from our main branch in GitHub to each individual client install. So far, I've gotten this setup working across two GCP projects: Cloud Build runs in each project individually and deploys to that project's Cloud Run services at roughly the same time and with roughly the same duration. (Cloud Build does some processing unique to each client install, so the build processes in each install are not performing redundant work.)
My specific question is: can we scale this deployment technique up? Is there any constraint preventing us from using Cloud Build in multiple GCP projects to deploy to our client installs, or will we hit issues as we continue to add GCP projects? I know this technique works for 2 installs, but will it work for 20, or 200?
You are limited to 10 concurrent builds per project, but if you run one Cloud Build per project, there is no limitation and no known issues with this approach.
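For what it's worth, scaling this is mostly a matter of repeating the same trigger definition in every client project. Here is a minimal sketch with gcloud; the project IDs, repo owner/name, and config path are placeholders, and it assumes the GitHub repo is already connected to each project:

```
# Create an identical GitHub push trigger in every client project.
# Project IDs, repo details, and config path below are placeholders.
for project in client-install-001 client-install-002 client-install-003; do
  gcloud builds triggers create github \
    --project="$project" \
    --repo-owner="my-org" \
    --repo-name="my-app" \
    --branch-pattern="^main$" \
    --build-config="cloudbuild.yaml"
done
```

Nothing in the loop depends on the number of projects, so 20 or 200 installs is just a longer list.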
We currently have a dev environment in a GCP project, using Google Deployment Manager (GDM) templates and other resources, along with a repo in Bitbucket. Whenever we push changes to Bitbucket, it builds and deploys to this dev environment. We've now decided to add a new GCP project as a test environment, and we want to deploy to it automatically, just like the dev environment. Our preference would be to deploy to it from the Cloud Build execution in the dev project. Can you suggest any guidelines for setting things up in one place so that deployments go out to multiple projects as multiple environments automatically?
You can use Terraform to achieve this.
There's a lot of information on how to start here.
However, I would suggest having projects in separate deployments. This way you limit the blast radius and protect production from errors occurring in other environments.
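To sketch what "separate deployments" could look like in practice, one common pattern is a single Terraform configuration applied once per project, each with its own state. The workspace names and var files below are assumptions, not a prescription:

```
# One apply per environment, each with its own state and project variable.
terraform workspace new test             # or: terraform workspace select test
terraform apply -var-file="test.tfvars"  # deploys into the test GCP project

terraform workspace select dev
terraform apply -var-file="dev.tfvars"   # deploys into the dev GCP project
```

Because each workspace keeps separate state, a bad apply in test cannot touch the dev project's resources.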
You need separate calls for separate projects. Like almost all Google API resources, deploymentmanager/deployments lives inside a project (https://www.googleapis.com/deploymentmanager/v2/projects/[PROJECT]/global/deployments), so you cannot deploy to multiple projects in one call.
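Concretely, deploying the same template to two projects means two calls; a sketch with placeholder deployment and project names:

```
# Same config, two separate calls, because each deployment lives in a project.
gcloud deployment-manager deployments create my-stack \
  --config=config.yaml --project=my-dev-project
gcloud deployment-manager deployments create my-stack \
  --config=config.yaml --project=my-test-project
```

Your Cloud Build in the dev project can run both calls, provided its service account has the right roles in the test project.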
I am using TeamCity (version 9.1.5, if that matters) and I am trying to figure out how to create a trigger that deploys the project to a server, or whether there is a way to deploy a project to a server without using a trigger in TeamCity.
It's a very broad question, but I will share the approaches I have used in a couple of scenarios:
1) To deploy when a code check-in is performed, I have set up a build configuration that does the deployment, added the build configuration that does the compiling and packaging as a snapshot and artefact dependency, and triggered it with a Finish Build Trigger: https://confluence.jetbrains.com/display/TCD9/Configuring+Finish+Build+Trigger
2) To deploy at a given time of day, but only when new code has been checked in, I have set up a build configuration as above but triggered it with a Schedule Trigger (https://confluence.jetbrains.com/display/TCD9/Configuring+Schedule+Triggers), making sure to select the dependent build in the Build Changes section.
As for how to perform the deployment itself, there are many options. I have used WebDeploy for ASP.NET applications and MSI packages executed by remote PowerShell scripts for Windows services, but other options are available depending on the technology you are working with.
JetBrains provide an end-to-end example for ASP.NET in their online documentation; search for "Continuous Delivery to Windows Azure Web Sites (or IIS)".
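As a rough illustration of the WebDeploy option mentioned above, the deployment step often boils down to a single msdeploy call. Everything here (server, site, package, credentials) is a placeholder:

```
# Sync a previously built Web Deploy package to a remote IIS site.
# All names and credentials below are placeholders; one line in cmd.exe.
msdeploy.exe -verb:sync -source:package="MyApp.zip" -dest:auto,computerName="https://web01:8172/msdeploy.axd?site=MyApp",userName="deploy",password="secret",authType="Basic" -allowUntrusted
```

In TeamCity this would typically be a command-line build step in the deployment configuration described in option 1.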
We have a load-balanced setup in AWS with two instances. We do pretty frequent code updates, using SVN. I need to know how easy it is to push code changes across all the instances in our cluster. Can we simply take 'snapshots' and create new volumes for the instances each time? Or...?
I would not do updates via EBS snapshots. Think of an EBS volume as a hard disk: you would not swap out your hard disk every time you have a software update.
Since you have your code in a version control system, code updates should be quite simple: log in to your (multiple) servers and do a git pull or svn update. This fetches the latest code onto your servers. Depending on the type of application, you may have to do some other tasks afterwards: running build scripts, emptying caches, etc.
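In its simplest form, that is just a loop over your hosts; the host names, user, and path here are made up:

```
# Manual update: one SSH round trip per server.
for host in web01 web02; do
  ssh deploy@"$host" "cd /var/www/app && svn update"
done
```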
The problem is that this kind of setup does not scale well. If you have n servers, you have to log in and run the command n times. It therefore makes sense to look into remote management tools that let you do this in one step. Many of these tools also give you a complete configuration management stack: you define a set of recipes or tasks (installed packages, configuration files, fetching the latest code, necessary build steps) for each of your servers, and when you boot a new server it fetches the latest version of its configuration and sets itself up.
Popular configuration management tools include Puppet and Salt. Both include remote execution, which should make publishing your code base easier: you fire one command on your master server, and it executes the task on all of its minions/slave servers automatically.
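With Salt, for example, the same update becomes one command on the master; the target pattern and path are assumptions:

```
# One command on the Salt master fans out to all matching minions.
salt 'web*' cmd.run 'cd /var/www/app && svn update'
```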
I have a question regarding build servers for .NET projects. Currently I'm using Team Build in conjunction with TFS 2010 to do automated builds in the .NET world. Some older projects are built using plain old MSBuild scripts.
To get rid of the administrative effort, I'm currently moving my sources to GitHub. GitHub, like many other sites, offers service hooks to trigger build servers for automated builds such as CI or nightly builds.
Sure, I could use TeamCity on-premises and dynamically create build agents in Windows Azure using VM Role and virtual disks, but I think this hybrid solution is a little bit moronic.
So what are your thoughts about the following architectural idea?
Let's say you're using GitHub as your source control platform. When sources are committed to the repository, an Azure WebRole hosting a WCF service is triggered.
The WebRole itself will just use the Azure API to fire up a new instance of a custom Azure VMRole.
The VMRole itself will use some kind of build script, such as Rake or MSBuild, so that as few developer tools as possible need to be installed on the build agent. After building the entire project, the artifacts are published to Azure Blob Storage, and the WebRole hosting the WCF service is called again; this time, the WebRole terminates the build agent.
With such a setup, you could minimize the cost of the build agent and build nearly any kind of project, as long as you can install the elements required for the build using PowerShell.
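To make the build-agent side concrete, here is a sketch of what the VMRole might run, assuming a bash-compatible shell on the agent. The solution name is a placeholder, and upload-artifacts.ps1 is a hypothetical PowerShell helper that would push the output to Blob Storage and call back into the WCF service:

```
# Minimal build-agent run; all names below are hypothetical.
msbuild MySolution.sln /t:Build /p:Configuration=Release
# Hypothetical helper: uploads bin/Release to Blob Storage, then notifies
# the WCF service so it can tear this VMRole instance down.
powershell -File upload-artifacts.ps1 -Source bin/Release -Container builds
```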
So, bottom line: what are your thoughts on this architecture? Any other ideas? Is there an existing service offering such a solution?
Thorsten
Have you looked at https://appharbor.com? I know a number of people who are using it to do exactly what you are describing.
Check out Team Foundation Service as it can do the following:
Continuous Delivery to Azure
Deploy to production on Windows Azure with two clicks from Visual Studio, or automatically as part of your build process.
Just found this one: http://www.appveyor.com/ - AppVeyor is also free for open-source projects.
Quite a few build and CI systems support steps for pushing build output to Azure, but I haven't seen any that can actually run on Azure (or EC2). Ideally, I would like to be able to spin up an arbitrary number of instances (depending on the number of pending submits) to handle the actual build + quality gates (unit tests, FxCop, other static analysis tools) + source repository check-in process.
Are there existing tools which can do this, or has anyone built something which they can discuss?
Thanks!
[Edit: I found this question, which is quite similar, but it didn't have any informative answers, so I'll keep my question alive.]
If you're using Git or Mercurial for source control, AppHarbor might be what you're looking for. It's a CI build/deploy environment that runs exclusively in the cloud (EC2), and can deploy build output to Azure.
Here are some links for reference:
http://sourcecodebean.com/archives/appharbor-heroku-for-net/987
http://lostechies.com/chrismissal/2011/03/12/using-appharbor-for-continuous-integration
http://haacked.com/archive/2011/05/12/making-let-me-bing-that-for-you-open-source.aspx
http://appharbor.com/page/pricing
The open source Jenkins CI server has an EC2 plugin that will spin up EC2 instances automatically depending on your build load. I couldn't find anything equivalent for Azure, but I highly recommend Jenkins: it's easy to configure, well maintained, and has stacks of features.
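For reference, installing the plugin is a one-liner with the Jenkins CLI (the server URL is a placeholder); the EC2 cloud itself (AMI, labels, instance caps) is then configured under Manage Jenkins:

```
# Install the Amazon EC2 plugin and restart so Jenkins can provision agents.
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ install-plugin ec2 -restart
```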
Continuous integration on Windows Azure: http://code.google.com/p/cassis/ (works over Mercurial).
Disclaimer: work produced by my first-year CS students.
TeamCity also has support for this: http://www.jetbrains.com/teamcity/features/amazon_ec2.html