I am trying to set up my build and release pipelines in Azure DevOps to publish my WebForms application to a file system on a VM on my company's network. Currently this is accomplished through Visual Studio by going to Build > Publish ...
I previously set up a build pipeline to catch build issues, but now I want to actually publish builds from the cloud automatically whenever the master branch is updated.
I have an agent installed on a local VM, and I can get Azure DevOps to run on this agent, but I'm confused about what to do next. I've tried playing around with the parameters of the Build Solution task, the MSBuild task, and so on, but nothing actually publishes.
The farthest I've gotten is publishing the build to a build folder on the agent, but this folder contains only the solution and associated files, not the built output that would be published to the file system location.
I'm trying to understand how to actually publish the solution once it has been built and placed on the agent.
It also doesn't help that I can't find good resources on the build variables that the default tasks use.
The build pipeline is responsible for building your software and publishing the build output as artifacts inside Azure DevOps for release pipelines to consume. The release pipeline is responsible for deploying your website to a test/production environment.
First, ensure that your build publishes its artifacts (the build output) inside Azure DevOps for the release pipeline. This is done with the Publish Build Artifacts task.
"Path to publish" is the directory that contains your build output (the files that will be copied to the VM); you can use relative paths under $(Build.ArtifactStagingDirectory). You can leave "drop" as the artifact name, since that is just the name of the resulting zip file, and use "Azure Pipelines" as the artifact publish location.
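In YAML form, that publish step might look something like the sketch below (the task name and inputs follow the Azure Pipelines task catalog; if you use the classic editor, the same fields appear as form inputs):

```yaml
steps:
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'  # directory holding your build output
    ArtifactName: 'drop'                                # name shown under Artifacts on the build
    publishLocation: 'Container'                        # "Azure Pipelines" in the classic UI
```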
Now your build creates artifacts for release pipelines. You can verify this by opening one of the builds in the build history; the top right corner should contain an Artifacts link:
If it's not there, your build is not publishing artifacts correctly. Check the build log for more details!
Now, if you have an Azure DevOps agent running inside the VM, the deployment is an easy task:
1. Stop the IIS website with the IIS Web App Manage task.
2. Copy the artifacts into the desired folder (for example c:\wwwroot) with the Copy Files task.
3. Start the IIS website with the IIS Web App Manage task.
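The three steps above can be sketched as a deployment job in YAML. Treat this as a hedged sketch: the task names come from the Azure Pipelines task catalog, while the website name, artifact name, and target folder are placeholders you would replace with your own:

```yaml
steps:
- task: IISWebAppManagementOnMachineGroup@0   # "IIS Web App Manage": stop the site
  inputs:
    IISDeploymentType: 'IISWebsite'
    ActionIISWebsite: 'StopWebsite'
    StartStopWebsiteName: 'Default Web Site'
- task: CopyFiles@2                           # copy the artifact contents to the web root
  inputs:
    SourceFolder: '$(System.ArtifactsDirectory)/drop'
    Contents: '**'
    TargetFolder: 'c:\wwwroot'
- task: IISWebAppManagementOnMachineGroup@0   # start the site again
  inputs:
    IISDeploymentType: 'IISWebsite'
    ActionIISWebsite: 'StartWebsite'
    StartStopWebsiteName: 'Default Web Site'
```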
Make sure you have linked the build to the release pipeline; when you create the release pipeline for the first time, it will ask you to configure this.
My company is developing a web application and has decided that a multi-tenant architecture would be most appropriate for isolating individual client installs. An install would represent an organization (a nonprofit, for example), not an individual user's account. Each install would be several Cloud Run applications bucketed into an individual GCP project.
We want to be able to take advantage of Cloud Build's GitHub support to deploy from our main branch in GitHub to each individual client install. So far, I've been able to get this setup working across two individual GCP projects, where Cloud Build runs in each project individually and deploys to the individual GCP project Cloud Runs at roughly the same time and with the same duration. (Cloud Build does some processing unique to each client install so the build processes in each install are not performing redundant work)
My specific question is can we scale this deployment technique up? Is there any constraint preventing using Cloud Build in multiple GCP projects to deploy to our client installs, or will we hit issues when we continue to add more GCP projects? I know that so far this technique works for 2 installs, but will it work for 20, 200 installs?
You are limited to 10 concurrent builds per project, but if you run one Cloud Build per project, there is no limitation or known issue with this approach.
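Since each project runs its own trigger against the shared main branch, the per-project build can be the same `cloudbuild.yaml` checked into the repository. A hedged sketch (service name, image path, and region are placeholders; `$PROJECT_ID` and `$COMMIT_SHA` are standard Cloud Build substitutions, so the same file deploys to whichever project the trigger runs in):

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'app',
         '--image', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA',
         '--region', 'us-central1']
images: ['gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
```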
I am new to VSTS build and deploy and I am struggling with it.
I have a solution that contains an ASP.NET Core Web API and an ASP.NET web project.
I have done my build and now I want to deploy the build to an on premise web server.
When I look at my artifacts, everything looks OK.
So when I set up a deploy definition, I start with an empty environment and add a task. Given that I want to move the artifacts to an on-premise web server, it looks like I should be using the Windows Machine File Copy task. But when I do, I find that I do not have access to the drop folder. How do I fix this (and have I selected the correct task)?
You're using the hosted agent. The hosted agent can't deploy to an on-prem server -- it has no network route.
You can either use Deployment Groups (the agent is installed on your target machine and talks directly to VSTS), or you can install your own on-prem build/release server, then push the bits to the target machine using the Windows Machine File Copy task.
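If you go the second route (your own on-prem agent that has a network route to the target), the copy step might be sketched like this. The task name and inputs follow the Azure Pipelines task catalog; the machine name, credential variables, and paths are placeholders:

```yaml
steps:
- task: WindowsMachineFileCopy@2
  inputs:
    SourcePath: '$(System.ArtifactsDirectory)/drop'
    MachineNames: 'webserver01.corp.local'   # target machine reachable from the agent
    AdminUserName: '$(deployUser)'           # secret pipeline variables, not literals
    AdminPassword: '$(deployPassword)'
    TargetPath: 'c:\inetpub\wwwroot\myapp'
```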
I use VSTS to build a project once changes are checked in from a Git repository. That build gets stored in VSTS storage. Is there an easy way to access any retained build and copy it out for some other purpose? Say I've done 5 build versions, all retained in the VSTS history, and I need a copy of the 2nd build for something. Can I get to that build folder and copy it?
Currently, in one of the build steps, it gets copied to an area called
$(Build.ArtifactStagingDirectory)\xxxxx.zip
How can I get a copy of that? I'm trying to avoid having to remote into the build agent and dig up the files.
Use the Publish Build Artifacts task. That will upload it as a linked artifact to the build, which you can then download either via clicking on the "Download" link or via the VSTS REST API.
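For the REST API route, the request is a GET against the build's artifacts endpoint with `$format=zip`, authenticated with a personal access token as basic auth. A small sketch of building that request (the organization, project, and PAT are placeholders; the `dev.azure.com` host is the current form of the old `{account}.visualstudio.com` URLs):

```python
import base64
import urllib.parse

def artifact_download_url(organization, project, build_id, artifact_name="drop"):
    """Build the REST endpoint that returns the named build artifact as a zip."""
    base = f"https://dev.azure.com/{organization}/{urllib.parse.quote(project)}"
    query = urllib.parse.urlencode({
        "artifactName": artifact_name,
        "api-version": "5.0",
        "$format": "zip",   # ask the service for a zip instead of artifact metadata
    })
    return f"{base}/_apis/build/builds/{build_id}/artifacts?{query}"

def auth_header(pat):
    """A PAT is sent as basic auth with an empty user name."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

You would then fetch `artifact_download_url(...)` with any HTTP client, passing `auth_header(...)`, and save the response body as a zip. The build ID for "the 2nd build" is visible in the build history URL.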
I am using TeamCity (version 9.1.5 if that matters) and I am trying to figure out how to create a trigger that deploys the project to a server. Or maybe there is a way to deploy a project to a server without using a trigger on TeamCity.
It's a very broad question, but I will share the approaches I have used in a couple of scenarios:
1) To deploy when a check-in is performed, I have set up a build configuration that does the deployment, added the build configuration that does the compiling & packaging as a snapshot and artifact dependency, and triggered it with a Finish Build Trigger: https://confluence.jetbrains.com/display/TCD9/Configuring+Finish+Build+Trigger
2) To deploy at a given time of day, but only when new code has been checked in, I have set up a build configuration as above but triggered it with a Schedule Trigger (https://confluence.jetbrains.com/display/TCD9/Configuring+Schedule+Triggers), making sure to select the dependent build in the Build Changes section.
As for how to perform the deployment itself, there are many options. I have used WebDeploy for ASP.NET applications and MSI packages executed by remote PowerShell scripts for Windows services, but other options are available depending on the technology you have.
JetBrains provides an end-to-end example for ASP.NET in their online documentation; search for "Continuous Delivery to Windows Azure Web Sites (or IIS)".
I have a question regarding build servers for .NET projects. Currently I'm using Team Build in conjunction with TFS 2010 to do automated builds in the .NET world. Some older projects are built using plain old MSBuild scripts.
To get rid of the administrative effort, I'm currently moving my sources to GitHub. GitHub, like many other sites, offers service hooks to trigger build servers into doing automated builds such as CI or nightly builds.
Sure, I could use TeamCity on-premise and dynamically create build agents in Windows Azure using a VM Role and virtual disks, but I think this hybrid solution is a little bit moronic.
So what are your thoughts about the following architectural idea?
Let's say you're using GitHub as your source control platform. When sources are committed to your repository, an Azure Web Role hosting a WCF service is triggered.
The Web Role itself just uses the Azure API to fire up a new instance of a custom Azure VM Role.
The VM Role uses some kind of build script, such as Rake or MSBuild, so that as few developer tools as possible need to be installed on the build agent. After building the entire project, the artifacts are published to Azure Blob Storage, and the WCF service on the Web Role is called again; this time, it terminates the build agent.
With such a setup you could minimize the cost of the build agent and build nearly any kind of project, as long as you can install the required elements for the build using PowerShell.
So, bottom line: what are your thoughts on this architecture? Other ideas? Is there an existing service offering such a solution?
Thorsten
Have you looked at https://appharbor.com? I know a number of people who are using it to do exactly what you are describing.
Check out Team Foundation Service as it can do the following:
Continuous Delivery to Azure
Deploy to production on Windows Azure with two clicks from Visual Studio, or automatically as part of your build process.
Just found this one: http://www.appveyor.com/. AppVeyor is also free for open-source projects.