I'm trying to set up a Google Cloud Build build trigger to automatically build and deploy my ASP.NET Core application to Google App Engine.
I'm using the following cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/dotnet'
  args: [ 'publish', '-c', 'Release' ]
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app','deploy','./bin/Release/netcoreapp2.1/publish/app.yaml']
I have tested that the build works locally using the cloud-build-local tool.
These two approaches worked locally:
From the application subdirectory: cloud-build-local --config=cloudbuild.yaml --dryrun=false .
From the repository root: cloud-build-local --config=clearbooks-rest-aspnetcore/cloudbuild.yaml --dryrun=false clearbooks-rest-aspnetcore
The build trigger definition seems to partially support config files in a subdirectory of the repository root (approach no. 2); however, it seems to assume that the code always lives in the repository root.
How do I configure Cloud Build to start a build in a subdirectory of the repository?
The solution is to update cloudbuild.yaml:
Add the dir: option on the build step.
Provide the correct app.yaml location for the deploy step.
Here is the working cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/dotnet'
  args: [ 'publish', '-c', 'Release' ]
  dir: 'clearbooks-rest-aspnetcore'
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app','deploy','clearbooks-rest-aspnetcore/bin/Release/netcoreapp2.1/publish/app.yaml']
When testing locally, run cloud-build-local from the repository root, never from the app subdirectory:
cloud-build-local --config=clearbooks-rest-aspnetcore/cloudbuild.yaml --dryrun=false .
This reflects the way Cloud Build works: you pass the path to the correct cloudbuild.yaml, and the current directory (the repository root) as the source.
I was developing a sample Spring Boot project on App Engine, and the directory structure looks like this:
google-cloud
- appengine-spring-boot
- appflexengine-spring-boot
Below is the cloudbuild.yaml file that is working for me.
steps:
- name: 'gcr.io/cloud-builders/mvn'
  dir: "appengine-spring-boot"
  #args: [ 'package','-f','pom.xml','-Dmaven.test.skip=true' ]
  args: [ 'clean', 'package']
- name: "gcr.io/cloud-builders/gcloud"
  dir: "appengine-spring-boot"
  args: [ "app", "deploy" ]
timeout: "1600s"
In my app, I have the following:
app.yaml
cloudbuild.yaml
I used the above for the first time, to deploy the default service. I also have:
app.qa.yaml
cloudbuild_qa.yaml
app.staging.yaml
cloudbuild_staging.yaml
app.prod.yaml
cloudbuild_prod.yaml
They all reside at the root of the app.
For instance, the cloudbuild_qa.yaml is as follows:
steps:
- name: node:14.0.0
  entrypoint: npm
  args: ['install']
- name: node:14.0.0
  entrypoint: npm
  args: ['run', 'prod']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'app', 'deploy', '--project', '$PROJECT_ID', '-q', '$_GAE_PROMOTE', '--version', '$_GAE_VERSION', '--appyaml', 'app.qa.yaml']
timeout: '3600s'
The Cloud Build run works well; however, it doesn't respect app.qa.yaml. Instead, it always takes the default app.yaml:
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [test-project]
target service: [default]
target version: [qa]
target url: [https://test-project.uc.r.appspot.com]
Any idea what's happening? Do you know how to use the correct app.yaml file in such a case?
Remove '--appyaml' from the args list, so that app.qa.yaml is passed directly to gcloud app deploy as the deployable.
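For illustration, the deploy step would then look something like this (a sketch based on your existing step, with only the '--appyaml' entry dropped):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'app', 'deploy', 'app.qa.yaml', '--project', '$PROJECT_ID', '-q', '$_GAE_PROMOTE', '--version', '$_GAE_VERSION']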
However, I'm not sure it is good practice to have a different deployment file for each environment. When you update something in one place, you could forget to update the same thing in the other files.
Did you think of replacing placeholders in the files, or of using substitution variables in Cloud Build?
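As an illustration of the substitution approach (my sketch, not from your current setup; _APP_YAML is a hypothetical substitution variable you would override per trigger, alongside your existing _GAE_VERSION), a single cloudbuild.yaml could serve all environments:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  # The app yaml to deploy comes from a substitution, so one config serves qa/staging/prod.
  args: ['app', 'deploy', '${_APP_YAML}', '--project', '$PROJECT_ID', '-q', '--version', '${_GAE_VERSION}']
substitutions:
  _APP_YAML: 'app.qa.yaml'   # override per trigger, e.g. app.prod.yaml
  _GAE_VERSION: 'qa'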
In our build we are using:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--appyaml=app-qa.yaml', '--no-promote', '--version=${_TAG_VERSION}']
FYI: I've noticed you are building your application using the node builder, but you could also add a gcp-build script to your package.json, because gcloud app deploy looks for a script named gcp-build and executes it before deploying:
{
  "scripts": {
    ...
    "build": "tsc",
    "start": "node -r ./tsconfig-paths-dist.js dist/index.js",
    "gcp-build": "npm run build"
  },
}
Reference: https://cloud.google.com/appengine/docs/standard/nodejs/running-custom-build-step
Does anyone have experience using Azure DevOps to deploy a React build package to AWS using their extension?
I'm stuck on uploading only the output of npm run build.
Here is my script so far:
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm install
    npm test
    npm run build
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
  displayName: 'npm install and build'
The only option on the S3Upload task that stands out is sourceFolder. They use something like $(Build.ArtifactStagingDirectory), but since I've never used that before, it doesn't make a lot of sense to me. Would it be as simple as $(Build.ArtifactStagingDirectory)/build?
The predefined variable $(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a, the local path on the agent where artifacts are copied before being pushed to their destination.
In your YAML pipeline, the source code is downloaded into the folder $(Build.SourcesDirectory) (i.e. c:\agent\_work\1\s), and the npm commands in the script task all run in this folder. So the npm build output ends up in $(Build.SourcesDirectory)\build (i.e. c:\agent\_work\1\s\build).
The S3Upload task uploads files from $(Build.ArtifactStagingDirectory) by default. You can point the sourceFolder attribute (default: $(Build.ArtifactStagingDirectory)) of the S3Upload task to the folder $(Build.SourcesDirectory)/build instead. See below:
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
    sourceFolder: '$(Build.SourcesDirectory)/build'
Another workaround is to use the Copy Files task to copy the build results from $(Build.SourcesDirectory)\build to the folder $(Build.ArtifactStagingDirectory). See the example below:
- task: CopyFiles@2
  inputs:
    Contents: 'build/**' # Pull the build directory (React)
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
I currently have an Azure DevOps pipeline to build and deploy a Next.js application via the Serverless Framework.
Upon reaching the AWSPowerShellModuleScript@1 task I get these errors:
[warning]MSG:UnableToDownload «https:...» «»
[warning]Unable to download the list of available providers. Check
your internet connection.
[warning]Unable to download from URI 'https:...' to ''.
[error]No match was found for the specified search criteria for the
provider 'NuGet'. The package provider requires 'PackageManagement'
and 'Provider' tags. Please check if the specified package has the
tags.
[error]No match was found for the specified search criteria and
module name 'AWSPowerShell'. Try Get-PSRepository to see all
available registered module repositories.
[error]The specified module 'AWSPowerShell' was not loaded because no
valid module file was found in any module directory.
I do have the AWS Toolkit extension installed, and it's visible when I go to manage extensions within Azure DevOps.
My pipeline:
trigger: none
stages:
- stage: develop_build_deploy_stage
  pool:
    name: Default
    demands:
    - msbuild
    - visualstudio
  jobs:
  - job: develop_build_deploy_job
    steps:
    - checkout: self
      clean: true
    - task: NodeTool@0
      displayName: Install Node
      inputs:
        versionSpec: '12.x'
    - script: |
        npm install
        npx next build
      displayName: Install Dependencies and Build
    - task: CopyFiles@2
      inputs:
        Contents: 'build/**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: PublishBuildArtifacts@1
      displayName: Publish Artifact
      inputs:
        pathtoPublish: $(Build.ArtifactStagingDirectory)
        artifactName: dev_artifacts
    - task: AWSPowerShellModuleScript@1
      displayName: Deploy to Lambda@Edge
      inputs:
        awsCredentials: '###'
        regionName: '###'
        scriptType: 'inline'
        inlineScript: 'npx serverless --package dev_artifacts'
I know I could use the ubuntu vmImage and then make use of the AWSShellScript task, but the build agent available to me doesn't support Bash.
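For reference, the alternative I mean would look roughly like this (a sketch only; I'm assuming the AWSShellScript@1 task from the same AWS Toolkit extension accepts inputs analogous to the PowerShell variant):
- task: AWSShellScript@1
  displayName: Deploy to Lambda@Edge
  inputs:
    awsCredentials: '###'
    regionName: '###'
    scriptType: 'inline'
    inlineScript: 'npx serverless --package dev_artifacts'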
I am trying to set up continuous deployment of my Go backend using the Google documentation, but when my trigger fires, it fails with the following error:
starting build "eba3ce39-caad-43f0-a255-0a3cacec4913"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my-porject/r/github_myusername_myproject.com
* branch 660796f575bae6860d6f96df60cfd631a730c3ae -> FETCH_HEAD
HEAD is now at 660796f cloudbuild.yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
My project file structure looks like:
project
frontend
backend
main.go
cloudbuild.yaml
Dockerfile
where my cloudbuild.yaml looks like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        ".",
      ]
  # Push the image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      ]
  # Deploy image to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "[SERVICE_NAME]"
      - "--image"
      - "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
images:
  - gcr.io/my-project/github.com/username/project.com
and my Dockerfile looks like
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
I got the Dockerfile from Quickstart: Build and Deploy.
When you push to your GitHub repo, Cloud Build triggers and looks for the cloudbuild.yaml file. You can specify the cloudbuild.yaml location when you create the build trigger: in the Configuration section, choose Cloud Build configuration file (yaml or json) and set the cloudbuild.yaml location. In your case, just make it backend/cloudbuild.yaml.
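For reference, the same setting can also be made from the CLI; a rough sketch (assuming a GitHub-connected trigger; the repo owner, repo name, and branch pattern below are placeholders to adjust):
gcloud beta builds triggers create github \
  --repo-owner=myusername --repo-name=myproject \
  --branch-pattern='^master$' \
  --build-config=backend/cloudbuild.yaml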
Now, that's not enough, because when the build starts, the docker build command from your first step runs with a build context of ., which is wrong here: your whole repository was copied to Cloud Build, and the build context is relative to the repository root, not to the directory the cloudbuild.yaml lives in.
To solve this issue, just change the Docker build context to ./backend. Your final cloudbuild.yaml should look something like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        "./backend",
      ]
  # Rest of the steps ...
The Cloud Build trigger is currently pointing to /project/ while your directory structure is as follows:
project
  frontend
  backend
    main.go
    cloudbuild.yaml
    Dockerfile
When you execute the trigger, the repository is copied to /workspace/, so the build looks for the Dockerfile at /workspace/Dockerfile and cannot find it there.
You can move everything to the same working directory:
.
├── main.go
├── cloudbuild.yaml
├── Dockerfile
If you would like to keep your current directory structure, your Cloud Build trigger will need to point to /project/backend/ instead. Note that you can check your directory structure using the ls -la Linux command.
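If you do keep the nested layout, one more option (a sketch on my part, reusing the dir: option shown in other answers here rather than anything specific to this answer) is to run the Docker step from the backend directory so that the . build context resolves correctly:
steps:
- name: 'gcr.io/cloud-builders/docker'
  # dir makes the step run inside backend/, so '.' now points at the folder containing the Dockerfile
  dir: 'backend'
  args: ['build', '-t', 'gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA', '.']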
I've created a mirrored GitHub repo in Google's Container Registry and then created a build trigger. The Dockerfile in the repo includes gsutil -m rsync -r gs://asset-bucket/ local-dir/ so that I can move shared private assets into the container.
But I get an error:
ServiceException: 401 Anonymous caller does not have storage.objects.list access to asset-bucket
I have the automatically created service account (@cloudbuild.gserviceaccount.com) for building, and it has the Cloud Container Builder role. I tried adding Storage Object Viewer, but I still get the error.
Shouldn't the container builder automatically have the appropriate permissions?
Are you using the gcr.io/cloud-builders/gsutil build step to do this? That should use default credentials properly and it should Just Work.
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: [ "-m", "rsync", "gs://asset-bucket/", "local-dir/" ]
Alternatively, you could try the GCS Fetcher.
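For completeness, a rough sketch of what a GCS Fetcher step might look like (unverified on my side; I'm assuming the gcr.io/cloud-builders/gcs-fetcher image and its --type/--location flags, and gs://asset-bucket/assets.zip is a hypothetical archive name):
steps:
- name: 'gcr.io/cloud-builders/gcs-fetcher'
  args: ['--type=Archive', '--location=gs://asset-bucket/assets.zip']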
Just to be specific about the answer from @david-bendory: privileged calls cannot occur inside a Dockerfile. I created a cloudbuild.yaml that looks like this:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: [ "-m", "rsync", "-r", "gs://my-assets/", "." ]
  dir: "static"
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/project-name', '.']
images: ['gcr.io/$PROJECT_ID/project-name']
and a Dockerfile that includes:
COPY static/* www/