GoCD: How to define a pipeline's entire workspace as an artifact for the next stage?

Using GoCD, how can I define a stage's entire workspace (as a single artifact) for the next stage? This would highly simplify my setup, in which the second stage needs to fetch many different artifacts from the previous one.
I have tried the following artifact declarations:
Artifact source = .
This causes an error already during the upload in the first stage:
[go] The rule [.] cannot match any resource under [pipelines/mypipeline]
[go] [go] Uploading finished. Failed to upload [.]
Artifact source = *
This does not cause errors, but causes a separate upload for each directory in the root folder, instead of a single artifact of the entire workspace. As a result, I still need to fetch multiple concrete artifacts, instead of one big workspace artifact.
[go] Uploading artifacts from /var/lib/go-agent/pipelines/mypipeline/.svn to [defaultRoot]
[go] Uploading artifacts from /var/lib/go-agent/pipelines/mypipeline/cruise-output to [defaultRoot]
[go] Uploading artifacts from /var/lib/go-agent/pipelines/mypipeline/<dir1> to [defaultRoot]
[go] Uploading artifacts from /var/lib/go-agent/pipelines/mypipeline/<...> to [defaultRoot]
[go] Uploading artifacts from /var/lib/go-agent/pipelines/mypipeline/<dirN> to [defaultRoot]
I could probably zip everything myself with another task and define that as an artifact, but with GoCD already zipping and unzipping on its own, I thought there must be a simpler solution to my problem.
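For the record, the workaround I have in mind would look roughly like this (an untested sketch; the archive name and the exclude pattern are my own choices):

# Final task in the first stage's job, run from the working directory:
zip -r workspace.zip . -x "cruise-output/*"

Then I would declare a single build artifact with Source = workspace.zip and fetch just that one file in the second stage.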

I am not an expert, just a newbie with GoCD, but having gone through the same pain of trial and error, I'll share the syntax I found to work in my experience. For build artifacts in an upstream pipeline:
Source: #{BUILD_DIR}/*.whl - wildcard syntax seems to work when defining build artifacts. NOTE: the values between the curly braces, #{XXX_XX}, are pipeline parameters.
Destination: #{ARTIFACT_DIR} - I prefer to put all my artifacts in a separate folder, as that makes them easier to fetch. GoCD will zip and transfer the folder and unzip it in the downstream pipeline.
In the downstream pipeline, or in another stage of the same pipeline, when retrieving artifacts with the "Fetch Artifact" task:
Source: #{ARTIFACT_DIR} - I use only the directory name to fetch the entire directory.
Destination: ./ - the fetched directory will be extracted into the current folder in this case.
NOTE: The above syntax is for fetching a directory of artifacts only; it applies when the "Source is a file (not a directory)" checkbox in the "Fetch Artifact" task is left unchecked.
I was not able to fetch artifact files when I tried parametrizing a file name with pipeline parameters or environment variables (for example, my file artifact contained a dynamically generated version), and I found nothing in the GoCD documentation or examples suggesting it is possible. The docs only have an example of downloading a file whose name is a constant and does not change.
Please share examples of parametrizing file artifacts, or suggested workarounds, if this is possible at all.
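For anyone defining pipelines as code, I believe the equivalent of the fields above looks roughly like this in the gocd-yaml-config-plugin syntax (a sketch only; the pipeline, stage, and job names are placeholders, and I have only verified the UI fields myself):

# Upstream job: publish the build artifacts (sketch)
artifacts:
  - build:
      source: "#{BUILD_DIR}/*.whl"
      destination: "#{ARTIFACT_DIR}"

# Downstream job: fetch the whole directory (sketch)
tasks:
  - fetch:
      pipeline: upstream-pipeline
      stage: build-stage
      job: build-job
      source: "#{ARTIFACT_DIR}"
      is_file: false
      destination: .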

Related

CodeBuild get Artifact Folder Path

I am running a build through a CodeBuild pipeline. I am uploading artifacts for each stage as documented, which is working fine. As you know, each time a build is run, a new folder is created in the artifact folder for the new set of artifacts (all in S3) to be uploaded. What I want to do is retrieve the name of that new folder in my buildspec so I can use it as a variable. Does anyone have a link or a way I can reference this? I would settle for the entire URL, which I could then parse.

Redeploy Google Cloud Function from command line using Source Repositories

I have a fairly simply Google Cloud Function that I'm deploying from Cloud Source Repositories.
I'm using the Google Cloud Shell as my development machine.
When I make updates to the function as I'm developing, I use the CLI to push updates to my Source Repository. However, running the gcloud functions deploy ... command from the command line doesn't seem to force GCF to pull in the latest source.
Occasionally, after pushing new source code, the deploy command will simply state "Nothing to update" (which is incorrect).
More often, it will go through the deployment process, but the function will still run the previous version of the code.
When this happens, the only way I can get the function to update is to use the dashboard, "Edit" the function, and then hit the Deploy button (even though I didn't change anything).
Am I forgetting to do some kind of versioning or tagging that is required? Is there a way to force the CLI to pull the most current commit from the source repo?
I think you're looking for the --source=SOURCE gcloud functions deploy option to point to a source repository instead of the current directory (the default):
--source=SOURCE
Location of source code to deploy. Location of the source can be one
of the following three options:
Source code in Google Cloud Storage (must be a .zip archive),
Reference to source repository or,
Local filesystem path (root directory of function source).
Note that if you do not specify the --source flag:
Current directory will be used for new function deployments.
If the function is previously deployed using a local filesystem path, then function's source code will be updated using the current
directory.
If the function is previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will
not be updated.
The value of the flag will be interpreted as a Cloud Storage location,
if it starts with gs://.
The value will be interpreted as a reference to a source repository,
if it starts with https://.
Otherwise, it will be interpreted as the local filesystem path. When
deploying source from the local filesystem, this command skips files
specified in the .gcloudignore file (see gcloud topic
gcloudignore for more information). If the .gcloudignore file
doesn't exist, the command will try to create it.
The minimal source repository URL is:
https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}
By using the URL above, sources from the root directory of the
repository on the revision tagged master will be used.
If you want to deploy from a revision different from master, append
one of the following three sources to the URL:
/revisions/${REVISION},
/moveable-aliases/${MOVEABLE_ALIAS},
/fixed-aliases/${FIXED_ALIAS}.
If you'd like to deploy sources from a directory different from the
root, you must specify a revision, a moveable alias, or a fixed alias,
as above, and append /paths/${PATH_TO_SOURCES_DIRECTORY} to the URL.
Overall, the URL should match the following regular expression:
^https://source\.developers\.google\.com/projects/
(?<accountId>[^/]+)/repos/(?<repoName>[^/]+)
(((/revisions/(?<commit>[^/]+))|(/moveable-aliases/(?<branch>[^/]+))|
(/fixed-aliases/(?<tag>[^/]+)))(/paths/(?<path>.*))?)?$
An example of a validly formatted source repository URL is:
https://source.developers.google.com/projects/123456789/repos/testrepo/
moveable-aliases/alternate-branch/paths/path-to=source
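So, putting the pieces of that help text together, a redeploy pinned to a branch of a source repository would look something like this (project, repo, branch, runtime, and trigger are placeholders for your own values):

gcloud functions deploy myFunction \
  --source=https://source.developers.google.com/projects/my-project/repos/my-repo/moveable-aliases/master \
  --runtime=nodejs10 \
  --trigger-http

With an explicit --source pointing at a moveable alias, each deploy should pick up the latest commit on that branch instead of reusing a previously uploaded snapshot.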

Using ant with AWS CodeBuild - build.xml does not exist and other questions for a newbie

I am trying to switch over to using CodeBuild to build my code so I can then easily push it to my EC2 instances instead of manually building and copying.
I can manually run ant on my station and all will build as it should.
I am now trying to use the AWS CodeBuild console to try this.
I zipped up my source code files, put the archive in an S3 bucket, and put its location in the source fields of AWS CodeBuild. I have the build.xml in this same bucket, and I also put the build.xml in the base of the code's zip file. In the build commands I put "ant".
I assume that the build.xml needs to go somewhere else?
Do I need more than just "ant" in the build commands? That is all I use when I manually build the project.
From what I have read, I should be able to zip up my code, put it in the S3 location, and CodeBuild will extract it and build it, correct?
Also, under "Environment: How to Build" - what is the "Output files" section for? It's not for the artifacts that are built, correct?
Any other tips or tricks? I am very new to all of this, so any help is appreciated! I just learned about ant this week. This is building a rather large project with many classes - will this cause an issue? As I stated earlier, it builds fine when I run it manually on my system.
Here is the error I get when I build through Code Build:
[Container] 2019/03/21 15:32:27 Entering phase BUILD
[Container] 2019/03/21 15:32:27 Running command ant
Buildfile: build.xml does not exist!
Build failed
I figured out my issue - I zipped the build files from the folder level and not the root level. I re-zipped and it can now see the build.xml.
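In other words, the archive has to be created from inside the project root so that build.xml ends up at the top level of the zip - something like this (the directory name is just an example):

cd my-project-root
zip -r ../source.zip .

rather than zipping the project folder itself from one level up.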
I built again with these changes and it looks like I am close! It failed for the following -
[Container] 2019/03/21 20:57:13 Expanding myapp.jar
[Container] 2019/03/21 20:57:13 Skipping invalid artifact path myapp.jar
[Container] 2019/03/21 20:57:13 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2019/03/21 20:57:13 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
Isn't myapp.jar what the build is creating?
I am very confused as to what the Artifact name should be - isn't this what is being created by the build? It is asking for an ARN - how can there be an ARN for it when it has not been created yet?
Also, I am very confused as to what the Environment/Output files field is. It is required, but I have no idea what should go in it - it states that output files cannot be empty. Does this mean it wants all the class files that are being built? If so, this build creates over 30 class files in multiple locations - that is a ton to list.
Thanks
Ernie
I have it working! I will post my findings for others going that might be struggling -
So I figured out that "Output files" means all the files and/or directories that you want to go into your final artifact after everything is built.
I have two directories that I want in the final jar artifact. One is WebContent and the other is build. They both have multiple sub-directories. I put "WebContent/*,build/*" in the output files field. That gave me a jar artifact, but when I opened the jar it did not have any sub-directories. In order to include all sub-directories I had to set the output files field to "WebContent/**/*,build/**/*". All sub-directories are now in the archive, and it appears the build was successful.
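For anyone driving this from a buildspec file rather than the console fields, I believe the equivalent is something like the following (a sketch; I set mine up through the console, so treat the exact layout as an assumption):

version: 0.2
phases:
  build:
    commands:
      - ant
artifacts:
  files:
    - WebContent/**/*
    - build/**/*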
Hopefully this can help others out.
Now on to creating a script for this and also getting this to work from GitLab.

CommitID as a variable throughout CodePipeline - AWS

I have a pipeline which creates docker images and pushes it to ECR. Since I want to use the AWS provided build environments, I am using 2 build stages.
The pipeline has a total of 3 stages
Get the source code from GitHub : Source
Install dependencies and create a .war file : Build : aws/codebuild/java:openjdk-9
Build the docker image and push it to ECR : Build : aws/codebuild/docker:17.09.0
I would like to tag the docker images with the commit ID, which is normally available as CODEBUILD_RESOLVED_SOURCE_VERSION. However, I have noticed that this variable is only available in my second stage, which is immediately after the source.
The worst-case workaround I found is to write this variable to a file in the second stage and include that file in the artifacts, which serve as the input for the third stage.
Is there a better way to use this in my third stage or overall the pipeline?
Can you write the commit ID to a file that sits alongside the WAR file in the CodePipeline artifact?
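A rough sketch of that approach (the file name and build command are placeholders, not taken from the original pipeline):

# Second stage's buildspec: write the commit ID next to the WAR (sketch)
version: 0.2
phases:
  build:
    commands:
      - mvn package
      - echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" > commit-id.txt
artifacts:
  files:
    - target/*.war
    - commit-id.txt

# Third stage: read it back before tagging the image
#   COMMIT_ID=$(cat commit-id.txt)
#   docker build -t my-image:$COMMIT_ID .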
And a couple related thoughts:
CodeBuild can be configured in CodePipeline to have multiple input artifacts, so I assume CODEBUILD_RESOLVED_SOURCE_VERSION refers to the primary artifact. I'm not sure how to generalize getting the commit ID into the third action (publish to ECR), because fan-in (multiple sources with distinct commit IDs) can occur at both CodeBuild actions.
Tagging by commit ID means that multiple pipeline executions may produce an image with the same tag. Ideally I'd like each pipeline execution to be isolated, so I don't have to worry about the tag being changed by concurrent pipeline executions, or changed later to point at a build with a different dependency closure.
I have managed to do something with jq and sponge, as shown in this buildspec.yaml file.
I modify my config.json file upon each commit and pass it on to the next stage.
I am using a combination of CodePipeline + jq. It's not the best approach, but it's the best I have so far.
commit=$(aws codepipeline get-pipeline-state --name PIPELINE_NAME | jq '.stageStates[0].actionStates[0].currentRevision.revisionId' | tr -d '"')
and then push the docker image with the new tag. You need to install jq first; if you don't like jq, you can parse the response yourself.
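For example, with the ECR repository URI as a placeholder:

docker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:$commit
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:$commit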
This 'may' be a duplicate of this other question

AWS Code Deploy Error on Before Install Cannot Solve

So I am attempting to setup CodeDeploy for my application and I keep getting an error during the BeforeInstall part of the deployment. Below is the error.
Error Code: UnknownError
Script Name:
Message: No such file or directory - /opt/codedeploy-agent/deployment-root/06100f1b-5495-42d9-bd01-f33d59fb5deb/d-NL5K1THE8/deployment-archive/appspec.yml
Log Tail:
I assumed this meant the YAML file was in the wrong place. However, it is in the root directory of my revision. I have tried using a simple AppSpec file like the one below instead of a more complex one.
## YAML Template.
---
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/www
More or less since this is a first deployment I want it to add all files in the revision to the public directory on the web server.
I am tearing my hair out over this and I feel it is a simple issue. I have the IAM policies and roles correct and I have CodeDeploy setup and running on my instance I am trying to deploy to.
It seems to think you had a successful deploy at some point.
Go into /opt/codedeploy-agent/deployment-root/deployment-instructions/ and delete all the files in there. Then it won't look for this last deploy.
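On the instance, that amounts to something like the following (the path is taken from the error above; double-check it before deleting anything):

sudo rm -f /opt/codedeploy-agent/deployment-root/deployment-instructions/*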
I just had this SAME problem and I figured it out! Make sure your AppSpec file has the right EXTENSION! I was using yaml and not yml, now everything works perfectly.
I made it work like this:
I had a couple of failed deployments for various reasons.
The thing is that CodeDeploy keeps, on the EC2 instance under the path /opt/codedeploy-agent/deployment-root/, a folder named after the ID of the failed deployment [a very long alphanumeric string].
Delete this folder, create a new deployment [from the AWS console], and redeploy the application. This way the appspec.yml file that is in the wrong place will be deleted.
It should now succeed.
Extra Notice:
CodeDeploy does not rewrite files [that have not been created by its specific deployment].
CodeDeploy does not deploy into a folder that already contains code [files], as it does not want to interfere with different CodeDeploy deployments and/or other CI/CD tools [like Jenkins].
It only deploys into a path where code has already been deployed as part of the specific deployment.
You can empty the folder where your deployment is meant to happen and redeploy your code via CodeDeploy.
When you log in to the host, do you see the appspec.yml file in the directory there? If not, are you positive it has been checked in with the rest of your deployed code?
Just encountered this issue too. In my case, the revision zip file extracts into a directory when deployed. Because of that, /opt/codedeploy-agent/deployment-root/xxx/xxx/deployment-archive contains the parent directory of my revision files (instead of the actual revision files).
The key is to compress your revision without the parent directory. In a Mac terminal:
cd your-app-directory-containing-appspec   # the folder that holds appspec.yml
zip -r app.zip .                           # zip the contents, not the folder itself