GoCD - How to copy files across machines/pipelines

I have two pipelines running on different agents: one builds and runs unit tests, and the other deploys the artifacts to Tomcat. The first pipeline is configured to store the artifacts, and the files are copied to the server/artifacts/pipelines/xx folder. How do I get the second pipeline to copy those files onto the second agent?

As Juhi said in a previous answer, you can make the build pipeline an upstream dependency of the deploy pipeline. When you create the deploy pipeline, be sure in Step 2: Materials to select Pipeline as the material type and choose the build pipeline.
Since GoCD has a client-server architecture, all artifacts defined in one pipeline are uploaded to the server and made available to all downstream dependencies. This is necessary because you can have multiple agents and there is no guarantee the same agent will run both build and deploy.
In the downstream pipeline (the deploy pipeline in your case) you can add a job with a Fetch Artifact task, where you select the upstream pipeline, stage and job that created the artifact and give it the path to your artifact.
You can even create a template out of the deploy pipeline and reuse it for deployments to different environments.
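For illustration, a minimal sketch of this setup using the GoCD YAML config plugin; the pipeline, stage, job, artifact path and deploy script names below are assumptions, not taken from the question:

format_version: 10
pipelines:
  deploy:
    group: deployment
    materials:
      build-pipeline:
        pipeline: build        # the upstream build pipeline (dependency material)
        stage: package         # its stage that published the artifact
    stages:
      - deploy-to-tomcat:
          jobs:
            deploy:
              tasks:
                - fetch:
                    pipeline: build
                    stage: package
                    job: build-job
                    source: target/app.war   # path used when the artifact was stored
                    destination: wars
                - exec:
                    command: ./deploy-to-tomcat.sh
                    arguments:
                      - wars/app.war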

You can create a pipeline dependency between the first and second pipeline; refer to the pipeline dependency documentation. After that, set up a Fetch Artifact task in the second pipeline to fetch artifacts from the first pipeline.

Related

How to have a simple manual ECS deployment in CodePipeline / CodeDeploy?

Basically I would like to have a simple manual deploy step that's not directly linked to a build. The use case: when using containers, I don't want to perform a separate build per environment (e.g. once my build puts an image tag in ECR, I would like to deploy that tag to any number of environments).
Now, I know in CodePipeline I can have a number of actions and I can precede them with manual approval.
The problem with that is that if I don't want to perform the last manually approved deploy, subsequent executions pile up: the pipeline execution doesn't complete and whatever comes next just has to wait. I can set a timeout, for sure, but there are moments when 20 builds come in quickly and I don't know which of them I may want to deploy to which environment (they generally all go to some QA/staging, but some need to be manually deployed to a particular dev-related environment or even to production).
Manually updating task definitions all around in ECS is tedious.
I have a solution where I can manually patch a task definition using awscli and yq, but is there a way to have a simple pipeline with one step that takes a manual input (i.e. an image tag) and either uses an ECS deploy step (the only place where you can provide a clean, straight patch JSON for the task definition) or uses my yq script to deploy?
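For what it's worth, here is a boto3 sketch of the same patch-and-redeploy idea the question describes. All names (my-task, my-cluster, my-service) are placeholders, and the image tag would come from whatever manual input mechanism you choose:

import boto3

def deploy_image_tag(image_tag, family="my-task",
                     cluster="my-cluster", service="my-service"):
    ecs = boto3.client("ecs")
    current = ecs.describe_task_definition(taskDefinition=family)["taskDefinition"]

    # Keep only the fields that register_task_definition accepts.
    register_keys = {
        "family", "taskRoleArn", "executionRoleArn", "networkMode",
        "containerDefinitions", "volumes", "placementConstraints",
        "requiresCompatibilities", "cpu", "memory",
    }
    new_def = {k: v for k, v in current.items() if k in register_keys}

    # Patch the image tag on every container definition.
    for container in new_def["containerDefinitions"]:
        repo = container["image"].rsplit(":", 1)[0]
        container["image"] = f"{repo}:{image_tag}"

    # Register the new revision and point the service at it.
    revision = ecs.register_task_definition(**new_def)["taskDefinition"]
    ecs.update_service(cluster=cluster, service=service,
                       taskDefinition=revision["taskDefinitionArn"])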

Trigger Gitlab CI/CD pipeline to deploy specific part of the repository

I have a repository on GitLab with a directory structure similar to this:
folder-a/
  python-a.py
folder-b/
  python-b.py
I am trying to set up a CI/CD pipeline on GitLab that will detect changes made to the Python code and deploy them to a production server. Currently the user has to trigger the pipeline manually and pass the folder name in as a variable, which makes the pipeline cd into that folder and deploy the code inside it.
Is there any configuration or setting that can be added to the pipeline so that whenever a merge request is merged to the main branch, the pipeline triggers, detects which code was changed, and deploys the respective code, without the user having to trigger it manually and input the folder name as a variable?
You might be able to use only:changes / except:changes to do that.
You can have two jobs: one that deploys folder-a if something under folder-a/* has changed, and another that deploys folder-b if something under folder-b/* has changed.
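A minimal .gitlab-ci.yml sketch of that two-job approach; the deploy script is a placeholder for whatever your deployment actually runs:

deploy-folder-a:
  stage: deploy
  script:
    - cd folder-a
    - ./deploy.sh            # placeholder for your deploy command
  only:
    refs:
      - main
    changes:
      - folder-a/**/*

deploy-folder-b:
  stage: deploy
  script:
    - cd folder-b
    - ./deploy.sh
  only:
    refs:
      - main
    changes:
      - folder-b/**/*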

Code pipeline to build a branch on pull request

I am trying to make a CodePipeline that will build my branch when I make a pull request to the master branch in AWS. There are many developers in my organisation and each works on their own branch. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create a pipeline every time a new pull request is created. Look at CodeCommit triggers (in the old CodePipeline UI); you need Lambda for this.
Basically it works like this: copy the existing pipeline and update the source branch.
It is not ideal, but as far as I know it is the only way to do what you want.
I was there and would not recommend it for the following reasons:
I hit this limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but, you definitely want this feature ( https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html )
The branch-deleted trigger does not work correctly, so you cannot delete the created pipeline when the branch has been merged into master.
I would recommend using GitHub.com if you need the workflow you described. Sorry about that.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap is that if a lot of pull requests are created at once, CodePipeline executions can be superseded, since only one execution can proceed through a given stage at a time. (This is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks at the same time.) To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artefact, and then poll the queue, copying each artefact to a different S3 location that triggers CodePipeline, but only when there are currently no executions waiting to execute after the first CodePipeline source stage.
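As a rough sketch, the CodeBuild half of this pattern can be a buildspec that runs the tests and build and declares the whole tree as the artifact. The commands are placeholders; the target S3 bucket and ZIP packaging are configured on the CodeBuild project's artifact settings rather than in the buildspec, and CodePipeline then watches that S3 key as its source:

version: 0.2
phases:
  build:
    commands:
      - ./run-unit-tests.sh   # placeholder for the initial unit tests
      - ./build.sh            # placeholder for the build itself
artifacts:
  files:
    - '**/*'                  # publish the source tree plus built artefacts as one archive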
We can have dynamic branching support with the following approach.
One of the limitations of AWS CodePipeline is that we have to specify the branch name while creating the pipeline. We can, however, overcome this issue using the architecture shown below.
flow diagram
Create a Lambda function that takes the GitHub webhook payload as input and uses boto3 to integrate with CodePipeline (pull the pipeline definition and update it), put an API Gateway in front of the Lambda function as a REST endpoint, and finally add a webhook on the GitHub repository pointing at it.
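A minimal boto3 sketch of the "pull the pipeline and update" step that Lambda would perform; the template pipeline name is a placeholder and the webhook parsing is omitted:

import boto3

def clone_pipeline_for_branch(template_name, branch):
    """Copy an existing pipeline and point its source action at a new branch."""
    cp = boto3.client("codepipeline")
    pipeline = cp.get_pipeline(name=template_name)["pipeline"]

    # Point every source action at the pull-request branch.
    for stage in pipeline["stages"]:
        for action in stage["actions"]:
            if action["actionTypeId"]["category"] == "Source":
                action["configuration"]["BranchName"] = branch

    # Give the copy a branch-specific name and create it.
    pipeline["name"] = f"{template_name}-{branch}"
    cp.create_pipeline(pipeline=pipeline)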
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline

Use two sources in an AWS-CodePipeline pipeline

I have a specific case which I'm not sure is possible with AWS CodePipeline; I didn't find any information about it in the documentation or even by googling.
So I would like to know if I can set two sources in a pipeline (it could be in the same stage or different stages).
Here is my use case:
- I would like my pipeline to start when a file (a specific object) is modified in my S3 bucket.
- When this file changes and the pipeline is triggered, I would like to clone a CodeCommit repository and then run the build and other stages.
- On the other hand, when there is a commit on the master branch of my CodeCommit repository, I would also like the pipeline to start and build my sources.
- So the pipeline should be triggered either when the change comes from S3 or from CodeCommit.
- I don't want to version the S3 file in my CodeCommit repository because it must be encrypted and is used by teams other than the dev team working with the git repository.
- Whenever my pipeline starts, whether from the S3 bucket change or the CodeCommit push, I should source the commit from the repository for build purposes.
I hope my objectives are clear. If so, is it possible to use two source actions in a pipeline as described above, and how can I achieve this?
Thank you in advance.
Cheers,
Eugène NG
Yes. It is possible to have two sources for an AWS CodePipeline. Or many for that matter. The two sources have to be in your first stage.
Then in your build phase properties, you need to tell it that you are expecting two sources.
Then tell the build project which source is your primary source. This is the one whose buildspec your CodeBuild project will execute.
From your buildspec or from any scripts you call, you can then access the source directories by referencing:
$CODEBUILD_SRC_DIR_SourceOutput1
$CODEBUILD_SRC_DIR_SourceOutput2
Just replace SourceOutputX above with the output artifact names you chose in the source stage.
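For reference, here is what the build action could look like in the pipeline definition (shown as YAML; all names are placeholders): both source artifacts are listed as inputs, and PrimarySource tells CodeBuild which one to run the buildspec from:

- name: Build
  actionTypeId:
    category: Build
    owner: AWS
    provider: CodeBuild
    version: "1"
  inputArtifacts:
    - name: SourceOutput1    # e.g. the CodeCommit source
    - name: SourceOutput2    # e.g. the S3 source
  configuration:
    ProjectName: my-build-project
    PrimarySource: SourceOutput1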
I found the following link with more information:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html
Yes, CodePipeline allows multiple source actions in a single pipeline. A change in either source will trigger a pipeline execution. The thing to know is that every pipeline execution will pull the latest source for both actions (not just the one with a change that triggered the pipeline execution).

AWS Codepipeline Continue Previous Execution

I am using AWS CodePipeline to automatically check out code, build an application with CodeBuild, and deploy the application to an ECS cluster for development. After that I inserted a manual step to approve deployment to the staging environment. This works well. However, when I run the pipeline again, there seems to be no way to approve the actions of one of the previous executions. As far as I can see, I can only push the latest build artifact to staging (and later to production). This is surely not what I would like to do. I could use more than one pipeline for this, one per stage, but then what is the manual approval good for?
Currently updating a pipeline will end all in-flight executions at the end of their current action. This includes cancelling in-flight approvals.
After updating your pipeline you can click "Release Change" to have a fresh execution run through your pipeline and after that the changes will continue to be released as usual.
Unlike creating a pipeline, editing a pipeline does not rerun the most recent revision through the pipeline. If you want to run the most recent revision through a pipeline you've just edited, you must manually rerun it. Otherwise, the edited pipeline will run the next time you make a change to a source location configured in the source stage of the pipeline. For information, see Start a Pipeline Manually in AWS CodePipeline.
From the documentation here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-edit.html
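If you would rather script the "Release Change" step than click it in the console, a one-line boto3 sketch (the pipeline name is a placeholder):

import boto3

codepipeline = boto3.client("codepipeline")
# Same effect as clicking "Release Change": start a fresh execution.
codepipeline.start_pipeline_execution(name="my-pipeline")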