AWS CodePipeline is failing at the Deployment stage by timing out

I am trying to set up CI/CD for the API part of the application.
I have 3 steps:
1: Source (GitHub version 2)
2: Build (currently has no commands)
3: Deploy (provider is CodeDeploy (application))
Here is the screenshot of the events in CodeDeploy.
While creating the Deployment Group, I chose the option of installing the CodeDeploy agent (I thought it was necessary).
While setting up the pipeline, I chose the settings that felt appropriate.
The pipeline has put a source artifact object into the S3 bucket created for it.
CodeDeploy is acting on that source artifact.
Note:
We have nothing on this EC2 instance; it's just the place where our API will live.
Currently, the EC2 instance is empty.
What would be the proper way to implement this? How can I overcome the issues I am facing?

Without an appspec.yml, your deployment will fail. From the docs:
An AppSpec file must be a YAML-formatted file named appspec.yml and it must be placed in the root of the directory structure of an application's source code. Otherwise, deployments fail.
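For reference, a minimal appspec.yml for an EC2/on-premises deployment looks roughly like the sketch below; the destination path is a placeholder, not something from the question, and you would replace it with wherever your API should live on the instance.

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-api   # placeholder path for your API

The file name must be exactly appspec.yml and it must sit at the root of the bundle that CodePipeline hands to CodeDeploy.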

Related

Trigger specific AWS Codepipeline source stage when change is made to a specific directory in repo

I have a number of services in a single GitHub repository, each service has its own CodePipeline on AWS managed through Terraform. Instead of triggering all of the pipelines on commit, I'd like to know how I can trigger each service's pipeline if its directory had any changes on commit, without having to split the services each into its own repository.
I don't think CodePipeline supports conditional source stages per folder at the time of writing. I just finished checking the documentation about sources in CodePipeline, and it does not seem to mention any folder-level filtering.
You could try this CDK-based template solution, which showcases a mono-repository composed of multiple services, each with its own CI/CD pipeline. The solution detects which top-level directory the modification happened in and triggers the AWS CodePipeline configured for that directory.
This is sad, but they might add it in the future. I've also wanted quality gates and images in README files in CodeCommit, but these features seem too hard to implement haha.
It ended up being simpler than I had anticipated; there are GitHub Actions that do exactly what I needed.
One action checks whether a path had a change, and another action triggers a specific pipeline.
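As a rough sketch of that approach (this uses GitHub's built-in path filter plus the AWS CLI rather than any specific marketplace action; the service directory, pipeline name, region, and secret names are placeholders):

name: trigger-service-a-pipeline
on:
  push:
    branches: [main]
    paths:
      - 'service-a/**'   # only run when this service's directory changes
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}        # assumed secret names
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Start this service's CodePipeline
        run: aws codepipeline start-pipeline-execution --name service-a-pipeline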

Azure DevOps Pipeline: AWS CLI Task, How to reference predefined variables

I am using the AWS CLI task to deploy a Lambda layer. The build pipeline upstream looks like this:
It zips up the code, publishes the artifact and then downloads that artifact.
Now in the release pipeline I'm deploying that artifact via an AWS CLI command. The release pipeline looks like this:
I'm trying to figure out a way to dynamically get the current working directory so I don't need to hardcode it. In the options and parameters section you can see I'm trying to use $(Pipeline.Workspace) but it doesn't resolve correctly.
Is this possible?
Correct me if I am wrong, but it looks like you are running this in Azure Release, not Pipelines?
If that is the case, I think the variable you are looking for is $(Release.PrimaryArtifactSourceAlias).
See the section of the document that talks about release specific variables: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---release
Yes, this is completely achievable.
From your screenshot, you are using the Release Pipeline to deploy the artifacts.
In your situation, $(Pipeline.Workspace) can only be used in the Build Pipeline.
Refer to this doc: Classic release and artifacts variables
You can use the variable: $(System.ArtifactsDirectory) or $(System.DefaultWorkingDirectory)
The directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if it requires artifacts to be downloaded to the agent. Same as Agent.ReleaseDirectory and System.DefaultWorkingDirectory.
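For example, with the artifact downloaded under the default working directory, the CLI call could look like the sketch below (shown as a script step; in the classic AWS CLI task the same arguments would go into the Command, Subcommand, and Options and parameters fields). The artifact alias "drop" and the layer and file names are placeholders, not values from the question.

steps:
  - script: >
      aws lambda publish-layer-version
      --layer-name my-layer
      --zip-file fileb://$(System.DefaultWorkingDirectory)/drop/layer.zip
    displayName: Publish Lambda layer   # assumes AWS credentials are already configured on the agent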

How can I create a pipeline as code in AWS codepipeline

I am using AWS CodePipeline as my CI/CD tool. I have a pipeline template yml file in my Git repo and I wonder how I can link the file to AWS CodePipeline. What I want to do is to let CodePipeline create/update the pipeline based on my pipeline yml file in GitHub.
I have searched and tried on AWS console. All I can do is to manually create a pipeline via console and upload the template file. It works but it is not pipeline as code. If I want to change the stages in the pipeline, I will have to manually update the pipeline on AWS console or via cloudformation command.
Let me give an example, if I need to add a new stage in my pipeline. What I'd like to do is to update the yml file in github repo and commit it, then AWS codepipeline reads this yml file to update itself. I don't want to manually update the stage via AWS console.
Is there a way for me to sync the codepipeline to my pipeline yml file under source control?
I have seen a lot of people wondering about this setup where everything is managed via code, and I personally use it too with CodePipeline. I can see many people have replied, but let me put it here in detail so that it can help anyone who wants to do this.
There are two ways to achieve this; let me try to explain both options here:
Option 1:
Create two separate pipelines:
"Pipeline-1" is responsible for config changes (like adding extra stages to the main pipeline "Pipeline-2") and has two stages, source and deploy (CloudFormation):
source_Config (gitrepo_config) --> deploy_Config_Cfn
"Pipeline-2" is the actual deployment pipeline, with stages like source, build, and deploy, and is created from "resource.yaml":
source_Resource (gitrepo_resource) --> build_Resource --> Deploy_Resource
Based on the above setup, upload the template you use to create the main pipeline, "resource.yaml", to the repo "gitrepo_config".
Upload all the code to the repo "gitrepo_resource", matching the deployment provider you are using for "Deploy_Resource".
Once the above setup is done, whenever you want to add extra stages to the pipeline you can change "resource.yaml" in the Git repo and "Pipeline-1" will do the rest.
Option 2 (a little complex, but let me see if I can explain):
I was using Option 1 until I came up with this option.
This second way is essentially 100% code, because even with the option above I have to create "Pipeline-1" either manually or via CloudFormation the first time, and for later updates I also need to go to the console.
To overcome this, we can include both pipelines in the same CloudFormation template "resource.yaml"; we only have to execute that CloudFormation stack once, and after that everything else is automatic.
I hope this will be helpful to everyone.
Note: also keep in mind that in both options, if a pipeline execution is in progress for the resource pipeline ("Pipeline-2") during a config change, it might be marked as failed. To overcome this you can set an additional trigger that starts "Pipeline-2" on a success state of "Pipeline-1", in addition to the source code trigger.
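For reference, here is a trimmed sketch of what the config pipeline ("Pipeline-1" from Option 1) could look like in CloudFormation. The repository, branch, role ARNs, bucket, and stack names are placeholders, and it assumes the resource pipeline ("Pipeline-2") is defined in resource.yaml in gitrepo_config:

Resources:
  ConfigPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: arn:aws:iam::123456789012:role/pipeline-role        # placeholder role
      ArtifactStore:
        Type: S3
        Location: my-pipeline-artifact-bucket                      # placeholder bucket
      Stages:
        - Name: Source
          Actions:
            - Name: source_Config
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeCommit
                Version: '1'
              Configuration:
                RepositoryName: gitrepo_config
                BranchName: main
              OutputArtifacts:
                - Name: ConfigOutput
        - Name: Deploy
          Actions:
            - Name: deploy_Config_Cfn
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: '1'
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: resource-pipeline-stack                 # stack that defines Pipeline-2
                TemplatePath: ConfigOutput::resource.yaml
                Capabilities: CAPABILITY_IAM
                RoleArn: arn:aws:iam::123456789012:role/cfn-role   # placeholder role
              InputArtifacts:
                - Name: ConfigOutput

For Option 2, resource.yaml itself would also contain a pipeline like this one, so a single stack creation bootstraps both pipelines.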

Use two sources in an AWS-CodePipeline pipeline

I have a specific case which I'm not sure is possible with AWS CodePipeline, and I didn't find any information about it in the documentation or even by googling.
So I would like to know if I can set two sources in a pipeline (it could be in the same stage or different stages).
Here is my use case :
I would like my pipeline to start when a file (a specific object) is modified in my S3 bucket.
When this file changes and the pipeline is triggered, I would like to clone a CodeCommit repository and then process the build and other stages.
On the other hand, when there is a commit on the master branch of my CodeCommit repository, I would also like the pipeline to start and build my sources.
So the pipeline should be triggered whether the change comes from S3 or CodeCommit.
I don't want to version the S3 file in my CodeCommit repository because it should be encrypted and is used by teams other than the dev team working with the Git repository.
Any time my pipeline starts, whether from the S3 bucket change or the CodeCommit push, I need to source the commit from the repository for build purposes.
I don't know if my objectives are clear; if so, is it possible to use two source actions in a pipeline as described above, and how can I achieve this?
Thank you in advance.
Cheers,
Eugène NG
Yes. It is possible to have two sources for an AWS CodePipeline. Or many for that matter. The two sources have to be in your first stage.
Then in your build phase properties, you need to tell it that you are expecting two sources.
Then tell the build project which one is your primary source. This is the one whose buildspec the CodeBuild project will execute.
From your buildspec or from any scripts you call, you can then access the source directories by referencing:
$CODEBUILD_SRC_DIR_SourceOutput1
$CODEBUILD_SRC_DIR_SourceOutput2
Just replace SourceOutputX above with whatever you named your output artifacts in the source stage.
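For example, if the two source actions output artifacts named SourceOutput1 (the primary) and SourceOutput2, a buildspec could reference them like this (the copied file name is just an illustration):

version: 0.2
phases:
  build:
    commands:
      # The primary source is the working directory; secondary sources are
      # exposed via environment variables named after their output artifacts.
      - echo "Primary source is $CODEBUILD_SRC_DIR"
      - echo "Secondary source is $CODEBUILD_SRC_DIR_SourceOutput2"
      - cp $CODEBUILD_SRC_DIR_SourceOutput2/config.json .   # hypothetical file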
I found the following link with more information:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html
Yes, CodePipeline allows multiple source actions in a single pipeline. A change in either source will trigger a pipeline execution. The thing to know is that every pipeline execution will pull the latest source for both actions (not just the one with a change that triggered the pipeline execution).

Jenkins triggered code deploy is failing at ApplicationStop step even though same deployment group via code deploy directly is running successfully

When I trigger via Jenkins (CodeDeploy plugin), I get the following error:
No such file or directory - /opt/codedeploy-agent/deployment-root/edbe4bd2-3999-4820-b782-42d8aceb18e6/d-8C01LCBMG/deployment-archive/appspec.yml
However, if I trigger a deployment into the same deployment group via CodeDeploy directly, and specify the same zip in S3 (obtained via the Jenkins trigger), this step passes.
What does this mean, and how do I find a workaround? I am currently working on integrating a few things and so will need to deploy via CodeDeploy and via Jenkins in parallel. I will run the CodeDeploy-triggered deployment when I need to ensure that the smaller unit is functioning well.
Update
Just mentioning another point, in case it applies. I was previously using a different CodeDeploy application and deployment group on the same EC2 instances, deploying via Jenkins and via CodeDeploy directly as well. In order to fix some issue (allegedly, deployments failing because existing files could not be overwritten), I had deleted everything inside the /opt/codedeploy-agent/deployment-root/<directory containing deployments> directory, trying to follow what was mentioned in this answer. However, note that I deleted only the items inside that directory. Thereafter, I started getting this "appspec.yml not found in deployment archive" error. So I then created a new application and deployment group, and I have been working with those since.
So another point to consider is whether I should do some further cleanup, in case the Jenkins-triggered deployment is somehow still affected by those deletions (even though it refers to the new application and deployment group).
As part of its process, CodeDeploy needs to reference previous deployments for redeployment and deployment rollback operations. These references are maintained outside of the deployment archive folders. If you delete these archives manually, as you indicate, then a CodeDeploy install can become fatally corrupted: the references left to previous deployments are no longer correct or consistent, and deploys will fail.
The best thing at this point is to remove the old installation completely, and re-install. This will allow the code deploy agent to work correctly again.
I have learned the hard way not to remove/modify any of the CodeDeploy install folders or files manually. Even if you change apps or deployment groups, CodeDeploy will figure it out itself, without the need for any manual cleanup.
In order to do a deployment, the bundle needs to contain an appspec.yml file, and the file needs to be at the top-level directory. It seems the error message means the host agent can't find the appspec.yml file.
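If the Jenkins-produced zip really does contain appspec.yml, it is worth checking that the file sits at the root of the archive rather than inside an extra nested folder; this is only an assumption about what differs between the two trigger paths, not something confirmed by the question. The expected layout is roughly:

bundle.zip
  appspec.yml        <- must be at the archive root
  scripts/           <- any lifecycle hook scripts referenced by appspec.yml
  src/               <- the rest of the application code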