Use S3 target in CloudFormation for a CodePipeline deploy

In the CodePipeline console it's possible to specify an S3 deploy step (https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-s3deploy.html). I'd like to do exactly this, but in CloudFormation. I'm missing something obvious here. Any help appreciated.
I was able to get the source and build steps of the pipeline working in CloudFormation, but not the deploy step. I would think the provider for that step would be S3, but I can't seem to get it to work.

Here is a sample Deploy stage for S3 (note that the Extract value is a string in the action configuration):
{
  "Name": "Deploy",
  "Actions": [
    {
      "Name": "Push-Lambda-Artifacts",
      "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "S3",
        "Version": "1"
      },
      "InputArtifacts": [
        {
          "Name": "lambda"
        }
      ],
      "Configuration": {
        "BucketName": {
          "Ref": "BucketName"
        },
        "Extract": "true"
      },
      "RunOrder": 1
    }
  ]
}
I think that will get you most of the way there. InputArtifacts references an output artifact from my CodeBuild step. A sketch of the full resource this stage sits in follows below.
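For context, here is a minimal sketch of the resource that stage lives in. For brevity it wires a CodeCommit source straight to the deploy stage (the asker has a build stage in between); PipelineRole, ArtifactBucket, RepoName, and BucketName are placeholder names, not from the original pipeline:
{
  "Resources": {
    "Pipeline": {
      "Type": "AWS::CodePipeline::Pipeline",
      "Properties": {
        "RoleArn": {"Fn::GetAtt": ["PipelineRole", "Arn"]},
        "ArtifactStore": {
          "Type": "S3",
          "Location": {"Ref": "ArtifactBucket"}
        },
        "Stages": [
          {
            "Name": "Source",
            "Actions": [
              {
                "Name": "Source",
                "ActionTypeId": {
                  "Category": "Source",
                  "Owner": "AWS",
                  "Provider": "CodeCommit",
                  "Version": "1"
                },
                "Configuration": {
                  "RepositoryName": {"Ref": "RepoName"},
                  "BranchName": "master"
                },
                "OutputArtifacts": [{"Name": "lambda"}],
                "RunOrder": 1
              }
            ]
          },
          {
            "Name": "Deploy",
            "Actions": [
              {
                "Name": "Push-Lambda-Artifacts",
                "ActionTypeId": {
                  "Category": "Deploy",
                  "Owner": "AWS",
                  "Provider": "S3",
                  "Version": "1"
                },
                "InputArtifacts": [{"Name": "lambda"}],
                "Configuration": {
                  "BucketName": {"Ref": "BucketName"},
                  "Extract": "true"
                },
                "RunOrder": 1
              }
            ]
          }
        ]
      }
    }
  }
}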

Related

How to export an existing CodePipeline to CloudFormation template

This is more of a lack of understanding on my part, but I cannot seem to debug this.
I have created a CodePipeline that runs terraform apply (which in turn creates the AWS infrastructure for me). The pipeline seems to be working.
I need to implement the same pipeline in another account. How can I do so?
I tried to get the JSON definition using the command below:
aws codepipeline get-pipeline --name
I converted the JSON output to YAML.
When I try to run the YAML template in the other account, I get the error below:
Template format error: At least one Resources member must be defined.
ISSUES:
1.) What is the best way to export a CodePipeline to a CloudFormation template?
2.) The approach I used didn't work; how do I solve it?
{
  "pipeline": {
    "name": "my-code-pipeline",
    "roleArn": "arn:aws:iam::aws-account-id:role/service-role/AWSCodePipelineServiceRole-aws-region-my-code-pipeline",
    "artifactStore": {
      "type": "S3",
      "location": "codepipeline-aws-region-45856771421"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "provider": "GitHub",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "Branch": "master",
              "OAuthToken": "****",
              "Owner": "github-account-name",
              "PollForSourceChanges": "false",
              "Repo": "repo-name"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "inputArtifacts": [],
            "region": "aws-region",
            "namespace": "SourceVariables"
          }
        ]
      },
      {
        "name": "codebuild-for-terraform-init-and-plan",
        "actions": [
          {
            "name": "codebuild-for-terraform-init",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ProjectName": "my-code-pipeline-build-stage"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "region": "aws-region"
          }
        ]
      },
      {
        "name": "manual-approve",
        "actions": [
          {
            "name": "approval",
            "actionTypeId": {
              "category": "Approval",
              "owner": "AWS",
              "provider": "Manual",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "NotificationArn": "arn:aws:sns:aws-region:aws-account-id:Email-Service"
            },
            "outputArtifacts": [],
            "inputArtifacts": [],
            "region": "aws-region"
          }
        ]
      },
      {
        "name": "codebuild-for-terraform-apply",
        "actions": [
          {
            "name": "codebuild-for-terraform-apply",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ProjectName": "codebuild-project-for-apply"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "region": "aws-region"
          }
        ]
      }
    ],
    "version": 11
  },
  "metadata": {
    "pipelineArn": "arn:aws:codepipeline:aws-region:aws-account-id:my-code-pipeline",
    "created": "2020-09-17T13:12:50.085000+05:30",
    "updated": "2020-09-21T15:46:19.613000+05:30"
  }
}
The JSON above is the pipeline definition that I converted to YAML and used as the CloudFormation template.
The aws codepipeline get-pipeline --name CLI command returns the pipeline structure and pipeline metadata, but not in the format of a CloudFormation template (or of the resource section of one).
There is no built-in support for exporting existing AWS resources to a CloudFormation template, though you do have a couple of options:
1.) Use Former2 (built and maintained by AWS Hero Ian Mckay) to generate a CloudFormation template from the resources you select.
2.) Take the JSON output from the aws codepipeline get-pipeline --name command you used and craft a CloudFormation template manually. The pipeline will be one resource in the template's list of resources. The information it contains is pretty close to what you need, but it requires some adjustments to conform to the CloudFormation resource specification for a CodePipeline, which you can find here. You'll also need to do the same for any other resources you bring into the template, using the corresponding aws <service name> describe commands.
If you go with option 2 (and even if you don't), I recommend using cfn-lint with your code editor to help adhere to the spec.
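For example, a minimal skeleton for option 2 (a sketch with values copied from the get-pipeline output above): nest the pipeline definition under a resource of type AWS::CodePipeline::Pipeline inside a top-level Resources section, re-case the keys to match the CloudFormation spec (name becomes Name, roleArn becomes RoleArn, stages becomes Stages, and so on), and drop the metadata block and the read-only version field:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "MyCodePipeline": {
      "Type": "AWS::CodePipeline::Pipeline",
      "Properties": {
        "Name": "my-code-pipeline",
        "RoleArn": "arn:aws:iam::aws-account-id:role/service-role/AWSCodePipelineServiceRole-aws-region-my-code-pipeline",
        "ArtifactStore": {
          "Type": "S3",
          "Location": "codepipeline-aws-region-45856771421"
        },
        "Stages": [
          {
            "Name": "Source",
            "Actions": [
              {
                "Name": "Source",
                "ActionTypeId": {
                  "Category": "Source",
                  "Owner": "ThirdParty",
                  "Provider": "GitHub",
                  "Version": "1"
                },
                "Configuration": {
                  "Branch": "master",
                  "OAuthToken": "****",
                  "Owner": "github-account-name",
                  "PollForSourceChanges": "false",
                  "Repo": "repo-name"
                },
                "OutputArtifacts": [{"Name": "SourceArtifact"}],
                "RunOrder": 1
              }
            ]
          }
        ]
      }
    }
  }
}
The remaining stages follow the same pattern. Note that get-pipeline masks the OAuthToken as ****, so you'll need to supply a real token (or a parameter) in the template. The "At least one Resources member must be defined" error is exactly what CloudFormation reports when a template is missing this Resources section.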

AWS CodePipeline "An AppSpec file is required, but could not be found in the revision"

I'm trying to set up a deployment pipeline using CodeCommit, ECR and ECS. My pipeline passes the source and build steps fine. I can deploy manually via CodeDeploy if I upload my appspec.yaml file to an S3 bucket. Deploys triggered by a change to my CodeCommit repository always fail with the error:
An AppSpec file is required, but could not be found in the revision
When I look at the details of the failed deployment, I can pull up the revision location.
I see in the CodeDeploy troubleshooting docs that some editors can cause issues. I'm using VS Code on Linux, so I don't think that should be an issue. Also, if I upload the same appspec file to S3 and reference it from a manual deployment, it works fine.
I've also tried uploading the same file named appspec.yml instead. It still failed.
The role this deployment uses has full S3 access; I'm not sure whether this could be some other permissions-related problem.
Here is my pipeline definition:
{
  "pipeline": {
    "roleArn": "arn:aws:iam::690517313378:role/service-role/AWSCodePipelineServiceRole-us-east-1-blottermappertf",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "region": "us-east-1",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeCommit"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "configuration": {
              "PollForSourceChanges": "false",
              "BranchName": "master",
              "RepositoryName": "blottermapper"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "name": "Build",
            "region": "us-east-1",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeBuild"
            },
            "outputArtifacts": [
              {
                "name": "BuildArtifact"
              }
            ],
            "configuration": {
              "ProjectName": "blottermapper",
              "EnvironmentVariables": "[{\"name\":\"REPOSITORY_URI\",\"value\":\"690517313378.dkr.ecr.us-east-1.amazonaws.com/net.threeninetyfive\",\"type\":\"PLAINTEXT\"}]"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "BuildArtifact"
              }
            ],
            "name": "Deploy",
            "region": "us-east-1",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeDeploy"
            },
            "outputArtifacts": [],
            "configuration": {
              "ApplicationName": "blottermappertf",
              "DeploymentGroupName": "blottermappertf"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "codepipeline-us-east-1-634554346591"
    },
    "name": "blottermappertf",
    "version": 1
  },
  "metadata": {
    "pipelineArn": "arn:aws:codepipeline:us-east-1:690517313378:blottermappertf",
    "updated": 1573712712.49,
    "created": 1573712712.49
  }
}
"An AppSpec file is required, but could not be found in the revision"
The above error is caused by a misconfigured pipeline. To perform ECS blue/green deployments through CodeDeploy, the provider for the deploy stage must be "Amazon ECS (Blue/Green)" (CodeDeployToECS in the pipeline JSON), not "CodeDeploy" (plain CodeDeploy is used for EC2/on-premises deployments).
Even though it uses CodeDeploy in the back end, the provider is named "ECS (Blue/Green)".
I found the answer here:
The deployment specifies that the revision is a null file, but the revision provided is a zip file
I was using the wrong action provider when setting up my deployment: I chose ECS when I should have chosen ECS (Blue/Green).
The ambiguous error message made debugging, and searching Stack Overflow for answers, difficult for me.
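For reference, a sketch of the corrected Deploy action: in the pipeline JSON the blue/green provider is named CodeDeployToECS, and it must be told where to find the task definition and AppSpec templates (the artifact name and file paths below are assumptions about how the build stage is set up):
{
  "name": "Deploy",
  "region": "us-east-1",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "version": "1",
    "provider": "CodeDeployToECS"
  },
  "inputArtifacts": [
    {"name": "BuildArtifact"}
  ],
  "outputArtifacts": [],
  "configuration": {
    "ApplicationName": "blottermappertf",
    "DeploymentGroupName": "blottermappertf",
    "TaskDefinitionTemplateArtifact": "BuildArtifact",
    "TaskDefinitionTemplatePath": "taskdef.json",
    "AppSpecTemplateArtifact": "BuildArtifact",
    "AppSpecTemplatePath": "appspec.yaml"
  },
  "runOrder": 1
}
With this provider, the deployment builds its revision (including the AppSpec) from the named artifact, which is why the original "AppSpec file is required" error goes away.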

AWS CodePipeline: Execute deploy action in a different region than the one the pipeline is triggered in

I'm setting up a pipeline to automate the deployment of CloudFormation stack templates.
The pipeline itself is created in the eu-west-1 region, but the CloudFormation stacks it deploys could go to any other region.
I know how to execute a pipeline action in a different account, but I don't see where to specify the region I would like my template to be deployed in, the way I can with the AWS CLI: aws cloudformation deploy --region ...
Is there any way to trigger a pipeline in one region and execute a deploy action in another?
The action configuration properties don't seem to offer such a possibility...
A workaround would be to run the aws cloudformation deploy command from the CLI in the CodeBuild container and specify the desired region there, but I would like to know if there is a more elegant way to do it.
If you're looking to deploy to multiple regions, one after the other, you could create a pipeline in every region you want to deploy to, and set up S3 cross-region replication so that the output of the first pipeline becomes the input to the pipeline in the next region.
Here's a blog post explaining this further: https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/
Since late Nov 2018, CodePipeline supports cross regional deploys. However it still leaves a lot to be desired as you need to create artifact buckets in each region and copy over the deployment artifacts (e.g. in the codebuild container as you mentioned) to them before the Deploy action is triggered. So it's not as automated as it could be, but if you go through the process of setting it up, it works well.
CodePipeline now supports cross-region deployment. To run an action in a different region, specify the "Region": "us-west-2" property on the CloudFormation action in the stage, which will trigger the deployment in that region.
Steps to follow for this setup:
Create two buckets in two different regions, for example one in us-east-1 and one in us-west-2 (you can also reuse the buckets CodePipeline created when you first set up a pipeline in each region).
Configure the pipeline so that each action uses the artifact bucket in the region where it runs.
Specify the region on the action in the pipeline definition.
Note: I have attached a sample CloudFormation template that demonstrates a cross-region CloudFormation deployment.
{
  "Parameters": {
    "BranchName": {
      "Description": "CodeCommit branch name for all the resources",
      "Type": "String",
      "Default": "master"
    },
    "RepositoryName": {
      "Description": "CodeCommit repository name",
      "Type": "String",
      "Default": "aws-account-resources"
    },
    "CFNServiceRoleDeployA": {
      "Description": "CFN service role for creating resources for account-A",
      "Type": "String",
      "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/CloudFormation-service-role-cp"
    },
    "CodePipelineServiceRole": {
      "Description": "Service role for CodePipeline",
      "Type": "String",
      "Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/AWS-CodePipeline-Service"
    },
    "CodePipelineArtifactStoreBucket1": {
      "Description": "S3 bucket to store the artifacts",
      "Type": "String",
      "Default": "bucket-us-east-1"
    },
    "CodePipelineArtifactStoreBucket2": {
      "Description": "S3 bucket to store the artifacts",
      "Type": "String",
      "Default": "bucket-us-west-2"
    }
  },
  "Resources": {
    "AppPipeline": {
      "Type": "AWS::CodePipeline::Pipeline",
      "Properties": {
        "Name": {"Fn::Sub": "${AWS::StackName}-cross-account-pipeline"},
        "ArtifactStores": [
          {
            "ArtifactStore": {
              "Type": "S3",
              "Location": {"Ref": "CodePipelineArtifactStoreBucket1"}
            },
            "Region": "us-east-1"
          },
          {
            "ArtifactStore": {
              "Type": "S3",
              "Location": {"Ref": "CodePipelineArtifactStoreBucket2"}
            },
            "Region": "us-west-2"
          }
        ],
        "RoleArn": {"Ref": "CodePipelineServiceRole"},
        "Stages": [
          {
            "Name": "Source",
            "Actions": [
              {
                "Name": "SourceAction",
                "ActionTypeId": {
                  "Category": "Source",
                  "Owner": "AWS",
                  "Version": "1",
                  "Provider": "CodeCommit"
                },
                "OutputArtifacts": [
                  {"Name": "SourceOutput"}
                ],
                "Configuration": {
                  "BranchName": {"Ref": "BranchName"},
                  "RepositoryName": {"Ref": "RepositoryName"},
                  "PollForSourceChanges": "true"
                },
                "RunOrder": 1
              }
            ]
          },
          {
            "Name": "Deploy-to-account-A",
            "Actions": [
              {
                "Name": "stage-1",
                "InputArtifacts": [
                  {"Name": "SourceOutput"}
                ],
                "ActionTypeId": {
                  "Category": "Deploy",
                  "Owner": "AWS",
                  "Version": "1",
                  "Provider": "CloudFormation"
                },
                "Configuration": {
                  "ActionMode": "CREATE_UPDATE",
                  "StackName": "cloudformation-stack-name-account-A",
                  "TemplatePath": "SourceOutput::accountA.json",
                  "Capabilities": "CAPABILITY_IAM",
                  "RoleArn": {"Ref": "CFNServiceRoleDeployA"}
                },
                "RunOrder": 2,
                "Region": "us-west-2"
              }
            ]
          }
        ]
      }
    }
  }
}

EMR cluster created with CloudFormation not shown

I have added an EMR cluster to a stack. After updating the stack successfully (CloudFormation), I can see the master and slave nodes in the EC2 console and I can SSH into the master node. But the EMR console does not show the new cluster. Even aws emr list-clusters doesn't show it. I have triple-checked the region and I am certain I'm looking at the right one.
Relevant CloudFormation JSON:
"Spark01EmrCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Name": "Spark01EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "PublicSubnetId"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Master"
},
"CoreInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Core"
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"BootstrapActions": [
{
"Name": "InstallPipPackages",
"ScriptBootstrapAction": {
"Path": "[S3 PATH]"
}
}
],
"JobFlowRole": {"Ref": "Spark01InstanceProfile"},
"ServiceRole": "MyStackEmrDefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
}
The reason is the missing VisibleToAllUsers property, which defaults to false. Since I'm using AWS Vault (i.e. authenticating via the STS AssumeRole API), I'm effectively a different user every time, so I couldn't see the cluster. I couldn't update the stack to add VisibleToAllUsers either, as I was getting Job flow ID does not exist.
The solution was to log in as the root user and fix things from there (I had to delete the cluster manually, but removing it from the stack template and updating the stack would probably have worked if I hadn't already messed things up).
I then added the cluster back to the template (with VisibleToAllUsers set to true) and updated the stack as usual (via AWS Vault).
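For reference, a trimmed sketch of the resource with the fix applied (the Applications, Instances, Configurations, and BootstrapActions blocks are omitted here and stay exactly as in the original):
"Spark01EmrCluster": {
  "Type": "AWS::EMR::Cluster",
  "Properties": {
    "Name": "Spark01EmrCluster",
    "VisibleToAllUsers": true,
    "JobFlowRole": {"Ref": "Spark01InstanceProfile"},
    "ServiceRole": "MyStackEmrDefaultRole",
    "ReleaseLabel": "emr-5.13.0"
  }
}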

Connect AWS CodeDeploy to Github without using my Github user?

The AWS documentation describes authenticating to GitHub via your browser, while you're logged into GitHub as a valid user with permission to the repository you want to deploy from:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ.html#github-integ-behaviors-auth
Is there any way to set up CodeDeploy without linking my user and without a browser? I'd love to do this using webhooks on each repository and AWS API calls, but I'll create a GitHub 'service user' if I have to.
More examples:
http://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
I'd rather use webhooks on my repo, or set them up myself, than permit AWS access to every repository on my GitHub account.
There does not appear to be an alternative to doing the OAuth flow in your browser at this point. If you're concerned about opening up your whole GitHub account to Amazon, creating a service user is probably the best approach; unfortunately, it seems this user still needs administrative access to your repos to set up the integration.
After more research I realized my first answer was wrong: you can use the AWS CLI to create a CodePipeline pipeline using a GitHub OAuth token, and then plug in your CodeDeploy deployment from there. Here's an example configuration:
{
  "pipeline": {
    "roleArn": "arn:aws:iam::99999999:role/AWS-CodePipeline-Service",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "version": "1",
              "provider": "GitHub"
            },
            "outputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "configuration": {
              "Owner": "myusername",
              "Repo": "myrepo",
              "Branch": "master",
              "OAuthToken": "**************"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Beta",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "name": "CodePipelineDemoFleet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeDeploy"
            },
            "outputArtifacts": [],
            "configuration": {
              "ApplicationName": "CodePipelineDemoApplication",
              "DeploymentGroupName": "CodePipelineDemoFleet"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "codepipeline-us-east-1-99999999"
    },
    "name": "MySecondPipeline",
    "version": 1
  }
}
You can create the pipeline using the command:
aws codepipeline create-pipeline --cli-input-json file://input.json
Make sure that the GitHub OAuth token has the admin:repo_hook and repo scopes.
Reference: http://docs.aws.amazon.com/cli/latest/reference/codepipeline/create-pipeline.html
CodeDeploy's GitHub integration is based on GitHub OAuth, so to use it you have to trust the CodeDeploy GitHub application with your GitHub account. Currently this integration only works in a browser with a valid GitHub account, because the CodeDeploy application always redirects back to the CodeDeploy console to verify and finish the OAuth authentication process.
You can do it using these bash commands:
FROM LOCAL TO REMOTE
rsync --delete -azvv -e "ssh -i /path/to/pem" /path/to/local/code/* ubuntu@66.66.66.66:/path/to/remote/code
FROM REMOTE TO LOCAL
rsync --delete -azvv -e "ssh -i /path/to/pem" ubuntu@66.66.66.66:/path/to/remote/code/* /path/to/local/code
rsync compares the files on each side and transfers only the ones that need to be updated.