I used AWS CodeStar to create a new application with the "Express.js Aws Lambda Webservice" CodeStar template. This was great because it set me up with a simple CI/CD pipeline using AWS CodePipeline. By default the pipeline has three stages: grabbing the source code from a Git repo, running the build, and deploying to a "dev" environment.
My issue is that I can't set it up so that my pipeline has multiple environments: dev, staging, and prod.
My current deploy stage has 2 actions: GenerateChangeSet and ExecuteChangeSet. Here are the configurations for the actions in the original dev deploy stage, which work great:
I've created a new deploy stage at the end of my pipeline to deploy to staging, but honestly I'm not sure how to change the configurations. Ultimately I want to be able to go into the AWS Lambda section of the AWS console and see three independent Lambda functions: binance-bot-dev, binance-bot-staging, and binance-bot-prod. Each of these could then be triggered by CloudWatch scheduled events or exposed with its own API Gateway URL.
This is the configuration that I tried to use for a new deployment stage:
I'm really not sure if this configuration is correct and what exactly I should change in order to deploy in the way I want.
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
Also, I'm pointing to a different template.yml file in the project. The original template.yml looks like this:
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: dev
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
For template.staging.yml I use the exact same config, except I changed "Dev:" to "Staging:" under "Resources", and I also changed the value of the NODE_ENV environment variable. So, I'm basically wondering: is this the correct configuration for what I'm trying to achieve?
Assuming that everything in the configuration is correct, I then need to troubleshoot this error. With everything set as described above I can run my pipeline, but when it gets to my staging deploy stage, the GenerateChange_Staging action fails with this error message:
Action execution failed User:
arn:aws:sts::954459734159:assumed-role/CodeStarWorker-binance-bot-CodePipeline/1524253307698
is not authorized to perform: cloudformation:DescribeStacks on
resource:
arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda-staging/*
(Service: AmazonCloudFormation; Status Code: 403; Error Code:
AccessDenied; Request ID: dd801664-44d2-11e8-a2de-8fa6c42cbf86)
It seems to me from this error message that I need to add "cloudformation:DescribeStacks" to my "CodeStarWorker-binance-bot-CodePipeline" role, so I go to IAM -> Roles and click on the CodeStarWorker-binance-bot-CodePipeline role. However, when I drill into its policy information for CloudFormation, it looks like this role already has permissions for "DescribeStacks"!
If anyone could point out what I'm doing wrong or offer any guidance on understanding and thinking about how to do multiple environments with AWS CodePipeline, that would be great. Thanks!
UPDATE:
I changed the "Stack name" in my Deploy_To_Staging pipeline stage back to "awscodestar-binance-bot-lambda". However, I then get this error from the GenerateChange_Staging action:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
UPDATE 2:
In the root of my project I have the buildspec.yml file that was generated by CodeStar. It looks like this:
version: 0.2

phases:
  install:
    commands:
      # Install dependencies needed for running tests
      - npm install

      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory
      - npm test
  build:
    commands:
      # Use AWS SAM to package the application using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
Then I added this to the "build: -> commands:" section:
- aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
- aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
And I added this to the "files:" section:
- template-export.staging.yml
- template-export.prod.yml
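With those changes, the artifacts section of buildspec.yml ends up as:

artifacts:
  type: zip
  files:
    - template-export.yml
    - template-export.staging.yml
    - template-export.prod.yml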
HOWEVER, I am still getting an error that "binance-bot-BuildArtifact does not exist".
Here is the full error after making the buildspec.yml change:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
It seems very strange to me that I can access "binance-bot-BuildArtifact" in one stage of the pipeline but not another. Could it be that the build artifact is only available to the one pipeline stage directly after the build stage? Can someone please help me access this "binance-bot-BuildArtifact"? Thanks!
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
You should use a unique stack name for each environment. If you didn't, you would be replacing your 'dev' environment with your 'staging' environment, and so forth.
So, I'm basically wondering is this the correct configuration for what I'm trying to achieve?
I don't think so. You should use the exact same template for each environment. In order to change the environment name for each of your deploys, you can use the 'Parameter Overrides' field to choose the correct value for your 'Environment' parameter.
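As a sketch of what that could look like (the "Environment" parameter and the "WebService" logical ID are illustrative names, not what CodeStar actually generates), the one shared template.yml might become:

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - staging
      - prod
Resources:
  WebService:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: !Ref Environment  # set per stage instead of hardcoding dev/staging
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post

The staging stage would then keep pointing at the same template-export.yml but set Parameter Overrides to something like { "Environment": "staging" }.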
it looks like this role already has permissions for "DescribeStacks"!
Could the issue here be that your IAM role only has DescribeStacks permission for the dev stack? It looks like it does not have permission to describe the staging stack. Maybe you can add a 'wildcard'/asterisk to the policy so that it matches all of your stack names?
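For example (a sketch; the ARN pattern is a guess at how CodeStar scoped the policy, and the change-set actions are listed for completeness), the CloudFormation statement in the worker role's policy could be widened from the dev stack to anything with the project prefix:

{
  "Effect": "Allow",
  "Action": [
    "cloudformation:DescribeStacks",
    "cloudformation:CreateChangeSet",
    "cloudformation:DescribeChangeSet",
    "cloudformation:ExecuteChangeSet"
  ],
  "Resource": "arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda*/*"
}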
Could it be that the build artifact is only available to the one pipeline stage directly after the build stage?
No, that has not been my experience with CodePipeline. Unfortunately I don't know why it's telling you that your artifact can't be found.
robrtsql has already provided some good advice in terms of using the same template in both stages.
You might find this walkthrough useful.
Basically, it describes adding a CloudFormation "template configuration" which allows you to specify parameters to the CloudFormation stack.
This will allow you to deploy the same template in both your dev and prod environments, but also allow you to tell the difference between a dev deployment and a prod deployment, by choosing a different template configuration in each stage.
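A template configuration is just a JSON file that travels with your build artifact; for a staging stage it might look like this (the file name and the "Environment" parameter are illustrative):

{
  "Parameters": {
    "Environment": "staging"
  }
}

You then point each deploy action's "Template configuration" field at the matching file, e.g. binance-bot-BuildArtifact::template-configuration.staging.json.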
Related
I want to create a separate 'dev' AWS Lambda with my Serverless service.
I have deployed my production, 'prod', environment and tried to then deploy a development, 'dev', environment so that I can trial features without affecting customer experience.
In order to deploy the 'dev' environment I have:
Created a new serverless-dev.yml file
Updated the stage and profile fields in my .yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-2
  profile: dev
  memorySize: 128
  timeout: 30
Also updated the resources.Resources.<Logical Id>.Properties.RoleName value, because if I try to use the same role as my 'prod' Lambda, I get this message: clearbit-lambda-role-prod already exists in stack
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-dev # Change this name
Ran the command: sls deploy -c serverless-dev.yml
Is this the conventional method to achieve this? I can't find anything in the documentation.
Serverless Framework has support for stages out of the box. You don't need a separate configuration; you can just specify --stage <name-of-stage> when running e.g. sls deploy and it will automatically use that stage. All resources created by the Framework under the hood include the stage in their names or identifiers. If you are defining extra resources in the resources section, you need to change them yourself, or make sure they include the stage in their names. You can get the current stage in the configuration with ${sls:stage} and use that to construct names that are e.g. prefixed with the stage, as in the sketch below.
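A minimal sketch of how that could look in the question's serverless.yml (the logical ID placeholder is from the question; everything stage-specific flows from the one --stage flag):

provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'}  # taken from --stage, defaults to dev
  region: eu-west-2
  memorySize: 128
  timeout: 30

resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-${sls:stage}  # dev and prod no longer collide

Then "sls deploy --stage dev" and "sls deploy --stage prod" deploy two independent stacks from the same file.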
I'm building a serverless app in Lambda using CloudFormation.
In my CodeBuild project, I set it to zip up the output and place it in "myBucket\AWSServerless1.zip", and it does so correctly.
Now that I'm working on my CodePipeline, I reference the original CodeBuild project. However, it instead puts the output in codepipeline-us-west-#####. That's fine. The issue is that the .zip file has a RANDOM name; CodePipeline ignores the name I gave it in the CodeBuild project.
In the serverless.template, I have to specify the CodeUri (which seems to be the CodeBuild project output for some odd reason). If I reference the AWSServerless1.zip, it works fine (but it's not building to there, so it's stale code)... but...
Since CodePipeline calling CodeBuild gives it a random name, how am I supposed to reference the ACTUAL BuildArtifact in the serverless.template?
I know this is very weird; I was stuck with this behavior of CodePipeline too and had to rewrite my buildspec to make CodePipeline work. CodePipeline makes its own zip file even if you create your own zip through CodeBuild, and with a unique name at that.
But there is one way out: CodePipeline creates one zip file, but it unzips it when handing the artifact to CodeDeploy, so you need not worry about its name. CodeDeploy will get the unzipped version of your code, and CodePipeline keeps track of the name and will always point to the newest one.
Suppose CodePipeline creates the artifact some-random-name.zip with this structure:

some-random-name
|- deploy/lib/lambda-code
|- some-file.yaml
Whenever CodePipeline gives the artifact to CodeDeploy, it unzips it, so you can always refer to the code that was inside some-random-name.zip.
So in your case, when you give CodeUri in the SAM template, just give the folder name (here, deploy) where your Lambda code is present.
Resources:
  Hello:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: example.MyHandler
      Runtime: java8
      CodeUri: deploy
      Description: ''
      MemorySize: 512
      Timeout: 15
Hope this helps.
I was facing the same error and I managed to work around it by doing the following:
1- On the build specification (buildspec.yml), add a sam package command (this generates a package.yml that will be used by CloudFormation to deploy the Lambda):
build:
  commands:
    - sam package
      --template-file ../template.yaml
      --output-template-file ../package.yml
      --s3-bucket onnera-ci-cd-bucket
2- Add the package.yml to the output artifacts:
artifacts:
  files:
    - DeviceProvisioning/package.yml
3- On the template.yaml that will be deployed, reference the CodeUri directly (internally this will be resolved to the bucket with the output artifacts from CodeBuild):
Resources:
  DeviceProvisioningFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: DeviceProvisioningFunction/target/DeviceProvisioningFunction-1.0.jar
4- On the pipeline, make the output of the build phase available to the deployment phase:
const buildOutput = [new codepipeline.Artifact()];

const buildAction = new codepipeline_actions.CodeBuildAction({
  actionName: 'CodeBuild',
  project: deviceProvisioning,
  input: sourceOutput,
  outputs: buildOutput,
});
5- Use the build output to specify the templatePath on the deploy action:
const deployAction = new codepipeline_actions.CloudFormationCreateUpdateStackAction({
  extraInputs: buildAction.actionProperties.outputs,
  actionName: "UpdateLambda",
  stackName: "DeviceProvisioningStack",
  adminPermissions: true,
  templatePath: buildOutput[0].atPath("package.yml"),
  cfnCapabilities: [CfnCapabilities.AUTO_EXPAND, CfnCapabilities.NAMED_IAM]
});
Make sure that the output artifacts from the build phase are available on the deploy phase.
Use case
I have a CloudFormation stack with more than 15 Lambdas in it. I am able to deploy the stack through a CodePipeline which consists of two stages, CodeCommit and CodeDeploy. In this approach all my Lambda code is in the CloudFormation template (i.e. inline code). For security reasons I want to change this inline code to S3, which in turn requires an S3 bucket name and S3 key.
As a temporary workaround
As of now I am zipping each Lambda file and manually passing the S3 key name and bucket name as parameters to my stack.
Is there any way to do this step via CodePipeline?
My Assumption on CodeBuild
I know we can use CodeBuild for this. But up to now I have only seen CodeBuild used to build a package.json file, and in my use case I don't have one. I can also see that it is possible to use the cloudformation package command to upload my Lambda code from local to S3 (this command generates the S3 CodeUri), but that is for serverless applications with a single Lambda, whereas in my case I have 15.
What I have tried
I know that as soon as you do a git push to CodeCommit, it will keep your code in S3. So what I thought is to get the S3 bucket name and S3 key name from the file pushed to CodeCommit and pass these as parameters to my CFN template. I am able to get the S3 bucket name, but I don't know how to get the S3 key name. And I don't know whether this approach is workable at all.
By the way, I know I can use a shell script just to automate this process. But is there a way to do it via CodePipeline?
Update: Tried the Serverless approach
Basically I run two build actions with two different runtimes (Node.js and Python) which run independently. When I use the serverless approach, each build creates a template-export.yml file with the CodeUri pointing at its bucket location, which means I end up with two template-export.yml files. One problem with the serverless approach is that it must create a change set and then trigger an execute-change-set action. Because of that I need to merge the two template-export.yml files before running create-change-set followed by execute-change-set, but I don't know whether there is a command to merge two SAM templates. Otherwise one template-export.yml stack will replace the other.
Any help is appreciated
Thanks
If I'm understanding you right, you just need an S3 Bucket and Key to be piped into your Lambda CF template. To do this I'm using the ParameterOverrides declaration in my pipeline.
Essentially, the pipeline is a separate stack and picks up a CF template located in the root of my source. It then overrides two parameters in that template that point it to the appropriate S3 bucket/key.
- Name: LambdaDeploy
  Actions:
    - Name: CreateUpdateLambda
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: 1
      Configuration:
        ActionMode: CREATE_UPDATE
        Capabilities: CAPABILITY_IAM
        RoleArn: !GetAtt CloudFormationRole.Arn
        StackName: !Join
          - ''
          - - Fn::ImportValue: !Sub '${CoreStack}ProjectName'
            - !Sub '${ModuleName}-app'
        TemplatePath: SourceOut::cfn-lambda.yml
        ParameterOverrides: '{ "DeploymentBucketName" : { "Fn::GetArtifactAtt" : ["BuildOut", "BucketName"]}, "DeploymentPackageKey": {"Fn::GetArtifactAtt": ["BuildOut", "ObjectKey"]}}'
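For reference, a sketch of how cfn-lambda.yml might consume those overrides (the logical ID, handler, runtime, and the role parameter are placeholders, not my actual template):

Parameters:
  DeploymentBucketName:
    Type: String
  DeploymentPackageKey:
    Type: String
  LambdaRoleArn:
    Type: String
Resources:
  AppFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Role: !Ref LambdaRoleArn
      Code:
        S3Bucket: !Ref DeploymentBucketName  # filled in by Fn::GetArtifactAtt at deploy time
        S3Key: !Ref DeploymentPackageKey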
Now, the fact that you have fifteen Lambda functions in this might throw a wrench in it. For that I do not exactly have an answer since I'm actually trying to do the exact same thing and package up multiple Lambdas in this kind of way.
There's documentation on deploying multiple Lambda functions via CodePipeline and CloudFormation here: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
I believe this will still upload the function code to S3, but it will leverage AWS tooling to make this process simpler.
How would I go about automating the deployment of an AWS API Gateway via a Python script using Boto3? For example, if I have created a stage named "V1" in the AWS Console for API Gateway, how would I write a script to deploy that stage ("V1")?
The current process involves deploying the stage manually from the AWS Console and is not scriptable. For purposes of automation, I would like to have a script to do the same.
Consulting the Boto3 documentation, I see there's a method for creating a stage (http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.create_stage), but none for deploying one.
If you want to stick with deploying via specific boto3 API calls, then you want to follow this rough sequence of boto3 API calls:
Use get_rest_apis to retrieve the API ID.
Possibly check if it's deployed already using get_deployments.
Use create_deployment to create the deployment. Use the stageName parameter to specify the stage to create.
Consider using create_base_path_mapping if needed.
Also consider using update_stage if you need to turn on something like logging.
To deploy a typical serverless app (API Gateway + Lambda), I would recommend AWS SAM instead of writing your own deployment code.
It even supports Swagger and you can define your stages in SAM definition files.
e.g.
ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: v1
    CacheClusterEnabled: true
    CacheClusterSize: "0.5"
    DefinitionUri: "swagger.yaml"
    Variables:
      [...]

[...]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: ratings.handler
    Runtime: python3.6
    Events:
      Api:
        Type: Api
        Properties:
          Path: /here
          Method: get
          RestApiId: !Ref ApiGatewayApi
Deployment is easy to integrate into CD pipelines using the AWS CLI:
aws cloudformation package \
--template-file path/example.yaml \
--output-template-file serverless-output.yaml \
--s3-bucket s3-bucket-name
aws cloudformation deploy \
--template-file serverless-output.yaml \
--stack-name new-stack-name \
--capabilities CAPABILITY_IAM
See also: Deploying Lambda-based Applications
Yes, your current way of creating and deploying the APIs manually through the AWS browser console is not very scriptable, but pretty much anything you can click in the console can also be done with the AWS CLI. It sounds to me like you want an automated CI/CD pipeline. Once you figure out which commands you would run with the AWS CLI, just add them to your CI pipeline and you should be good to go.
But actually, there's an even easier way. Go to AWS CodeStar. Click "create new project" and check "Web Service", "Python", and "AWS Lambda". As of today there's only one CodeStar template that fits all three, so choose that one. This will scaffold a full CI/CD pipeline (AWS CodePipeline) with one dev environment, hooked up to a Git project. I think this would be a good way for you to leverage the dev-opsy automated deployment stuff without having to worry about setting it up and maintaining it on top of your main project.
I have an issue with CodeDeploy and AWS Lambda when they run inside AWS CodePipeline. This is my setup:
1. Source: GitHub
2. AWS CodeBuild
3. AWS CodeDeploy
The Issue
Steps 1 and 2 work without a problem, but when it comes to CodeDeploy (step 3) I get the following error:
Action execution failed BundleType must be either YAML or JSON
If I unzip the Artifact generated by CodeBuild all the files are in place.
If I try to manually deploy to AWS Lambda from CodeDeploy I then get a different message...
Deployment Failed The deployment failed because either the target
Lambda function FUNCTION_NAME does not exist or the specified function
version or alias cannot be found
It is very confusing which error message is valid, or whether they are the same issue with different error messages.
The Setup
The ARN of the function is:
arn:aws:lambda:us-east-1:239748505547:function:email_submition
The ARN for the Alias is:
arn:aws:lambda:us-east-1:239748505547:function:email_submition:default
And my appspec.yml file has the following content
version: 0.0
Resources:
  - email_submition:
      Type: AWS::Lambda::Function
      Properties:
        Name: "email_submition"
        Alias: "default"
        CurrentVersion: "1"
        TargetVersion: "2"
And the folder structure of the project is:
.gitignore
appspec.yml
buildspec.yml
index.js
README.md
Question
What am I missing in this configuration?
So really this should be a comment, not an answer, but I do not have 50 rep yet so it's here.
I am having the same issues as you. I'm not sure if you found a solution or not. I was able to successfully execute a deployment with the following appspec.yml:
version: 0.0
Resources:
  - mylambdafunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "mylambdafunction"
        Alias: "staging"
        CurrentVersion: "2"
        TargetVersion: "3"
Both the current version and target version had to exist before CodeDeploy would work. Of course I've tested this by doing a manual deployment.
I think what is needed here is something that actually updates the code and creates a new version, which is what I would have thought CodeDeploy would do.
Edit: Further research has yielded information about CodePipeline I hadn't realized.
Per here, it looks like to run through the pipeline you need your buildspec, appspec, and a CloudFormation template (CFT). The reason the pipeline fails is that you need to include a CloudFormation template for the Lambda function; this is what deploys the actual code. The appspec.yml is there to migrate traffic from the old version to the new version, but the CFT is what does the deployment of new code.
Edit2: This example app got me squared away.
Use CodeBuild to build your app, but also to generate your CFT for doing the actual deployment. This means you build your CFT with the Lambda resource.
This removes appspec completely from the resources and instead you use a CFT to define the Lambda function. Here is a link to the SAM docs.
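A minimal sketch of such a SAM template for the function in this question (the handler and runtime are assumptions based on the index.js in the project root):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  EmailSubmition:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: email_submition
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./
      AutoPublishAlias: default  # publishes a new version and repoints the alias on each deploy

With AutoPublishAlias (plus an optional DeploymentPreference), SAM publishes new versions and shifts the alias for you, which replaces the hand-maintained CurrentVersion/TargetVersion bookkeeping in appspec.yml.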
I cannot help you with the CodeBuild part, as I use a 3rd-party CI solution, but maybe I can help with the rest.
I think there is a mistake in the AWS documentation, as I've never been able to get this to work either. They say to call "aws deploy push" on the command line and give it your appspec.yml file instead of a zip for Lambda, but no matter what you do, you will always get the error:
Action execution failed BundleType must be either YAML or JSON
I think this is because push automatically calls "register-application-revision" after it uploads. If you split this into separate parts, it will work.
Your appspec.yml should look like the following:
version: 0.0
Resources:
  - YourFunctionName:
      Type: "AWS::Lambda::Function"
      Properties:
        Name: "YourFunctionName"
        Alias: "YourFunctionNameAlias"
        CurrentVersion: "CurrentAliasVersionGoesHere"
        TargetVersion: "NewlyPublishedVersionGoesHere"
The version you use should be the version the current alias is attached to. The target version should be the new version you just published (see below). This part still confuses me a bit; I don't understand why it can't figure out the current version that the alias is pointing to by itself.
Also, note that you can always just upload new code for your Lambda function with update-function-code and it will overwrite the latest version. Or you can publish, which will create a new version and always just call the latest version. CodeDeploy is only necessary if you want to do some fancy gradual deployment or have different versions for test and live code.
I'd try the following:
Publish your lambda function:
aws lambda update-function-code --function-name YourFunction --zip-file fileb://~/your-code.zip --publish
Take note of the version number it created
Upload your appspec.yml file to S3
aws s3 cp appspec.yml s3://your-deploy-bucket/your-deploy-dir/appspec.yml
Register your application revision:
aws deploy register-application-revision --application-name YourApplicationName --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML
From the CLI this won't appear to do anything, but it did work.
Get the application revision to make sure it worked
aws deploy get-application-revision --application-name YourApplicationName --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML
Create a deployment to deploy your code (note that create-deployment also needs the application name and a deployment group; the group name below is a placeholder):

aws deploy create-deployment --application-name YourApplicationName --deployment-group-name YourDeploymentGroupName --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML