I am experimenting with different containers for training and inference, following a tutorial in the AWS SageMaker documentation. I'm using the Deep Learning Containers provided by AWS (https://github.com/aws/deep-learning-containers/blob/master/available_images.md), such as 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04, to create models:
Model:
  Type: "AWS::SageMaker::Model"
  Properties:
    PrimaryContainer:
      Image: '763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04'
    ExecutionRoleArn: !GetAtt ExecutionRole.Arn
I am trying to see if I can download this image locally via docker pull, but what account do I need to download it? I have an AWS account; would I be able to pull from 763104351884.dkr.ecr.us-east-1.amazonaws.com with my free account?
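From what I can tell so far, the usual flow would be something like the following (assuming my IAM credentials allow ecr:GetAuthorizationToken), though I'm not sure which account it needs:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04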
Use case
I have a CloudFormation stack with more than 15 Lambdas in it. I am able to deploy the stack through CodePipeline, which consists of two stages, CodeCommit and CodeDeploy. In this approach all my Lambda code is in the CloudFormation template (i.e. inline code). For security reasons I want to change this inline code to S3, which in turn requires an S3 bucket name and S3 key.
As a temporary workaround
As of now I am zipping each Lambda file and manually passing the S3 key name and bucket name as parameters to my stack.
Is there any way to do this step via CodePipeline?
My Assumption on CodeBuild
I know we can use CodeBuild for this. But so far I have only seen CodeBuild used to build a package.json file, and in my use case I don't have one. I can also see that it is possible to run the cloudformation package command to upload my Lambda code from local to S3 (this command generates an S3 CodeUri), but that is for serverless applications where there is a single Lambda, whereas in my case I have 15.
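(For reference, the packaging command I am referring to looks roughly like this; the bucket name is a placeholder:)

aws cloudformation package \
    --template-file template.yml \
    --s3-bucket my-artifact-bucket \
    --output-template-file template-export.yml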
What I have tried
I know that as soon as you git push to CodeCommit, it keeps your code in S3. So my thought was to get the S3 bucket name and S3 key name from the file CodeCommit pushed and pass those as parameters to my CFN template. I am able to get the S3 bucket name, but I don't know how to get the S3 key name, and I don't know whether this approach is workable at all.
BTW, I know I can use a shell script just to automate this process. But is there a way to do it via CodePipeline?
Update: Tried the serverless approach
Basically I run two build actions with two different runtimes (Node.js and Python) which run independently. When I use the serverless approach, each build creates a template-export.yml file with the CodeUri pointing to the bucket location, which means I end up with two template-export.yml files. One problem with the serverless approach is that it has to create a change set and then trigger an execute-change-set action. Because of that I need to merge the two template-export.yml files before running the create-change-set action followed by execute-change-set, but I don't know whether there is a command to merge two SAM templates. Otherwise one template-export.yml stack will replace the other.
Any help is appreciated
Thanks
If I'm understanding you right, you just need an S3 Bucket and Key to be piped into your Lambda CF template. To do this I'm using the ParameterOverrides declaration in my pipeline.
Essentially, the pipeline is a separate stack and picks up a CF template located in the root of my source. It then overrides two parameters in that template that point it to the appropriate S3 bucket/key.
- Name: LambdaDeploy
  Actions:
    - Name: CreateUpdateLambda
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: 1
      Configuration:
        ActionMode: CREATE_UPDATE
        Capabilities: CAPABILITY_IAM
        RoleArn: !GetAtt CloudFormationRole.Arn
        StackName: !Join
          - ''
          - - Fn::ImportValue: !Sub '${CoreStack}ProjectName'
            - !Sub '${ModuleName}-app'
        TemplatePath: SourceOut::cfn-lambda.yml
        ParameterOverrides: '{ "DeploymentBucketName" : { "Fn::GetArtifactAtt" : ["BuildOut", "BucketName"]}, "DeploymentPackageKey": {"Fn::GetArtifactAtt": ["BuildOut", "ObjectKey"]}}'
Now, the fact that you have fifteen Lambda functions might throw a wrench into this. I don't have an exact answer for that, since I'm actually trying to do the same thing and package up multiple Lambdas this way.
There's documentation on deploying multiple Lambda functions via CodePipeline and CloudFormation here: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
I believe this will still upload the function code to S3, but it will leverage AWS tooling to make this process simpler.
I have a Lambda function which I've verified to work correctly. I'm able to update the function by hand on the command line using update-function-code, but I've been trying to get it working with CodePipeline and CloudFormation.
Here are the steps I have so far:
Source - fetch the code from github. This works correctly.
Build - test the code in Solano (3rd party CI). This works too and on the last stage it zips up the repo and uploads it to my S3 bucket.
Deploy - This is the "deploy" action category with the action mode "create or replace a change set". This doesn't work if the Lambda function already exists.
Beta - Execute the changeset. This works if the change set was generated correctly.
My samTemplate.yml looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: My Lambda function
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: MyLambdaExecute
      Description: My Lambda function
      Handler: myhandler.handler
      Runtime: nodejs6.10
      CodeUri: s3://mybucket/mydirectory/mylambdacode.zip
      AutoPublishAlias: Staging
      Timeout: 30
      DeploymentPreference:
        Type: AllAtOnce
If the Lambda function named "MyLambdaExecute" doesn't exist and I push code up to GitHub, it works perfectly. But if I modify some code and push again, it runs the first two steps, then generates an empty change set with the status:
FAILED - No updates are to be performed.
I'm not sure what I have to do to get it to publish a new version. How do I get it to realize it needs to create a new changeset?
I believe you are receiving the "No updates" message because technically nothing is changing in your CloudFormation template. When the change set is built, the contents of the S3 file are not examined; CloudFormation just sees that none of the resource properties have changed since the previous deployment.
Instead, you may use a local relative path for the CodeUri, and aws cloudformation package can upload the file to a unique S3 path for you. This ensures that the template changes each time and the new Lambda code is recognized/deployed. For example:
aws cloudformation package --template-file samTemplate.yml --output-template-file sam-output-dev.yml --s3-bucket "$CodePipelineS3Bucket" --s3-prefix "$CloudFormationPackageS3Prefix"
This command can be put into the build step before your create/execute changeset steps.
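For example, a rough sketch of a CodeBuild buildspec that runs the packaging step (the bucket and prefix environment variables are placeholders, and CodeUri in samTemplate.yml would be changed to a local path such as ./):

version: 0.2
phases:
  build:
    commands:
      - aws cloudformation package --template-file samTemplate.yml --output-template-file sam-output-dev.yml --s3-bucket "$CodePipelineS3Bucket" --s3-prefix "$CloudFormationPackageS3Prefix"
artifacts:
  files:
    - sam-output-dev.yml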
To see a demonstration of this entire flow in practice, you can look at this repository, although I will warn that it's a bit outdated thanks to the new features released at the end of 2017. (For example, I was publishing Lambda aliases manually via extra steps because it was written pre-AutoPublishAlias.)
I have an issue with CodeDeploy and AWS Lambda when they work together inside AWS CodePipeline. This is my setup:
Source GitHub
AWS CodeBuild
AWS CodeDeploy
The Issue
Steps 1 and 2 work without a problem, but when it comes to CodeDeploy I get the following error:
Action execution failed BundleType must be either YAML or JSON
If I unzip the artifact generated by CodeBuild, all the files are in place.
If I try to deploy to AWS Lambda manually from CodeDeploy, I then get a different message:
Deployment Failed The deployment failed because either the target
Lambda function FUNCTION_NAME does not exist or the specified function
version or alias cannot be found
It is very confusing as to which error message is valid, or whether they describe the same problem with different wording.
The Setup
The ARN of the function is:
arn:aws:lambda:us-east-1:239748505547:function:email_submition
The ARN for the Alias is:
arn:aws:lambda:us-east-1:239748505547:function:email_submition:default
And my appspec.yml file has the following content
version: 0.0
Resources:
  - email_submition:
      Type: AWS::Lambda::Function
      Properties:
        Name: "email_submition"
        Alias: "default"
        CurrentVersion: "1"
        TargetVersion: "2"
And the folder structure of the project is:
.gitignore
appspec.yml
buildspec.yml
index.js
README.md
Question
What am I missing in this configuration?
So really this should be a comment not an answer. I do not have 50 rep yet so it's here.
I am having the same issue as you. I'm not sure if you found a solution or not, but I was able to successfully execute a deployment with the following appspec.yml:
version: 0.0
Resources:
  - mylambdafunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "mylambdafunction"
        Alias: "staging"
        CurrentVersion: "2"
        TargetVersion: "3"
Both the current version and target version had to exist before CodeDeploy would work. Of course I've tested this by doing a manual deployment.
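(For reference, a sketch of how those versions and the alias could have been created beforehand; the function and alias names are the ones from the example above:)

aws lambda publish-version --function-name mylambdafunction
aws lambda update-alias --function-name mylambdafunction --name staging --function-version 2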
I think what is needed here is something that actually updates the code and creates a new version. Which is what I would have thought CodeDeploy would do.
Edit: Further research has yielded information about CodePipeline I hadn't realized.
Per here, it looks like to run through the pipeline you need your buildspec, an appspec, and a CloudFormation template. The reason the pipeline fails is that you need to include a CloudFormation template for the Lambda function; that is what deploys the actual code. The appspec.yml is there to migrate traffic from the old version to the new version, but the CloudFormation template is what does the deployment of the new code.
Edit2: This example app got me squared away.
Use CodeBuild to build your app, but also to generate the CloudFormation template for doing the actual deployment. This means you build your CloudFormation template with the Lambda resource in it.
This removes appspec completely from the resources; instead you use a CloudFormation template to define the Lambda function. Here is a link to the SAM docs.
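A minimal sketch of what such a SAM-defined function might look like (names are placeholders); with AutoPublishAlias and DeploymentPreference set, the SAM transform generates the CodeDeploy pieces for you:

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    CodeUri: ./
    AutoPublishAlias: staging
    DeploymentPreference:
      Type: AllAtOnce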
I cannot help you with the CodeBuild part, as I use a 3rd-party CI solution, but maybe I can help with the rest.
I think there is a mistake in the AWS documentation, as I've never been able to get this to work either. They say to call aws deploy push on the command line and give it your appspec.yml file instead of a zip for Lambda, but no matter what you do, you will always get the error:
Action execution failed BundleType must be either YAML or JSON
I think this is because push automatically calls register-application-revision after it uploads. If you split this into separate steps, it will work.
Your appspec.yml should look like the following:
version: 0.0
Resources:
  - YourFunctionName:
      Type: "AWS::Lambda::Function"
      Properties:
        Name: "YourFunctionName"
        Alias: "YourFunctionNameAlias"
        CurrentVersion: "CurrentAliasVersionGoesHere"
        TargetVersion: "NewlyPublishedVersionGoesHere"
The version you use should be the version the current alias is attached to. The target version should be the new version you just published (see below). This part is still confusing to me; I don't understand why it can't figure out by itself which version the alias currently points to.
Also, note that you can always just upload new code for your Lambda function with update-function-code, and it will overwrite the latest version. Or you can publish, which creates a new version, and always just call the latest version. CodeDeploy is only necessary if you want to do a fancy gradual deployment or have different versions for test and live code.
I'd try the following:
Publish your lambda function:
aws lambda update-function-code --function-name YourFunction --zip-file fileb://~/your-code.zip --publish
Take note of the version number it created
Upload your appspec.yml file to S3
aws s3 cp appspec.yml s3://your-deploy-bucket/your-deploy-dir/appspec.yml
Register your application revision:
aws deploy register-application-revision --application-name YourApplicationName --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML
From the CLI this won't appear to do anything, but it does.
Get the application revision to make sure it worked:
aws deploy get-application-revision --application-name YourApplicationName --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML
Create a deployment to deploy your code (the application name and a deployment group are also required here; the group name below is a placeholder):
aws deploy create-deployment --application-name YourApplicationName --deployment-group-name YourDeploymentGroup --s3-location bucket=your-deploy-bucket,key=your-deploy-dir/appspec.yml,bundleType=YAML
When I run CloudFormation deploy using a template with API Gateway resources, the first time I run it, it creates and deploys to stages. On subsequent runs it updates the resources but doesn't deploy to stages.
Is that behaviour intended? If so, how do I get it to deploy to stages whenever it updates?
(Terraform mentions a similar issue: https://github.com/hashicorp/terraform/issues/6613)
It seems like there is no way to easily create a new deployment whenever one of your CloudFormation resources changes.
One way to work around that would be to use a Lambda-backed Custom Resource (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html).
The Lambda should create the new deployment, but only if one of your resources has been updated. To determine whether one of your resources has been updated, you will probably have to implement custom logic around this API call: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackEvents.html
In order to trigger updates on your custom resource, I suggest you supply a CloudFormation parameter that will be used to force an update of your custom resource (e.g. the current time, or a version number).
Note that you will have to add a DependsOn clause to your custom resource that includes all resources relevant to your API. Otherwise, your deployment might be created before all your API resources are updated.
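A rough sketch of how that could be wired up (every name here is hypothetical, and DeployFunction is the Lambda you would have to write yourself):

Parameters:
  DeploymentTimestamp:
    Type: String
Resources:
  ApiDeploymentTrigger:
    Type: Custom::ApiDeployment
    DependsOn:
      - MyApiResource
      - MyApiGetMethod
    Properties:
      ServiceToken: !GetAtt DeployFunction.Arn
      RestApiId: !Ref MyRestApi
      StageName: v1
      # Changing this value on every release forces CloudFormation to
      # update the custom resource, which re-invokes the Lambda.
      Timestamp: !Ref DeploymentTimestamp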
Hope this helps.
When your template specifies a deployment, CloudFormation will create that deployment only if it doesn't already exist. When you attempt to run it again, it observes that the deployment still exists, so it won't recreate it; thus, no deployment. You need a new logical resource ID for the deployment so that it will create a new one. Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
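(A sketch of the idea; the logical ID gets a unique suffix, e.g. a date, that you change on every release:)

ApiDeployment20181201:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref MyRestApi
    StageName: prod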
CloudFormation in Amazon's words is:
AWS CloudFormation takes care of provisioning and configuring those resources for you
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
Redeployment of APIs is not a provisioning task... It is a promotion activity which is part of a stage in your software release process.
AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
CodePipeline also supports executing Lambda functions from actions in the pipeline. So, as advised before, create a Lambda function to deploy your API, but call it from CodePipeline instead of CloudFormation.
Consult this page for details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
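A minimal sketch of such a pipeline-invoked Lambda (Python; the REST API id and stage name are placeholders that would normally arrive via the action's UserParameters):

import boto3

apigw = boto3.client("apigateway")
codepipeline = boto3.client("codepipeline")

def handler(event, context):
    # CodePipeline passes the job id in the invocation event
    job_id = event["CodePipeline.job"]["id"]
    try:
        # Create a fresh API Gateway deployment for the stage
        apigw.create_deployment(restApiId="abc123", stageName="prod")
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )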
I was using the approach above, but it looked too complicated to me just to deploy the API Gateway. If you change the names of the resources, it also takes time to delete and recreate them, which increases downtime for your application.
I'm following the approach below to deploy the API Gateway to a stage using the AWS CLI, and it does not affect the deployment done by the CloudFormation stack.
What I'm doing is running the AWS CLI command below after the deployment of the API Gateway is completed. It updates the existing stage with the latest changes.
aws apigateway create-deployment --rest-api-id tztstixfwj --stage-name stg --description 'Deployed from CLI'
The answer here is to use the AutoDeploy property of the Stage:
Stage:
  Type: AWS::ApiGatewayV2::Stage
  Properties:
    StageName: v1
    Description: 'API Version 1'
    ApiId: !Ref myApi
    AutoDeploy: true
Note that the 'DeploymentId' property must be unspecified when using 'AutoDeploy'.
See documentation, here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
From the blogspot post linked by TheClassic (best answer so far!), you have to keep in mind that if you aren't generating your templates with something that can insert a valid timestamp in place of $TIMESTAMP$, you must update that value manually with a timestamp or otherwise unique ID. Here is my functional example; it successfully deletes the existing deployment and creates a new one, but I will have to update those unique values manually when I want to create another change set:
rDeployment05012019355:
  Type: AWS::ApiGateway::Deployment
  DependsOn: rApiGetMethod
  Properties:
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
    StageName: !Ref pStageName

rCustomDomainPath:
  Type: AWS::ApiGateway::BasePathMapping
  DependsOn: [rDeployment05012019355]
  Properties:
    BasePath: !Ref pPathPart
    Stage: !Ref pStageName
    DomainName:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-CustomDomainName'
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
I may be late, but here are the options with which you can trigger a redeployment when API resources change; this may be helpful to people still looking for options:
Set AutoDeploy to true, if you are using the V2 version of deployment. Note that the API Gateway needs to have been created through V2; V1 and V2 are not compatible with each other. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-autodeploy
A Lambda-backed custom resource, where the Lambda in turn calls the CreateDeployment API: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
A CodePipeline that has an action that calls a Lambda function, much like the custom resource would: https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
SAM (Serverless Application Model), which follows a similar syntax to CloudFormation, simplifies resource creation into abstractions and uses those to build and deploy a normal CloudFormation template: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
If you are using an abstraction layer over CloudFormation like Sceptre, you can have a hook that calls CreateDeployment after any update to the resource: https://sceptre.cloudreach.com/2.3.0/docs/hooks.html
I went with the Sceptre option, since I was already using Sceptre for CloudFormation deployment. Implementing hooks in Sceptre is easy as well.
Reading through this thread, I did not come to a conclusion right away, as the information here is spread across multiple sources. I will try to sum up all the findings from here (and the linked sources), plus my own testing, to help others avoid the hunt.
Important to know is that each API always has a dedicated URL. The associated stages only get a separate suffix. Updating the deployment does not change the URL, recreating the API does.
API
├─ RestAPI (incl. Resources, Methods, etc.)
└─ Deployment
   ├─ Stage - v1  https://6s...com/v1
   └─ Stage - v2  https://6s...com/v2
Relation between stage and deployment:
To deploy AWS API Gateway through CloudFormation (Cfn) you need a RestApi Cfn resource and a Deployment Cfn resource. If you give the Deployment resource a stage name, the deployment automatically creates a stage on top of the "normal" creation. If you leave this out, the API is created without any stage. Either way, once you have a deployment, you can add n stages to it by linking the two, but a stage and its API always have only one deployment.
Updating a simple API:
Now if you want to update this "simple API", consisting of just a RestAPI plus a deployment, you face the issue that if the deployment has a stage name, it cannot be updated because it already "exists". And for the deployment to be updated in the first place, you have to add a timestamp or hash to the deployment's resource name in CloudFormation; otherwise no update is triggered at all.
Solving the deployment update:
To enable updating the deployment, you have to split the deployment and the stage into separate Cfn resources. Meaning, you remove the stage name from the Deployment resource and create a new Stage resource which references the deployment resource, as sketched below. This way you can update the deployment. Still, the stage, the part you reference via the URL, is not automatically updated.
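(A minimal sketch of that split, with placeholder names:)

ApiDeployment:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref MyRestApi

ApiStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    RestApiId: !Ref MyRestApi
    DeploymentId: !Ref ApiDeployment
    StageName: v1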
Propagating the update from the deployment to your stages:
Now that we can update the deployment, aka the blueprint of the API, we can propagate the change to its respective stage. This step, as far as I know, is not possible using CloudFormation. Therefore, to trigger the update you either need to add a custom resource or you do it manually. Other non-CloudFormation ways are summed up in Athi's answer above, but they were no solution for me as I want to limit the tooling used.
If anybody has an example for the Lambda update, please feel free to ping me and I will add it here. The links I found so far only reference a plain template.
I hope this helped others understanding the context a bit better.
Sources:
Problem description with Cfn-template, 2
Adding timestamp to deployment resource, 2
Using CodePipeline as a solution
Related question and CLI update answer
Related terraform issue
Related AWS forum thread
This worked for me :
cfn.yml
APIGatewayStage:
  Type: 'AWS::ApiGateway::Stage'
  Properties:
    StageName: !Ref Environment
    DeploymentId: !Ref APIGatewayDeployment$TIMESTAMP$
    RestApiId: !Ref APIGatewayRestAPI
    Variables:
      lambdaAlias: !Ref Environment
    MethodSettings:
      - ResourcePath: '/*'
        DataTraceEnabled: true
        HttpMethod: "*"
        LoggingLevel: INFO
        MetricsEnabled: true
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod

APIGatewayDeployment$TIMESTAMP$:
  Type: 'AWS::ApiGateway::Deployment'
  Properties:
    RestApiId: !Ref APIGatewayRestAPI
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
bitbucket-pipelines.yml
script:
  - python3 deploy_api.py
deploy_api.py
import time

file_name = 'infra/cfn.yml'
ts = str(time.time()).split(".")[0]
print(ts)

with open(file_name, 'r') as file:
    filedata = file.read()

filedata = filedata.replace('$TIMESTAMP$', ts)

with open(file_name, 'w') as file:
    file.write(filedata)
========================================================================
Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
If you have something to do the $TIMESTAMP$ replacement, I'd probably go with that as it's cleaner and you don't have to do any manual API Gateway management.
I have found that the other solutions posted here mostly do the job, with one major caveat: you can't manage your Stage and Deployment separately in CloudFormation, because whenever you deploy your API Gateway there is some downtime between when you deploy the API and when the secondary process (custom resource / Lambda, CodePipeline, what have you) creates your new deployment. This downtime occurs because CloudFormation only ever has the initial deployment tied to the Stage. So when you make a change to the Stage and deploy, it reverts back to the initial deployment until your secondary process creates your new deployment.
*** Note that if you are specifying a StageName on your Deployment resource, and not explicitly managing a Stage resource, the other solutions will work.
In my case, I don't have that $TIMESTAMP$ replacement piece, and I needed to manage my Stage separately so I could do things like enable caching, so I had to find another way. The workflow and relevant CloudFormation pieces are as follows:
Before triggering the CloudFormation update, check whether the stack you're about to update already exists, and set stack_exists: true|false.
Pass that stack_exists variable into your CloudFormation template(s), all the way down to the stack that creates the Deployment and Stage.
The following condition:
Conditions:
  StackExists: !Equals [!Ref StackAlreadyExists, "True"]
The following Deployment and Stage:
# Only used for initial creation, secondary process re-creates this
Deployment:
  DeletionPolicy: Retain
  Type: AWS::ApiGateway::Deployment
  Properties:
    Description: "Initial deployment"
    RestApiId: ...

Stage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !If
      - StackExists
      - !Ref AWS::NoValue
      - !Ref Deployment
    RestApiId: ...
    StageName: ...
A secondary process that does the following:
# looks up `apiId` and `stageName` and sets variables
CURRENT_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id <apiId> --stage-name <stageName> --query 'deploymentId' --output text)
aws apigateway create-deployment --rest-api-id <apiId> --stage-name <stageName>
aws apigateway delete-deployment --rest-api-id <apiId> --deployment-id ${CURRENT_DEPLOYMENT_ID}
Use SAM:
AWS::Serverless::Api
This does the deployment for you when it runs the transform; see the sketch below.
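A minimal sketch (names are placeholders); as far as I can tell, the SAM transform derives the generated AWS::ApiGateway::Deployment's logical ID from a hash of the API definition, so changes to the API trigger a redeployment of the stage:

MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: v1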