I have this issue:
I have deployed a function with CF that is built with the aws cloudformation package command. The first deployment works perfectly, but when I try to update the code nothing happens. This is what I do:
Save changes to my local code and the CF template, then run aws cloudformation package (aws cloudformation package --template-file mycf.yml --s3-bucket mybucket --output-template-file packaged-mycf.yml --profile myprofile)
I see in the packaged file that there is a new path under CodeUri (CodeUri: s3://mybucket/319bd03cb3cc8d50ceb80e52bf51c53c)
I deploy the update (I do it in the console) and see in the update preview that there are updates to the function
The CF events say the update to the function is complete
But when I go to the code, it's the same old code; nothing has changed
Has anyone else experienced the same?
I have tried renaming the CF file, the script file and the packaged CF file, but I still get the same result.
Does anyone here have an idea on what I may try?
This is what the unpackaged function part of the CF template looks like:
CRFunction:
  Type: AWS::Serverless::Function
  Properties:
    Description: Convert IAM Policy into SCP
    Handler: scpfunction.lambda_handler
    Runtime: python3.9
    Timeout: 30
    MemorySize: 128
    FunctionName: !Sub SCP-Function-${SCPName}
    CodeUri: src/
    Environment:
      Variables:
        SCPName: !Ref SCPName
        Policy: !Ref IAMPolicyToConvertToSCP
        OUs: !Ref OUs
        Description: !Ref Description
    Policies:
      Statement:
        - Effect: Allow
          Action:
            - organizations:CreatePolicy
            - organizations:AttachPolicy
            - organizations:List*
            - iam:get*
          Resource: '*'
That's how it works by design. Changes to the source code are not detected. You have to change either the Key or the Version in your CloudFormation template to deploy the new code.
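For a raw AWS::Lambda::Function (one not packaged with aws cloudformation package), that means bumping the key or object version in the template. A sketch of what that looks like; the bucket, key, and role names here are illustrative, not from the original post:

```yaml
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: python3.9
    Role: !GetAtt MyRole.Arn      # assumes a role defined elsewhere in the template
    Code:
      S3Bucket: mybucket
      S3Key: function-v2.zip      # changing this key is what triggers a code update
      # ...or, with a versioned bucket, bump the object version instead:
      # S3ObjectVersion: <new-version-id>
```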
Update
s3://mybucket/319bd03cb3cc8d50ceb80e52bf51c53c is only your function code, not the template. To update your function from that object directly, you can use the Amazon S3 location option in the AWS Lambda console:
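As an aside on why a new path appears only when the code changes: the 32-hex-character key that aws cloudformation package generates appears to be a content hash (an MD5-style digest) of the zipped artifact, so identical code re-uses the same key. A local sketch of the idea (file names are illustrative, and this is not the exact hashing the CLI performs over the zip):

```shell
# Illustrative only: hash two versions of a handler file, mimicking how a
# content-addressed key changes when (and only when) the code changes.
printf 'def lambda_handler(event, context):\n    return "v1"\n' > /tmp/scpfunction_v1.py
printf 'def lambda_handler(event, context):\n    return "v2"\n' > /tmp/scpfunction_v2.py
key1=$(md5sum /tmp/scpfunction_v1.py | cut -d' ' -f1)   # 32 hex chars, like the CodeUri key
key2=$(md5sum /tmp/scpfunction_v2.py | cut -d' ' -f1)
echo "key1=$key1"
echo "key2=$key2"
```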
I got this solved.
It was not an issue related to AWS.
After restarting VisualStudioCode I got the deployment to work. So it had to do with some local caching or something.
Really weird; I've never experienced anything like that before.
Related
I am looking to deploy a standard Node/Express app to a Lambda function and use it as the back end of my app. I have the API Gateway set up, and everything seems to be working fine, as I get a "hello world" response when I hit the API endpoint for the back end.
The issue is that I need to upload new iterations of the back end and I don't know how to push the code from my local repo or github or anywhere else onto the lambda server/function.
This page says that you should zip it and push it, but that isn't really descriptive. It almost looks like it would create a new Lambda each time you use it.
zip function.zip index.js
aws lambda create-function --function-name my-function \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs12.x \
--role arn:aws:iam::123456789012:role/lambda-ex
Then, there's using SAM to build and deploy the node app. This seems like a lot of work and setup for a simple app like this. I can set up the front end to deploy to an S3 bucket each time I push to master. Can I do something like that with Lambda?
I personally would use the SAM CLI. This allows you to build, package and deploy your lambda locally with use of a CloudFormation template - IMHO a better way to go for your application stack.
You define all of your resources - API gateway, routes, methods, integrations and permissions - in the template. Then use sam deploy to not only create your infrastructure, but deploy your application code when it changes.
Here is an example SAM CloudFormation template that you place in the root of your repo (this assumes a lambda function that needs to access a DynamoDB table).
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: Your Stack Name

Resources:
  DynamoTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: my-table
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

  Lambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: your-lambda-name
      CodeUri: app/
      Handler: app.handler
      Runtime: nodejs14.x
      Environment:
        Variables:
          DYNAMO_TABLE: !Ref DynamoTable
      Policies:
        - AWSLambdaExecute
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:Query
                - dynamodb:Scan
                - dynamodb:PutItem
                - dynamodb:UpdateItem
              Resource:
                - !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoTable}'
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /path
            Method: get
Now deploying your code (or updated stack as you add / change resources) is as simple as:
sam deploy --template-file template.yml --stack-name your-stack-name --resolve-s3 --region your-region
The sam deploy command does a lot in one line:
I include the --template-file parameter, but it is really not necessary in this case because the file is called "template.yml", which the deploy command looks for by default.
The --resolve-s3 option will automatically create an S3 bucket to upload your lambda function code to, versus you having to define (and create) a bucket outside of the template. The alternative is to specify a bucket with --s3-bucket; however, you would have to create that bucket BEFORE attempting to create the stack. This is fine, but take it into account when you go to delete your stack: the bucket will not be included in that process, and you need to ensure the bucket is deleted in addition to the stack.
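If you don't want to repeat these flags on every deploy, running sam deploy --guided once can persist them to a samconfig.toml in the repo root, which later plain sam deploy runs pick up. Roughly (stack name, region and values here are illustrative):

```toml
# samconfig.toml -- written by `sam deploy --guided`; values are examples
version = 0.1

[default.deploy.parameters]
stack_name = "your-stack-name"
resolve_s3 = true
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
```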
I typically add in a Makefile and do my build, prune, test, etc as part of my build and deploy.
i.e.
publish:
	npm run build && \
	npm test && \
	npm prune --production && \
	sam deploy --stack-name my-stack --resolve-s3 --region us-east-1
then
make publish
More than what you asked for I realize - but perhaps someone else will find utility in this.
Instead of using the aws lambda create-function command, you can use aws lambda update-function-code to update an existing lambda:
aws lambda update-function-code \
--function-name <function-name> \
--zip-file fileb://function.zip
I have created a CloudFormation template where I am deploying a Lambda function that takes its code from a zipped file in an S3 bucket. The bucket is in the us-west-2 region.
My issue is that the template fails to create the Lambda function if a user deploys it in another account or in any region other than us-west-2, since it's not able to find the bucket from that region.
Also, I cannot directly inline the code in the template, since the code has some dependency files.
Sharing a snippet of the template below. Any suggestion is highly appreciated.
Thanks
Resources:
  DMARCFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: lambda-code-bucket
        S3Key: Lambda.zip
      Runtime: python3.8
      Role: !GetAtt LambdaRole.Arn
      Handler: lambda_function.lambda_handler
      Timeout: 15
      TracingConfig:
        Mode: Active
It's not a CloudFormation issue; that's how Lambda works. The ZIP and the function must be in the same region. You have to replicate your ZIP to all regions and accounts where you want to create your function.
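One common way to wire that up, once the zip has been copied to a bucket per region, is a Mappings lookup keyed on AWS::Region. A sketch (the per-region bucket names are illustrative assumptions):

```yaml
Mappings:
  RegionToBucket:
    us-west-2:
      Name: lambda-code-bucket-us-west-2   # illustrative: one copy of
    us-east-1:
      Name: lambda-code-bucket-us-east-1   # Lambda.zip per region

Resources:
  DMARCFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !FindInMap [RegionToBucket, !Ref 'AWS::Region', Name]
        S3Key: Lambda.zip
```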
I'm trying to deploy my Serverless project for several environments. I would like to run develop, staging and production environments. To make this work I'm using serverless-dotenv-plugin with NODE_ENV=development or NODE_ENV=acceptation (in this case).
Everything related to the plugin seems to work: when I deploy for development or acceptance it loads the correct .env file, and it does try to create the related S3 buckets.
As you can see in the attached image, there are two buckets for each environment, which I want to link to a Route53 domain. The initial deployment created the correct buckets. When I now deploy again, there is no issue for development, but when I deploy for acceptance I get the error An error occurred: BucketGatsbySite - project-bucket-acc-www-gatsby already exists., so the build breaks.
Of course the bucket already exists, but since it was already created it shouldn't be re-created. This seems to work for development but not for acceptance, and I have no clue why. I can't find anything related to this in the AWS documentation. As you can see below, I do have DeletionPolicy: Retain, which I think should mean a new bucket isn't created and the old one is retained.
So to summarise: I want to create each bucket only once, and on subsequent deploys retain the existing ones rather than try to create new ones.
My config is as follows:
service: project

package:
  individually: true

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: ${env:STAGE}
  region: ${env:REGION}
  environment:
    REGION: ${env:REGION}
    STAGE: ${env:STAGE}
    NODE_ENV: ${env:NODE_ENV}
    CLIENT_ID: ${env:AWS_CLIENT_ID}
    TABLE: "project-db-${env:STAGE}"
    BUCKET: "project-bucket-${env:STAGE}"
    POOL: "project-userpool-${env:STAGE}"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:*
          Resource:
            - !GetAtt projectTable.Arn

resources:
  Resources:
    BucketReactApp:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-react"
    BucketGatsbySite:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-gatsby"
Every suggestion would be really appreciated, since I'm kinda stuck on this.
Some changes in CloudFormation (CFN) require replacement of the resource. The behavior is noted on the "AWS::S3::Bucket" documentation page in the Update requires entry for each property.
And here is the list of all "Update behaviors of stack resources"; Replacement means that the bucket will be recreated.
But it's still strange, because the only two properties that require replacement on update are:
BucketName
ObjectLockEnabled
So maybe some intermediate operation on the CFN stack requires recreation of the S3 bucket.
Maybe you should be looking at the UpdateReplacePolicy attribute:

BucketGatsbySite:
  Type: AWS::S3::Bucket
  UpdateReplacePolicy: Retain
  ...
I'm creating a Node.js microservice for AWS Lambda. I scaffolded my project using AWS CodeStar, and that set me up with a CI/CD pipeline that automatically deploys the lambda function. Nice.
The issue is that every time it deploys the lambda function it must delete and recreate the function, thus deleting any versions or aliases I made.
This means I really can't roll back to other releases. I basically have to use git to actually revert the project, push to git, wait for the super-slow AWS CodePipeline to flow through successfully, and then have it remake the function. To me that sounds like a pretty bad DR strategy, and I would think the right way to roll back should be simple and fast.
Unfortunately, it looks like the CloudFormation section of AWS doesn't offer any help here. When you drill into your stack on the first CloudFormation page it only shows you information about the latest formation that occurred. Dear engineers of AWS CloudFormation: if there was a page for each stack that showed a history of CloudFormation for this stack and an option to rollback to it, that would be really awesome. For now, though, there's not. There's just information about the latest formation that's been clouded. One initially promising option was "Rollback Triggers", but this is actually just something totally different that lets you send a SNS notification if your build doesn't pass.
When I try to change the CodePipeline stage for deploy from CREATE_CHANGE_SET to CREATE_UPDATE I then get this error when it tries to execute:
Action execution failed UpdateStack cannot be used with templates
containing Transforms. (Service: AmazonCloudFormation; Status Code:
400; Error Code: ValidationError; Request ID:
bea5f687-470b-11e8-a616-c791ebf3e8e1)
My template.yml looks like this by the way:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Environment:
        Variables:
          NODE_ENV: staging
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
The only options in the CodePipeline "Deploy" action are these:
It would be really great if someone could help me see how, in AWS, you can make Lambda functions with CodePipeline in a way that makes them easy and fast to roll back. Thanks!
I used AWS CodeStar to create a new application with the "Express.js Aws Lambda Webservice" CodeStar template. This was great because it set me up with a simple CI/CD pipeline using AWS CodePipeline. By default the pipeline has 3 steps for grabbing the source code from a git repo, running the build step, and then deploying to "dev" environment.
My issue is that I can't set it up so that my pipeline has multiple environments: dev, staging, and prod.
My current deploy stage has 2 actions: GenerateChangeSet and ExecuteChangeSet. Here are the configurations for the actions in the original dev deploy stage, which work great:
I've created a new deploy stage at the end of my pipeline to deploy to staging, but honestly I'm not sure how to change the configurations. I'm thinking ultimately I want to be able to go into the AWS Lambda section of the AWS console and see three independent lambda functions: binance-bot-dev, binance-bot-staging and binance-bot-prod. Then I could trigger each of these with CloudWatch scheduled events or expose them with their own API Gateway URL.
This is the configuration that I tried to use for a new deployment stage:
I'm really not sure if this configuration is correct and what exactly I should change in order to deploy in the way I want.
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
Also, I'm pointing to a different template.yml file in the project. The original template.yml looks like this:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: dev
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
For template.staging.yml I use the exact same config, except I changed "Dev:" to "Staging:" under "Resources", and I also changed the value of the NODE_ENV environment variable. So I'm basically wondering: is this the correct configuration for what I'm trying to achieve?
Assuming that everything in the configuration is correct, I then need to troubleshoot this error. With everything set as described above I can run my pipeline, but when it gets to my staging stage the GenerateChange_Staging action fails with this error message:
Action execution failed User:
arn:aws:sts::954459734159:assumed-role/CodeStarWorker-binance-bot-CodePipeline/1524253307698
is not authorized to perform: cloudformation:DescribeStacks on
resource:
arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda-staging/*
(Service: AmazonCloudFormation; Status Code: 403; Error Code:
AccessDenied; Request ID: dd801664-44d2-11e8-a2de-8fa6c42cbf86)
It seems to me from this error message that I need to add the "cloudformation:DescribeStacks" permission to my "CodeStarWorker-binance-bot-CodePipeline" role, so I go to IAM -> Roles and click on the CodeStarWorker-binance-bot-CodePipeline role. However, when I drill into the policy information for CloudFormation, it looks like this role already has permissions for "DescribeStacks"!
If anyone could point out what I'm doing wrong or offer any guidance on understanding and thinking about how to do multiple environments with AWS CodePipeline, that would be great. Thanks!
UPDATE:
I changed the "Stack name" in my Deploy_To_Staging pipeline stage back to "awscodestar-binance-bot-lambda". However, I then get this error from the GenerateChange_Staging action:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
UPDATE 2:
In the root of my project I have the buildspec.yml file that was generated by CodeStar. It looks like this:
version: 0.2

phases:
  install:
    commands:
      # Install dependencies needed for running tests
      - npm install
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory
      - npm test
  build:
    commands:
      # Use AWS SAM to package the application using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I then added this to the CloudFormation section:
Then I added this to the "build: -> commands:" section:
- aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
- aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
And I added this to the "files:" section:
- template-export.staging.yml
- template-export.prod.yml
HOWEVER, I am still getting an error that "binance-bot-BuildArtifact does not exist".
Here is the full error after making the buildspec.yml change:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
It seems very strange to me that I can access "binance-bot-BuildArtifact" in one stage of the pipeline but not another. Could it be that the build artifact is only available to the one pipeline stage directly after the build stage? Can someone please help me to be able to access this "binance-bot-BuildArtifact"? Thanks!
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
You should use a unique stack name for each environment. If you didn't, you would be replacing your 'dev' environment with your 'staging' environment, and so forth.
So, I'm basically wondering is this the correct configuration for what I'm trying to achieve?
I don't think so. You should use the exact same template for each environment. In order to change the environment name for each of your deploys, you can use the 'Parameter Overrides' field to choose the correct value for your 'Environment' parameter.
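To make that concrete: the idea would be a single template whose resource reads its stage from a parameter, with each deploy action supplying a different value. A sketch under the assumption of a parameter named Environment (the name is illustrative, not from the original templates):

```yaml
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, staging, prod]
    Default: dev
Resources:
  WebApp:                          # one logical function reused for every stage
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Environment:
        Variables:
          NODE_ENV: !Ref Environment
```

Then the staging deploy action's Parameter Overrides field would contain something like {"Environment": "staging"}.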
it looks like this role already has permissions for "DescribeStacks"!
Could the issue here be that your IAM role only has DescribeStacks permission for the dev stack? It looks like it does not have permission to describe the staging stack. Maybe you can add a 'wildcard'/asterisk to the policy so that it matches all of your stack names?
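For example, a statement along these lines (the stack-name pattern is a guess based on the error message) would cover the dev, staging and prod stacks at once:

```json
{
  "Effect": "Allow",
  "Action": "cloudformation:DescribeStacks",
  "Resource": "arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda*/*"
}
```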
Could it be that the build artifact is only available to the one pipeline stage directly after the build stage?
No, that has not been my experience with CodePipeline. Unfortunately I don't know why it's telling you that your artifact can't be found.
robrtsql has already provided some good advice in terms of using the same template in both stages.
You might find this walkthrough useful.
Basically, it describes adding a CloudFormation "template configuration", which allows you to specify parameters to the CloudFormation stack.
This will allow you to deploy the same template in both your dev and prod environments, but also allow you to tell the difference between a dev deployment and a prod deployment, by choosing a different template configuration in each stage.
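A template configuration is just a small JSON file per stage that the pipeline's CloudFormation action points at. A minimal sketch, assuming a parameter named Environment as above (both the file name and parameter name are illustrative):

```json
{
  "Parameters": {
    "Environment": "staging"
  }
}
```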